Title: Iron Line Profiles in Strong Gravity
Authors: Kris Beckwith, Chris Done (Department of Physics, University of Durham, South Road, Durham DH1 3LE, UK)
Venue: Mon. Not. R. Astron. Soc.
DOI: 10.1111/j.1365-2966.2004.07955.x
arXiv: astro-ph/0402199 (https://export.arxiv.org/pdf/astro-ph/0402199v4.pdf)

Abstract: We describe a new code which can accurately calculate the relativistic effects which distort the emission from an accretion disc around a black hole. We compare our results for a disc which extends from the innermost stable orbit to 20 rg in both Schwarzschild and maximal (a = 0.998) Kerr spacetimes with the two line profile codes which are on general release in the XSPEC spectral fitting package. These models generally give a very good description of the relativistic smearing of the line for this range of radii. However, these models have some limitations. In particular we show that the assumed form of the angular emissivity law (limb darkening or brightening) can make significant changes to the derived line profile where lightbending is important. This is always the case for extreme Kerr spacetimes or high inclination systems, where the observed line is produced from a very large range of different emitted angles. In these situations the assumed angular emissivity can affect the derived radial emissivity. The line profile is not simply determined by the well defined (but numerically difficult) physical effects of strong gravity, but is also dependent on the poorly known astrophysics of the disc emission.
Keywords: accretion discs - lines: profile - relativity

INTRODUCTION

Material in an accretion disc around a black hole is orbiting at high velocity, close to the speed of light, in a strong gravitational potential. Hence its emission is distorted by Doppler shifts, length contraction, time dilation, gravitational redshift and lightbending.
The combined impact of these special and general relativistic effects was first calculated in the now seminal paper of Cunningham (1975), who used a transfer function to describe the relativistic effects. The observed spectrum from an accretion disc around a Kerr black hole is the convolution of this with the intrinsic disc continuum emission. While such models have been used to try to determine the gravitational potential from observed accretion disc spectra (e.g. Laor & Netzer 1989; Ebisawa, Mitsuda & Hanawa 1991; Ebisawa et al. 1993; Makishima et al. 2000; Gierlinski, Maciolek-Niedzwiecki & Ebisawa 2001), these attempts suffer from our limited knowledge of the spectral shape of the intrinsic accretion disc emission (see e.g. the review by Blaes 2002). It is much easier to determine the relativistic effects from a sharp spectral feature, such as the iron fluorescence line expected from X-ray illumination of an accretion disc (Fabian et al. 1989). An originally narrow atomic transition is transformed into a broad, skewed profile whose shape is given directly by the transfer function. Observationally, evidence for a relativistically smeared iron line first came from the ASCA observation of the active galactic nucleus (AGN) MCG-6-30-15 (Tanaka et al. 1995). Further observations showed evidence for the line profile being so broad as to require a maximally spinning black hole (Iwasawa et al. 1996). More recent data from XMM are interpreted as showing that the line is even wider than expected from an extreme Kerr disc, requiring direct extraction of the spin energy from the central black hole as well as the immense gravitational potential (Wilms et al. 2001). Such results are incredibly exciting, but X-ray spectral fitting is not entirely unambiguous. There is a complex reflected continuum as well as the line (Nayakshin, Kazanas & Kallman 2000; Ballantyne, Ross & Fabian 2001).
For an ionised disc (as inferred for MCG-6-30-15) the current models in general use (pexriv in the XSPEC spectral fitting package) are probably highly incomplete (Ross, Fabian & Young 1999). Complex ionised absorption also affects AGN spectra (e.g. Kaspi et al. 2002), and the illuminating continuum itself can have complex curvature rather than being a simple power law. However, in MCG-6-30-15 these issues have been examined in detail, and the results on the dramatic line width appear robust (Vaughan & Fabian 2003; Reynolds et al. 2004). Thus there is a clear requirement that the extreme relativistic effects are well modelled. There are two models which are currently widely available to the observational community within the XSPEC spectral fitting package: diskline (based on Fabian et al. 1989) and laor (Laor 1991). The analytic diskline code models the line profile from an accretion disc around a Schwarzschild black hole (so it cannot, of course, describe the effects in a Kerr geometry). Also, it does not include the effects of lightbending (although Fabian et al. 1989 outline a scheme for incorporating this), and hence does not accurately calculate all the relativistic effects for r < 20 rg (where rg = GM/c^2). By contrast, the laor model numerically calculates the line profile including lightbending for an extreme Kerr black hole, but uses a rather small set of tabulated transfer functions which limits its resolution and accuracy (see Section 3.3). While there are other relativistic codes in the literature which do not suffer from these limitations, these are generally not readily available for observers to use. There is a clear need for a fast, accurate, high resolution code which can be used to fit data from the next generation of satellites. In this paper we describe our new code for computing the relativistic iron line profile in both the Schwarzschild and Kerr metrics.
We compare this with the diskline and laor models in XSPEC for discs which extend down to the last stable orbit in their respective spacetimes, and highlight both the strengths and limitations of these previous models.

CALCULATING STRONG GRAVITATIONAL EFFECTS

We follow the standard approach (e.g. Cunningham 1975; Fabian et al. 1989; Fanton et al. 1997) and calculate an infinitesimal amount of flux dFo, observed at energy Eo, due to a patch on the disc which subtends a solid angle dΞ on the image of the disc at the observer (see Figs. 1 and 2):

dFo(Eo) = Io(Eo) dΞ = g^3 Ie(Ee) dΞ    (1)

where the redshift factor g = Eo/Ee, and the specific intensities in the observer's and emitter's frames, Io and Ie, are related through the relativistic invariant I/ν^3. For an emission line with rest energy Eint, we have Ie(Ee) = ε(re, µe) δ(Ee − Eint), where ε(re, µe) is the emissivity, which can be a function of the radius re and the angle µe at which the photon is emitted (as defined in Figure 1). The infinitesimal flux becomes

dFo(Eo) = g^4 ε(re, µe) δ(Eo − g Eint) dΞ    (2)

The total flux is obtained by integrating over the entire image of the disc on the observer's sky. We can write dΞ = dα dβ / ro^2, where (α, β) are the (x, y) coordinates of the image of the disc at an observer with coordinates (ro, θo) (see Figure 2), such that

Fo(Eo) = (1/ro^2) ∫∫ g^4 ε(re, µe) δ(Eo − g Eint) dα dβ    (3)

Photons emitted from the disc at some distance re from the hole are seen at coordinates (α, β) on this image. The (α, β) position of the image of a disc section is related to the conserved quantities λ and q, which describe the contributions to the photon's angular momentum from the radial, polar and azimuthal directions (de Felice & Preti 1999), via:
α = −λ / sin θo    (4)
β = ± [ q^2 − λ^2 cot^2 θo ]^(1/2)    (5)

For a thin, Keplerian disc, these constants of motion can be written in terms of the redshift factor g of the photon, and the radius re and angle µe at which the photon is emitted (as previously defined):

λ = (1/Ω) (1 − e^(−ψ)/g)    (6)
q = re µe / g    (7)

Here Ω describes the azimuthal velocity profile of the emitting region and e^(−ψ) is the 'redshift function' (Fanton et al. 1997; Martocchia 2000), which for a geometrically thin, Keplerian disc located in the equatorial plane are given by:

Ω = 1 / (a + re^(3/2))    (8)
e^ψ = [ 1 − (2/re)(1 − aΩ)^2 − (re^2 + a^2) Ω^2 ]^(−1/2)    (9)

Thus the problem reduces to finding the area on the observer's sky subtended by all parts of the disc which contribute to a given Eo. The photons (null geodesics) that link the accretion disc with the observer can only be found by determining the full general relativistic light travel paths. These null geodesics are given by solutions of the geodesic equations (Carter 1968; Misner, Thorne & Wheeler 1973; Chandrasekhar 1983), which can be obtained numerically (e.g. Karas, Vokrouhlicky & Polnarev 1992), but can also be given in terms of analytic functions (Rauch & Blandford 1994; Agol 1997; Cadez, Fanton & Calvani 1998), which enable them to be solved quickly and with arbitrary accuracy. We develop a technique based solely on the image of the accretion disc in the (α, β) plane, which defines the flux received by the observer, similar to that employed by Cadez, Calvani & Fanton (2003).
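As a concrete check of equations (6)-(9), the constants of motion and image-plane coordinates can be sketched in a few lines of Python. This is a minimal illustration in geometrized units (G = M = c = 1, radii in rg); all function and variable names are our own, not those of the code described in the text:

```python
import math

def keplerian_omega(r_e, a):
    # Eq. (8): Keplerian angular velocity of the disc material
    return 1.0 / (a + r_e ** 1.5)

def redshift_function(r_e, a):
    # Eq. (9): e^psi = [1 - (2/r)(1 - a*Omega)^2 - (r^2 + a^2)*Omega^2]^(-1/2)
    omega = keplerian_omega(r_e, a)
    return (1.0 - (2.0 / r_e) * (1.0 - a * omega) ** 2
            - (r_e ** 2 + a ** 2) * omega ** 2) ** -0.5

def photon_constants(r_e, mu_e, g, a):
    # Eqs. (6)-(7): conserved quantities lambda, q of the emitted photon
    omega = keplerian_omega(r_e, a)
    e_psi = redshift_function(r_e, a)
    lam = (1.0 - 1.0 / (e_psi * g)) / omega   # e^(-psi)/g = 1/(e^psi * g)
    q = r_e * mu_e / g
    return lam, q

def image_plane_coords(lam, q, theta_o):
    # Eqs. (4)-(5): position on the observer's sky; the sign of beta
    # distinguishes the near and far sides of the disc image
    alpha = -lam / math.sin(theta_o)
    beta_sq = q ** 2 - lam ** 2 / math.tan(theta_o) ** 2
    return alpha, (math.sqrt(beta_sq) if beta_sq >= 0.0 else None)
```

For a Schwarzschild disc (a = 0) at re = 6 rg, a photon with λ = 0 is received with g = e^(−ψ) ≈ 0.707, recovering the combined gravitational and transverse-Doppler redshift of the innermost stable orbit.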
This allows us to generate high resolution, accurate line profiles numerically while avoiding the issues surrounding the partial derivatives of the geodesic equations (Viergutz 1993). We use the analytic solutions of the geodesic equations as tabulated by Rauch & Blandford (1994) to find the complete set of light travel paths that link the accretion disc and the observer at (ro, θo). We sort these by redshift factor, and use adaptive gridding to find the boundaries on the (α, β)-plane of all lines of constant g. Two adjacent boundaries, gi and gi+1, therefore define the area of the redshift bin g = gi + (1/2)(gi+1 − gi), with width dg = gi+1 − gi, when projected onto the (α, β)-plane (as shown in Figure 3). We can simply determine the area of this region by dividing it up into a set of tessellating trapezoids, as shown in Figure 3, the area of each of which is given by a simple geometric formula. The final area of the redshift bin is determined by summing the contributions from all such trapezoids internal to (gi, gi+1). Each individual trapezoid is small, so that there is no significant change in re or µe across it (though this is not necessarily true across the total area dα dβ). The emissivity law can be convolved into the calculation by using the emission coordinates at the centre of each trapezoid to weight its area before performing the summation over all trapezoids. This approach allows us to calculate line profiles at high spectral resolution on timescales of a few minutes on a 2 GHz desktop PC. We have extensively tested the routines that calculate the null geodesic paths against those supplied by Eric Agol (Agol 1997), and have found them to be indistinguishable. We have also compared the line profiles generated by our code to those presented previously in the literature, in particular those generated by the code described in Fanton et al. (1997), and have again found them to be indistinguishable.
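The trapezoid summation step above can be sketched as follows. This is an illustrative version with hypothetical names, assuming the two bounding constant-g contours have already been resampled at common α positions, so the area between them is a strip of trapezoids:

```python
def binned_flux(alpha, beta_lo, beta_hi, weight):
    """Sum the area between two adjacent constant-g contours, sampled at the
    common alpha positions `alpha`, as a strip of trapezoids; `weight` holds
    the g^4 * emissivity factor evaluated at each sample point."""
    total = 0.0
    for i in range(len(alpha) - 1):
        d_alpha = alpha[i + 1] - alpha[i]
        # parallel sides of the trapezoid: local widths of the redshift band
        h0 = beta_hi[i] - beta_lo[i]
        h1 = beta_hi[i + 1] - beta_lo[i + 1]
        area = 0.5 * (h0 + h1) * d_alpha
        # centre-of-trapezoid weighting, as described in the text
        total += 0.5 * (weight[i] + weight[i + 1]) * area
    return total
```

Summing `binned_flux` over all redshift bins then yields the discretised form of equation (3), with each bin contributing at observed energy Eo = g Eint.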
HIGHLY RELATIVISTIC LINE PROFILES

Introduction

We have taken a disc extending from rms to rmax = 20 rg (beyond which strong gravitational effects become of diminishing importance) for both the Schwarzschild (a = 0, rms = 6 rg) and maximal Kerr (a = 0.998, rms = 1.235 rg) cases, for θo = 5°, 30°, 60° and 85°. We first consider the extent of lightbending effects in a Schwarzschild spacetime. Figure 4 shows the three-dimensional surface in (µe, re, g) for the complete set of light travel paths connecting the accretion disc to the observer. There is a considerable range of µe contributing to the observed emission at all inclinations. For low inclinations the effect is fairly uniform, with each radius contributing a similar range in µe, but with a systematic shift to larger emission angles (smaller µe) at smaller radii. By contrast, at higher inclinations the lightbending is strong enough to gravitationally lens the far side of the disc. This leads to a much larger range of µe contributing to the disc image at small radii. In all cases, lightbending means there is a range of µe which contributes to the observed disc emission, so that in general the line profile will depend on the angular distribution of the emitted flux.

Fig. 5 shows the corresponding surfaces for the extreme Kerr case. The disc now extends down to 1.235 rg, far closer to the corresponding event horizon than in the Schwarzschild case. This introduces a greater complexity to the geodesic surfaces. The range of emission angles runs from zero to unity in all cases, including the nearly face-on disc at 5°, which has important consequences for the calculation of the line profile.

To construct the relativistic line profile, we map these surfaces on to the (α, β) plane as discussed in the previous section, forming images of the accretion disc, as previously calculated by e.g. Cunningham & Bardeen (1973); Luminet (1979); Hollywood & Melia (1997); Fanton et al. (1997); Falcke, Melia & Agol (2000). In Figure 6 we present images of the Schwarzschild disc for the θo = 30° (top row) and θo = 85° (bottom row) cases (others provide no new qualitative information). Images on the left-hand side of the figure are coloured by values of the redshift factor g, as defined by the scale at the top of each image. By contrast, images on the right-hand side of the figure have each redshift bin coloured by its area on the observer's sky, i.e. g^4 dα dβ, again with the scale defined at the top of each image. Strong gravitational lensing effects can now clearly be seen in the high inclination images. Photons from the far side of the disc pass close to the black hole, so the disc image is strongly distorted (Matt, Perola & Stella 1993; Zakharov & Repin 2003). Since the area of the disc is magnified, its contribution to the observed flux should be large. However, we stress that the low inclination images are also affected by lightbending (see Figures 4a and 5a), though they are not magnified by gravitational lensing.
The form of the line profile is now determined by the flux image (representing the effects of strong gravity), together with the assumed form for the emissivity (determined by the energy release and radiative transfer processes), which is generally taken (ignoring any azimuthal dependence) as

ε(re, µe) = ǫ(re) f(µe)    (10)

While the flux image is a difficult numerical problem, it depends on well known physics. By contrast, the emissivity laws considered have rather simple forms, but are determined by the poorly known astrophysics of the disc. Of course, there are many other outstanding theoretical issues that can produce a substantial impact on the line profile, including (but not limited to) returning radiation or lightbending that can enhance the emissivity of the inner part of the disc (Cunningham 1975).

Comparison with the Diskline Model

The diskline code assumes a Schwarzschild metric (a = 0) and additionally that light travels in straight lines (so the angular emissivity term is irrelevant). In its XSPEC implementation it allows both an arbitrary power law ǫ(re) ∝ re^q and point source illumination. However, its analytic structure means that any radial emissivity law is easy to incorporate. We choose q = −3, as this is approximately the form of the gravitational energy release per unit disc area (see e.g. Zycki, Done & Smith 1989). Figure 7 shows our line profiles assuming f(µe) = 1 (no angular dependence of the emissivity) compared with those from the diskline code. Our new model matches the XSPEC diskline model very closely for a nearly face-on disc. Whilst the key difference between our model and diskline is the inclusion of lightbending effects, the impact of this is small at low inclinations if there is no angular dependence to the emissivity (but see Section 3.4).
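The separable emissivity of equation (10), with the angular laws considered in this paper, is simple to express in code. This is an illustrative sketch (not the XSPEC implementation); normalisation constants are omitted since the profiles are renormalised to one photon anyway:

```python
def f_constant(mu_e):
    # isotropic angular emissivity, f(mu) = 1
    return 1.0

def f_limb_darkened(mu_e):
    # standard limb darkening law used by the laor model, f(mu) = 1 + 2.06 mu
    return 1.0 + 2.06 * mu_e

def f_limb_brightened(mu_e):
    # limiting limb brightened case for optically thin material, f(mu) = 1/mu
    return 1.0 / mu_e

def radial_emissivity(r_e, q=-3.0):
    # power-law radial emissivity eps(r) ~ r^q; q = -3 approximates the
    # gravitational energy release per unit disc area
    return r_e ** q

def emissivity(r_e, mu_e, f=f_constant, q=-3.0):
    # Eq. (10): separable emissivity eps(r) * f(mu)
    return radial_emissivity(r_e, q) * f(mu_e)
```

Swapping `f` between these three laws while holding q = −3 fixed reproduces the comparison made in Fig. 9.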
By contrast, at high inclinations lightbending not only means that the line is formed from many different µe, but gravitational lensing enhances the flux from the far side of the disc. This lensing effect gives clear differences between our model and diskline. The lensing magnifies the image of the far side of the disc, which has velocity mostly tangential to the line of sight, so is not strongly Doppler shifted. This boosts the line profile at g ∼ 1 (see Matt, Perola & Stella 1993). Since the line profiles are all normalised to a single photon, this also makes the blue peak smaller.

In summary, the diskline model as incorporated into XSPEC produces line profiles which are accurate to ∼10% for inclinations of less than 30°. Obviously, if the inner disc edge rmin > rms then the lightbending effects become correspondingly smaller and the match between the two codes becomes even closer. At higher inclinations the differences between diskline and our code become larger due to the effects of gravitational lensing, which leads to an effective redistribution of flux between the blue peak and the centre of the line compared to that predicted from straight light travel paths.

Comparison with the Laor Model

By contrast, the laor code is written for extreme Kerr, and includes a standard limb darkening law f(µe) ∝ (1 + 2.06 µe). The code is based on a series of photon trajectory calculations, where the disc is split up into a set of rings of width dre at re. Each part of the ring radiates with total emissivity (radial plus angular) given simply by the limb darkening law (i.e.
no radial dependence, q = 0), and the line profile from that ring is built up from many light travel paths which connect the disc to the observer. This produces a series of transfer functions T(re, Eo − g Eint) at each radius, analogous to Figure 6a-d but including the limb darkening law. These tabulated transfer functions are read by the laor code in XSPEC and used to build a total line profile for any given radial emissivity:

Fo(Eo) = ∫∫ ε(re) T(re, Eo − g Eint) re dre dg

We compare this with our code, using a q = −3 emissivity for both, as in the diskline comparisons above. We include the same limb darkening law as used by laor, and the results (Figure 8) show that the overall match between our code and laor is good to ∼5-10%.

THE ROLE OF THE ANGULAR EMISSIVITY AND BLACK HOLE SPIN

The effect of applying a radial emissivity is straightforward. The transfer function describing all the relativistic effects from a given radial ring of the disc is unaffected, so the effect is simply to change the weighting of the line profile from each radial ring. By contrast, the effect of the angular distribution is far more subtle. A given radial ring on the disc can contribute to the line profile from a range of emission angles. The relative weighting of these is determined by the angular emissivity, so it forms part of the calculation of the transfer function itself.

Figure 10. As in Fig. 9 with the disc now extending from 1.235-400 rg. There is still a ∼25% change in the blue wing height and a significant change in red wing slope for the different angular emissivities, despite the inclusion of the outer disc regions.

Figure 11. As in Fig. 9 with the disc now extending from 1.235-6 rg. The additional magenta line is for a limb darkened angular emissivity with a more centrally concentrated radial emissivity, ∝ re^−4.5. This is very similar to the blue line profile derived from a very different radial emissivity, ∝ re^−3, with a limb brightened angular emissivity.
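The laor-style reconstruction of a total profile from per-ring transfer functions can be sketched as below. This is a discretised illustration under our own assumptions (a uniform energy grid, a `transfer` array of our own devising with one row per ring), not the actual XSPEC tables or code:

```python
import numpy as np

def line_profile(energies, radii, transfer, radial_emissivity):
    """Weight tabulated per-ring transfer functions by a radial emissivity.

    transfer[i, j] ~ T(r_i, E_j): the line profile from a ring at r_i with
    unit emissivity (encoding all the relativistic effects). The total
    profile is the emissivity-weighted sum over rings, with r*dr accounting
    for the ring area, then normalised to contain one photon.
    """
    dr = np.gradient(radii)
    weights = radial_emissivity(radii) * radii * dr
    profile = weights @ transfer
    de = energies[1] - energies[0]        # uniform energy grid assumed
    norm = profile.sum() * de
    return profile / norm if norm > 0 else profile
```

Because the angular emissivity is baked into each T(re, Eo − g Eint) when the tables are built, only the radial weighting can be varied at this stage, which is exactly the limitation discussed in this section.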
Different angular emissivity laws can have striking effects on the form of the relativistic line profile, which we illustrate in Fig. 9 for a maximal Kerr geometry (a = 0.998) with the disc extending (as previously) from 1.235-20 rg and θo = 30°. The line profiles here all implement the standard radial emissivity law of re^−3. However, we now compare a range of angular emissivity laws, these being (from top to bottom at the blue peak in Fig. 9) the standard limb darkening law (as discussed in §3.3), followed by the constant angular emissivity case (as used in §3.2). An ionized disc could also be limb brightened, with the probable limiting case of f(µ) ∝ 1/µ, as expected from optically thin material, shown as the bottom line in Fig. 9. There is a ∼35% difference in the height of the blue peak depending on the form of the angular emissivity used.

However, such a limited range of radii is probably not very realistic. The disc should extend out to much greater distances from the black hole, where the relativistic effects (including lightbending) are less extreme. However, realistic emissivities strongly weight the contribution from the innermost regions, so the effective dilution of the relativistic effects by including the outer disc is not overwhelming. Fig. 10 shows the line profiles generated using the same angular emissivity laws for a disc extending from 1.235-400 rg, again with θo = 30°.

Figure 12. As in Fig. 9 but with the disc extending from 6-400 rg in an extreme Kerr (solid line) and Schwarzschild (dashed line) spacetime. The differences between the line profiles produced for the same sized disc in the different assumed spacetimes are of order ∼5% for a given angular emissivity. The effect of changing the angular emissivity is also similarly small (∼5-10%). This contrasts with the much larger effects seen in the extreme Kerr metric for a disc extending down to 1.235 rg, where lightbending is much more important (Fig. 9).
There are still significant differences in the line profiles, with a ∼25% difference in the height of the blue peak, while the red wing slope changes from Fo(Eo) ∝ Eo^3.5 (limb darkened) to ∝ Eo^2.5 (limb brightened). Despite the expectation of an extended disc, some recent observational studies (e.g. Reynolds et al. 2004) have tentatively suggested that the disc is very small, from ∼1.235-6 rg. This enhances the importance of lightbending. Fig. 11 shows the line profiles for a disc extending from 1.235-6 rg using the different angular emissivity laws of Fig. 9. The blue peak height differences are ∼40%, and the red wing slopes are different. For comparison we also show a limb darkened profile obtained from a very different radial emissivity of re^−4.5. This is very similar to the extreme limb brightened profile obtained from the re^−3 radial weighting. We caution that uncertainties in the angular distribution of the line emissivity can change the expected line profile due to lightbending effects even at low/moderate inclinations, and that this can affect the derived radial emissivity.

Currently, the only available models in XSPEC have either zero or maximal spin. A zeroth order approximation to spacetimes with different spins is to use the maximal Kerr results but with a disc whose inner radius is given by the minimum stable orbit for the required value of a (e.g. Laor 1991). We test this for the most extreme case of a = 0, modelled by a maximal Kerr spacetime with rmin = 6 rg. Fig. 12 compares this with a true Schwarzschild calculation for a disc extending from 6-400 rg with θo = 30° for a range of angular emissivities. The differences between the spacetimes (for a given angular emissivity) are at most ∼5%. This is roughly the same order as the effect of changing the angular emissivity, which is much reduced here compared to Fig. 9 due to the larger rmin.
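The minimum stable orbit radius used in this zeroth-order approximation follows from the standard Bardeen, Press & Teukolsky (1972) expressions; a short sketch (prograde orbits, radii in rg, dimensionless spin a):

```python
def r_isco(a):
    """Radius of the marginally stable (prograde) circular orbit, in units of
    rg = GM/c^2, from the Bardeen, Press & Teukolsky (1972) formulae."""
    z1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = (3.0 * a * a + z1 * z1) ** 0.5
    return 3.0 + z2 - ((3.0 - z1) * (3.0 + z1 + 2.0 * z2)) ** 0.5

# r_isco(0.0)   -> 6.0 (Schwarzschild)
# r_isco(0.998) -> ~1.24 (the maximal Kerr inner radius used in the text)
```

Setting rmin = r_isco(a) in a maximal Kerr calculation reproduces the approximation tested in Fig. 12.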
Assumptions about both spin and angular emissivity become somewhat more important for smaller outer disc radii. Fig. 13 shows this for a disc between 6-20 rg (directly comparable to Fig. 7).

CONCLUSIONS

Recent observational studies have provided evidence for highly broadened fluorescent iron Kα lines. While a variety of line profiles are seen (e.g. Lubinski & Zdziarski 2001), there are some objects where the line implies that there is material down to the last stable orbit in a maximally spinning Kerr spacetime (most notably MCG-6-30-15: Wilms et al. 2001). However, the strong gravity codes generally used to model these effects are now over a decade old. Increased computer power means that it is now possible to improve on these models. We describe our new code to calculate these effects, which uses fully adaptive gridding to map the image of the disc at the observer using the analytic solutions of the light travel paths. This is a very general approach, so the code can easily be modified to incorporate different emission geometries. We compare the results of our new code with those from diskline and laor (publicly available in the XSPEC spectral fitting package) for Schwarzschild and extreme Kerr spacetimes. These previous models are accurate to ∼10% with realistic (∝ re^−3) radial emissivities. However, they make specific assumptions regarding the angular dependence of the emitted flux, which may or may not be valid. Lightbending is always important for a disc which extends down below 20 rg, in that the image of the disc at the observer always consists of a range of different emission angles. This can produce significant changes to the derived line profile, especially in extreme Kerr spacetimes. Whilst calculating strong gravitational effects is a difficult numerical problem, the underlying physics is well known.
By contrast, the angular emissivity is an astrophysical problem, and is not at all well known, as it depends on the ionization state of the disc as a function of both height and radius. Before we can use the line profiles to provide a sensitive test of General Relativity and probe the underlying physics, we will need a much better understanding of the astrophysics of accretion. This code will be publicly released for inclusion as a convolution model in the XSPEC spectral fitting package. This will include arbitrary spin and inner and outer disc radii, as well as allowing both angular and radial emissivities to be specified. After this paper was submitted we learnt of the independent work by Dovciak, Karas & Yaqoob (2004), which also develops a new strong gravity code. Their results match very closely with those presented here.

Figure 1. The coordinate system used for the disc. The emission is defined in the rest frame of the disc material. The polar and azimuthal emission angles Θ, Φ are obtained by taking the dot-products of the photon four-momentum with the basis vectors of this frame, where µe = cos Θ. This disc frame can be connected to the frame which co-rotates with the black hole spacetime via a simple boost which depends on the velocity structure of the disc.

Figure 2. Diagram showing the link between the observer's frame of reference and the global coordinate system defined by the black hole.

Figure 3. Heavy blue lines denote adjacent contours of constant redshift, gi < gi+1, on the observer's sky (the (α, β)-plane) that define the area of the redshift bin g = gi + (1/2)(gi+1 − gi) with width dg = gi+1 − gi. Red narrow lines show examples of the divisions used to create a set of meshed trapezoids that enable the area of the redshift bin to be determined. For clarity, this mesh is far coarser than that used in the calculations.
Figure 4. Surfaces in the (µe, re, g) parameter space describing geodesics that reach an observer at a given inclination for a standard accretion disc around a Schwarzschild black hole. Notice that, for every (re, g) pair, there are two geodesics that reach any given observer, corresponding respectively to geodesics emitted from the side of the disc closest to the observer (lower surface) and geodesics emitted from the opposite side of the disc (upper surface). These are the two geodesics referred to by Cunningham (1975).

Figure 5. As in Figure 4 for a maximal (a = 0.998) Kerr black hole. These surfaces exhibit a far greater complexity than those in the Schwarzschild case. The range of accessible redshift is increased for a given disc-observer system, whilst the range of emission angles is from zero to unity for all inclinations.

Figure 6. Redshift images (left-hand panels) and flux images (right-hand panels) of the accretion disc on the (α, β) plane for a Schwarzschild black hole, for the θo = 30° (top row) and θo = 85° (bottom row) cases. Redshift images are coloured by the associated values of g as measured by the distant observer. Flux images are coloured by the g^4 ro^2 dΞ = g^4 dα dβ component of the relativistic line profile. Note the appearance of strong light bending effects in the θo = 85° case, as previously reported by Matt, Perola & Stella (1993) and Zakharov & Repin (2003).

Figure 7. Comparison of the relativistic line profile computed by our model (red solid line) with that computed by the XSPEC diskline model (blue dashed line) for ǫ(re) ∝ re^−3 and f(µe) = 1. At inclinations of < 30°, the profiles match to within ∼10%, but the increasing importance of lightbending (which is not included in the diskline code) gives a 40% discrepancy in the profile shapes for inclinations > 60°. In this and all subsequent figures the line profiles are normalised such that they contain one photon, and all our results are unsmoothed.

Figure 8. Comparison of the relativistic line profile computed by our model (red solid line) with that computed by the XSPEC laor model (blue dashed line) for ǫ(re) ∝ re^−3 and f(µe) ∝ (1 + 2.06 µe). The profiles produced by the two models match to within 5-10%.
9Comparison of the relativistic line profiles generated by our model with (a) ǫ (re) ∝ r −3 e , f (µe) = 1 (red lines), ( Figure 13 . 13As inFig. 12but with the disc extending from 6 − 20rg . The differences now are of order 15-20%. ACKNOWLEDGEMENTSWe are grateful to an anonymous referree for useful comments and suggestions on previous versions of this manuscript. We would also like to thank E. Agol and M. Calvani for useful discussions and encouragement. . E Agol, Santa BarbaraUniv. CaliforniaPh.D. thesisAgol E.; Ph.D. thesis, 1997, Univ. California, Santa Bar- bara. . D R Ballantyne, R R Ross, A C Fabian, MNRAS. 20110Ballantyne, D.R.; Ross, R.R.; Fabian, A.C. MNRAS 201, 327, 10. Physics Fundamentals of Luminous Accretion Disks Around Black Holes in "Accretion Disks, Jets, and High Energy Phenomena in Astrophysics. O M Blaes, Proceedings of Session LXXVIII of Les Houches Summer School. Session LXXVIII of Les Houches Summer SchoolFranceBlaes, O.M.; Physics Fundamentals of Luminous Accretion Disks Around Black Holes in "Accretion Disks, Jets, and High Energy Phenomena in Astrophysics", Proceedings of Session LXXVIII of Les Houches Summer School, Cha- monix, France, August 2002. . A Cadez, C Fanton, M ; Calvani, Newa, 3647Cadez, A.; Fanton, C.; Calvani, M.; NEWA Vol. 3, No. 8, 647. . A Cadez, M Calvani, C Fanton, Mmsai, 74Carter, BPhRv, 1968, 174.1559CCadez, A.; Calvani, M.; Fanton, C.; MmSAI Vol. 74, 2003. Carter, B.; PhRv, 1968, 174.1559C. S Chandrasekhar, The Mathematical Theory of Black Holes. N.Y.Oxford University PressChandrasekhar, S.; The Mathematical Theory of Black Holes, Oxford University Press, N.Y., 1983. . C T Cunningham, ApJ. 202788Cunningham, C.T.; ApJ, 1975, 202, 788 . C T Cunningham, J Bardeen, ApJ. 183237Cunningham, C.T.; Bardeen, J. M.; ApJ, 1973, 183, 237 . F De Felice, G Preti, Class. Quantum Grav. 162929de Felice, F.; Preti, G..; Class. Quantum Grav. 1999, 16, 2929 . M Dovciak, V Karas, T Yaqoob, astro-ph/0403541ApJ SS. 
Dovciak M., Karas V., Yaqoob T., 2004, ApJS, accepted (astro-ph/0403541)
Ebisawa K., Mitsuda K., Hanawa T., 1991, ApJ, 367, 213
Ebisawa K., Makino F., Mitsuda K., Belloni T., Cowley A. P., Schmidtke P. C., Treves A., 1993, ApJ, 403, 684
Fabian A. C., Rees M. J., Stella L., White N. E., 1989, MNRAS, 238, 729
Fabian A. C., Vaughan S., 2003, MNRAS, 340, L28
Falcke H., Melia F., Agol E., 2000, ApJ, 528, L13
Fanton C., Calvani M., de Felice F., Cadez A., 1997, PASJ, 49, 159
Gierlinski M., Maciolek-Niedzwiecki A., Ebisawa K., 2001, MNRAS, 325, 1253
Hollywood J. M., Melia F., 1997, ApJS, 112, 423
Iwasawa K., Fabian A. C., Reynolds C. S., Nandra K., Otani C., Inoue H., Hayashida K., Brandt W. N., Dotani T., Kunieda H., Matsuoka M., Tanaka Y., 1996, MNRAS, 282, 1038
Karas V., Martocchia A., Subr L., 2001, PASJ, 53, 189
Karas V., Vokrouhlicky D., Polnarev A. G., 1992, MNRAS, 259, 569
Kaspi S. et al., 2002, ApJ, 574, 643
Laor A., 1991, ApJ, 376, 90
Laor A., Netzer H., 1989, MNRAS, 238, 897
Laor A., Netzer H., Piran T., 1990, MNRAS, 242, 560
Lubinski P., Zdziarski A. A., 2001, MNRAS, 323, 37
Luminet J.-P., 1979, A&A, 75, 228
Makishima K., Kubota A., Mizuno T., Ohnishi T., Tashiro M., Aruga Y., Asai K., Dotani T., Mitsuda K., Ueda Y., Uno S., Yamaoka K., Ebisawa K., Kohmura Y., Okada K., 2000, ApJ, 535, 632
Martocchia A., 2000, PhD thesis, SISSA-ISAS, Trieste
Martocchia A., Karas V., Matt G., 2000, MNRAS, 312, 817
Matt G., Perola G. C., Stella L., 1993, A&A, 267, 643
Misner C. S., Thorne K. S., Wheeler J. A., 1973, Gravitation. W. H. Freeman
Nayakshin S., Kazanas D., Kallman T. R., 2000, ApJ, 537, 833
Rauch K. P., Blandford R. D., 1994, ApJ, 421, 46
Reynolds C. S., Begelman M. C., 1997, ApJ, 488, 109
Reynolds C. S., Wilms J., Begelman M. C., Staubert R., Kendziorra E., 2004, MNRAS, in press (astro-ph/0401305)
Ross R. R., Fabian A. C., Young A. J., 1999, MNRAS, 306, 461
Tanaka Y., Nandra K., Fabian A. C., Inoue H., Otani C., Dotani T., Hayashida K., Iwasawa K., Kii T., Kunieda H., Makino F., Matsuoka M., 1995, Nature, 375, 659
Viergutz S. U., 1993, A&A, 272, 355
Wilms J., Reynolds C. S., Begelman M. C., Reeves J., Molendi S., Staubert R., Kendziorra E., 2001, MNRAS, 328, L27
Zakharov A. F., Repin S. V., 2003, A&A, 406, 7
Zycki P. T., Done C., Smith D. A., 1998, MNRAS, 301, 231
HARMONIC FUNCTIONS AND THE MASS OF 3-DIMENSIONAL ASYMPTOTICALLY FLAT RIEMANNIAN MANIFOLDS

Hubert L. Bray, Demetre P. Kazaras, Marcus A. Khuri, and Daniel L. Stern

Abstract. An explicit lower bound for the mass of an asymptotically flat Riemannian 3-manifold is given in terms of linear growth harmonic functions and scalar curvature. As a consequence, a new proof of the positive mass theorem is achieved in dimension three. The proof has parallels with both the Schoen-Yau minimal hypersurface technique and Witten's spinorial approach. In particular, the role of harmonic spinors and the Lichnerowicz formula in Witten's argument is replaced by that of harmonic functions and a formula introduced by the fourth named author in recent work, while the level sets of harmonic functions take on a role similar to that of the Schoen-Yau minimal hypersurfaces.

DOI: 10.1007/s12220-022-00924-0
arXiv: 1911.06754 (https://arxiv.org/pdf/1911.06754v1.pdf)
Introduction

Let (M, g) be a smooth connected 3-dimensional asymptotically flat Riemannian manifold with nonnegative scalar curvature R_g ≥ 0. Asymptotic flatness means that there is a compact set K ⊂ M such that M \ K = ∪_{k=1}^{k₀} M_end^k, where the ends M_end^k are pairwise disjoint and diffeomorphic to the complement of a ball R³ \ B₁, and there exists in each end a coordinate system satisfying

(1.1)    |∂^l (g_{ij} − δ_{ij})(x)| = O(|x|^{−q−l}),

for some q > 1/2 and for l = 0, 1, 2. The scalar curvature is assumed to be integrable, R_g ∈ L¹(M), so that the ADM mass of each end is well-defined [1] and given by

(1.2)    m = lim_{r→∞} (1/16π) ∫_{S_r} Σ_i ( g_{ij,i} − g_{ii,j} ) υ^j dA,

where υ is the unit outer normal to the coordinate sphere S_r of radius r = |x| and dA denotes its area element. The positive mass theorem asserts that this parameter has a sign, and it characterizes Euclidean space as the unique manifold in this class with vanishing mass.

Theorem 1.1. Let (M, g) be a complete asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and mass m. Then m ≥ 0, and m = 0 if and only if (M, g) is isometric to Euclidean space (R³, δ).
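As a concrete illustration (our own addition, not part of the paper), the flux integral (1.2) can be evaluated numerically for the Schwarzschild slice in isotropic coordinates, g_ij = (1 + m/2r)⁴ δ_ij, for which the limit recovers the mass parameter m; at finite radius the surface integral evaluates to m(1 + m/2r)³, so the estimate below at r = 100 should be slightly above m = 1. The function names and the midpoint quadrature are illustrative choices:

```python
import numpy as np

def metric(x, m=1.0):
    """Schwarzschild slice in isotropic coordinates: g_ij = (1 + m/2r)^4 delta_ij."""
    r = np.linalg.norm(x)
    return (1.0 + m / (2.0 * r)) ** 4 * np.eye(3)

def adm_mass(r, m=1.0, n=32, h=1e-3):
    """Midpoint quadrature of (1/16pi) * int_{S_r} (g_ij,i - g_ii,j) nu^j dA."""
    total = 0.0
    dth, dph = np.pi / n, 2.0 * np.pi / n
    for a in range(n):
        th = (a + 0.5) * dth
        for b in range(n):
            ph = (b + 0.5) * dph
            nu = np.array([np.sin(th) * np.cos(ph),
                           np.sin(th) * np.sin(ph),
                           np.cos(th)])            # unit outer normal to S_r
            x = r * nu
            dg = np.empty((3, 3, 3))               # dg[k] = partial_k of the metric
            for k in range(3):
                e = np.zeros(3); e[k] = h
                dg[k] = (metric(x + e, m) - metric(x - e, m)) / (2.0 * h)
            integrand = sum((dg[i][i, j] - dg[j][i, i]) * nu[j]
                            for i in range(3) for j in range(3))
            total += integrand * r**2 * np.sin(th) * dth * dph
    return total / (16.0 * np.pi)

print(adm_mass(r=100.0, m=1.0))  # close to 1.015 = (1 + 1/200)^3, tending to m = 1 as r grows
```

The O(r^{-1}) gap between the finite-radius value and m is exactly the kind of decaying correction that the asymptotic conditions (1.1) control.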
This theorem was first established in the late 1970's by Schoen and Yau [13,14] via a contradiction argument, based on the existence of stable minimal hypersurfaces along with manipulations of the stability inequality. Shortly afterwards, Witten [12,16] found an alternate proof in which the mass is expressed as a sum of squares; this proof relies on the existence of harmonic spinors and the Lichnerowicz formula. More recently, two other proofs have been given in the general case. One, by Huisken and Ilmanen [7], arose out of their study of the Penrose inequality and follows from the existence of a weak version of inverse mean curvature flow and the monotonicity of the Hawking mass. The other is a Ricci flow proof due to Li [8]. Further proofs have been given in special cases, such as that of Brill [4] in the axisymmetric setting. It should also be noted that Lohkamp [17] showed how the positive mass theorem can be reduced to the nonexistence of positive scalar curvature metrics on the connected sum N # T of a compact manifold N with a torus T. See [6] for a survey of these topics. Furthermore, we point out the articles of Schoen and Yau [15] and Lohkamp [18], which address the higher dimensional case.

The purpose of the current article is to give an explicit lower bound for the mass in terms of linear growth harmonic functions and scalar curvature. This approach is based on an integral inequality due to Stern [19], and leads to a new and relatively simple proof of Theorem 1.1. Associated with each asymptotic end M_end there is a corresponding exterior region M_ext ⊃ M_end, which is diffeomorphic to the complement of a finite number of balls (with disjoint closures) in R³ and has minimal boundary [7, Lemma 4.1].

Theorem 1.2. Let (M_ext, g) be an exterior region of a complete asymptotically flat Riemannian 3-manifold (M, g) with mass m.
Let u be a harmonic function on (M_ext, g) satisfying Neumann boundary conditions at ∂M_ext, and which is asymptotic to one of the asymptotically flat coordinate functions of the associated end. Then

(1.3)    m ≥ (1/16π) ∫_{M_ext} ( |∇²u|²/|∇u| + R_g |∇u| ) dV.

In particular, if the scalar curvature is nonnegative then m ≥ 0. Furthermore, if m = 0 then (M, g) ≅ (R³, δ).

Two approaches to the positive mass theorem will be presented within the context of the harmonic level set technique. They differ in their handling of the exterior region boundary, and in their use of the asymptotically flat geometry. In the first method, Neumann boundary conditions are imposed on ∂M_ext in order to deal with boundary terms appearing in Stern's integration formula [19], while in the second method the Mantoulidis neck construction [10] is used to cap off the boundary spheres so that the resulting manifold is diffeomorphic to R³ and still possesses nonnegative scalar curvature. Within the asymptotic end, harmonic coordinates are employed along with cylindrical domains in the first approach to extract the mass and compute total geodesic curvatures. The second approach, on the other hand, utilizes a density theorem to reduce the asymptotic geometry to that of Schwarzschild, where the analysis is then performed on coordinate spheres.

Preparing the Data

Within the context of the 3-dimensional positive mass theorem, simplifications of the asymptotics and topology may be assumed without loss of generality. More precisely, Schoen and Yau [14] showed that metrics with harmonic asymptotics are dense in the relevant class of metrics, and Bray [2] (see also [5, Proposition 3.3]) extended this to show that harmonic asymptotics may in fact be replaced with Schwarzschild asymptotics. As for the topology of M, one may consider the portion of M outside the outermost minimal surface, and fill in the resulting spherical holes using work of Mantoulidis [10] and the Miao smoothing [11].
This procedure allows one to reduce the topology of M to R³. It should be noted that this reduction is specific to dimension 3.

Proposition 2.1. Let (M, g) be a smooth 3-dimensional complete asymptotically flat Riemannian manifold having nonnegative scalar curvature R_g ≥ 0, and with mass m of a designated end M⁺_end. Given ε > 0, there exists a smooth 3-dimensional complete asymptotically flat Riemannian manifold (M̄, ḡ) with nonnegative scalar curvature R_ḡ ≥ 0 satisfying the following properties.

(1) The underlying manifold M̄ is diffeomorphic to R³.
(2) The mass m̄ of the single end M̄_end satisfies |m − m̄| < ε.
(3) In the asymptotic coordinates of M̄_end, ḡ = (1 + m̄/2r)⁴ δ.

Proof. By passing to the orientable double cover if necessary, we may assume that M is orientable. Moreover, by applying an appropriate conformal deformation with conformal factor approximating 1, such that the deformed mass differs from the original by an arbitrarily small amount, we may assume that (M, g) has positive scalar curvature R_g > 0 everywhere. Let S ⊂ M denote the trapped region in the sense of [7]. If S = ∅ then M is diffeomorphic to R³ [7]. Otherwise, the exterior region M⁺ associated with M⁺_end satisfies

(2.1)    M⁺ = R³ \ ∪_{i=1}^k B_i,

where the spheres ∪_i S²_i = ∪_i ∂B_i are homologically area outer-minimizing. Since each submanifold (S²_i, γ_i = g|_{S²_i}) is a 2-sided stable minimal surface in an ambient space of positive scalar curvature, the principal eigenvalue of −Δ_{γ_i} + K_{γ_i} is positive, where K_{γ_i} denotes Gaussian curvature. The hypotheses of [10, Corollary 2.2.13] are then satisfied, so that for each i = 1, ..., k there is a Riemannian 3-ball (D_i, g_i) with positive scalar curvature, minimal boundary, and satisfying ∂(D_i, g_i) ≅ (S²_i, γ_i). Glue in these 3-balls to form

(2.2)    M̄ = M⁺ ∪_S ( ∪_{i=1}^k D_i ),

and equip M̄ with a C^{0,1}-Riemannian metric that agrees with g on M⁺ and with g_i on each D_i; see Figure 1.
Next, smooth a tubular neighborhood of S, followed by a conformal deformation as in [11, Sections 3 & 4], to obtain an asymptotically flat metric g̃ on M̄ with nonnegative scalar curvature and mass m̃ satisfying |m − m̃| < ε/2. Now apply Bray's density result [5, Proposition 3.3] to g̃ to produce the desired metric ḡ with mass m̄ satisfying |m̄ − m̃| < ε/2. □

Remark 2.2. In Proposition 2.1, the conclusion that M̄ is diffeomorphic to R³ relies on deep results outlined in [7]. In light of this, it is worth pointing out that ultimately we do not require the full strength of this topological simplification. Indeed, the only time this portion of Proposition 2.1 is utilized is in the proof of Lemma 3.1, where only the triviality of H₂(M̄; Z) is needed. This weaker simplification may be achieved via more elementary means. By following the arguments of [6, page 140], there exists M̃⁺ containing M⁺_end whose boundary consists of minimal spheres and which satisfies H₂(M̃⁺, ∂M̃⁺; Z) = 0. Then filling in with discs as above yields the desired conclusion.

A 3-dimensional Riemannian manifold satisfying points (1) and (3) of Proposition 2.1 will be referred to as Schwarzschildian. This proposition allows the proof of Theorem 1.1 to be reduced to the following Schwarzschildian case, which will be established in Section 5.

Theorem 2.3. The conclusions of Theorem 1.1 hold for Schwarzschildian manifolds with nonnegative scalar curvature.

Let (M, g) be Schwarzschildian, so that in the asymptotic end we may write g = w⁴δ with w = 1 + m/2r, and let

(3.1)    L_g = Δ_g − (1/8) R_g

be the conformal Laplacian. According to the conformal invariance of this operator,

(3.2)    L_g v = w^{−5} L_δ (w v)

for any function v. Let ℓ(x) = a_i x^i be a linear function in the asymptotically flat coordinates {x^i}, i = 1, 2, 3, on M_end. Since R_g ≡ 0 in M_end, it follows that

(3.3)    Δ_g( ℓ w^{−1} ) = L_g( ℓ w^{−1} ) = w^{−5} L_δ ℓ = w^{−5} Δ_δ ℓ = 0.

We can now find harmonic functions on M with the following prescribed linear asymptotics. Given a_i, there exists a constant a such that

(3.4)    Δ_g u = 0 on M,    u(x) = ( a_i x^i ) / ( 1 + m/2r ) + a/r + O₂(r^{−2}) in M_end,

where the notation v = O_l(r^{−k}) asserts that |∂^j v| ≤ C r^{−k−j} for j ≤ l.
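The identity (3.3), that ℓw^{-1} is g-harmonic for the conformally flat metric g = w⁴δ with flat-harmonic w, admits a quick symbolic check. The sketch below (our own, not from the paper) uses the coordinate formula Δ_g f = w^{-6} ∂_i(w² ∂_i f), valid for g = w⁴δ in three dimensions, taking ℓ = x:

```python
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
w = 1 + m / (2 * r)                     # flat-harmonic conformal factor

def laplace_g(f):
    """Laplace-Beltrami operator of g = w^4 * delta on R^3:
    sqrt|g| = w^6 and g^{ij} = w^{-4} delta^{ij}, hence
    Delta_g f = w^{-6} * sum_i d_i(w^2 * d_i f)."""
    return sum(sp.diff(w**2 * sp.diff(f, v), v) for v in (x, y, z)) / w**6

print(sp.simplify(laplace_g(x / w)))    # -> 0, confirming (3.3) for ell = x
```

By linearity, the same cancellation holds for any ℓ = a_i x^i, which is what makes the leading term of (3.4) the natural model for an asymptotically linear harmonic function.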
To see this, let u₀ ∈ C^∞(M) be any smooth function satisfying u₀ ≡ ℓw^{−1} in M_end, and set f = −Δ_g u₀. Notice that (3.3) implies f ≡ 0 in M_end. By a standard argument [13, Lemma 3.2], there exists a function u₁ ∈ C^∞(M) solving

(3.5)    Δ_g u₁ = f on M,    u₁(x) = a/r + O₂(r^{−2}) in M_end,

for some constant a. The desired solution of (3.4) is u = u₀ + u₁.

Lemma 3.1. There exists a unique solution u of (3.4). Moreover, every regular level set of u is connected and noncompact, with a single end.

Proof. The discussion preceding the lemma establishes the existence of the solutions u, and uniqueness follows from the maximum principle. For such an asymptotically linear harmonic function u, let t be a regular value of u and consider the level set Σ_t = u^{−1}(t). Suppose that there is a compact connected component Σ′_t ⊂ Σ_t. Notice that Σ′_t is a properly embedded submanifold and is 2-sided (has trivial normal bundle). Since M ≅ R³ has trivial homology, Σ′_t must bound a compact region of M. By uniqueness of solutions to the Dirichlet problem for harmonic functions, u ≡ t on this region. However, this contradicts the assumption that t is a regular value. It follows that all components of Σ_t are noncompact. Furthermore, since it is properly embedded, Σ_t is a closed subset of M. Thus if any component of Σ_t stays within M_r, the compact region bounded by the coordinate sphere S_r ⊂ M_end, it must be compact, which is a contradiction. We conclude that each component must extend outside S_r for all r. The asymptotics of u imply that there exists a constant C such that, for all sufficiently large r, the level set Σ_t lies within the slab {x ∈ M \ M_r | t − C < ℓ(x) < t + C}. More precisely, the implicit function theorem shows that Σ_t is represented uniquely in this region as a graph over the plane t = ℓ(x). It follows that Σ_t is connected and has a single end modeled on R² \ B₁. □

Harmonic coordinates. In the general case of an asymptotically flat 3-manifold (M, g), not necessarily Schwarzschildian, consider the exterior region M_ext associated with a given end M_end.
Let y^i, i = 1, 2, 3, denote the given asymptotically flat coordinate system in M_end. The analysis of [1, Theorem 3.1] may be appropriately modified in order to produce harmonic coordinates satisfying Neumann boundary conditions. That is, there exist functions x^i ∈ C^∞(M_ext) satisfying

(3.6)    Δ_g x^i = 0 on M_ext,    ∂_υ x^i = 0 on ∂M_ext,    |x^i − y^i| = o(|y|^{1−q}) as |y| → ∞,

where q is the order of asymptotically flat decay in (1.1). This decay remains valid for the harmonic coordinates, that is,

(3.7)    |∂^l (g_{ij} − δ_{ij})(x)| = O(|x|^{−q−l}),    l = 0, 1, 2.

Harmonic coordinates are particularly well suited for studying the mass [1], and will play an important role in the computation of asymptotic boundary terms appearing in the integral inequalities of Section 4 below.

Relating Scalar Curvature to Level Set Geometry

The purpose of this section is to obtain integral inequalities for the scalar curvature of a compact Riemannian manifold equipped with a harmonic function, building on the techniques introduced by the fourth named author in [19]. Note that our setting is slightly different from that of [19], which studies closed 3-manifolds with harmonic maps to S¹, while we work with harmonic functions on compact manifolds with boundary, where additional boundary conditions are needed. As in [19], the first step in obtaining the relevant identities is to apply the Gauss equations to extract scalar curvature on a regular level set of a harmonic function. Note that in the next result the dimension is not restricted to three.

Lemma 4.1. Suppose that (M, g) is a Riemannian manifold and u : M → R is harmonic with regular level set Σ. Then, on Σ, the following identity holds:

(4.1)    Ric(∇u, ∇u) = (1/2)|∇u|²( R_g − R_Σ ) + |∇|∇u||² − (1/2)|∇²u|²,

where R_g and R_Σ denote the respective scalar curvatures.

Proof. Since Σ is a regular level set, its unit normal is given by ν = ∇u/|∇u|.
Taking two traces of the Gauss equations then yields

(4.2)    R_g − 2 Ric( ∇u/|∇u|, ∇u/|∇u| ) = R_Σ + |II|² − H²,

where H and II are the mean curvature and second fundamental form of Σ. The second fundamental form is given by II = ∇²_Σ u / |∇u|, where ∇²_Σ u denotes the Hessian of u restricted to TΣ ⊗ TΣ. It follows that

(4.3)    |II|² = |∇u|^{−2}( |∇²u|² − 2|∇|∇u||² + [∇²u(ν, ν)]² ),

and because u is harmonic,

(4.4)    H = Tr_Σ II = |∇u|^{−1}( Tr_g ∇²u − ∇²u(ν, ν) ) = −|∇u|^{−1} ∇²u(ν, ν).

Combining equations (4.3) and (4.4) produces

(4.5)    |II|² − H² = |∇u|^{−2}( |∇²u|² − 2|∇|∇u||² ).

Inserting this into (4.2) gives the desired result. □

The formula of Lemma 4.1 will be combined with Bochner's identity and integrated by parts over a compact manifold with boundary, while applying the coarea formula with harmonic level sets. For a function u : Ω → R on a compact manifold Ω, let ū and u̲ be the maximum and minimum values of u, respectively. The following computation plays a key role in both of our approaches for obtaining lower bounds on the ADM mass. We remark that related computations for S¹-valued harmonic maps with homogeneous Neumann condition can be found in the paper [3], where several applications to the geometry of compact 3-manifolds are obtained.

Proposition 4.2. Let (Ω³, g) be a 3-dimensional oriented compact Riemannian manifold with boundary decomposed into ∂Ω = P₁ ⊔ P₂. Let u : Ω → R be a harmonic function satisfying the Neumann condition ∂_υ u ≡ 0 on P₁ and the nondegeneracy condition |∇u|_{P₂}| > 0 on P₂. Then

(4.6)    ∫_{u̲}^{ū} [ ∫_{Σ_t} (1/2)( |∇²u|²/|∇u|² + R_g ) dA + ∫_{∂Σ_t∩P₁} H_{P₁} ] dt
             ≤ ∫_{u̲}^{ū} [ 2πχ(Σ_t) − ∫_{∂Σ_t∩P₂} κ_{∂Σ_t} ] dt + ∫_{P₂} ∂_υ|∇u| dA,

where κ_{∂Σ_t} denotes the geodesic curvature of ∂Σ_t ⊂ Σ_t, H_{P₁} denotes the mean curvature of P₁, and υ is the unit outer normal to ∂Ω. In the case P₁ = ∅, we record also the equivalent formulation

(4.7)    (1/2) ∫_{u̲}^{ū} ∫_{Σ_t} ( |∇²u|²/|∇u|² + R_g − R_{Σ_t} ) dA dt ≤ ∫_{∂Ω} ∂_υ|∇u| dA.

Proof. Let ε > 0 and consider φ = ( |∇u|² + ε )^{1/2}.
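Before carrying out the Bochner computation, it may help to sanity-check (4.1) in the simplest setting (an editorial aside, not part of the proof): on flat R³ one has Ric = 0 and R_g = 0, so (4.1) reduces to R_Σ = (2|∇|∇u||² − |∇²u|²)/|∇u|². For the harmonic function u = 1/r the level set {u = t} is the round sphere of radius 1/t, whose scalar curvature is 2/r², and the formula reproduces this:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = (x, y, z)
r2 = x**2 + y**2 + z**2
u = 1 / sp.sqrt(r2)                  # harmonic on R^3 minus the origin

grad = [sp.diff(u, v) for v in X]
norm_grad = sp.sqrt(sum(gi**2 for gi in grad))
hess2 = sum(sp.diff(u, X[i], X[j])**2 for i in range(3) for j in range(3))
grad_norm2 = sum(sp.diff(norm_grad, v)**2 for v in X)

# flat-space case of (4.1): R_Sigma = (2|grad|grad u||^2 - |Hess u|^2) / |grad u|^2
R_sigma = sp.simplify((2 * grad_norm2 - hess2) / norm_grad**2)
print(R_sigma)                       # the scalar curvature 2/r^2 of the round level sphere
```

Here |∇u|² = r^{-4}, |∇|∇u||² = 4r^{-6} and |∇²u|² = 6r^{-6}, so the right-hand side is (8 − 6)r^{-6} · r⁴ = 2/r², matching the symbolic output.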
By Bochner's identity,

(4.8)    Δ_g φ = Δ_g|∇u|² / (2φ) − |∇|∇u|²|² / (4φ³) = φ^{−1}( |∇²u|² + Ric(∇u, ∇u) − φ^{−2}|∇u|²|∇|∇u||² ).

It follows that on a regular level set Σ, Lemma 4.1 may be applied to find

(4.9)    Δ_g φ ≥ (1/2) φ^{−1}( |∇²u|² + |∇u|²( R_g − R_Σ ) ).

Let A ⊂ [u̲, ū] be an open set containing the critical values of u, and denote the complementary closed set by B ⊂ [u̲, ū]. Note that, by virtue of the boundary conditions for u, A also contains all critical values for the restriction u|_{∂Ω} of u to the boundary. Now, integration by parts yields

(4.10)    ∫_{∂Ω} ∂_υφ dA = ∫_Ω Δ_gφ dV = ∫_{u^{−1}(A)} Δ_gφ dV + ∫_{u^{−1}(B)} Δ_gφ dV.

In order to control the integral over u^{−1}(A), observe that (4.8) and Cauchy-Schwarz give the estimate

(4.11)    Δ_gφ ≥ φ^{−1} Ric(∇u, ∇u) ≥ −‖Ric‖ |∇u|.

Applying the coarea formula to u : u^{−1}(A) → A then produces

(4.12)    −∫_{u^{−1}(A)} Δ_gφ dV ≤ ∫_{u^{−1}(A)} ‖Ric‖ |∇u| dV ≤ C ∫_{t∈A} H²(Σ_t) dt,

for some constant C independent of ε and the choice of A. In addition, applying the coarea formula to u : u^{−1}(B) → B in conjunction with (4.9) gives

(4.13)    ∫_{u^{−1}(B)} Δ_gφ dV ≥ (1/2) ∫_{t∈B} ∫_{Σ_t} φ^{−1}|∇u|( |∇²u|²/|∇u|² + ( R_g − R_{Σ_t} ) ) dA dt.

Putting this all together yields

(4.14)    (1/2) ∫_{t∈B} ∫_{Σ_t} φ^{−1}|∇u|( |∇²u|²/|∇u|² + ( R_g − R_{Σ_t} ) ) dA dt ≤ ∫_{∂Ω} ∂_υφ dA + C ∫_{t∈A} H²(Σ_t) dt.

Next, we employ the homogeneous Neumann condition ∂_υ u ≡ 0 to rewrite the boundary integral ∫_{P₁} ∂_υφ. Indeed, note that away from critical points of u along P₁ we have

(4.15)    ∂_υφ = φ^{−1} ⟨∇_{∇u} ∇u, υ⟩ = −φ^{−1} ⟨∇u, ∇_{∇u} υ⟩,

where in the last equality the Neumann condition was used. Writing ν = ∇u/|∇u| and continuing to use the homogeneous Neumann condition, a brief calculation shows that

(4.16)    ⟨ν, ∇_ν υ⟩ = II_{∂Ω}(ν, ν) = H_{P₁} − κ_{∂Σ_t},

so we can rewrite (4.15) as

(4.17)    ∂_υφ = −φ^{−1}|∇u|²( H_{P₁} − κ_{∂Σ_t} ).
In particular, applying the coarea formula for the restriction u|_{P₁}, and using the homogeneous Neumann condition to see that |∇^{P₁}u| = |∇u| along P₁, we find that

(4.18)    ∫_{P₁} ∂_υφ dA = −∫_{t∈B} ∫_{∂Σ_t∩P₁} φ^{−1}|∇u|( H_{P₁} − κ_{∂Σ_t} ) dt + ∫_{t∈A} ∫_{∂Σ_t∩P₁} |∇u|^{−1}φ^{−1} ⟨∇_{∇u}∇u, υ⟩ dt.

Since

(4.19)    |∇u|^{−1}φ^{−1} |⟨∇_{∇u}∇u, υ⟩| ≤ |II_{P₁}| ≤ C,

it follows that

(4.20)    ∫_{P₁} ∂_υφ dA ≤ −∫_{t∈B} ∫_{∂Σ_t∩P₁} φ^{−1}|∇u|( H_{P₁} − κ_{∂Σ_t} ) dt + C ∫_{t∈A} H¹(∂Σ_t∩P₁) dt.

Apply (4.20) in (4.14) to obtain

(4.21)    (1/2) ∫_{t∈B} ∫_{Σ_t} (|∇u|/φ)( |∇²u|²/|∇u|² + R_g ) dA dt
              ≤ ∫_{t∈B} [ (1/2) ∫_{Σ_t} (|∇u|/φ) R_{Σ_t} dA + ∫_{∂Σ_t∩P₁} (|∇u|/φ)( κ_{∂Σ_t} − H_{P₁} ) ] dt
                + ∫_{P₂} ∂_υφ dA + C ∫_{t∈A} [ H²(Σ_t) + H¹(∂Σ_t∩P₁) ] dt.

Observe that |∇u| is uniformly bounded from below on u^{−1}(B), since B is a closed subset of the regular values of u. Recalling that φ = (|∇u|² + ε)^{1/2}, we may take ε → 0 in the preceding inequality to conclude that

(4.22)    (1/2) ∫_{t∈B} ∫_{Σ_t} ( |∇²u|²/|∇u|² + R_g ) dA dt
              ≤ ∫_{t∈B} [ (1/2) ∫_{Σ_t} R_{Σ_t} dA + ∫_{∂Σ_t∩P₁} ( κ_{∂Σ_t} − H_{P₁} ) ] dt + ∫_{P₂} ∂_υ|∇u| dA + C ∫_{t∈A} [ H²(Σ_t) + H¹(∂Σ_t∩P₁) ] dt
              = ∫_{t∈B} [ 2πχ(Σ_t) − ∫_{∂Σ_t∩P₂} κ_{∂Σ_t} − ∫_{∂Σ_t∩P₁} H_{P₁} ] dt + ∫_{P₂} ∂_υ|∇u| dA + C ∫_{t∈A} [ H²(Σ_t) + H¹(∂Σ_t∩P₁) ] dt,

where in the second step we have applied the Gauss-Bonnet theorem to Σ_t. Finally, by Sard's theorem we may take the measure |A| of A to be arbitrarily small. Since

(4.23)    t ↦ H²(Σ_t) + H¹(∂Σ_t∩P₁)

is integrable over [u̲, ū] by the coarea formula, taking |A| → 0 in the preceding inequality yields the desired conclusion. □

…, and it is easy to check that the preceding argument carries over to this case without difficulty.

The Schwarzschildian Approach

In this section we prove Theorem 1.1 by establishing Theorem 2.3 and applying the Schwarzschildian reduction of Proposition 2.1. Unless stated otherwise, in this section (M, g) will denote a 3-dimensional Schwarzschildian manifold.

5.1. Connectivity of level sets. Consider a coordinate sphere S_r ⊂ M_end and let M_r be the compact component of M \ S_r.
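The computations of this section all take place in the Schwarzschildian end g = (1 + m/2r)⁴δ. A fact used implicitly throughout is that this metric is scalar-flat there: for a conformally flat 3-metric g = w⁴δ the scalar curvature is R_g = −8w^{-5}Δ_δw, and w = 1 + m/2r is flat-harmonic. A short symbolic check (our own sketch, using the standard conformal formula):

```python
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
w = 1 + m / (2 * r)

# R_g = -8 w^(-5) Delta_delta w for g = w^4 delta in dimension 3
flat_lap = sum(sp.diff(w, v, 2) for v in (x, y, z))
R_g = sp.simplify(-8 * flat_lap / w**5)
print(R_g)  # -> 0: the Schwarzschildian end is scalar-flat
```

This is why the R_g term drops out of (4.7) in the asymptotic region, leaving only the Gauss curvature contribution estimated in Proposition 5.2.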
In order to apply the identity (4.7) to Ω = M_r, a computation of the Gauss curvature piece is required, which is given in Proposition 5.2 below. Before proceeding to this calculation, properties concerning the topology of regular level sets in M_r will be recorded. Let u be an asymptotically linear harmonic function as in Lemma 3.1, and for t ∈ R, r > 0 set Σ_t^r = Σ_t ∩ M_r.

Lemma 5.1. Let (M, g) and u be as above. There are constants r₀ > 0 and c₀ > 0 such that for all r ≥ r₀ and t ∈ [−r + c₀, r − c₀] with t a regular value, Σ_t^r is connected with boundary ∂Σ_t^r ≅ S¹.

Proof. Let ℓ = a_i x^i be the nontrivial linear function on M_end to which u converges. By performing an orthogonal transformation if necessary, noting that this does not disturb the Schwarzschild asymptotics of g, it may be assumed that ℓ = a₁x¹ for some a₁ ≠ 0. Since a₁^{−1}u is harmonic and has the same level sets as u, we may assume without loss of generality that ℓ = x¹. In what follows, x¹ will be denoted by x for convenience.

The first step is to show that Σ_t transversely intersects S_r ⊂ M_end, away from the north and south poles on the x-axis. More precisely, we claim that there exist r₀ > 0 and c₀ > 0 such that for r ≥ r₀ and |t| ≤ r − c₀, Σ_t transversely intersects S_r. To see this, observe that using (5.25), (5.26), and (5.27) yields

(5.1)    δ(∇u, ∂_r) = (x/r)( 1 + m/2r )^{−1} + ( mx/2r² )( 1 + m/2r )^{−2} + O(r^{−2}) = x/r + O(r^{−2}),

and

(5.2)    |∇u|_δ = 1 − m/2r + mx²/2r³ + O(r^{−2}),    |∂_r|_δ = 1.

Therefore

(5.3)    δ(∇u, ∂_r) / ( |∇u|_δ |∂_r|_δ ) = (x/r)( 1 + m/2r − mx²/2r³ ) + O(r^{−2}),

so that for |x| ≤ r − c* with appropriately chosen c* > 0 we have

(5.4)    |cos θ| = |δ(∇u, ∂_r)| / ( |∇u|_δ |∂_r|_δ ) ≤ 1 − 1/r + O(r^{−2}).

Here θ represents the angle between ∇u and ∂_r, which thus stays away from zero for large r. The desired claim now follows since t = x + O(1) on Σ_t ∩ M_end.

Now let r ≥ r₀ and |t| ≤ r − c₀, so that Σ_t intersects S_r transversely.
Additionally, suppose that t is a regular value of u. Since Σ_t ∩ S_r is transverse and nonempty, it consists of a finite number of disjoint embedded circles γ₁, ..., γ_p. Since, by Lemma 3.1, Σ_t is connected and noncompact with only one end, removing the circles yields the decomposition

(5.5)    Σ_t \ ∪_{i=1}^p γ_i = U ⊔ C,

where U is unbounded and connected, and C is bounded and compact. Evidently Σ_t^r ⊂ C. If C ≠ Σ_t^r, then there is a path component C′ ⊂ C which lies outside of M_r. Since C′ is compact, there is a largest r′ > r so that S_{r′} ∩ C′ ≠ ∅; see Figure 3. In this intersection S_{r′} is tangential to Σ_t. This, however, contradicts the transversality established above. We conclude that Σ_t^r = C.

Now assume that ∂Σ_t^r is disconnected, so that there are at least two distinct circles γ₁ and γ₂. Since U is connected, there is a path in U from γ₁ to γ₂. Let r′ > r be the smallest radius so that there is such a path σ from γ₁ to γ₂ which lies entirely within Σ_t^{r′}; see Figure 4. Since r′ is the smallest such radius, σ must intersect S_{r′}. At the intersection, Σ_t will be tangential to S_{r′}, since all perturbations of σ within Σ_t supported on this intersection either push σ outside of Σ^{r′} or possess a point of tangency. This again contradicts transversality, and we conclude that ∂Σ_t^r is connected. Since Σ_t^r has no closed components, all points in Σ_t^r can be connected to its boundary, and we conclude that Σ_t^r itself is connected. □

Proposition 5.2. … for some constant ω ∈ (0, ∞) independent of r, where K is the Gaussian curvature of Σ_t^r.

Proof. As in the proof of Lemma 5.1, we may assume without loss of generality that ℓ = x, where the asymptotic coordinates on M_end will be denoted by (x, y, z). Observe that on a t-level set Σ_t^r,

(5.7)    t = u = x / ( 1 + m/2r ) + a/r + O₂(r^{−2}),

which implies that

(5.8)    x = t + c(t)/r + O₂(r^{−2}),    c(t) = tm/2 − a.

In the expressions to follow, the subindex l of O_l will be ignored for convenience.
By the implicit function theorem we may solve for x = x(y, z) when r is large. Let

(5.9)    r² = x² + y² + z² = x² + ρ²,    r̄² = t² + ρ²;

then a calculation shows that

(5.10)    x(y, z) = t + c(t)/r̄ + O(r̄^{−1}).

Furthermore, in the asymptotic end

(5.11)    g = ( 1 + m/2r )⁴ ( dx² + dy² + dz² ),

so that the induced metric on Σ_t^r ∩ M_end is given by

(5.12)    γ = ( 1 + m/2r )⁴ [ ( 1 + x_y² ) dy² + 2 x_y x_z dy dz + ( 1 + x_z² ) dz² ].

From (5.8) the partial derivatives may be computed:

(5.13)    x_y = −(c/r³)( x x_y + y ) + O( ( |x x_y| + |y| )/r⁴ + 1/r³ )    ⇒    x_y = −c(t) y / r³ + O(r^{−2}),

and similarly

(5.14)    x_z = −c(t) z / r³ + O(r^{−2}).

Hence

(5.15)    γ = ( 1 + m/2r )⁴ [ ( 1 + c²y²/r⁶ ) dy² + ( 2c²yz/r⁶ ) dy dz + ( 1 + c²z²/r⁶ ) dz² ] + O(r^{−3}) dx^i dx^j.

Since the Gauss curvature consists of second derivatives and quadratic first derivatives of the metric, it follows that in this region

    ∫ K dA dt = O(r^{−1}).

[Figure: the level sets Σ_t^r in the three ranges of t.]

Now let us restrict attention to the range |t| ≤ r − c₀, so that c₂√r ≤ ρ ≤ r for some constant c₂ > 0. According to Lemma 5.1, for regular values t in this range, Σ_t^r is a connected smooth submanifold with boundary ∂Σ_t^r = S¹ ⊂ S_r. Let α : [0, θ₀] → ∂Σ_t^r be a parameterization and let ν̃ be an inward pointing normal vector to ∂Σ_t^r tangent to Σ_t^r, both to be chosen later. The geodesic curvature of the circle ∂Σ_t^r is given by

(5.20)    κ = ⟨ν, ∇_{α′}( α′/|α′| )⟩ / |α′| = ⟨ν̃, ∇_{α′}α′⟩ / ( |ν̃| |α′|² ),

where ∇ is the Levi-Civita connection for g = ⟨·, ·⟩, ν = ν̃/|ν̃| is the unit normal of ∂Σ_t^r tangent to Σ_t^r, and α′ = ∂_θ α is the velocity vector associated with α. Write this curve as

(5.21)    α(θ) = ( x(θ), y(θ), z(θ) ),

where these functions are defined by the equations

(5.22)    x(θ)² + y(θ)² + z(θ)² = r²,    u(α(θ)) = t.

In order to compute α′, observe that the equations defining α′ (up to scaling) are

(5.23)    α · α′ = 0,    ∇u · α′ = 0,

where · represents the Euclidean inner product.
It follows that α and θ 0 can be chosen so that (5.24) α = (zu y − yu z )∂ x + (xu z − zu x )∂ y + (yu x − xu y )∂ z . The partial derivatives have the expansions (5.25) u x = 1 + m 2r −1 − x 1 + m 2r −2 −mx 2r 3 − ax r 3 + O(r −2 ) = 1 − m 2r + mx 2 2r 3 + O(r −2 ), (5.26) u y = −x 1 + m 2r −2 −my 2r 3 − ay r 3 + O(r −2 ) = mxy 2r 3 + O ρ r 3 + 1 r 2 , (5.27) u z = −x 1 + m 2r −2 −mz 2r 3 − az r 3 + O(r −2 ) = mxz 2r 3 + O ρ r 3 + 1 r 2 . Therefore (5.28) α = α x ∂ x + α y ∂ y + α z ∂ z = O ρ r 2 ∂ x + −z + mz 2r + O ρ r 2 ∂ y + y − my 2r + O ρ r 2 ∂ z , and (5.29) |α | 2 = 1 + m 2r 4 O ρ 2 r 2 + ρ 2 1 − m r + m 2 4r 2 = ρ 2 1 + m r + O(r −2 ) . At this point, it is convenient to estimate the value θ 0 of the parameterizing interval. On one hand, the length of ∂Σ r t may be computed from (5.29), (5.30) Length(∂Σ r t ) =ˆθ 0 0 |α |dθ =ˆθ 0 0 ρ 1 + m 2r + O(r −2 ) dθ. On the other hand, we can parameterize the yz-projection of ∂Σ r t by ϑ → (ρ(ϑ) cos ϑ, ρ(ϑ) sin ϑ) for ϑ ∈ [0, 2π], and use (5.15) to find (5.31) Length(∂Σ r t ) =ˆ2 π 0 det γ| ∂Σ r t dϑ =ˆ2 π 0 ρ 1 + m r + O(r −2 ) dϑ. Using (5.8), it follows that ρ = √ r 2 − x 2 is a constant (depending on r and t) along ∂Σ r t up to O(r −2 ). Thus we may subtract (5.30) and (5.31) to obtain (5.32) θ 0 = 2π 1 + m 2r + O(r −2 ) . Let us return to our calculation of (5.20). The normal vectorν must satisfy (5.33) α ·ν = 0, ∇u ·ν = 0, and so we may choose (5.34)ν = (α z u y − α y u z )∂ x + (α x u z − α z u x )∂ y + (α y u x − α x u y )∂ z . It follows that the components have the expansions (5.35)ν x = α z u y − α y u z = mxρ 2 2r 3 + O ρ r 2 , (5.36)ν y = α x u z − α z u x = −y + my r − mx 2 y 2r 3 + O ρ r 2 , (5.37)ν z = α y u x − α x u y = −z + mz r − mx 2 z 2r 3 + O ρ r 2 , and (5.38) |ν| 2 = ρ 2 1 + m 2r 4 1 − 2m r + mx 2 r 3 + O(r −2 ) = ρ 2 1 + mx 2 r 3 + O(r −2 ) . We now compute the covariant derivative portion of (5.20). 
Observe that (5.39) ∇ α α = α i ∇ i α j ∂ j = α i ∂ i α j ∂ j + α i α j Γ l ij ∂ l , where Γ l ij are Christoffel symbols. Furthermore (5.40) α i ∂ i α x = O ρ r 2 , α i ∂ i α y =O ρ r 2 O ρ r 2 + −z + mz 2r + O ρ r 2 − myz 2r 3 + O 1 r 2 + ρ 2 r 4 + y − my 2r + O ρ r 2 −1 + m 2r − mz 2 2r 3 + O 1 r 2 + ρ 2 r 4 = − y + my r + O ρ r 2 + ρ 3 r 4 , (5.41) α i ∂ i α z =O ρ r 2 O ρ r 2 + −z + mz 2r + O ρ r 2 1 − m 2r + my 2 2r 3 + O 1 r 2 + ρ 2 r 4 + y − my 2r + O ρ r 2 myz 2r 3 + O 1 r 2 + ρ 2 r 4 = − z + mz r + O ρ r 2 + ρ 3 r 4 . (5.42) Hence (5.43) ∇ α α = −y + my r ∂ y + −z + mz r ∂ z + O ρ r 2 ∂ l + α i α j Γ l ij ∂ l . To compute the Christoffel symbols write let w = 1 + m 2r and use (5.11) to find Γ l ij = 1 2 g lk (∂ i g kj + ∂ j g ki − ∂ k g ij ) = 1 2 w −4 δ lk δ kj ∂ i w 4 + δ ki ∂ j w 4 − δ ij ∂ k w 4 =2 δ l j ∂ i log w + δ l i ∂ j log w − δ ij ∂ l log w . (5.44) Therefore using the orthogonality ofν and α yields (5.45) ν, α i α j Γ l ij ∂ l = w 4 δ klν k α i α j Γ l ij = −2|α | 2νl ∂ l log w. Since (5.46)ν l ∂ l log w =ν l 1 + m 2r −1 − mx l 2r 3 = mρ 2 2r 3 + O(r −2 ), we then have (5.47) ν, α i α j Γ l ij ∂ l = − mρ 4 r 3 + O ρ 2 r 2 . Putting this altogether produces ν, ∇ α α = 1 + m 2r 4 O ρ r 2 O |x|ρ r 2 + ρ r 2 − mρ 4 r 3 + O ρ 2 r 2 + −y + my r − mx 2 y 2r 3 + O ρ r 2 −y + my r + O ρ r 2 + −z + mz r − mx 2 z 2r 3 + O ρ r 2 −z + mz r + O ρ r 2 =ρ 2 1 + mx 2 2r 3 − mρ 2 r 3 + O(r −2 ) . (5.48) We also have (5.49) |ν||α | = ρ 2 1 + m 2r + mx 2 2r 3 + O(r −2 ) , and therefore (5.50) ν, ∇ α α |ν||α | = 1 − m 2r − mρ 2 r 3 + O(r −2 ). Combining this with (5.32) we find that (5.51)ˆ∂ Σ r t κds =ˆθ 0 0 ν, ∇ α α |ν||α | dθ = 2π 1 − m r + mt 2 r 3 + O(r −2 ). By Sard's theorem we may restrict attention to regular level sets when computing (5.6). Moreover since for regular levels in the range |t| ≤ r − c 0 with r ≥ r 0 , the compact surface Σ r t is connected with nonempty boundary (Lemma 5.1), its Euler characteristic satisfies χ(Σ r t ) ≤ 1. 
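As an independent sanity check (not part of the paper's argument), the closed form (5.44) for the Christoffel symbols of a conformally flat metric g = w⁴δ can be compared with a direct finite-difference evaluation of the defining formula Γ^l_ij = (1/2) g^{lk}(∂_i g_kj + ∂_j g_ki − ∂_k g_ij). The sketch below does this for w = 1 + m/(2r); the mass value, test point, and step size are arbitrary illustrative choices.

```python
import numpy as np

m = 1.3    # illustrative mass parameter
h = 1e-5   # finite-difference step

def w(p):
    # conformal factor w = 1 + m/(2r)
    return 1.0 + m / (2.0 * np.linalg.norm(p))

def g(p):
    # metric g_ij = w^4 delta_ij
    return w(p) ** 4 * np.eye(3)

def dg(p):
    # dg[k, i, j] = partial_k g_ij via central differences
    out = np.zeros((3, 3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = h
        out[k] = (g(p + e) - g(p - e)) / (2 * h)
    return out

def christoffel_numeric(p):
    # standard formula: Gamma^l_ij = (1/2) g^{lk}(d_i g_kj + d_j g_ki - d_k g_ij)
    ginv = np.linalg.inv(g(p))
    d = dg(p)
    Gamma = np.zeros((3, 3, 3))  # Gamma[l, i, j]
    for l in range(3):
        for i in range(3):
            for j in range(3):
                Gamma[l, i, j] = 0.5 * sum(
                    ginv[l, k] * (d[i, k, j] + d[j, k, i] - d[k, i, j])
                    for k in range(3))
    return Gamma

def christoffel_closed(p):
    # closed form (5.44): Gamma^l_ij = 2(delta^l_j d_i log w + delta^l_i d_j log w - delta_ij d_l log w)
    dlogw = np.zeros(3)
    for k in range(3):
        e = np.zeros(3); e[k] = h
        dlogw[k] = (np.log(w(p + e)) - np.log(w(p - e))) / (2 * h)
    delta = np.eye(3)
    Gamma = np.zeros((3, 3, 3))
    for l in range(3):
        for i in range(3):
            for j in range(3):
                Gamma[l, i, j] = 2 * (delta[l, j] * dlogw[i]
                                      + delta[l, i] * dlogw[j]
                                      - delta[i, j] * dlogw[l])
    return Gamma

p = np.array([1.0, 0.7, -0.4])
assert np.allclose(christoffel_numeric(p), christoffel_closed(p), atol=1e-6)
```

Both routes agree to finite-difference accuracy, which is a useful guard against sign or factor errors when reproducing (5.44).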
Thus using (5.18), (5.19), and the Gauss-Bonnet theorem, the bound of Proposition 5.2 follows. Turning to the boundary term, observe that

|∇u| = (1 + m/(2r))^{-2} (u_x² + u_y² + u_z²)^{1/2} = 1 − 3m/(2r) + mx²/(2r³) + O_1(r^{-2}).

It follows that

(5.55) ∂_υ|∇u| = (1 + m/(2r))^{-2} ∂_r|∇u| = 3m/(2r²) − mx²/(2r⁴) + O(r^{-3}),

and therefore

(5.56) ∫_{S_r} ∂_υ|∇u| dA = 4π(3m/2 − m/6) + O(r^{-1}) = (16π/3) m + O(r^{-1}),

where we have used

(5.57) ∫_{S_r} x² dA_δ = (1/3) ∫_{S_r} (x² + y² + z²) dA_δ = (1/3) ∫_{S_r} r² dA_δ = (4π/3) r⁴.

This combined with Proposition 5.2 and letting r → ∞ yields

(5.58) 16πm ≥ ∫_{−∞}^{∞} ∫_{Σ_t} ( |∇²u|²/|∇u|² + R_g ) dA dt,

from which we find that m ≥ 0. Consider now the case of equality m = 0. Inequality (5.58) implies that R_g ≡ 0 and |∇²u| ≡ 0. In particular, ∇u is a parallel vector field. The same procedure above may be applied to second and third harmonic functions v and w of Lemma 3.1 asymptotic to the linear functions ℓ = y and ℓ = z, respectively, so that ∇v and ∇w are also parallel. Since these three vector fields are linearly independent, (M, g) is flat. Since (M, g) is also complete it must be isometric to Euclidean 3-space.

Proof of Theorem 1.1. Let (M, g) be complete of nonnegative scalar curvature, and asymptotically flat with m the mass of a designated end M⁺_end. Let (M', g') be the Schwarzschildian manifold of Proposition 2.1 with mass m' satisfying |m' − m| < ε. According to Theorem 2.3, m' ≥ 0. Since ε > 0 is arbitrarily small, we conclude that m ≥ 0. The conclusion in the case of equality, m = 0, follows from the positive mass inequality as in [13]. Namely, one shows through conformal deformation that (M, g) is scalar flat, and then that it is Ricci flat via an infinitesimal Ricci flow.

6. The Harmonic Coordinate Method

In this section, we give another way to derive the total mass of an asymptotically flat manifold. Instead of using the trick of approximating by Schwarzschild metrics as in the previous section, we show how the mass term falls out naturally from our boundary term at infinity.
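As an aside, the flat spherical average identity (5.57) used above, ∫_{S_r} x² dA_δ = (4π/3) r⁴, is easy to confirm numerically with a midpoint-rule quadrature in spherical coordinates (an illustrative check, not taken from the paper; the radius and grid resolution are arbitrary):

```python
import numpy as np

r = 2.5
n = 400
# midpoint rule in theta in [0, pi] and phi in [0, 2*pi]
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(2 * n) + 0.5) * 2 * np.pi / (2 * n)
T, P = np.meshgrid(theta, phi, indexing="ij")

# flat area element on the coordinate sphere S_r
dA = r**2 * np.sin(T) * (np.pi / n) * (2 * np.pi / (2 * n))
x = r * np.sin(T) * np.cos(P)

integral = np.sum(x**2 * dA)
# compare with the closed form (4*pi/3) r^4
assert abs(integral - 4 * np.pi / 3 * r**4) < 1e-3 * r**4
```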
Let (M, g) be a complete asymptotically flat Riemannian 3-manifold, and let M_ext be the exterior region associated with a specified end M_end. According to [7, Lemma 4.1] the exterior region is diffeomorphic to R³ minus a finite number of disjoint balls, and has minimal boundary. Let {x¹, x², x³} be harmonic coordinates on M_ext as in Section 3.2, with homogeneous Neumann condition on ∂M_ext, and let x̄ = (x¹, x², x³). For a unit vector a ∈ S² ⊂ R³, it obviously follows that u = x̄ · a is harmonic on M_ext with homogeneous Neumann condition. For L > 0 sufficiently large, consider the coordinate cylinders C_L := D^±_L ∪ T_L where

(6.1) D^±_L := { x̄ | x̄ · a = ±L, |x̄|² − (x̄ · a)² ≤ L² }, T_L := { x̄ | |x̄ · a| ≤ L, |x̄|² − (x̄ · a)² = L² }.

Set Ω_L ⊂ M_ext to be the closure of the bounded component of M_ext \ C_L. Following the arguments of [1, Section 4], if the scalar curvature R_g is integrable then the mass of M_ext is given by

(6.2) m = lim_{L→∞} (1/16π) ∫_{C_L} Σ_{i,j} (g_{ij,i} − g_{ii,j}) υ_j dA,

where υ is the outward unit normal to C_L.

6.1. Computation of the mass. To prove inequality (1.3), begin by applying Proposition 4.2 to u on the cylindrical domains Ω_L (so that P_2 = C_L and P_1 = ∂M_ext) to find that

(6.3) (1/2) ∫_{Ω_L} ( |∇²u|²/|∇u| + R_g|∇u| ) dV ≤ ∫_{−L}^{L} ( 2πχ(Σ^L_t) − ∫_{Σ^L_t ∩ T_L} κ_{t,L} ) dt + ∫_{C_L} ∂_υ|∇u| dA,

where Σ^L_t := {u = t} ∩ Ω_L, and κ_{t,L} is the geodesic curvature of the curve Σ^L_t ∩ T_L viewed as the boundary of Σ^L_t. Note that the asymptotics guarantee that, for L sufficiently large, the level sets Σ^L_t indeed meet T_L transversely. We claim next that for every regular value t ∈ (−L, L), Σ^L_t consists of a single connected component, intersecting T_L along the circle Σ^L_t ∩ T_L. Indeed, if this is not the case, then there is a regular value t ∈ (−L, L) and a component Σ' ⊂ Σ^L_t disjoint from T_L.
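To see the mass formula (6.2) in action, the sketch below evaluates the ADM surface integral for the Schwarzschild metric in isotropic coordinates, g = (1 + m/(2r))⁴ δ, over a large coordinate sphere (a simplification of the cylinders C_L used in the text; the limiting value is independent of the choice of exhausting surfaces). The radius, grid sizes, and finite-difference step are arbitrary illustrative choices.

```python
import numpy as np

m_true = 1.0
R = 1.0e4    # coordinate radius of the large sphere
h = 1e-2     # finite-difference step (much smaller than R)

def g(p):
    # Schwarzschild in isotropic coordinates: g_ij = (1 + m/2r)^4 delta_ij
    w = 1.0 + m_true / (2.0 * np.linalg.norm(p))
    return w**4 * np.eye(3)

def dg(p):
    # dg[k, i, j] = partial_k g_ij via central differences
    out = np.zeros((3, 3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = h
        out[k] = (g(p + e) - g(p - e)) / (2 * h)
    return out

n = 60
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(2 * n) + 0.5) * np.pi / n
total = 0.0
for t in theta:
    for p_ in phi:
        nu = np.array([np.sin(t) * np.cos(p_),
                       np.sin(t) * np.sin(p_),
                       np.cos(t)])   # outward unit normal of the sphere
        d = dg(R * nu)
        # ADM integrand: sum_{i,j} (g_ij,i - g_ii,j) nu_j
        integrand = sum(d[i, i, j] * nu[j] - d[j, i, i] * nu[j]
                        for i in range(3) for j in range(3))
        total += integrand * R**2 * np.sin(t) * (np.pi / n) * (np.pi / n)

mass = total / (16 * np.pi)
assert abs(mass - m_true) < 1e-2
```

The computed value approaches m as the sphere radius grows, matching (6.2).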
Since M_ext is diffeomorphic to the complement of finitely many balls in R³, there is a domain E ⊂ Ω_L such that ∂E \ ∂M_ext = Σ' and E ∩ T_L = ∅. But since u is harmonic with Neumann boundary conditions on ∂M_ext and identically t on Σ', the maximum principle would then imply that u ≡ t in E, contradicting the fact that t is a regular value. Thus, Σ^L_t has only one component, with boundary given by Σ^L_t ∩ T_L, and as a consequence χ(Σ^L_t) ≤ 1. In particular, applying this in the preceding computation gives

(6.4) (1/2) ∫_{Ω_L} ( |∇²u|²/|∇u| + R_g|∇u| ) dV ≤ 4πL − ∫_{−L}^{L} ( ∫_{Σ^L_t ∩ T_L} κ_{t,L} ) dt + ∫_{C_L} ∂_υ|∇u| dA.

The remainder of the proof of Theorem 1.2 rests on a computation of the boundary terms in inequality (6.4). To carry out these computations, it will be useful to take a = ∂_{x¹}, so that u = x¹ is the distinguished coordinate. In what follows the notation ∫_{D^±_L} ±f represents ∫_{D^+_L} f − ∫_{D^−_L} f.

Lemma 6.1. In the notation fixed above, we have

(6.5) ∫_{C_L} ∂_υ|∇u| dA = (1/2) ∫_{D^±_L} ± Σ_j (g_{1j,j} − g_{jj,1}) dA + (1/2L) ∫_{T_L} [ x²(g_{21,1} − g_{11,2}) + x³(g_{31,1} − g_{11,3}) ] dA + O(L^{1−2q}).

Proof. To begin, note that

(6.6) ∇|∇u| = ∇(g^{11})^{1/2} = −(1/2) ∇g_{11} + O(|x̄|^{−1−2q}),

where in the second line we have used the decay rates (3.7). Next since the outer normal υ to C_L is given by

(6.7) υ = ±∂_1 + O(|x̄|^{−q}) on D^±_L, and υ = (x²∂_2 + x³∂_3)/L + O(|x̄|^{−q}) on T_L,

it follows that

(6.8) ∫_{C_L} ∂_υ|∇u| dA = −(1/2) ∫_{D^±_L} ± g_{11,1} dA − (1/2L) ∫_{T_L} ( x² g_{11,2} + x³ g_{11,3} ) dA + O(L^{1−2q}).
Now, because x 1 is harmonic we see that g 11,1 = − 2g(∇ ∂ 1 ∂ 1 , ∂ 1 ) + O(|x| −1−2q ) =2g(∇ ∂ 2 ∂ 1 , ∂ 2 ) + 2g(∇ ∂ 3 ∂ 1 , ∂ 3 ) + O(|x| −1−2q ) = − 2g 21,2 − 2g 31,3 + g 22,1 + g 33,1 + O(|x| −1−2q ), (6.9) and thereforeˆC L ∂ υ |∇u|dA =ˆD ± L ±(g 12,2 + g 13,3 − 1 2 g 22,1 − 1 2 g 33,1 )dA − 1 2LˆT L (x 2 g 11,2 + x 3 g 11,3 )dA + O(L 1−2q ) = 1 2ˆD ± L ±(g 12,2 − g 22,1 + g 13,3 − g 33,1 )dA +ˆD + L 1 2 (g 12,2 + g 13,3 )dA −ˆD − L 1 2 (g 12,2 + g 13,3 )dA − 1 2LˆT L (x 2 g 11,2 + x 3 g 11,3 )dA + O(L 1−2q ). (6.10) Applying the divergence theorem to the penultimate line above, and subsequently employing the fundamental theorem of calculus on T L yieldŝ C L ∂ υ |∇u|dA = 1 2ˆD ± L ±(g 12,2 − g 22,1 + g 13,3 − g 33,1 )dA +ˆ∂ D + L 1 2L (x 2 g 12 + x 3 g 13 )dA −ˆ∂ D − L 1 2L (x 2 g 12 + x 3 g 13 )dA − 1 2LˆT L (x 2 g 11,2 + x 3 g 11,3 )dA + O(L 1−2q ) = 1 2ˆD ± L ±(g 12,2 − g 22,1 + g 13,3 − g 33,1 )dA +ˆT L ∂ 1 x 2 2L g 12 + x 3 2L g 13 dA − 1 2LˆT L (x 2 g 11,2 + x 3 g 11,3 )dA + O(L 1−2q ) = 1 2ˆD ± L ±(g 12,2 − g 22,1 + g 13,3 − g 33,1 )dA + 1 2LˆT L [x 2 (g 21,1 − g 11,2 ) + x 3 (g 31,1 − g 11,3 )]dA + O(L 1−2q ). (6.11) Lemma 6.2. In the notation established above, we havê L −L ˆΣ L t ∩T L κ t,L dt =4πL + 1 2LˆT L x 2 (g 33,2 − g 23,3 ) + x 3 (g 22,3 − g 32,2 ) dA + O(L 1−2q + L −q ). (6.12) Proof. To begin, recall that the geodesic curvature κ t,L is given by (6.13) κ t,L = ∇ τ β, τ = − β, ∇ τ τ , where τ is a unit tangent vector to Σ L t ∩ T L and β is the outward pointing unit normal to Σ L t ∩ T L along Σ L t . Let (6.14) X := x 2 ∂ 2 + x 3 ∂ 3 and Y := x 3 ∂ 2 − x 2 ∂ 3 . Then by settingX := X − X, τ τ we may take (6.15) τ = Y |Y | and β =X |X| . Consequently κ t,L = − X |X| , ∇ τ τ = −1 |X||Y | 3 (|Y | X, ∇ Y Y − X, Y ∇|Y |, Y ) . (6.16) The decay conditions (3.7) imply that (6.17) |∇|Y || = O(|x| −q ) and X, Y = O(|x| 2−q ). 
It follows that (6.18) X, Y ∇|Y |, Y |X||Y | 3 = O |x| 2−q |x| −q |x| |x| 4 = O(|x| −1−2q ), and hence (6.19) κ t,L = − X, ∇ Y Y |X||Y | 2 + O(|x| −1−2q ). A direct computation gives (6.20) ∇ Y Y = −X + (x 3 ) 2 ∇ ∂ 2 ∂ 2 + (x 2 ) 2 ∇ ∂ 3 ∂ 3 − 2x 2 x 3 ∇ ∂ 2 ∂ 3 . Upon expanding X, ∇ Y Y = x 2 ∂ 2 + x 3 ∂ 3 , ∇ Y Y in terms of the metric derivatives, we see that (6.16) becomes κ t,L = |X| 2 |X||Y | 2 + O(L −1−2q ) − X, (x 3 ) 2 ∇ ∂ 2 ∂ 2 + (x 2 ) 2 ∇ ∂ 3 ∂ 3 − 2x 2 x 3 ∇ ∂ 2 ∂ 3 L 3 = |X| |Y | 2 + O(L −2−q + L −1−2q ) − 1 2L 3 (x 2 (x 3 ) 2 g 22,2 + (x 2 ) 2 x 3 g 33,3 ) + [(x 2 ) 2 x 3 + 1 2 (x 3 ) 3 ] g 22,3 L 3 + [(x 3 ) 2 x 2 + 1 2 (x 2 ) 3 ] g 33,2 L 3 − (x 2 ) 3 g 23,3 L 3 − (x 3 ) 3 g 32,2 L 3 ,(6.21) where the decay properties (3.7) have been used repeatedly. At this point it will be useful to parameterize Σ L t ∩ T L by [0, 2π] s → γ(s) := (t, L cos(s), L sin(s)). Notice that γ (s) = −Y . We then havê (cos 2 (s) − sin 2 (s))g 22 (γ(s))ds +ˆ2 π 0 (sin 2 (s) − cos 2 (s))g 33 (γ(s))ds +ˆ2 π 0 4 sin(s) cos(s)g 23 (γ(s))ds = − 4πL + 1 2ˆC L j (g ij,j − g jj,i )υ i dA + o(1), (6.27) Therefore (6.28) 1 2ˆΩ L |∇ 2 u| 2 |∇u| + R g |∇u| dV ≤ 1 2ˆC L j (g ij,j − g jj,i )υ i dA + o(1), and taking the limit as L → ∞ gives the desired inequality (1.3). Consider now the case of equality when m = 0. From the arguments above, this implies that the harmonic coordinate function is linear |∇ 2 u| ≡ 0 and that the Euler characteristic of the level sets is constant χ(Σ t ) = 1. In particular the boundary of the exterior region is empty ∂N = ∅, and thus M ∼ = R 3 . Since there are three linearly independent harmonic coordinate functions with ∇ 2 u ≡ 0, the manifold is flat, yielding the isometry (M, g) ∼ = (R 3 , δ). Σ L t ∩T L |X| |Y | 2 =ˆ2 Theorem 1. 1 . 1If (M, g) is complete and asymptotically flat with nonnegative scalar curvature then m ≥ 0, and m = 0 if and only if (M, g) ∼ = (R 3 , δ). Theorem 2 . 3 . 
If (M, g) is complete and Schwarzschildian with nonnegative scalar curvature then m ≥ 0, and m = 0 if and only if (M, g) ≅ (R³, δ).

3. Linear Growth Harmonic Functions

3.1. Harmonic functions on Schwarzschildian ends. Suppose that (M, g) is Schwarzschildian, and in M_end write g = w⁴δ where w = 1 + m/(2r).

Figure 1. A schematic description of the construction in Proposition 2.1.

Lemma 3.1. Let (M, g) be complete and Schwarzschildian. For any linear function ℓ in the coordinates of M_end, there exists a unique solution u of (3.4). Moreover, all regular level sets of u are connected and noncompact with a single end modeled on R² \ B₁.

Figure 2. Possible level sets of a harmonic function from Lemma 3.1.

Remark 4.3. Note that for our applications in Section 6, the boundary component P_2 in Proposition 4.2 will be a piecewise smooth surface, diffeomorphic to the boundary ∂(D² × [0, 1]) of the solid cylinder D² × [0, 1]. However, our harmonic function u in this case will be constant on the disks D² × {0} and D² × {1}, with nonvanishing gradient along the cylindrical portion S¹ × [0, 1].

Figure 3. The case where C ≠ Σ^r_t in the proof of Lemma 5.1.

Figure 4. The argument showing ∂Σ^r_t is connected in Lemma 5.1.

5.2. The Gaussian curvature. The Gaussian curvature integral appearing in formula (4.7) in Proposition 4.2 will now be computed.

Proposition 5.2. Suppose that (M, g) is complete and Schwarzschildian. Let u be a harmonic function of Lemma 3.1 which is asymptotic to a linear function ℓ, and let ū and u̲ denote the maximum and minimum values of u within M_r, respectively. Then

∫_{u̲}^{ū} ∫_{Σ^r_t} K dA dt ≤ mω + O(r^{-1})

for some constant ω ∈ (0, ∞) independent of r, where K is the Gaussian curvature of Σ^r_t.

Let r₀ > 0 and c₀ > 0 be the constants given by Lemma 5.1. From now on, we will only consider r ≥ r₀. Let ū and u̲ be the max and min levels for u within M_r. Then ū = r − m/2 + O(r^{-1}) and u̲ = −r − m/2 + O(r^{-1}).
At this point, we will break the interval [u̲, ū] into three pieces: [u̲, −r + c₀], [−r + c₀, r − c₀], and [r − c₀, ū], see Figure 5. Consider t ∈ [r − c₀, ū], then 0 ≤ ρ ≤ c₁√r for some constant c₁. On this region all t-levels are regular, and the corresponding contribution to the curvature integral is ∫∫ K dA dt = O(r^{-1}).

Figure 5. The decomposition of M_r used to estimate the integral in Proposition 5.2.

5.3. Proof of the positive mass theorem.

Proof of Theorem 2.3. Let (M, g) be complete with nonnegative scalar curvature and Schwarzschildian. Consider the harmonic function u of Lemma 3.1 asymptotic to the linear function ℓ(x, y, z) = x. Apply identity (4.7) of Proposition 4.2 to Ω = M_r to find

(5.53) ∫_{S_r} ∂_υ|∇u| dA ≥ ∫_{u̲}^{ū} ∫_{Σ^r_t} ( |∇²u|²/|∇u|² + R_g − 2K ) dA dt,

where ū and u̲ denote the maximum and minimum values of u on M_r, and K is the Gaussian curvature of Σ^r_t. In order to compute the boundary integral in (5.53), use (5.25), (5.26), and (5.27). In the computation of Lemma 6.2, one finds

∫_{Σ^L_t ∩ T_L} |X|/|Y|² = 2π + (1/2L) ∫_{Σ^L_t ∩ T_L} ( x³ g_{22,3} − x³ g_{32,2} + x² g_{33,2} − x² g_{23,3} ) + O(L^{−1−q} + L^{−2q}),

where in the final line we have integrated by parts and used the double angle formulas to write sin(2s) and cos(2s) in terms of x² and x³. Integrating over [−L, L] gives the desired identity.

Acknowledgements. The authors would like to thank Dan Lee for several helpful comments on an earlier version of this manuscript, in particular for pointing out the simplification in Remark 2.2.

References

[1] R. Bartnik, The mass of an asymptotically flat manifold, Comm. Pure Appl. Math., 39 (1986), no. 5, 661-693.
[2] H. Bray, The Penrose inequality in general relativity and volume comparison theorems involving scalar curvature, Dissertation, Stanford University, 1997.
[3] H. Bray and D. Stern, Scalar curvature and harmonic one-forms on 3-manifolds with boundary, in preparation.
[4] D. Brill, On the positive definite mass of the Bondi-Weber-Wheeler time-symmetric gravitational waves, Ann. Phys., 7 (1959), 466-483.
[5] J. Corvino and D. Pollack, Scalar curvature and the Einstein constraint equations, Surveys in Geometric Analysis and Relativity, 145-188, Adv. Lect. Math. (ALM), 20, Int. Press, Somerville, MA, 2011.
[6] D. Lee, Geometric Relativity, Graduate Studies in Mathematics, Volume 201, 2019.
[7] G. Huisken and T. Ilmanen, The inverse mean curvature flow and the Riemannian Penrose inequality, J. Differential Geom., 59 (2001), 353-437.
[8] Y. Li, Ricci flow on asymptotically Euclidean manifolds, Geom. Topol., 22 (2018), 1837-1891.
[9] C. Mantoulidis and R. Schoen, On the Bartnik mass of apparent horizons, Class. Quant. Grav., 32 (2015), no. 20, 205002, 16 pp.
[10] C. Mantoulidis, Geometric variational problems in mathematical physics, Ph.D. thesis, Stanford University, 2017.
[11] P. Miao, Positive mass theorem on manifolds admitting corners along a hypersurface, Adv. Theor. Math. Phys., 6 (2002), no. 6, 1163-1182.
[12] T. Parker and C. Taubes, On Witten's proof of the positive energy theorem, Comm. Math. Phys., 84 (1982), no. 2, 223-238.
[13] R. Schoen and S.-T. Yau, On the proof of the positive mass conjecture in general relativity, Comm. Math. Phys., 65 (1979), no. 1, 45-76.
[14] R. Schoen and S.-T. Yau, The energy and the linear momentum of space-times in general relativity, Comm. Math. Phys., 79 (1981), no. 1, 47-51.
[15] R. Schoen and S.-T. Yau, Positive scalar curvature and minimal hypersurface singularities, preprint, 2017. arXiv:1704.05490
[16] E. Witten, A simple proof of the positive energy theorem, Comm. Math. Phys., 80 (1981), no. 3, 381-402.
[17] J. Lohkamp, Scalar curvature and hammocks, Math. Ann., 313 (1999), no. 3, 385-407.
[18] J. Lohkamp, The higher dimensional positive mass theorem I, preprint, 2016. arXiv:math/0608795
[19] D. Stern, Scalar curvature and harmonic maps to S¹, preprint, 2019. arXiv:1908.09754

Department of Mathematics, Duke University, Durham, NC 27708, USA
E-mail addresses: [email protected], [email protected]
Hidden Sector Monopole Dark Matter with Matter Domination

Michael L. Graesser (Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA)
Jacek K. Osiński (Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA; Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA)

15 Jul 2020 · LA-UR-20-25052 · arXiv:2007.07917 · doi:10.1007/JHEP11(2020)133

Abstract. The thermal freeze-out mechanism for relic dark matter heavier than O(10−100 TeV) requires cross-sections that violate perturbative unitarity. Yet the existence of dark matter heavier than these scales is certainly plausible from a particle physics perspective, pointing to the need for a non-thermal cosmological history for such theories. Topological dark matter is a well-motivated scenario of this kind. Here the hidden-sector dark matter can be produced in abundance through the Kibble-Zurek mechanism describing the non-equilibrium dynamics of defects produced in a second order phase transition. We revisit the original topological dark matter scenario, focusing on hidden-sector magnetic monopoles, and consider more general cosmological histories. We find that a monopole mass of order (1-10⁵) PeV is generic for the thermal histories considered here, if monopoles are to entirely reproduce the current abundance of dark matter. In particular, in a scenario involving an early era of matter domination, the monopole number density is always less than or equal to that in a pure radiation dominated equivalent provided a certain condition on critical exponents is satisfied. This results in a larger monopole mass needed to account for a fixed relic abundance in such cosmologies.

1 The U(1) can be broken at a much lower scale.
2 Consequences of kinetic mixing leading to milli-magnetically charged monopoles are explored in [23-25].
Introduction

The period between the end of inflation and the beginning of big bang nucleosynthesis (BBN) is a natural period for the production of dark matter (DM), though it is currently inaccessible to observations. The most popular dark matter candidate has traditionally been a weakly interacting massive particle (WIMP), produced in the right abundance by thermal freeze-out in the standard thermal history of radiation domination (RD) between inflation and BBN. This standard picture is now increasingly strained, with certain models excluded by indirect searches over much of the cosmologically interesting range for the WIMP mass [1,2]. Nonthermal production mechanisms, which depart from the assumptions of local thermal and chemical equilibrium of dark matter with Standard Model particles in the early Universe, and/or radiation domination, have become more widespread [3]. Spontaneous symmetry breaking in the early Universe prior to BBN provides a natural mechanism to produce interesting objects through an out-of-equilibrium process. Specifically, symmetry breaking via a second order phase transition can produce a large density of topological defects via the Kibble-Zurek mechanism (KZM) [4-6], and their density can "leave an immediate imprint on the Universe and will be critically important" [6]. While the KZM theory was developed some time ago, it is only recently that the theory has received firm experimental support, at least for describing classical second order phase transitions, as certain key predictions of the theory have been confirmed in laboratory settings. In particular, the scaling of the density of topological defects with respect to the quenching rate has been verified in a number of two- and three-dimensional materials [7-11]. What is of focus here is that the KZM is a plausible nonthermal mechanism for the production of an interesting class of dark matter candidates dubbed topological dark matter [12].
A key finding of [12] is that in this scenario, the dark matter mass must be of O(PeV) scale to obtain the correct relic abundance. Our main motivation for the present work is to explore the robustness of this finding, when other cosmological histories in the early Universe are considered. Topological dark matter is studied by [12] in the context of a standard thermal history, in which the phase transition that produces topological defects occurs during a radiation dominated era, and where the temperature of the symmetry breaking and visible sectors are assumed for simplicity to be equal. We explore this scenario in several different directions. We allow for an intervening phase of matter domination (MD) in the early Universe, during which the symmetry breaking occurs. We also allow the symmetry breaking sector to have a temperature different than that of the visible sector (VS) of Standard Model particles. For since the two sectors interact only very weakly, if at all, there is no reason to expect them to have the same temperature. Phases of early matter domination (EMD) in the period between inflation and BBN are a generic prediction of early Universe string constructions and are commonly achieved via moduli which acquire a pressureless equation of state and drive the Universe toward matter domination before their eventual decay [13][14][15][16][17][18][19]; for a review see [20]. An early matter dominated era can also easily happen when a decoupled massive particle comes to dominate the energy density for some time before decaying and subsequently reheating the Universe. We will consider an era of early matter domination to be caused by either a modulus or a decoupled particle, and allow the phase transition to occur anywhere before, during, or after this era. Our cosmological scenario actually consists of two hidden sectors: a sector driving an early matter domination phase; and a second sector with the symmetry breaking by a second order phase transition. 
Couplings between these two sectors would be interesting to explore -leading to a more complicated cosmological historybut we do not do so here, simply to avoid over complicating the narrative. While the original work on topological dark matter [12] considered the production of domain walls, strings, monopoles, or skyrmions, here we focus for simplicity on the case where the produced defects are magnetic monopoles, charged under an unbroken U (1) left over after the phase transition. 1 The abundance of magnetic monopoles charged under the U (1) of electromagnetism is constrained by observations, such as the Parker limit, to be less than that required for it to account for all of the DM [21,22]. We will therefore avoid such constraints altogether in this work by considering the simplest scenario in which the monopoles are not charged under electromagnetism, but instead charged under a hidden sector U (1), and further, that the hidden sector U (1) does not kinetically mix with electromagnetism, so that monopoles of the hidden sector do not couple to (visible sector) electromagnetic fields. 2 Our scenario begins in a radiation dominated phase after inflation, where we allow for the dominant energy component to be radiation in either the visible or hidden sector. As the Universe expands, each sector cools independently of the other, and we enter an early matter dominated phase caused by a modulus or by a heavy particle which has decoupled from either sector. As this phase proceeds, the dominating field continually decays into radiation in the visible sector, until the decay completes (at reheating) and we transition back to a radiation dominated phase of Standard Model particles, leading to the standard cosmology at the onset of BBN. 
We suppose that a secondorder phase transition occurs in the hidden sector as the temperature in the hidden sector drops below some critical temperature T (hid) C , resulting in a significant production of magnetic monopoles in the hidden sector due to the Kibble-Zurek mechanism. We allow the phase transition to occur at any time in the pre-BBN thermal history of our scenario. We a posteriori neglect any subsequent annihilations of monopoles due to their high mass (PeV and above) and consequently low number density. As mentioned above, we also do not consider any non-gravitational interactions between the sectors, other than that which provides the decay that reheats our Universe. Our main results are shown in Figures 4, 7, and 8. We generally find that hidden sector monopoles in the mass range O(1-10 5 ) PeV can be dark matter candidates, with values for the monopole mass giving rise to the current dark matter relic abundance correlated with other particle and cosmological parameters. Furthermore, a long intervening era of matter domination in the early Universe significantly increases the hidden sector monopole mass needed to obtain the current relic abundance, compared to a purely radiation-dominated history, provided that the critical exponents, defined below, satisfy 2ν ≤ 1+µ. An analytic argument for this observation is presented in Section 4, which is also confirmed by our numerical results given in subsequent sections. We begin with an overview of monopole production via the Kibble-Zurek mechanism in Section 2, followed by an overview of a cosmological history involving EMD in Section 3. In Section 4, we present analytical forms for the monopole abundance in the presence of an EMD phase, including monopole production before, during, and after EMD. We then present numerical results for the cases of EMD by a modulus or a heavy decoupled particle in Sections 5 and 6 respectively. 
Section 7 shows the monopole mass and cosmological parameters that give the correct present-day relic abundance for dark matter, using an analytic approximation that we show describes well the relic abundance obtained using numerical methods. We conclude with a brief discussion, including a summary of important caveats to our work, in Section 8. A number of detailed results are summarized in several Appendices. We include a table of notation in Appendix A. Appendix B describes the relation of a key cosmological parameter in our work, the length of the matter-dominated phase, to other defined cosmological parameters. Appendices C and D gather the usual formulae for the decoupling of a relativistic particle, and Appendix E gives the constraint on cosmological parameters from requiring that a matter-dominated phase caused by a decoupled particle lasts at all.

Brief review of Kibble-Zurek mechanism theory

We now summarize the theory of the Kibble-Zurek mechanism (KZM) describing the non-equilibrium dynamics of topological defects produced in a second-order phase transition. We refer the reader to the original references [4-6] and the recent review [26], which give several reasons for why (2.2) shown below gives the typical distance scale between topological defects. In the KZM theory, a system is assumed to be driven through a second-order phase transition at temperature T_C by a quench that, importantly, is assumed to be of a finite timescale; it is neither instantaneous nor extremely long. In a cosmological context, the quench is driven by the cosmological expansion of the Universe itself, a point we return to below. If the quench is slow enough, the system has time to quasi-equilibrate, and therefore as t → t_C the correlation length continues to grow with some critical scaling, namely

ξ(t) = ξ_0 |ε(t)|^{−ν} , (2.1)

for some critical exponent ν, where ε(t) is the reduced temperature defined in (2.7) below.
The key point is that there is a time scale t* prior to the phase transition such that for times t > t*, the correlation length exceeds the sound horizon. Subsequent to that time, the quench is fast compared to the timescale over which the system can respond. According to the KZM theory, after this cross-over time t*, fluctuations become frozen, and therefore ξ(t*) sets the scale of the topological defects, namely [4-6],

ξ(t*) = u(t*) |t* − t_C| , (2.2)

where u(t) = u_0 ε(t)^{µ−ν}, for a critical exponent µ and typical velocity u_0, is the characteristic velocity of perturbations in the system.³ The characteristic correlation time scale τ(t) is then

τ(t) ≡ ξ(t)/u(t) = τ_0 ε(t)^{−µ} ∼ ξ(t)^z , (2.3)

for a typical timescale τ_0 = ξ_0/u_0. We now arrive at the main prediction of the KZM theory. For this finite-speed quench, the frozen correlation length is predicted to be

ξ(t*) ≈ ξ_0 (τ_Q/τ_0)^{ν/(1+µ)} , (2.4)

with approximately one topological defect (monopole) produced per correlation volume ξ(t*)^{−3} [4-6]. The size of the frozen length scale is set by the physical properties in ξ_0, τ_0, and the critical exponents, and by the timescale of the quench τ_Q, set either by laboratory conditions or by the Hubble expansion rate, depending on the context. It follows that the number density of point-like defects in D = 3 spatial dimensions is⁴

n_defects ∼ ξ(t*)^{−3} ≈ τ_Q^{−3ν/(1+µ)} . (2.5)

This scaling of defect density has been experimentally confirmed in a number of two- and three-dimensional condensed matter systems, such as 3-D ferroelectric crystals [7], 2- and 3-D Bose-Einstein condensate gases [8-10], and multiferroic hexagonal manganite crystals [11]. A critical dynamical assumption leading to these predictions is that fluctuations in spatial regions separated by more than this correlation length are randomly oriented and, subsequent to the above cross-over time, independent of each other.
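The scaling relations (2.4)-(2.5) are simple enough to encode directly. The following Python sketch (illustrative only; all microscopic quantities are in arbitrary units, and the function names are ours) collects the freeze-out exponent ν/(1+µ) and the resulting defect density:

```python
# Illustrative sketch of the KZM scaling relations (2.4)-(2.5).
# xi0 and tau0 are the microscopic correlation length and time,
# nu and mu the critical exponents, tau_Q the quench timescale.

def freeze_exponent(nu, mu):
    """Exponent in xi(t*) ~ xi0 (tau_Q/tau0)^(nu/(1+mu)), eq. (2.4)."""
    return nu / (1.0 + mu)

def frozen_length(xi0, tau0, tau_Q, nu, mu):
    """Frozen correlation length at the cross-over time t*, eq. (2.4)."""
    return xi0 * (tau_Q / tau0) ** freeze_exponent(nu, mu)

def defect_density(xi0, tau0, tau_Q, nu, mu):
    """Roughly one defect per correlation volume, eq. (2.5)."""
    return frozen_length(xi0, tau0, tau_Q, nu, mu) ** -3
```

Slower quenches (larger τ_Q) freeze out a longer correlation length and hence fewer defects, n ∝ τ_Q^{−3ν/(1+µ)}, which is the scaling tested in the condensed-matter experiments cited above.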
While this is a reasonable expectation for a classical phase transition, Zurek raises a caveat for systems such as the normal-to-superfluid transition in ⁴He, in which quantum mechanical effects are all-important [4]. Namely, correlations between regions separated by several correlation lengths may only appear to be random and independent, but could in fact be secretly strongly correlated due to conservation laws (for the vortices studied in [4], notably angular momentum), in analogy to spin correlations in EPR experiments. Should this situation occur, the predicted topological defect number density would be smaller, and these estimates for the cosmological relic density would need to be revisited [4]. But recent experimental results do suggest that, at least in the case of vortex formation, defects are indeed random and independent, reaffirming the KZM expectations. For the KZM theory also makes some statements about this randomness, as it specifies how the net winding number W of vortices in a fixed spatial region of circumference C should scale with the correlation length. Specifically, the typical absolute value |W| and dispersion ⟨W²⟩ are both predicted to have the same scaling at large |W| ≫ 1, namely ⟨W²⟩ ∼ |W| ∼ C/ξ, whereas at small winding number the KZM predicts different scaling laws for the absolute value and dispersion of W [4, 27]. In both limits the KZM predictions for these two quantities have been dramatically confirmed in 3-dimensional ferroelectric crystals [7]. In a laboratory setting, in the non-relativistic mean field approximation (i.e., Landau-Ginzburg theory), the potential part of the free energy of a system described by an order parameter φ is approximated by the Landau-Ginzburg potential,

V(φ) = (T − T_C) m |φ|² + (1/2) λ |φ|⁴ , (2.6)

with the time evolution of φ approximately described by the Gross-Pitaevskii equation, which is first order in time. This leads to the critical exponents µ = 1 and ν = 1/2, predicting ξ(t*) ≈ ξ_0 (τ_Q/τ_0)^{1/4}.
But in a relativistic quantum field theory context the scaling laws are different, because the equation of motion for φ is second order in time. For example, in a cosmological context the equation of motion for φ leads to the critical exponents µ = ν = 1/2. Here then [12], ξ(t*) ≈ ξ_0 (τ_Q/τ_0)^{1/3} and n_defects ∼ τ_Q^{−1} [6]. As noted above, when the phase transition occurs in an expanding Universe, the quench time can be re-expressed in terms of the Hubble rate at the critical time as H_C^{−1}. To see that, first note that the quench is characterized by

ε(t) ≡ (T(t) − T_C)/T_C , (2.7)

where T(t) is the time-dependent temperature of the system. Close to the time of the phase transition t_C, this quantity scales linearly with time,

ε(t) = (t_C − t)/τ_Q , (2.8)

which also defines the quenching time-scale τ_Q. For example, in a cosmological context where the scale factor a increases as a(t) = (t/t_C)^p, with p = 2/3 (1/2) for MD (RD), then with t ≡ t_C − ∆t, |∆t| ≪ t_C, one has ε(t) = p ∆t/t_C and τ_Q = t_C/p, or in other words,

τ_Q = H^{−1}(t_C) . (2.9)

That is, the characteristic time-scale τ_Q of the quench is always given by the inverse of the Hubble parameter at the time of the phase transition, generalizing from the pure RD scenario given in [6] to more general equations of state. We take the initial correlation sizes to be set by the mass m_σ of the σ particle, which for a pure scalar φ⁴ theory at weak coupling is given by m_σ ≃ √λ T_C/4 [28]. That is, ξ_0 ≈ τ_0 ∼ m_σ^{−1} ∼ (T_C √λ)^{−1} [12]. Although µ = ν = 1/2 is the prediction for the critical exponents in the approximation that the second-order phase transition is described by a weakly coupled scalar field, for our analysis we consider more general values for the critical exponents.⁵ In terms of cosmological quantities, the frozen correlation length is then

ξ(t*) ≈ m_σ^{−1} (m_σ/H_C)^{ν/(1+µ)} = (1/(T_C √λ)) (T_C √λ/H_C)^{ν/(1+µ)} , (2.10)

regardless of the type of dominant energy density (matter or radiation), with the understanding that the temperature dependence of the Hubble parameter when the system is at the critical temperature, H_C, does depend on the form of the dominant energy density component. After the phase transition is complete, the monopole number density is n_M ≈ ξ(t*)^{−3}, and the comoving number density is fixed, as their abundance simply redshifts through the remaining history of the Universe. We will neglect any subsequent annihilations of monopoles because the masses needed to account for the entire current DM abundance will turn out to be quite high, with correspondingly low number densities.⁶

For a general second-order phase transition, quantum or classical, in the KZM theory the frozen correlation length setting the density of topological defects depends only on the critical temperature of the phase transition, the typical timescale of the quench, and the critical exponents. For a classical Landau-Ginzburg second-order phase transition, however, the mass of the defect (here the monopole mass m_M) is not independent of the critical temperature. For a 't Hooft-Polyakov monopole, m_M = h ⟨φ⟩ ∼ h T_C, with h ∼ 2π/e the magnetic coupling, recalling that ⟨φ⟩ ∼ T_C. Thus for a classical phase transition, the monopole mass and critical temperature are parametrically at the same scale. Throughout this work we will assume the monopoles are produced in the early Universe by a classical second-order phase transition, so the implied relation between the critical temperature and monopole mass is an important caveat to many of our results. But such a mass-temperature (m − T) relation is not expected to be true in general. On the contrary, one expects the monopole mass and critical temperature to be unrelated.
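As a concreteness check, eq. (2.10) is straightforward to evaluate numerically. The sketch below (Python, GeV units; the PeV-scale parameter values in the test are illustrative choices of ours, not values taken from the text) returns the frozen correlation length and the corresponding monopole density n_M ≈ ξ(t*)^{−3} at production:

```python
# Sketch of eq. (2.10): frozen correlation length and monopole number
# density at production, in GeV units. Parameter values are illustrative.

def xi_star(T_C, lam, H_C, nu=0.5, mu=0.5):
    """xi(t*) = (T_C sqrt(lam))^-1 (T_C sqrt(lam)/H_C)^(nu/(1+mu)), in GeV^-1."""
    m_sigma = T_C * lam ** 0.5            # xi_0 ~ tau_0 ~ 1/m_sigma
    return (m_sigma / H_C) ** (nu / (1.0 + mu)) / m_sigma

def n_monopole(T_C, lam, H_C, nu=0.5, mu=0.5):
    """About one monopole per correlation volume: n_M ~ xi(t*)^-3, in GeV^3."""
    return xi_star(T_C, lam, H_C, nu, mu) ** -3
```

Because H_C ≪ T_C in the early Universe, ξ(t*) is much longer than the microscopic length 1/m_σ, so n_M at production is far below T_C³; for the Landau-Ginzburg exponents µ = ν = 1/2 the density reduces to n_M ≈ m_σ² H_C.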
The N = 2 Seiberg-Witten theory [32, 33] is a prominent example of this kind, where near certain points on the moduli space the low-energy theory contains nearly massless composite particles charged under a magnetic U(1). Here one would like to know whether the theory ends up near these points as it is cooled through the phase transition, and what the order of the transition is. For the former question, the answer is affirmative, at least in the pure N = 2 SU(2) theory [34]. The latter remains an open question. Because of this expectation, we will indicate which of our results are independent of any assumption about an m − T relation. The most important of these is the ratio of the monopole number density to photon entropy density, such as (4.2), (4.4), and (4.5) given below. In the low-density limit where monopole annihilations are negligible, these depend only on the critical temperature, not the monopole mass. As previously mentioned, we will also vary the critical exponents µ and ν away from the Landau-Ginzburg value of 1/2, as a guide to future work.

Summary of the cosmological history with an early matter-dominated era

In order to proceed, we must address the relationship between the Hubble expansion rate and the temperatures of the different radiation components of the Universe. In this section we therefore introduce the general expansion history we will be considering, define terminology, and obtain relations between the Hubble parameter and key parameters during the different eras prior to reheating. First, we begin with radiation domination (RD) by either the hidden or visible sector (or any combination) some time after inflation, with other energy densities comparatively negligible.
In this era, the Hubble expansion rate is given by

H² = (ρ_r^(vis) + ρ_r^(hid))/(3 M_P²) = (π²/90) g_*^(hid) (1 + f) (T^(hid))⁴/M_P² , (3.1)

where the second equality implicitly defines the factor f ≡ ρ_r^(vis)/ρ_r^(hid) as the ratio of the radiation energy densities of the visible and hidden sectors. Also, T^(hid) is the temperature of the HS, g_*^(hid) is the number of relativistic degrees of freedom in the HS at temperature T^(hid), and M_P ≈ 2.4 × 10^18 GeV is the reduced Planck mass. In this period, the factor (1 + f) is well approximated by its initial value (1 + f_i) regardless of the distribution of initial radiation among the two sectors, and we will make this substitution when using (3.1) below. We consider the visible and hidden sectors to have independent temperatures, each with their own g_* factors depending on the specific particle content (Standard Model for the visible sector), and we could have equivalently expressed (3.1) in terms of visible sector quantities. The g_* factors of course depend on the temperature of their respective sector, but we will treat g_*^(hid) as roughly constant at high temperatures in order to avoid overly specifying the details of the HS. We achieve early matter domination (EMD) through the presence of a scalar modulus, or by the decoupling of a heavy particle from either the hidden or visible sector during this initial RD phase. In both cases we refer to the modulus or the heavy particle as Φ, and based on the context there should not be any confusion. We assume that Φ couples to lighter particles through higher-dimension operators suppressed by the Planck scale, with a decay rate

Γ_Φ ∼ (α²/2π) (m_Φ³/M_P²) , (3.2)

where m_Φ is the Φ mass. We have also included a possible loop factor α in the case that Φ decay occurs predominantly through a loop, but we will set α = 1 throughout unless otherwise noted.
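In code, the background relations (3.1) and (3.2) read as follows (a sketch in Python, GeV units; M_P is the reduced Planck mass, α defaults to 1 as in the text, and the function names are ours):

```python
import math

M_P = 2.4e18  # reduced Planck mass in GeV

def hubble_RD(T_hid, g_hid, f):
    """Eq. (3.1): Hubble rate in the early RD era, from the hidden-sector
    temperature T_hid, its d.o.f. g_hid, and f = rho_r^vis/rho_r^hid (GeV)."""
    return math.sqrt(math.pi**2 / 90.0 * g_hid * (1.0 + f)) * T_hid**2 / M_P

def gamma_Phi(m_Phi, alpha=1.0):
    """Eq. (3.2): Planck-suppressed decay rate of Phi (GeV)."""
    return alpha**2 / (2.0 * math.pi) * m_Phi**3 / M_P**2
```

Note the scalings built in here: H ∝ T² during radiation domination, and Γ_Φ ∝ m_Φ³, so heavier moduli decay (and thus reheat) earlier.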
The decay is complete when H ≈ H_RH ≡ Γ_Φ, which marks the approximate time of reheating, and we avoid having a significant amount of left-over hidden radiation by requiring Φ to decay predominantly to Standard Model particles,

H_RH² = (π²/90) g_*RH^(vis) (1 + 1/f_RH) (T_RH^(vis))⁴/M_P² , (3.3)

where T_RH^(vis) is the visible sector temperature at reheating, and g_*RH^(vis) is the number of relativistic degrees of freedom in the visible sector at this temperature. In order to preserve standard Big Bang Nucleosynthesis (BBN), the visible sector reheat temperature must be larger than O(10 MeV). The ratio of the visible sector radiation energy density to that of the HS at reheating, denoted by f_RH, depends on the duration of the EMD phase as well as the initial factor f_i, but is typically large due to our visible sector reheating requirement, and thus always satisfies f_RH > 1 and f_RH > f_i (this statement is demonstrated in Appendix B). This conclusion, together with our assumption that Φ predominantly decays to SM particles, ensures that the temperature of the HS at reheating, T_RH^(hid), is correspondingly always smaller than that of the visible sector. We also point out that this ratio remains fixed after reheating due to the absence of any further decays. From (3.2) and (3.3), we additionally see that a given choice for the visible sector reheat temperature and α determines a corresponding Φ mass. In order to have a well-defined EMD phase, we assume the energy density of Φ is large enough to dominate well before reheating. During EMD, the scaling of the Hubble rate with the visible sector temperature is altered from the typical MD redshift relation because the visible sector is fed by the decay of Φ; however, from entropy conservation, the scaling of H with the HS temperature remains unaffected: H² ∝ h_*^(hid) (T^(hid))³.
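The statement that T_RH^(vis) and α fix the Φ mass follows from inverting Γ_Φ = H_RH with (3.2) and (3.3). A sketch (Python, GeV units; the 10 MeV example with g_*RH = 10.75 and f_RH = 10³ is an illustrative choice of ours):

```python
import math

M_P = 2.4e18  # reduced Planck mass in GeV

def m_Phi_from_TRH(T_RH, g_RH, f_RH, alpha=1.0):
    """Phi mass (GeV) giving visible-sector reheat temperature T_RH,
    obtained from Gamma_Phi = H_RH with eqs. (3.2)-(3.3)."""
    H_RH = math.sqrt(math.pi**2 / 90.0 * g_RH * (1.0 + 1.0 / f_RH)) * T_RH**2 / M_P
    # Gamma_Phi = alpha^2 m^3 / (2 pi M_P^2) = H_RH  =>  solve for m
    return (2.0 * math.pi * H_RH * M_P**2 / alpha**2) ** (1.0 / 3.0)

# Reheating at the ~10 MeV BBN floor requires m_Phi of order 100 TeV.
m_floor = m_Phi_from_TRH(0.01, 10.75, 1e3)
```

Since H_RH ∝ T_RH², the required mass scales as m_Φ ∝ T_RH^{2/3} (at fixed α), so even a reheat temperature near the BBN floor already points to a rather heavy Φ.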
Based on the initial energy density of VS radiation, there can be a phase of ordinary redshift for the VS temperature even during EMD, but once the effect of the decay wins over this dilution, the relation becomes (see (20) of [35] for a derivation)

H = (π√10/12) (g_*^(vis)/√(g_*RH^(vis))) (T^(vis))⁴/((T_RH^(vis))² M_P) . (3.4)

This relation always holds just before reheating, but may not set in until deep within the EMD phase if the initial VS radiation energy density is large.⁷ At the end of the EMD phase, once reheating completes, we enter the RD era with the Hubble rate given by

H² = (π²/90) g_*^(vis) (1 + 1/f_RH) (T^(vis))⁴/M_P² , (3.5)

where the factor f_RH is large, such that the visible sector is dominant, thus recovering the standard thermal history leading up to BBN.

Monopole production with an era of early matter domination

Recall that we are interested in producing monopoles during a second-order phase transition occurring in a hidden sector, so the critical temperature appearing in (2.10) refers to the temperature of the hidden sector at the critical time. In this section we address monopole production in the context of the thermal history presented in the previous section. The effects of EMD on the monopole abundance can be understood regardless of the mechanism for establishing MD in this early period, and we obtain analytical expressions below that do not depend on the identity of the field Φ. In addition to the start time of EMD, what matters is that the dominant energy density component decays to visible sector radiation at a rate Γ_Φ, thus setting the end time of EMD. The overall effect is to slow the redshift of visible sector radiation relative to the HS, such that only the visible sector is dominant after EMD even if it was not initially.
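The distinctive feature of (3.4) is the modified scaling H ∝ (T^(vis))⁴ during the decay-fed part of EMD, in contrast to the RD relation H ∝ T². A sketch (Python, GeV units; the overall prefactor follows the form quoted from [35], and the numerical inputs are illustrative):

```python
import math

M_P = 2.4e18  # reduced Planck mass in GeV

def hubble_EMD(T_vis, T_RH, g_vis, g_RH):
    """Eq. (3.4): Hubble rate during EMD once the Phi decay products dominate
    the visible-sector radiation (GeV). Note H ~ T^4, not the RD H ~ T^2."""
    return (math.pi * math.sqrt(10.0) / 12.0) * (g_vis / math.sqrt(g_RH)) \
        * T_vis**4 / (T_RH**2 * M_P)
```

Because H falls like T⁴ here, the visible sector cools much more slowly (per e-fold of expansion) than in RD, which is the "slowed redshift of visible sector radiation" referred to below.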
Because we only consider HS magnetic monopoles, this offset in the visible sector and HS temperatures generally results in a lower number density of monopoles of a given mass, where the magnitude of the offset is determined by the duration of EMD and the initial abundances of visible and hidden radiation. We label the start of EMD by H = H_MD, with visible and HS temperatures T_MD^(vis) and T_MD^(hid), respectively, and the end of the EMD phase occurs when H ≈ Γ_Φ. Recall that the visible sector reheat temperature, which we restrict to be larger than O(10 MeV) such that reheating occurs before BBN, is the primary parameter that determines the end of EMD.

Case I: phase transition occurs before EMD

We start with the case where the HS phase transition occurs in the RD period before EMD, resulting in a frozen monopole number density that is redshifted through the remainder of the RD phase as well as the full EMD period. This results in considerable dilution and a need for higher monopole masses in order to maintain a fixed contribution to the energy density of the Universe. Using (2.10) and recalling that the number density of monopoles produced in the phase transition is approximately one per correlation volume, we have (see Appendix A for a table of notation)

(n_M)_RH^(before) = ξ(t*)^{−3} (T_MD^(hid)/T_C^(hid))³ (a_MD/a_RH)³ = ξ(t*)^{−3} (H_MD/H_C)^{3/2} (Γ_Φ/H_MD)² , (4.1)

where the first factor in parentheses on the right side accounts for the redshift of the monopole number density from the critical time to the start of EMD, and the second factor gives the redshift from the start of EMD to reheating. We have also defined a_MD and a_RH to be the scale factors at the onset of matter domination and at reheating, respectively.
At this point we do not need to redshift any further, and can obtain a fixed comoving abundance by normalizing by the visible sector entropy density at reheating, as both the number density and the entropy density dilute as the cube of the scale factor once the significant entropy production from reheating stops. This leads to

(n_M/s^(vis))_RH^(before) = ξ(t*)^{−3} (H_MD/H_C)^{3/2} (Γ_Φ/H_MD)² / (2π² h_*RH^(vis) (T_RH^(vis))³/45)
= [45 (T_C^(hid) √λ)^{3 − 3ν/(1+µ)} H_C^{3ν/(1+µ)} / (2π² h_*RH^(vis) (T_RH^(vis))³)] · Γ_Φ²/(H_C^{3/2} H_MD^{1/2}) . (4.2)

The factor h_*^(vis) tracks the visible sector relativistic degrees of freedom for entropy and is nearly equal to g_*^(vis) for the high temperatures in our scenario as well as the low temperature today [36, 38] (it is evaluated at reheating in the expression above, as indicated by the subscript). Note that the Hubble rate at the critical time is given by (3.1).

Case II: phase transition occurs during EMD

If the phase transition occurs during the EMD phase, the frozen monopole number density only redshifts through the remaining duration of EMD, and we have

(n_M)_RH^(during) = ξ(t*)^{−3} (a_C/a_RH)³ = ξ(t*)^{−3} (Γ_Φ/H_C)² . (4.3)

Again normalizing to the visible sector entropy density at reheating, one has

(n_M/s^(vis))_RH^(during) = [45 (T_C^(hid) √λ)^{3 − 3ν/(1+µ)} H_C^{3ν/(1+µ)} / (2π² h_*RH^(vis) (T_RH^(vis))³)] (Γ_Φ/H_C)² . (4.4)

The dependence of H_C on the HS temperature is that of ordinary MD redshift, while the relation to the visible sector temperature is more complicated, for it depends on how much visible sector radiation was present at the onset of EMD. If the visible sector energy density at H = H_MD is greater than the subsequent contribution from the decay of Φ, then to evaluate H_C one will need to include the effect of a period of ordinary MD redshift for the visible sector temperature as well. Once the decay contribution takes over, well within the EMD phase, we have the relation (3.4).
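The frozen abundances (4.2) and (4.4) can be checked against each other numerically: at the boundary H_C = H_MD (a transition exactly at the onset of EMD) the two expressions must coincide. A sketch (Python, GeV units; parameter values in the test are illustrative, and the helper names are ours):

```python
import math

def xi_inv3(T_C, lam, H_C, nu=0.5, mu=0.5):
    """xi(t*)^-3 from eq. (2.10), in GeV^3."""
    e = 3.0 * nu / (1.0 + mu)
    return (T_C * lam**0.5) ** (3.0 - e) * H_C ** e

def s_vis(h_star, T):
    """Visible-sector entropy density, s = 2 pi^2 h_* T^3 / 45 (GeV^3)."""
    return 2.0 * math.pi**2 / 45.0 * h_star * T**3

def nM_s_before(T_C, lam, H_C, H_MD, Gamma, h_RH, T_RH, nu=0.5, mu=0.5):
    """Eq. (4.2): frozen n_M/s for a phase transition before EMD."""
    return xi_inv3(T_C, lam, H_C, nu, mu) * (H_MD / H_C) ** 1.5 \
        * (Gamma / H_MD) ** 2 / s_vis(h_RH, T_RH)

def nM_s_during(T_C, lam, H_C, Gamma, h_RH, T_RH, nu=0.5, mu=0.5):
    """Eq. (4.4): frozen n_M/s for a phase transition during EMD."""
    return xi_inv3(T_C, lam, H_C, nu, mu) * (Gamma / H_C) ** 2 / s_vis(h_RH, T_RH)
```

At H_C = H_MD both reduce to ξ(t*)^{−3} (Γ_Φ/H_MD)²/s, as expected of the Case I/Case II boundary, and lowering Γ_Φ (a longer EMD tail) dilutes the frozen abundance.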
We note that this modified scaling can begin much earlier, even before EMD, if the initial visible sector radiation energy density is small.

Case III: phase transition occurs after EMD

Finally, if the phase transition occurs in the RD period after reheating but still before BBN, so as to leave the later evolution of the Universe unchanged, the abundance can be evaluated directly at the critical time, without need of redshifting:

(n_M/s^(vis))_C^(after) = ξ(t*)^{−3} / (2π² h_*C^(vis) (T_C^(vis))³/45) = 45 (T_C^(hid) √λ)^{3 − 3ν/(1+µ)} H_C^{3ν/(1+µ)} / (2π² h_*C^(vis) (T_C^(vis))³) . (4.5)

This expression is also valid for a thermal history that does not involve EMD at all, where the HS radiation energy density is lower than or equal to that of the visible sector by a constant factor, as both energy densities simply redshift with time. The Hubble rate at the critical time is given by (3.5) in terms of visible sector quantities, but is easily related to the corresponding HS quantities by multiplying by the square root of the constant factor. Finally, we note that all of the results in these three subsections are independent of any possible relation between the monopole mass and the critical temperature.

Monopole production: analytic approximation at boundaries

In this subsection we obtain analytical expressions to better understand the effect of EMD in more detail. The three cases of monopole production described above are separated by production at the start and end of EMD, and we can easily obtain expressions for the monopole abundance corresponding to these boundaries. For production at the start of EMD, the HS temperature at the critical point is T_C^(hid) = T_MD^(hid), with corresponding H_C = H_MD. From (3.1) and (4.4), we obtain the frozen abundance of monopoles at reheating:

(n_M/s^(vis))_RH^(start) = 45 λ^{3/2 − 3ν/(2(1+µ))} [(π²/90) g_*MD^(hid) (1 + f_i)]^{3ν/(2(1+µ)) − 1} (T_C^(hid))^{3ν/(1+µ) − 1} Γ_Φ² / (2π² h_*RH^(vis) (T_RH^(vis))³ M_P^{3ν/(1+µ) − 2}) .
(4.6)

Aside from the parameters of the phase transition, the final abundance is determined by the visible sector reheat temperature, the initial ratio of visible sector to HS radiation, and the monopole mass. Monopole production at the end of EMD corresponds to a HS critical temperature of T_C^(hid) = T_RH^(hid), with H_C = H_RH = Γ_Φ. This results in a frozen monopole abundance of

(n_M/s^(vis))_RH^(end) = 45 (T_C^(hid) √λ)^{3 − 3ν/(1+µ)} Γ_Φ^{3ν/(1+µ)} / (2π² h_*RH^(vis) (T_RH^(vis))³) , (4.7)

with the implicit relation between Γ_Φ and T_RH^(vis) given by (3.3). Note that this expression does not depend on the initial ratio of radiation energy densities, as it only involves the time of reheating. Requiring EMD to start before reheating, these two expressions for production at the boundaries of EMD significantly constrain the allowed parameter space. For a realistic scenario, even the shortest EMD period will have a finite duration such that EMD is well defined, ensuring that we never quite access the limiting case where the start and end of EMD are coincident; this case, rather, corresponds to the absence of EMD altogether.

Present-day hidden sector monopole abundance

We will now obtain the present-day relic abundance of monopoles. In the three main cases of monopole production (before, during, or after EMD), as well as the two boundary cases of production at the start and end of EMD, the parameters µ, ν, and λ are determined by the details of the phase transition, as is the ratio x_M ≡ m_M/T_C^(hid). The ratio x_M is the magnetic coupling, and typically has a value of O(10) [12]; we will assume x_M = 50 in our numerical results below.
The current abundance of monopoles, expressed as a fractional energy density Ω_M h², is related to the frozen abundance provided in the previous sections by

Ω_M h² = Ω_γ h² (2 h_*0^(vis)/3) (m_M/T_0^(vis)) (n_M/s^(vis))_0 = Ω_γ h² (2 h_*0^(vis)/3) (m_M/T_0^(vis)) (n_M/s^(vis))_RH/C^(EMD) , (4.8)

where Ω_γ h² = 2.47 × 10^{−5} corresponds to the current photon energy density, ρ_γ,0^(vis) = 2π² (T_0^(vis))⁴/30. Also, h_*0^(vis) = 43/11 ≈ 3.91 is the present-day total entropy density prefactor, assuming three massless species of neutrinos. The subscript '0' labels the current era, and the final term labeled by '(EMD)' refers to any one of the five above cases. The subscript 'RH' on the final term means this quantity is evaluated at reheating if the phase transition occurs before reheating, whereas in the circumstance that the phase transition occurs after reheating, 'C' means the quantity is simply evaluated at the time of the phase transition. In order for monopoles to constitute all of dark matter, the value of Ω_M h² must reach the observed value of 0.12 [39]. For comparison with our numerical results in subsequent sections, analytical expressions for Ω_M h² can be obtained in the three main periods of our scenario by noting that

H_C² ≈ (π²/90) g_*^(hid) (1 + f_i) (T_C^(hid))⁴/M_P²   (case I: before)
H_C² ≈ [(π²/90) g_*^(hid) (1 + f_i)]^{3/4} H_MD^{1/2} (T_C^(hid))³/M_P^{3/2}   (case II: during)
H_C² ≈ (π²/90) g_*^(hid) (1 + f_RH) (T_C^(hid))⁴/M_P²   (case III: after)
(4.9)

where the cases refer to monopole production before, during, or after the EMD phase. In the period before EMD, we have the RD relation (3.1), while in the period after EMD we have the same functional form, but with a different constant factor offsetting the visible sector and HS radiation energy densities.
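Converting a frozen n_M/s^(vis) into Ω_M h² via (4.8) is a one-liner; the sketch below uses the quoted Ω_γ h² and h_*0 values, together with a present photon temperature T_0 ≈ 2.35 × 10^{−13} GeV (≈ 2.7 K), which is our numerical input:

```python
def omega_M_h2(m_M, nM_over_s):
    """Eq. (4.8): present monopole relic density from the frozen ratio
    n_M/s^(vis). Masses and temperatures are in GeV."""
    Omega_gamma_h2 = 2.47e-5   # current photon energy density
    h_star_0 = 3.91            # 43/11, three massless neutrino species
    T_0 = 2.35e-13             # present photon temperature in GeV
    return Omega_gamma_h2 * (2.0 * h_star_0 / 3.0) * (m_M / T_0) * nM_over_s
```

For a PeV monopole, reproducing Ω_M h² = 0.12 requires n_M/s ∼ 4 × 10^{−16}, which illustrates just how dilute the monopole gas must be and why we may safely neglect annihilations.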
The expression for H_C during EMD is obtained by using entropy conservation in the hidden-sector radiation, together with redshifting during the EMD era (T^(hid) ∝ H^{2/3}) from the start of EMD to the time when the temperature of the hidden sector reaches T_C^(hid). Next, using (4.2), (4.4), (4.5), (4.8), and (4.9), one obtains analytical estimates for the monopole abundance produced in the three periods by direct substitution:⁸

(Ω_M h²)^(before)/(Ω_γ h²) ≈ [15 λ^{3/4} h_*0^(vis) Γ_Φ² m_M^{5/2} / (π² x_M^{3/2} h_*RH^(vis) (T_RH^(vis))³ T_0^(vis) H_MD^{1/2})] · [π² g_*^(hid) (1 + f_i) m_M² / (90 λ x_M² M_P²)]^{3ν/(2(1+µ)) − 3/4} , (4.10)

(Ω_M h²)^(during)/(Ω_γ h²) ≈ [15 λ^{1/2} h_*0^(vis) Γ_Φ² m_M² / (π² x_M h_*RH^(vis) (T_RH^(vis))³ T_0^(vis))] · [π² g_*^(hid) (1 + f_i) H_MD^{2/3} m_M^{4/3} / (90 λ^{4/3} x_M^{4/3} M_P²)]^{9ν/(8(1+µ)) − 3/4} , (4.11)

(Ω_M h²)^(after)/(Ω_γ h²) ≈ [15 λ^{3/4} h_*0^(vis) Γ_Φ^{3/2} m_M^{5/2} / (π² x_M^{3/2} h_*RH^(vis) (T_RH^(vis))³ T_0^(vis))] · [π² g_*^(hid) (1 + f_RH) m_M² / (90 λ x_M² M_P²)]^{3ν/(2(1+µ)) − 3/4} . (4.12)

In the model-independent discussion of this section, the Hubble rate at the onset of EMD has been an independent parameter. In Sections 5 and 6 below, where we address two examples for establishing a period of EMD, we provide expressions for H_MD in terms of the underlying model parameters. It is useful to extract the functional dependence of the energy density of monopoles on the monopole mass, produced during any of the three periods before, during, or after EMD. From (4.10)-(4.12) above, we have

Ω_M h² ∝ m_M · m_M^{3ν/(1+µ)}   (RD)
Ω_M h² ∝ m_M · m_M^{3ν/(2(1+µ))}   (EMD)
(4.13)

Here we have factored the dependence of the energy density on the mass into an explicit factor arising from the mass itself, and an implicit factor due to the number density. The RD case applies to monopole production both before and after EMD, and we have again assumed a constant factor, x_M, between the monopole mass and T_C^(hid). Note that in general, the type of cosmology in which the phase transition occurs (here either an EMD or RD era) affects the monopole energy and number densities through a different power-law dependence on the critical exponents. Before moving on to consider specific scenarios for establishing EMD, we can see that, depending on the relative sizes of the critical exponents, the presence of an intervening EMD phase in the period before BBN can push the preferred monopole mass for DM higher than in a purely RD equivalent, for the two prefactors in (4.13) are not the same in each case. Fixing the phase transition parameters (µ, ν, λ, and x_M) as well as the monopole mass m_M, we must first identify the equivalent RD scenario, which comes down to specifying the constant factor f^(RD) between the VS and HS radiation energy densities in the RD scenario. We obtain this by decreasing the duration of EMD until we arrive at the limiting RD scenario to use for comparison. If EMD is preceded by a period of RD by the VS, the limiting scenario is one which preserves the initial ratio of VS-to-HS radiation: f^(RD) = f_i. However, if HS radiation is dominant before EMD, the limiting case is one of f^(RD) = 1, because we wish to avoid RD by the HS at the onset of BBN. In short,

f^(RD) ≡ max(1, f_i) , (4.14)

and consequently, f^(RD) ≥ f_i. To proceed, for all three cases we define the ratio of the scale factors at reheating and at the onset of the EMD phase to be

e_f ≡ a_RH/a_MD ≥ 1 , (4.15)

which we show in Appendix B (see (B.6)) to be equivalent to f_RH^(EMD) ≃ (1 + f_i) e_f, and which, because of (B.8), is always larger than f^(RD) by the factor e_f > 1, so long as Φ preferentially decays to the VS:

f_RH^(EMD) ≳ f^(RD) e_f . (4.16)

The factor e_f is fixed for a given EMD phase, regardless of the value of f_i or the timing of the phase transition.
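The mass scaling in (4.13) is compactly summarized by the exponent of m_M in Ω_M h²; a minimal sketch (the function name is ours):

```python
def mass_exponent(nu, mu, during_EMD=False):
    """Power of m_M in Omega_M h^2, eq. (4.13): 1 + 3nu/(1+mu) for a phase
    transition in RD (before or after EMD), 1 + 3nu/(2(1+mu)) during EMD."""
    denom = (2.0 if during_EMD else 1.0) * (1.0 + mu)
    return 1.0 + 3.0 * nu / denom
```

For the Landau-Ginzburg values µ = ν = 1/2 this gives Ω_M h² ∝ m_M² in RD and ∝ m_M^{3/2} during EMD, so whenever the number density is suppressed, a fixed relic density pushes the required monopole mass up.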
Using (4.2)-(4.5), and recalling that the HS temperature redshifts as T^(hid) ∝ a^{−1} in all periods of our scenarios, be they EMD or RD, we arrive at the ratio of the current monopole abundance between an EMD and a pure RD scenario:

Ω_M^(EMD)/Ω_M^(RD) = e_f^{−3/4} [(1 + f_C^(EMD))/(1 + f^(RD))]^{3ν/(2(1+µ))}   (case I: before)
Ω_M^(EMD)/Ω_M^(RD) = e_f^{−3/4} [(1 + f_RH^(EMD))/(1 + f^(RD))]^{3ν/(2(1+µ))} [(T_RH^(hid))^(EMD)/T_C^(hid)]^{3ν/(2(1+µ))}   (case II: during)
Ω_M^(EMD)/Ω_M^(RD) = e_f^{−3/4} [(1 + f_C^(EMD))/(1 + f^(RD))]^{3ν/(2(1+µ))} [(h_*C^(vis))^(RD)/(h_*C^(vis))^(EMD)] [(g_*C^(vis))^(EMD)/(g_*C^(vis))^(RD)]^{3/4}   (case III: after)
(4.17)

As with the previous expressions (4.2), (4.4), and (4.5) given above for the ratio of monopole number density to visible sector entropy density, in deriving these equations we have not made use of any relationship between the monopole mass and the temperature of the phase transition. In all three cases, the products involving the f's and the critical exponents are the ratios of the monopole number densities produced at the critical time between the EMD and RD scenarios. We note that since T_C^(hid) and λ appearing in the correlation length (2.10) are fixed between the two scenarios, this ratio is simply given by a ratio of the Hubble parameters H_C. In the first two cases, we normalize the monopole number densities by the VS entropy density at the time of reheating (when the VS temperature is equal to the reheat temperature), accounting for the redshift factors, while in the third case, because monopole production occurs in RD after EMD, there is no need for redshifting, and we normalize by the VS entropy densities at the critical time.
The factor of e_f^{−3/4} in the first two cases is the ratio of the redshift factors from the time of monopole production to the time when T^(vis) = T_RH^(vis) between the EMD and RD scenarios, respectively, while in the third case it, along with the terms involving the relativistic degrees of freedom, comes from the ratio of entropy densities at the critical time between the two scenarios. Note that the relativistic degrees of freedom in the VS can be different at the critical time between the EMD and RD scenarios, because it is the HS critical temperature, not the visible one, that is the same across the scenarios. We note that in the limit of no EMD phase, the above expressions for the three cases smoothly go over to Ω_M^(EMD) = Ω_M^(RD). For cases I and III this statement is readily apparent, since in this limit e_f → 1, f^(EMD) → f^(RD), and ((g, h)_*C^(vis))^(EMD) → ((g, h)_*C^(vis))^(RD). To see that for case II requires one additional remark. By definition of this scenario, (T_RH^(hid))^(EMD) ≤ T_C^(hid) ≤ (T_MD^(hid))^(EMD), so as the EMD phase disappears, (T_RH^(hid))^(EMD) → T_C^(hid), and also (T_MD^(hid))^(EMD) → T_C^(hid).

• For case I, of monopole production before EMD, the right side of (4.17) is always less than one. To see that, first focus on the ratio of f factors appearing in (4.17). Recall that f_C^(EMD) = f_i ≤ f^(RD) = max(1, f_i), and therefore

1/2 ≤ (1 + f_C^(EMD))/(1 + f^(RD)) ≤ 1 . (4.18)

Thus the number density of monopoles just after their production is smaller than, or at most equal to, the number density in a RD equivalent scenario. Furthermore, the factor e_f > 1, and therefore the number density experiences more redshift due to the EMD phase than the RD equivalent number density, resulting in a smaller frozen abundance.
For the other two cases, whether the monopole relic abundance is larger or smaller in the EMD scenario compared to the RD-equivalent scenario depends on the relative sizes of the critical exponents, and for case II, additionally on the ratio of the temperature of the hidden sector at reheating to the critical temperature. A sufficient condition for the right-side of (4.17) to be less than or equal to one is

2ν ≤ 1 + µ .    (4.19)

This condition can be verified by considering the relative sizes of the numerical factors involved:

• For case II, note that

(1/2) e_f < (1 + f_RH^(EMD)) / (1 + f^(RD)) < 1 + e_f ,    (4.20)

so that this fraction of f's is bracketed by e_f. Thus for critical exponents satisfying (4.19), the factor of e_f^{3/4} in the denominator of (4.17) due to the redshift is always larger than the ratio of number densities at the production time, irrespective of the relative size of (T_RH^(hid))^(EMD) and T_C^(hid). But for critical exponents violating (4.19), the right-side of (4.17) can in principle be larger than 1; whether that occurs depends on the relative sizes of e_f and the ratio of temperatures (T_RH^(hid))^(EMD) / T_C^(hid).

• In the last case, of monopole production after EMD, the ratio of f's is the same as for case II, because f_C^(EMD) = f_RH^(EMD). To further simplify the analysis, assume that the visible sector degrees of freedom are the same in the two scenarios when the phase transition occurs in the hidden sector (which may occur at different visible sector temperatures). Then if the critical exponents satisfy (4.19), the ratio on the right-side of (4.17) is always less than one.

We therefore conclude that provided the critical exponents satisfy 2ν ≤ 1 + µ, the current frozen monopole abundance in a scenario involving EMD is always less than or equal to that in a pure RD equivalent, for a fixed monopole mass. This, along with the mass-dependence of (4.13), results in a larger monopole mass needed to account for a fixed Ω M h 2 when EMD is involved.
EMD by a modulus: numerical results

We now move to consider specific mechanisms for establishing a period of EMD, beginning with the case where the matter-dominating field Φ is a scalar modulus with mass m Φ and initial amplitude Φ i ≲ M P [20]. The modulus begins to oscillate, acquiring a matter equation of state, when H ≈ m Φ , at which time its energy density is given by ρ Φ (t i ) = (1/2) m Φ ² Φ i ². This initial energy density, along with the matter-like redshift relation ρ Φ ∼ a^{-3}, determines how quickly Φ can dominate over the background radiation energy density, be it of the hidden or visible sectors. The initial ratio of the VS radiation energy density to that of the hidden sector is given by the factor f i . The Hubble factor during the period before EMD by Φ is given by (3.1). The modulus amplitude, initially fixed at Φ i , starts to oscillate once H ≲ m Φ , and an EMD phase begins shortly after the energy densities of Φ and radiation become comparable. Solving for H ≈ m Φ and redshifting to this first era of matter-radiation equality, one finds the expansion at this time approximately corresponds to

H_MD ≈ m_Φ Φ_i^4 / (36 M_P^4) .    (5.1)

In calculating this, we have assumed the energy density of Φ is dominant over, as opposed to equal to, that of radiation, which results in better agreement between our analytical calculations and the numerical results shown below. For a modulus with maximal amplitude, we note that the modulus essentially dominates the energy density of the Universe as it begins to oscillate, while a smaller amplitude results in a delay. In order to successfully establish EMD, Φ must also be sufficiently long lived such that its decay completes well after the start of EMD. The minimum value of the initial amplitude, corresponding to decay at the onset of EMD, can be estimated from (3.2) and (5.1) to be

Φ_i ≳ (36 Γ_Φ M_P^4 / m_Φ)^{1/4} = (6 α m_Φ M_P)^{1/2} / (2π)^{1/4} .
(5.2)

For tree-level decays, a given visible sector reheat temperature determines not only the end of EMD, but also the mass of Φ and thus the minimum amplitude to have an EMD era at all. A choice of Φ i , within the allowed limits, then determines how early the EMD phase starts. We parenthetically note that for a given visible sector reheat temperature, the inclusion of a loop factor in Γ Φ shifts the values of m Φ and Φ i which correspond to a particular EMD duration. There is, however, some degeneracy in the corresponding cosmologies. For instance, a change in initial amplitude of 10^{-1} can be compensated by a change in mass of 10^4 and a loop factor α of 10^{-6}, such that the resulting EMD phase is unchanged, having the same H MD , Γ Φ , and boundary condition (5.2). As mentioned previously, we will set α = 1 throughout unless otherwise specified. The evolution of the three background energy density components (that of Φ and the radiation from the hidden and visible sectors) is governed by the usual set of Boltzmann equations:

dρ_Φ/dt + 3Hρ_Φ = −Γ_Φ ρ_Φ ,    (5.3)
dρ_r^(vis)/dt + 4Hρ_r^(vis) = Γ_Φ ρ_Φ ,    (5.4)
dρ_r^(hid)/dt + 4Hρ_r^(hid) = 0 ,    (5.5)

where 3H² M_P² = ρ_Φ + ρ_r^(vis) + ρ_r^(hid). We emphasize that, for simplicity, in the Boltzmann equations above we have taken Φ to decay only to the visible sector, though it is straightforward to include branching fractions for decay to both sectors. We numerically solve this set of equations beginning in a period of RD by any combination of visible sector and HS radiation, and track the evolution sufficiently beyond reheating such that RD in the visible sector is well-established. In our numerical calculations, we use a smooth function to estimate the temperature dependence of the relativistic degrees of freedom for energy density in the VS, g (vis) * , shown in Figure 1. At temperatures greater than ∼100 GeV, when all SM species are relativistic, g (vis) * takes its maximum value of 106.75.
As the temperature decreases, the value smoothly drops as the various particle species become nonrelativistic. We only show temperatures greater than 1 GeV because the VS reheat temperature in our scenarios is typically larger. The minimum value of g (vis) * , corresponding to the present era, is 3.36 assuming 3 massless neutrino species. For the HS we assume a constant g (hid) * = 100. Figure 2 shows the energy density evolution in the two cases of initial RD by the HS (f i << 1) and VS (f i >> 1) respectively, for an example set of parameters. We allow the phase transition of the HS to occur at any time in the background evolution, and obtain the resultant current monopole abundance from the numerical solution. This is done by evaluating (2.10), the equation for the correlation length at the phase transition, when the temperature of the hidden sector reaches T (hid) C , and then approximating the number density of monopoles at that time as n M ∼ ξ(t * ) −3 . Subsequently, the number density is simply redshifted numerically through the EMD era and then normalized to the VS entropy density at reheating. We now turn to our numerical results. In Figure 3 we plot the present-day relic monopole abundance, Ω M h 2 , as a function of monopole mass, m M , where we have taken x M = 50 to be fixed, as well as λ = 1. In what follows we will set x M = 50 and λ = 1 throughout unless otherwise noted. The other parameter values match those of Figure 2. We show both numerical results, obtained from numerically solving the Boltzmann equations, and the three analytical approximations of Section 4, (4.10), (4.11), and (4.12). 
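As an illustration of this procedure, here is a minimal sketch (not the authors' code) of the system (5.3)-(5.5), evolved in e-folds N = ln a with M_P = 1, constant degrees of freedom, Φ treated as matter from the initial time, and illustrative initial conditions:

```python
import math

GAMMA_PHI = 1e-9   # illustrative decay rate Gamma_Phi in units M_P = 1

def evolve(rho_phi, rho_vis, rho_hid, n_efolds=16.0, dN=1e-3):
    """Integrate (5.3)-(5.5) in e-folds N = ln(a); the linear decay terms
    are updated exponentially, which keeps the scheme stable even when
    Gamma_Phi/H becomes large after reheating."""
    for _ in range(int(n_efolds / dN)):
        H = math.sqrt((rho_phi + rho_vis + rho_hid) / 3.0)  # 3 H^2 M_P^2 = rho_tot
        source = (GAMMA_PHI / H) * rho_phi                  # energy injected into the VS
        rho_phi *= math.exp(-dN * (3.0 + GAMMA_PHI / H))    # matter dilution + decay
        rho_vis = rho_vis * math.exp(-4.0 * dN) + dN * source
        rho_hid *= math.exp(-4.0 * dN)                      # free radiation
    return rho_phi, rho_vis, rho_hid

# initial RD by the hidden sector (f_i << 1) with a subdominant modulus
rho_phi, rho_vis, rho_hid = evolve(rho_phi=1e-4, rho_vis=1e-8, rho_hid=1.0)
# after Phi dominates and decays, the visible sector dominates (reheating)
assert rho_vis > rho_hid and rho_vis > rho_phi
```

The final offset between the visible and hidden radiation energy densities reflects the entropy injected by the decaying modulus, as in the behavior shown in Figure 2.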
The numerical curve, shown in dark blue, has three distinct segments corresponding to the three regimes of production time: in the top right, monopoles are produced in the RD period before EMD -the slope of the curve in this region is the same as that of a pure RD monopole production scenario; the central segment of the curve corresponds to production during EMD, with a slope given by (4.11); and in the bottom left section, production after EMD recovers the RD slope. As can be seen by inspection, the analytic approximations, (4.10), (4.11), and (4.12), have extremely good agreement with the numerical results -the analytic results correspond to the light-blue dotted line "lying inside" the numerical curve. Figure 3 also shows colored regions depicting the three regimes of monopole production time. A given parameter set {m M , T (vis) RH , Φ i , f i , α, x M , λ, µ, ν} corresponds to a single point on Figure 3, so that as m M is varied, a single (blue) curve is traced out, passing through the colored regions that correspond to production after, during, or before the time of the phase transition. In this way only a subset of the colored regions are accessed. However, other points in the colored regions can be accessed by varying m M together with one or more of these other parameters. This behavior can be seen in Figure 4, which we discuss in more detail below. Figure 3 also shows as black dashed lines the two analytical expressions for production at the beginning (4.6) and end (4.7) of EMD, separating these three regimes. One way to interpret the boundary curves is the following. These two lines give analytic predictions for monopole production if, for a given monopole mass, production occurs at the end of initial RD and start of EMD (upper), or end of EMD and start of second RD (lower). 
The intersection of either of these dashed lines and the solid blue (numerical) line gives the mass for which production did occur at cross-over, for the parameters assumed for the solid line. These intersection points therefore mark the transitions between the three behaviors of the numerical line discussed in the previous paragraphs. Lastly, we note that the entire numerical curve sits at higher monopole masses when compared to a pure RD production scenario (shown by the red dashed line) because of the offset of the hidden and visible sector energy densities. This is consistent with the behavior of (4.13) and (4.17), specifically that the right-side of (4.17) is always less than one when 2ν ≤ 1 + µ. In Figure 4 we show how the curves of Figure 3 change for a variety of parameter values. As the beginning of EMD is placed earlier (by increasing the initial modulus amplitude Φ i ) while keeping the VS reheat temperature T (vis) RH fixed, the numerical curves (along with their analytical counterparts) shift farther away from the RD line toward larger monopole masses due to the increased amount of dilution from a progressively longer EMD period. If instead the end time of EMD is placed later (by decreasing T (vis) RH ) while holding the start time fixed, the curves again shift toward higher monopole masses due to the longer EMD period, but now the corresponding dashed boundary lines shift downward due to their dependence on the reheat temperature. Finally, as the critical exponents, µ and ν, are varied, the slopes of the curves change as expected. The upper-right shaded region (orange) corresponds to monopole production having occurred during the initial RD phase prior to EMD; the large central/lower-right region (magenta) corresponds to production during the EMD phase; and the small lower-left region (green) corresponds to production in the RD epoch after EMD has ended. 
Where the blue lines overlap with these three regions specifies the period in which monopole production occurred. For reference across the two panels, the dotted horizontal and vertical lines in both panels mark Ω M h 2 = 0.12 and m M = 1 PeV respectively. The entire set of curves and region boundaries in the right panel is shifted downward and to the left relative to the left panel, along the RD equivalent line, due to the larger final offset between the visible and hidden radiation energy densities after reheating (see Figure 2). In all panels of Figure 4, all of the numerical curves retain the three-region slope behavior displayed in Figure 3, with the regions separated by the two dashed boundary lines regardless of the specific parameter values, as expected. We note that the change in slope between the three regimes of production time is most noticeable in the bottom blue curve of the bottom two panels, for which µ = ν = 1. As in Figure 3, the left panels correspond to initial RD by HS radiation (with f i < 1), while the right panels correspond to initial VS domination (f i > 1). The full set of lines shown in each right panel is shifted downward and to the left as f i is increased above 1, relative to the corresponding left panels. Otherwise, the scale and orientation are the same between the left and right panels.

EMD by a decoupled particle: numerical results

Rather than being a modulus, the field Φ that drives EMD can instead be a heavy particle which decouples from either the hidden or visible sector at a very early time and subsequently dominates the energy density of the Universe as a non-relativistic matter component before eventually decaying (see Figure 5). We will parameterize the interaction rate of Φ with the sector from which it is decoupling (the "host" sector) by the thermally averaged annihilation cross-section times relative velocity, σ Φ v .
The Boltzmann equation for the number density of Φ is then

dn_Φ/dt + 3H n_Φ = ⟨σ_Φ v⟩ (n_Φ,eq² − n_Φ²) − Γ_Φ n_Φ ,    (6.1)

where Γ Φ is the decay rate given in (3.2), and the Hubble parameter H is again given by the sum of all energy density components. In our numerical calculations, we use the integral expression

n_Φ,eq = [g_Φ / (2π)³] ∫ d³p / (e^{E(p)/T} ± 1) ,    (6.2)

for the equilibrium number density, where + is for fermions, − is for bosons, E(p)² = m_Φ² + |p|², g Φ is the number of internal degrees of freedom for Φ, and the temperature T is that of the host sector. If Φ decouples from the HS, the remaining two Boltzmann equations for the radiation components are

dρ_r^(vis)/dt + 4Hρ_r^(vis) = Γ_Φ ρ_Φ ,    (6.3)
dρ_r^(hid)/dt + 4Hρ_r^(hid) = ⟨σ_Φ v⟩ E_Φ (n_Φ² − n_Φ,eq²) ,    (6.4)

while if Φ instead decouples from the VS they are

dρ_r^(vis)/dt + 4Hρ_r^(vis) = Γ_Φ ρ_Φ + ⟨σ_Φ v⟩ E_Φ (n_Φ² − n_Φ,eq²) ,    (6.5)
dρ_r^(hid)/dt + 4Hρ_r^(hid) = 0 .    (6.6)

The energy density of Φ is given by an integral over its distribution function f Φ , which we have approximated as ρ Φ ≈ E Φ n Φ ,

Figure 6. Numerical evolution of the background energy density components with scale factor in the case of EMD by a decoupled particle Φ. EMD begins once ρ Φ dominates over both radiation components, and lasts until Φ decays. Left panels: initial RD by the hidden sector. Right panels: initial RD by the visible sector. Top panels: Φ decoupling from the dominant sector. Bottom panels: Φ decoupling from the subdominant sector. The values of σ Φ v in each panel are chosen to correspond to relativistic freeze-out, thus yielding the longest possible EMD phase for the chosen background parameters. Larger values of σ Φ v will result in nonrelativistic freeze-out of Φ while smaller values lead to freeze-in, both of which reduce the duration of EMD by lowering the frozen Φ abundance and hence delaying the start time. Note that in the bottom two panels, relativistic freeze-out of Φ essentially results in the limiting EMD case where the start and end are nearly coincident.
The mass of Φ in all panels is m Φ ≈ 10^9 GeV, due primarily to the value of T (vis) RH .

with the average energy per particle given approximately as E Φ ≈ (m Φ ² + 9T²)^{1/2} [35, 40]. The temperature T is that of the host sector. Note that we retain the decay of Φ predominantly to the visible sector in order to preserve the standard history from BBN onward. 10 We numerically solve the Boltzmann equations, in both decoupling cases, for the background energy densities, as shown in Figure 6. As before, we use a smooth function for the temperature dependence of the relativistic degrees of freedom in the VS, g (vis) * , shown in Figure 1. To obtain the energy density evolution, we start in RD at some initial early time, with the HS and VS radiation related by the factor f i , and with negligible Φ energy density. 11 As the Universe cools, Φ decouples from its host sector via freeze-out or freeze-in, leaving a frozen energy density that redshifts like matter once Φ becomes non-relativistic. This matter energy density can then dominate over radiation, provided that the frozen energy density is high enough for domination to occur before the eventual decay of Φ. The decay completes near H ≈ Γ Φ , and we are subsequently left with the standard phase of domination by visible sector radiation. The evolution of the equilibrium number density for Φ transitions from relativistic to nonrelativistic when the temperature of the host sector drops below m Φ . Because of this transition, there is a maximum frozen number density for a given m Φ , which is achieved through the decoupling of Φ while it is relativistic and in chemical equilibrium with its host sector. This is relativistic freeze-out.
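The transition of the equilibrium abundance (6.2) from its relativistic to its Boltzmann-suppressed form can be seen directly; a small sketch for a bosonic Φ (illustrative, simple Riemann-sum quadrature):

```python
import math

def n_eq_boson(m, T, g=1, steps=4000, pmax_over_T=30.0):
    """Riemann-sum evaluation of (6.2) for a boson:
    n_eq = g/(2 pi^2) * Integral dp p^2 / (exp(E/T) - 1), with E^2 = m^2 + p^2."""
    dp = pmax_over_T * T / steps
    total = 0.0
    for i in range(1, steps + 1):
        p = i * dp
        E = math.sqrt(m * m + p * p)
        total += p * p / (math.exp(E / T) - 1.0) * dp
    return g * total / (2.0 * math.pi ** 2)

ZETA3 = 1.2020569031595943
T = 1.0
rel = n_eq_boson(m=1e-3, T=T)     # relativistic limit, m << T
assert abs(rel / (ZETA3 * T**3 / math.pi**2) - 1.0) < 1e-2
nonrel = n_eq_boson(m=20.0, T=T)  # nonrelativistic: exponentially suppressed
assert nonrel < 1e-5 * rel
```

The sharp drop of n_eq once T falls below m is what caps the frozen number density and makes relativistic freeze-out the maximal case.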
If Φ were to start with a number density larger than equilibrium, annihilations would drive it down to the equilibrium density, unless the annihilation rate was too small, which is not a scenario we will consider here because we assume RD at the initial time in order to justify an origin for the intervening EMD phase. Decoupling through relativistic freeze-out results in the earliest possible start time for the EMD phase caused by Φ of a given mass, and requires the annihilation rate to be large enough such that Φ reaches equilibrium while still relativistic, but not too large such that it remains in equilibrium after becoming non-relativistic. The largest value of σ Φ v that corresponds to relativistic freeze-out (which is the transition between relativistic and non-relativistic freeze-out) can be approximated by

⟨σ_Φ v⟩ ≈ π³ (g_*^(hid))^{1/2} (1 + f_i)^{1/2} / [√90 ζ(3) g_Φ M_P m_Φ]    (HS decoupling) ,
⟨σ_Φ v⟩ ≈ π³ (g_*F^(vis))^{1/2} (1 + 1/f_i)^{1/2} / [√90 ζ(3) g_Φ M_P m_Φ]    (VS decoupling) ,
(6.7)

assuming x F ≡ m Φ /T (hid/vis) F ∼ O(1) for relativistic decoupling, and where ζ(s) is the Riemann zeta function of s. 12 If instead the annihilation rate of Φ is large enough to maintain equilibrium with its host sector below T ≈ m Φ , then decoupling will occur via non-relativistic freeze-out, resulting in a smaller frozen number density and thus a later start time for EMD. As the annihilation rate increases further, the frozen Φ energy density decreases and the start of EMD approaches the time of reheating, resulting in a shorter duration for the EMD phase. This gives an upper limit, corresponding to H MD ≈ Γ Φ , on the value of σ Φ v , for a given mass and decay rate (or equivalently visible sector reheat temperature) for EMD to happen at all:

⟨σ_Φ v⟩ ≲ m_Φ / [3 Γ_Φ^{1/2} M_P² H_F^{1/2}] ,    (6.8)

where H F is the expansion rate at freeze-out, given in Appendix C, and we have used (C.2) for the expansion rate H MD at the time of matter domination.
Now going in the other direction, if the annihilation rate is smaller than that needed for relativistic freeze-out, Φ will never reach local chemical and thermal equilibrium, which may possibly lead to a freeze-in process [41]. If freeze-in does occur, lowering σ Φ v further reduces the out-of-equilibrium number density, and thus the duration of EMD, down to a minimum value corresponding to the absence of EMD altogether. The value of σ Φ v corresponding to the transition between freeze-in and relativistic freeze-out (which defines the lower limit of the range of values leading to relativistic freeze-out) is approximately

⟨σ_Φ v⟩ ≈ π³ (g_*^(hid))^{1/2} (1 + f_i)^{1/2} / [√90 ζ(3) g_Φ M_P T_i^(hid)]    (HS decoupling) ,
⟨σ_Φ v⟩ ≈ π³ (g_*i^(vis))^{1/2} (1 + 1/f_i)^{1/2} / [√90 ζ(3) g_Φ M_P T_i^(vis)]    (VS decoupling) ,
(6.9)

and the minimum value, corresponding to H MD ≈ Γ Φ , is (see Appendix D)

⟨σ_Φ v⟩ ≳ 3π⁷ (g_*^(hid))^{3/2} (1 + f_i)^{3/2} Γ_Φ^{1/2} / [90^{3/2} ζ(3)² g_Φ² M_P m_Φ H_i^{1/2}]    (HS decoupling) ,
⟨σ_Φ v⟩ ≳ 3π⁷ (g_*i^(vis))^{3/2} (1 + 1/f_i)^{3/2} Γ_Φ^{1/2} / [90^{3/2} ζ(3)² g_Φ² M_P m_Φ H_i^{1/2}]    (VS decoupling) .
(6.10)

We summarize these three different regimes of the annihilation rate. Starting with small annihilation rates, the decoupling of Φ proceeds as follows. For σ Φ v less than the right-side of (6.10), Φ decouples via freeze-in at such low energy densities that it will never dominate over radiation before decaying. For rates that satisfy (6.10) but are less than (6.9), the frozen-in energy density of Φ is large enough to dominate, leading to longer EMD durations as σ Φ v , and thus the frozen-in energy density, is increased. Between (6.9) and (6.7), decoupling occurs via relativistic freeze-out, which yields the largest frozen Φ energy density and the longest possible EMD duration, independent of σ Φ v . We note that essentially the only difference between (6.9) and (6.7) is the presence of the initial host sector temperature or the Φ mass in the denominator.
Because the initial temperature can in general be quite large compared to m Φ , the regime of σ Φ v corresponding to relativistic freeze-out can extend for many orders of magnitude. For σ Φ v larger than (6.7) but satisfying (6.8), Φ decouples via nonrelativistic freeze-out, resulting in smaller frozen-out energy densities, and thus shorter EMD durations, as σ Φ v is increased. Finally, for rates larger than the right-side of (6.8), the frozen-out energy density is again too small to establish EMD before Φ decays. Other than defining the range of annihilation rates that can yield an EMD phase 13 , the significance of these regimes of σ Φ v is that a particular EMD phase, with a fixed start time and end time, can be established by two different values of σ Φ v , one corresponding to freeze-out and the other to freeze-in. The abundance of monopoles produced by the HS phase transition is determined by using (4.10)-(4.12), which are given in Section 4. These expressions were obtained in a modelindependent context and are valid in the cases presented in this section, provided that we use the appropriate expressions for quantities such as H MD . The present-day relic monopole abundance is shown in Figure 7 as a function of monopole mass for some example parameter values, and we have again taken x M ≡ m M /T (hid) C = 50 and α = λ = 1. We in particular consider several values for σ Φ v , and we have checked that these values are well-below the perturbativity limit for the Φ mass inferred from (3.2), (3.3), and the assumed reheat temperature. As in the modulus case, there are three regions corresponding to monopole production before, during, and after EMD, and the curves have the same behavior as before. 
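To illustrate the width of the relativistic freeze-out window, here is a sketch using one plausible reading of the flattened thresholds (6.7) and (6.9) (HS-decoupling branch; the overall numerical factor and parameter values are illustrative):

```python
import math

M_P = 2.4e18                # reduced Planck mass in GeV
ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def sv_threshold(g_star, f_i, g_phi, scale):
    """Common form of the two thresholds: 'scale' is m_Phi for the
    relativistic freeze-out ceiling (6.7) and the initial host-sector
    temperature T_i for the freeze-in/freeze-out transition (6.9)."""
    return (math.pi**3 * math.sqrt(g_star * (1.0 + f_i))
            / (math.sqrt(90.0) * ZETA3 * g_phi * M_P * scale))

m_phi, T_i = 1e9, 1e14                         # GeV, illustrative
ceiling = sv_threshold(100, 0.01, 2, m_phi)    # ~(6.7)
floor = sv_threshold(100, 0.01, 2, T_i)        # ~(6.9)
assert floor < ceiling
# only the denominators differ, so the window spans a factor T_i/m_phi
ratio = ceiling / floor
assert abs(ratio - T_i / m_phi) < 1e-6 * (T_i / m_phi)
```

Since only the denominator differs between the two thresholds, the relativistic freeze-out window spans a factor T_i/m_Φ in σ Φ v, matching the remark above that it can extend over many orders of magnitude.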
The main feature that sets the decoupled-particle case apart from the modulus case is that any particular curve can be obtained by either non-relativistic freeze-out or freeze-in, meaning the value of the annihilation rate of Φ can be quite different while still reproducing the same curve. Otherwise, the same regions are generally accessible to a modulus or decoupled-particle scenario, where the maximum extent toward larger monopole masses is set by either the maximum initial modulus amplitude or by relativistic freeze-out in the two cases respectively. We finally note that the case of freeze-in depends on the initial host-sector temperature because freeze-in of Φ occurs in RD, such that the time of peak Φ production from the background occurs at the initial time (see [37] for details of freeze-in during RD before EMD). In our numerical calculations, we chose the initial time arbitrarily, with an initial energy density configuration consisting of dominant radiation and negligible Φ. For a given initial time, there is a unique annihilation rate that results in a particular freeze-in Φ energy density, provided that we remain within the freeze-in regime of the annihilation rate. The important thing to note is that the accessible region in Ω M h 2 vs m M is generally independent of the initial time because it is determined by the start and end of EMD, which can be obtained by multiple values of the initial time and annihilation rate. 13 We include an additional constraint in Appendix E on the parameter values that must hold for an EMD phase to have nonzero duration.

Figure 7. Dependence of the present-day monopole relic abundance on the monopole mass in the case of EMD driven by a decoupled particle. As in Figures 3 and 4, the solid curves (purple and green) are obtained from a numerical evolution of the background, while the dotted lines (light purple and light green) on top of the numerical curves are the analytical expressions (4.10), (4.11), and (4.12).
The purple color denotes Φ decoupling from the HS, while the green color corresponds to decoupling from the VS. All other lines have the same meaning as in Figures 3 and 4, which we repeat here. The red dashed line in all panels marks the purely RD equivalent scenario. The two black dashed lines in all panels indicate monopole production occurring at the start or end of EMD. The dotted horizontal and vertical lines in all panels mark Ω M h 2 = 0.12 and m M = 1 PeV respectively. Left panels: initial RD occurring in the hidden sector. Right panels: initial RD occurring in the visible sector. Top panels: Φ decoupling from the dominant sector. Bottom panels: Φ decoupling from the subdominant sector. In each panel, the curves which sit farthest to the right correspond to relativistic freeze-out of Φ from its host sector and thus mark the largest monopole masses accessible for the chosen parameters. The dependence on T (vis) RH and the critical exponents µ and ν is the same as in Figure 4.

Parameter values giving observed dark matter relic abundance

In this section, we will consider the values of our various parameters that result in the observed present-day DM abundance of Ω M h 2 = 0.12. As we've seen in the two previous sections, our analytical and numerical results agree very well, and we will therefore present an analytical analysis of the main parameters of our scenario, rather than a full numerical parameter scan. We will primarily use (4.10)-(4.12) as well as (B.6), which gives f RH ∝ e f , requiring that the observed DM relic abundance is achieved. For clarity in the analysis below, we will not specify the identity of the field Φ, taking the beginning and end of EMD as the more fundamental parameters. We will use the VS reheat temperature T (vis) RH to set the end of EMD, and the factor e f = a RH /a MD to fix the duration of EMD. Recall that e f can be expressed as (see Appendix B):

e_f = (H_MD / Γ_Φ)^{2/3} .
(7.1)

The remaining parameters are the initial ratio of the VS to HS radiation energy density f i , the monopole mass m M , as well as the various parameters associated with the details of the phase transition, x M , λ, µ, and ν. Four of these eight parameters can vary by many orders of magnitude in the cosmological histories we have been considering: m M , T RH , e f , and f i , so here we will focus on those as they lead to a more direct effect on the resulting cosmology. The others have much narrower ranges, and for these we will consider a discrete set of possibilities. Also, we will not vary parameters such as α, m Φ , Φ i (in the case of the modulus), or σ Φ v (in the case of the decoupled particle), as including variations in these parameters is degenerate, in the sense that they lead to the same cosmology, as discussed in Section 5. Figure 8 shows contours of T (vis) RH in the m M − e f plane, with the monopole abundance held fixed at Ω M h 2 = 0.12. The region above each contour results in overproduction of DM, while the region below results in underproduction. What can immediately be seen from the figure is that most lines shown have positive slopes in this plane, meaning that a longer EMD duration (i.e. a larger value of e f ) requires a larger monopole mass in order to achieve the same monopole abundance. This is consistent with the behavior in Figures 4 and 7, where the curves corresponding to longer EMD periods cross the Ω M h 2 = 0.12 line at larger monopole masses. Furthermore, for fixed monopole mass, a longer EMD duration results in too much dilution and thus underproduction of DM, while a shorter duration doesn't dilute the monopole abundance enough, leading to overproduction. In each panel of the figure, smaller values of T (vis) RH indicate that EMD ends at an earlier time, while larger values of e f correspond to longer EMD durations.
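Relation (7.1) is just the statement that H ∝ a^{-3/2} during matter domination; a trivial check with illustrative values:

```python
def e_f(H_MD, gamma_phi):
    # (7.1): expansion factor a_RH/a_MD accumulated between the start of
    # EMD (H = H_MD) and reheating (H ~ Gamma_Phi), using H ∝ a^(-3/2)
    return (H_MD / gamma_phi) ** (2.0 / 3.0)

H_MD, gamma_phi = 1e-10, 1e-19   # illustrative values in Planck units
a_ratio = e_f(H_MD, gamma_phi)
assert abs(a_ratio - 1e6) < 1.0                        # (10^9)^(2/3) = 10^6
# inverting: the Hubble ratio is recovered from the expansion factor
assert abs(a_ratio**1.5 - H_MD / gamma_phi) < 1e-3 * (H_MD / gamma_phi)
```

A nine-order-of-magnitude separation between H MD and Γ Φ thus translates into six e-folds of base-10 expansion (e f = 10^6), which is why long EMD durations are easy to arrange.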
Each solid contour has two segments with different slopes (a few of which occur beyond the range shown in the figure). Contours above the dashed contour overlap in their steeper segments, which follow the upper dotted black boundary line corresponding to monopole production before EMD, while those below overlap in their shallower segments, and follow the lower dotted black boundary line corresponding to production after EMD. Segments that are parallel to the dashed contour indicate monopole production during EMD. The region above a given contour results in overproduction of DM, while the region below results in underproduction. The slight differences in the overlap of the upper contours are due to changes in g (vis) * RH . We include a horizontal line at m M = 100 PeV for reference across panels, as well as a red 'star' which marks the pure RD scenario at e f = 1. The green circles located at e f ≈ 1.2 × 10 9 along the EeV contour, and e f ≈ 8.7 × 10 11 along the 7.2 PeV contour, correspond to the bound on H MD from (7.4). Please see the text for more details. Each panel additionally shows a special, dashed blue-green contour which separates two regimes of T (vis) RH , and passes through the RD point mentioned above without changing slope. Relative to Figures 4 and 7, this contour corresponds to the special value of T (vis) RH which places the intersection of the two black dashed lines (representing the start and end of EMD) at Ω M h 2 = 0.12 (this is most easily seen in the middle panels of Figure 4, where the intersection point of the two black dashed lines shifts along the RD line as T (vis) RH is changed). 
As the duration of EMD is increased along this dashed contour, the contour rises away from e f = 1 with a slope set by production during EMD. The contours located above the dashed contour (with lower values of T (vis) RH ) have two segments with different slopes: beginning on the left side at e f ≈ 1, the contours rise along the upper boundary line, corresponding to monopole production before EMD, until they reach a point which corresponds to production at the start of EMD; beyond this point, the contours deviate from the upper boundary with a slope parallel to the dashed contour, a segment corresponding to monopole production during EMD. The contours located below the dashed contour (with higher values of T (vis) RH ) have a similar two-segment behavior: beginning again at e f ≈ 1, the contours rise at a shallow slope along the lower boundary line (monopole production after EMD), until they reach a point corresponding to production at the end of EMD; from here on the contours leave the lower boundary and continue with the same slope as the dashed contour, so production in this region occurs during EMD. The region above the dashed contour can therefore only access monopole production before and during EMD, while the region below only accesses production during and after EMD. Additionally, we note that in the lower right panel, with µ = ν = 1, the slope of the "after EMD" segment is essentially independent of e f , consistent with the lower panels of Figure 4 where the segments of the numerical curves corresponding to monopole production after EMD coincide with the pure RD scenario, thus erasing any dependence on the prior EMD history. The boundaries of the accessible region in the m M − e f plane, which correspond to monopole production before and after EMD, are given by (4.10) and (4.12), and are independent of T (vis) RH . This can be trivially understood for production after EMD, while in the case of production before, the monopole abundance experiences dilution from the full EMD phase, regardless of its specific timing.
However, as e f increases, a given contour turns away from the boundary at a point that corresponds to the start (upper contours) or the end (lower contours) of EMD, which does depend on T (vis) RH . For monopole production at the start of EMD, the corresponding expression (7.2) for the monopole mass, in which we have additionally made use of f RH ≫ 1, can be used along with (4.10) to locate the monopole mass and EMD duration which result in the observed DM abundance. For monopole production at the end of EMD, (3.3) can be expressed in terms of VS quantities and set equal to itself in terms of HS quantities to obtain

m_M^(end) ≈ [g_*RH^(vis) / (g_*RH^(hid) (1 + f_i))]^{1/4} x_M T_RH^(vis) e_f^{-1/4} .    (7.3)

This expression, together with (4.12), then yields the monopole mass and EMD duration which result in the observed DM abundance for monopole production at the end of EMD. The value of T (vis) RH for the dashed contour shown in Figure 8, which separates the two sets of contours, can similarly be obtained by first eliminating e f in (7.2) and (7.3). This corresponds to the RD point at e f ≈ 1, marked by the red star, where the before and after boundaries meet (as do production at the start and end of EMD). Then, using either (4.10) or (4.12) gives the monopole mass required for Ω M h 2 = 0.12, which in turn yields the special value of T (vis) RH through (4.13). Note that because the monopole abundance produced during the EMD and RD periods displays different power-law dependence on the monopole mass, the EMD and RD segments of the curves will shift by different amounts, resulting in movement of the turn-off points along the before and after boundaries. As is evident from (7.1), a long duration for the EMD phase requires a large separation between H MD and Γ Φ . This is rather easy to achieve, even for high reheat temperatures. However, in inflationary models, the Hubble parameter at the start of EMD, H MD , is bounded from above by the value of the Hubble parameter H I at the end of inflation.
This would correspond to an interesting scenario in which, after inflation, the early Universe directly enters the EMD phase, with some reheating in the hidden sector so that the initial temperature in that sector is above the critical temperature. However, from the non-detection of tensor modes, Planck data give an upper limit on H_I [42]: H_I < 2.5 × 10^(-5) M_P. Lastly, we comment on some interesting effects when the critical exponents, µ and ν, satisfy µ = ν > 1. Though we have specifically considered µ = ν = {1/2, 1} in our figures, the expressions presented throughout the text are applicable to more general values of the critical exponents. In particular, we recall that as µ and ν approach 1, the case of monopole production after EMD (case III) approaches a purely RD scenario, so that when µ = ν = 1 the dependence on the prior EMD history is completely removed. This suggests that for µ = ν > 1, or more generally 2ν > 1 + µ, the monopole mass required for the observed dark matter abundance can actually be smaller than in the RD case, at least for monopole production after EMD or shortly before its end. We have checked that this is indeed the case; however, the RD curve itself gets shifted to higher monopole masses when µ = ν > 1, such that case III actually results in heavier masses as compared to µ = ν < 1 (keeping the relic abundance fixed). This can be seen from expressions such as (4.5) and (4.12), where increasing the critical exponents above 1 results in an increase in the required monopole mass for both EMD and RD scenarios, but the increase is larger in the RD case. We also note that, from (2.10), the correlation length gets larger as the critical exponents are increased, resulting in fewer correlation volumes per Hubble volume, which in turn results in a smaller monopole number density at production. In this work we have broadened the scale for hidden sector monopole masses to O(1-10^5) PeV.
One may wonder how robust the lower limit of 1 PeV actually is. The effect of lowering the monopole mass relative to a RD scenario when µ = ν > 1 is greater for a longer EMD duration, as the lower boundary line in Figure 8 acquires a negative slope. Additionally, the visible-sector reheat temperature needs to be larger than that for µ = ν < 1 in order for contours of the observed dark matter relic abundance to access the lower boundary line; note the different positioning of the PeV contour in the upper-left and lower-right panels of Figure 8. Because of these two effects, an extended EMD period occurring very early will have the greatest effect in producing enough lower-mass monopoles to reproduce the observed DM abundance. Perhaps if the phase transition occurs toward the end of (or after) a period of EMD caused by inflationary reheating at very high temperatures, the monopole mass might be brought below the PeV scale and still result in the full DM relic abundance. Furthermore, having the HS temperature be extremely suppressed below the VS actually helps lower the needed monopole mass significantly, as long as the VS reheat temperature is large enough to bring up the abundance. This suppression effect also applies to a purely RD scenario. In passing, we finally note that, like µ = ν = 1, setting µ = ν = 2 is another special case, in which the monopole abundance produced during EMD is independent of the Hubble rate at the time of production and only depends on the critical temperature. This can easily be seen in (4.4), where the factor of H_C² in the denominator due to redshift cancels the dependence on the critical exponents. If the altered phase of expansion is instead caused by a form of energy density other than matter, this effect would occur for a different value of the critical exponents.
Overall, with the exception of the effect of the critical exponents discussed above, as we vary the parameters of our scenarios the accessible regions which reproduce the observed DM relic abundance do not change drastically. As we saw in Figure 4, the largest shifts occur when the critical exponents are changed. Our main finding, that for hidden sector monopoles to be dark matter candidates their masses must be larger than the O(PeV) scale, appears generic, with longer EMD periods leading to larger monopole masses when 2ν ≤ 1 + µ.

Discussion

In this work we have considered a scenario for dark matter production via a second order phase transition in the early Universe, where the dark matter (DM) candidate is a hidden-sector magnetic monopole. Such a topological dark matter scenario has been studied before, with the entire relic DM abundance being produced in the standard radiation-dominated (RD) era before BBN [4][5][6][12]. We have expanded the viable region of parameter space by allowing the different sectors to have different temperatures, and by generalizing the cosmological history to include a period of early matter domination (EMD). By allowing the phase transition to occur at any time before, during, or after EMD, we have shown that histories involving EMD generally require heavier monopole masses in order to produce the entire DM relic abundance. Along with this general result, we have considered two specific examples of how a period of EMD may be generated: by a modulus, or by a heavy decoupled particle. These examples illustrate how one can embed our scenario in a specific model, and how the underlying model parameters influence the monopole abundance. Our main results are summarized in Figures 4, 7, and 8. We generally find that hidden sector monopoles in the mass range O(1-10^5) PeV can be dark matter candidates. We now summarize our main caveats and address some ways our scenario can be modified in future work, along with the effects we expect such modifications to have.
Throughout this work we have assumed that the number density of PeV scale monopoles is small enough to ignore the effects of monopole-anti-monopole annihilation, as shown in [12] following [31]. But because the scattering cross-section between fermions and monopoles is a strongly coupled problem, it is possible that the final monopole abundance is depleted more than in the diffusion approximation studied in [31], due to the interaction with the hidden sector plasma (if present). The interaction of the monopole with the plasma may be more critical to understand if the monopole is a dyon, a possibility not considered here. Of course, if the number density decreases further due to annihilation, a higher monopole mass will be needed to obtain the same DM abundance. Another key assumption pervading this work is that the second order phase transition is classical, although we have strayed from that strict assumption by allowing the critical exponents to have generic values. A consequence of assuming the monopole to be a classical topological object is that the monopole mass and the temperature of the phase transition are at similar mass scales, m_M ∼ T_C^(hid). Our conclusions will change substantially in theories for which this relation no longer holds. A prominent counter-example is provided by the N = 2 Seiberg-Witten theory [32, 33] near the massless monopole or massless dyon points of the moduli space, in which the effective theory below the symmetry breaking scale contains nearly massless composites: 'mesons' and 'baryons' of a magnetic U(1). Additionally, here the effect of annihilations at energies near the scale of the transition is expected to be important. Another fundamental assumption in our work is the set-up of our sectors, where we have assumed the sector which hosts the phase transition to interact very weakly, if at all, with the visible sector of standard model particles.
This can in general be different, and can result in changes to the monopole abundance after production. For example, kinetic mixing between the visible and hidden sectors can lead to a long-range force which can then deplete the monopole abundance via annihilation. We expect this to have a similar effect to the scattering of monopoles with a HS plasma followed by annihilation, but in this case the monopole abundance can depend more strongly on visible-sector as well as hidden-sector properties; see for example [23]. Along these lines, we have also assumed that the energy density component driving the EMD period decays almost entirely to visible sector radiation. With additional interactions between the sectors, the EMD driving field may decay to hidden sector radiation as well. This can easily be incorporated into our analysis by generalizing the decay rate Γ_Φ to include branching fractions to both visible and hidden radiation. One must then be careful not to produce too much hidden (or "dark") radiation by restricting the branching fractions with current limits on dark radiation [43]. Aside from the set-up of our sectors, another important generalization of our work is to allow for early domination by a component with a generic equation of state, rather than focusing on EMD alone. The redshift relation for the dominating energy density is then ρ ∝ a^(-3(1+w)), with the parameter w determining the behavior, which modifies subsequent calculations. A specific alternative to EMD is a period of kination, where the kinetic energy of a scalar field dominates the energy density of the universe for a time. In such a period, the dominant form of energy density redshifts faster than radiation, with w = 1 and ρ ∝ a^(-6), which can have interesting consequences for the monopole abundance if the phase transition occurs during or before such a period.
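The generic-w redshift relation just stated can be sketched numerically; the function below simply implements ρ(a) = ρ_i (a_i/a)^(3(1+w)) and compares matter (w = 0), radiation (w = 1/3), and kination (w = 1) dilution after the same amount of expansion (all numbers are illustrative).

```python
# Redshift of an energy density with generic equation of state w:
# rho(a) = rho_i * (a_i / a)**(3*(1+w)).  Kination (w = 1) dilutes as a^-6,
# faster than radiation (w = 1/3, a^-4) or matter (w = 0, a^-3).

def rho(a, rho_i=1.0, a_i=1.0, w=0.0):
    return rho_i * (a_i / a) ** (3.0 * (1.0 + w))

growth = 100.0  # expansion factor a / a_i
matter, radiation, kination = (rho(growth, w=w) for w in (0.0, 1.0 / 3.0, 1.0))
# After a factor-100 expansion: matter ~ 1e-6, radiation ~ 1e-8, kination ~ 1e-12,
# so a kination component is quickly overtaken by any radiation present.
```

This is why, as noted above, kination typically cannot last long unless all other energy components are strongly suppressed at its onset.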
In fact, the phase transition occurring after a period of kination can also affect the resultant monopole abundance, for example by flipping the radiation energy densities of the two sectors. Kination would typically not last very long because it dilutes as a^(-6), but if other components are suppressed it can last longer; perhaps the same EMD driving field can have an early period of kination which later transitions to EMD before decaying. One should track the behavior of radiation in the two sectors during such a history to see how it affects the temperatures and thus the final monopole abundance. Lastly, in our decoupled particle example, the mechanism of Φ decoupling need not be velocity independent. This can lead to temperature dependence in the interaction rate of Φ with its host sector and can alter the details of the decoupling. Such effects, however, shouldn't change our main results, just the specifics of the particle decoupling models (what values of the Φ mass and decoupling parameter lead to an EMD phase of a given start and end). We hope this work stimulates further research into topological dark matter scenarios.

and HS radiation energy densities at reheating is

f_RH ≈ (1 + f_i) (Γ_Φ² / H_MD²) (H_MD / Γ_Φ)^(8/3),   (B.3)

where we have redshifted hidden sector quantities back to the start of EMD, and where f_i is defined as the ratio of the visible sector to hidden sector radiation energy densities at some time t_i prior to the onset of the EMD phase. To facilitate our comparison between scenarios which include a phase of EMD and those which remain purely RD, we make use of the double ratio

f_RH^(EMD) / f_RH^(RD) = { f_RH,        f_i ≪ 1
                           f_RH / f_i,  f_i ≫ 1 }   (B.7)
                       ≈ a_RH / a_MD = e_f,   f_i ≪ 1 or f_i ≫ 1,   (B.8)

where in the second line we have made use of (B.6), and where we have included superscripts on the two f_RH's on the left side for clarity (whenever f appears without a superscript label, it refers to the EMD case).
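A small numerical consistency check of (B.3) and (B.8): assuming a ∝ H^(-2/3) during EMD, so that e_f = (H_MD/Γ_Φ)^(2/3), the expression (B.3) collapses to f_RH ≈ (1 + f_i) e_f, which is what makes the double ratio reduce to e_f. The numbers below are arbitrary.

```python
# During EMD, a ∝ H^(-2/3), so the duration measure e_f = a_RH/a_MD equals
# (H_MD / Gamma_Phi)**(2/3).  The expression (B.3) for the VS/HS radiation
# ratio at reheating then collapses to f_RH ≈ (1 + f_i) * e_f.

def e_f(H_MD, Gamma_Phi):
    return (H_MD / Gamma_Phi) ** (2.0 / 3.0)

def f_RH(f_i, H_MD, Gamma_Phi):
    # (B.3) as written: (1+f_i) * (Gamma/H_MD)^2 * (H_MD/Gamma)^(8/3)
    return (1.0 + f_i) * (Gamma_Phi / H_MD) ** 2 * (H_MD / Gamma_Phi) ** (8.0 / 3.0)

H_MD, Gamma = 1.0e3, 1.0e-6   # example values in arbitrary units
assert abs(f_RH(0.1, H_MD, Gamma) / ((1.0 + 0.1) * e_f(H_MD, Gamma)) - 1.0) < 1e-9
```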
We note that, since in any given RD-equivalent scenario f_RH^(RD) is just a number, to simplify our notation we will often drop the subscript and just write this term as f^(RD). The energy density ratio in a purely RD scenario corresponding to an EMD scenario with initial domination by visible sector radiation is given by f_RH^(RD) = f_i, while in the case of an EMD scenario with initial domination by HS radiation it is f_RH^(RD) = 1. We have additionally numerically verified the value of e_f as the ratio of the scale factors at the end and beginning of the EMD period, as well as the double ratio of radiation energy densities.

C Decoupling of Φ from either sector via freeze-out

In order to analytically estimate the relic abundance of topological DM from (4.10)-(4.12), we need to obtain an expression for the Hubble rate at the onset of EMD, H_MD. We do so by redshifting the frozen number density of Φ at the time of freeze-out, given by n_Φ,F, to the start of EMD:

H_MD ≈ m_Φ² / (9 (σ_Φ v)² M_P⁴ H_F).   (C.2)

What remains is to specify H_F, which we do below for a number of cases.

C.1 Non-relativistic freeze-out from hidden sector

Using the usual freeze-out condition of n_Φ,eq σ_Φ v = H_F, with the non-relativistic form of the equilibrium number density for a boson Φ, we have

g_Φ (m_Φ² / (2π x_F))^(3/2) e^(-x_F) σ_Φ v ≈ [(π²/90) g_*^(hid) (1 + f_i)]^(1/2) m_Φ² / (M_P x_F²),   (C.3)

where we have used H_F ≈ [(π²/90) g_*^(hid) (1 + f_i)]^(1/2) m_Φ² / (M_P x_F²) with x_F ≡ m_Φ / T_F^(hid). Rearranging yields an expression that can be solved for x_F:

x_F ≈ ln[ 3√5 g_Φ σ_Φ v m_Φ M_P x_F^(1/2) / (2π^(5/2) (g_*^(hid))^(1/2) (1 + f_i)^(1/2)) ].   (C.4)

If Φ is instead a fermion, the left side of (C.3) is multiplied by a factor of 3/4, with a corresponding change in the expression for x_F. The solution to this can then be used in the expression for H_F above to complete its specification in terms of the parameters of our scenario.
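Equation (C.4) determines x_F only implicitly, but since x_F enters the right side only logarithmically, a fixed-point iteration converges in a few steps. In the sketch below the prefactor A is purely illustrative, standing in for the full combination of g_Φ, σ_Φ v, m_Φ, M_P, g_*, and f_i.

```python
import math

# (C.4) has the transcendental form x_F = ln(A * x_F**0.5); it converges
# rapidly under fixed-point iteration because d/dx [ln(A sqrt(x))] = 1/(2x) << 1.

def solve_xF(A, x0=20.0, iters=50):
    x = x0
    for _ in range(iters):
        x = math.log(A * math.sqrt(x))
    return x

A = 1.0e18            # illustrative prefactor only
xF = solve_xF(A)
assert abs(xF - math.log(A * math.sqrt(xF))) < 1e-9  # self-consistency
# For A ~ 1e18 this gives x_F ~ 43, i.e. freeze-out at T ~ m_Phi / 43.
```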
C.2 Non-relativistic freeze-out from visible sector

Here we define x_F ≡ m_Φ / T_F^(vis), resulting in

g_Φ (m_Φ² / (2π x_F))^(3/2) e^(-x_F) σ_Φ v ≈ [(π²/90) g_*F^(vis) (1 + 1/f_i)]^(1/2) m_Φ² / (M_P x_F²)   (C.5)

and

x_F ≈ ln[ 3√5 g_Φ σ_Φ v m_Φ M_P x_F^(1/2) / (2π^(5/2) (g_*F^(vis))^(1/2) (1 + 1/f_i)^(1/2)) ].   (C.6)

Otherwise, this case is the same as above.

C.3 Relativistic freeze-out from hidden sector

In this case, we use the relativistic expression for the equilibrium number density, giving

ζ(3) g_Φ m_Φ³ / (π² x_F³) σ_Φ v ≈ [(π²/90) g_*^(hid) (1 + f_i)]^(1/2) m_Φ² / (M_P x_F²)   (C.7)

and

x_F ≈ √90 ζ(3) g_Φ σ_Φ v M_P m_Φ / (π³ (g_*^(hid))^(1/2) (1 + f_i)^(1/2)).   (C.8)

C.4 Relativistic freeze-out from visible sector

In this case, we have

ζ(3) g_Φ m_Φ³ / (π² x_F³) σ_Φ v ≈ [(π²/90) g_*F^(vis) (1 + 1/f_i)]^(1/2) m_Φ² / (M_P x_F²)   (C.9)

and

x_F ≈ √90 ζ(3) g_Φ σ_Φ v M_P m_Φ / (π³ (g_*F^(vis))^(1/2) (1 + 1/f_i)^(1/2)).   (C.10)

D Decoupling of Φ from either sector via freeze-in

Because Φ is the source of the EMD period, at some point it decouples in the prior RD phase. If the annihilation rate to produce Φ is too tiny, Φ may never reach local, chemical, and thermal equilibrium with the ambient radiation. However, the produced number density of Φ particles may be large enough to eventually dominate the energy density. This is known as freeze-in [41]. In this case, freeze-in in a RD period is dominated by the relativistic component and the abundance is set at the initial time. We begin with

d(a³ n_Φ)/dt = a³ σ_Φ v (n_Φ,eq² − n_Φ²) − a³ Γ_Φ n_Φ.   (D.1)

We are interested in the early evolution of the Φ number density in a freeze-in scenario well before it decays, as well as well before it reaches equilibrium. Thus we may drop the decay term relative to the decoupling term above, as well as the actual number density relative to the thermal equilibrium value. With these approximations we have

d(a³ n_Φ)/dH = − a³ σ_Φ v n_Φ,eq² / (2H²),   (D.2)

which, for a = a_i (t/t_i)^(1/2) and H = 1/(2t), appropriate for RD, gives

d(a³ n_Φ)/dH = − a_i³ H_i^(3/2) σ_Φ v n_Φ,eq² / (2H^(7/2)).
(D.3)

To continue, we must express the temperature dependence of the equilibrium number density in terms of H, which is most easily done by specializing to the two decoupling cases.

D.1 Freeze-in from hidden sector

If Φ is produced from the HS, we have

n_Φ,F ≈ 90^(3/2) ζ(3)² g_Φ² H_F^(3/2) H_i^(1/2) σ_Φ v M_P³ / (π⁷ (g_*^(hid))^(3/2) (1 + f_i)^(3/2)).   (D.5)

Assuming this can be large enough to dominate the energy density at, by definition, the beginning of EMD, and using m_Φ n_Φ,MD ≈ 3H_MD² M_P², setting H_F = H_MD gives

H_MD ≈ 90³ ζ(3)⁴ g_Φ⁴ M_P² (σ_Φ v)² m_Φ² H_i / (9π¹⁴ (g_*^(hid))³ (1 + f_i)³).   (D.6)

D.2 Freeze-in from visible sector

If Φ is produced from the visible sector, we similarly have

n_Φ,F ≈ 90^(3/2) ζ(3)² g_Φ² H_F^(3/2) H_i^(1/2) σ_Φ v M_P³ / (π⁷ (g_*^(vis))^(3/2) (1 + 1/f_i)^(3/2))   (D.7)

and

H_MD ≈ 90³ ζ(3)⁴ g_Φ⁴ M_P² (σ_Φ v)² m_Φ² H_i / (9π¹⁴ (g_*^(vis))³ (1 + 1/f_i)³).   (D.8)

In sum, the equations in this Appendix give the number density n_Φ,F of Φ particles in a freeze-in scenario, assuming it is produced in the early Universe from either the hidden or visible sectors, evaluated well before it decays. And by definition of the freeze-in scenario, the number density n_Φ,F is assumed to be well below its equilibrium number density.

E Additional consistency constraint for the decoupled Φ scenario

We obtain another constraint that must be satisfied in order for the EMD phase caused by the decoupled Φ to have nonzero duration. If Φ decouples from the subdominant sector, the value of f_i must be such that the decoupled number density is large enough to lead to EMD. Using (6.7) for an annihilation rate that achieves relativistic freeze-out (which corresponds to the maximum frozen number density and thus the longest possible duration for EMD), we require H_MD ≳ Γ_Φ. Using (C.2) for H_MD and (C.8) and (C.10) for x_F in their respective cases, we have

f_i ≲ [30√10 ζ(3)² g_Φ² m_Φ² / (π⁷ (g_*^(hid))^(3/2) M_P Γ_Φ)]^(2/3)   (E.1)

in the case of decoupling from the HS while the VS is dominant, and

f_i ≳ [π⁷ (g_*F^(vis))^(3/2) M_P Γ_Φ / (30√10 ζ(3)² g_Φ² m_Φ²)]^(2/3)   (E.2)

in the case of decoupling from the VS while the HS is dominant.
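The statement that this freeze-in abundance is "set at the initial time" can be made explicit with a one-line integral: with the relativistic n_Φ,eq² ∝ H³, the right side of (D.3) scales as H^(-1/2), so the comoving yield grows as √H_i − √H and saturates almost immediately. The sketch below (with the overall constant C absorbing all couplings and g_* factors) illustrates this.

```python
# UV domination of relativistic freeze-in in RD: integrating
# d(a^3 n)/dH ∝ -H^(-1/2) gives a^3 n ∝ sqrt(H_i) - sqrt(H),
# which saturates almost immediately after H_i.

def comoving_yield(H, H_i, C=1.0):
    """a^3 n_Phi, up to the constant C collecting couplings and g_*'s."""
    return C * (H_i ** 0.5 - H ** 0.5)

H_i = 1.0
late = comoving_yield(1e-12, H_i)
early = comoving_yield(0.25 * H_i, H_i)   # after only a factor-4 drop in H
# Half of the final yield is already in place once H has fallen to H_i / 4.
assert abs(early / late - 0.5) < 1e-5
```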
Because the HS is not being fed by the decay of Φ, the relation is that of standard MD.

Figure 1. Temperature dependence of the relativistic degrees of freedom in the visible sector assumed in our numerical calculations for temperatures greater than 1 GeV.

Figure 2. Numerical evolution of the background energy density components with scale factor in the case of modulus-driven EMD. EMD begins once ρ_Φ dominates over both radiation components, and lasts until Φ decays. Top panel: initial RD by the hidden sector. Bottom panel: initial RD by the visible sector. A VS reheat temperature of T_RH^(vis) = 10^4 GeV, with α = 1, results in a modulus mass of m_Φ ≈ 10^9 GeV, essentially independent of the other parameters as long as the EMD period has a noticeable duration.

Figure 3. Dependence of the present-day monopole relic abundance on the monopole mass, for an example set of parameter values, in the case of modulus-driven EMD. The parameters of the cosmological background are the same as in the two panels of Figure 2. Left: initial RD occurring in the hidden sector. Right: initial RD occurring in the visible sector. The solid curves (blue) are obtained from a numerical evolution of the background, while the dotted lines (light blue) lying on top of the numerical curves are the analytical expressions (4.10), (4.11), and (4.12). The red dashed line in both panels marks the purely RD equivalent scenario for comparison. The two black dashed lines, separating the three shaded regions, indicate the relic abundance for that mass if monopole production occurs at the start (top) or end (bottom) of EMD.

Figure 4. Dependence of the present-day monopole relic abundance on the monopole mass, for a variety of parameter values, with fixed x_M = 50 and α = λ = 1. The solid curves (blue) are obtained from a numerical evolution of the background, while the dotted lines (light blue) on top of the numerical curves are the analytical expressions (4.10)-(4.12).
Red and black dashed lines are as in Figure 3. For reference, the dotted horizontal and vertical lines in all panels mark Ω_M h² = 0.12 and m_M = 1 PeV, respectively. The curves labeled by '2.' in the top panels, '2.' in the middle panels, and '1.' in the bottom panels correspond to the curves of Figure 3. Left panels: initial RD in the HS. Right panels: initial RD in the VS.

Figure 5. Diagram of particle Φ decoupling from either sector while always reheating to the visible sector. T_RH^(vis) = 10^4 GeV, as in Figure 2.

Figure 8. Contours of T_RH^(vis) in the m_M − e_f plane holding the monopole relic abundance fixed at the observed value for DM of Ω_M h² = 0.12, obtained using (4.10)-(4.12), and setting α = 1. Each panel corresponds to parameter variation relative to the top left panel. Larger values of T_RH^(vis) correspond to lower-lying contours. The accessible region is bounded by two black dotted lines: an upper line, corresponding to monopole production before EMD, and a lower line, corresponding to production after EMD. Note that only segments of these lines are visible in the figure, as they extend underneath the main contours. The boundary lines meet at the left edge of each figure panel, where e_f ≈ 1, which corresponds to the absence of an EMD phase, denoted by a 'red star' in the figure. The monopole mass at this point agrees with the mass at which the RD line crosses Ω_M h² = 0.12 in Figures 4 and 7, for corresponding parameter values.

In obtaining the dashed contour we have assumed the large-e_f behavior. Thus the entire dashed blue contour corresponds to monopole production occurring in EMD, consistent with Figures 4 and 7. Each contour located above the dashed contour (with lower values of T_RH^(vis)) turns off from the upper boundary (see (4.6) and (4.7)). The location of these "turn-off" points in the m_M − e_f plane can be obtained in the following way. For monopole production at the start of EMD, (3.1), (3.3), and (7.1) lead to the relation (7.2) by direct substitution. Contours of Ω_M h² in the m_M − e_f plane for values of Ω_M h² not equal to 0.12 can be obtained by shifting the curves of Figure 8 according to Eq.
this limit translates into an upper bound on e_f. For the highest T_RH^(vis) considered in Figure 8, which is 1 EeV, the maximum value of e_f allowed by this bound is e_f,max ≈ 10^9. This maximum for the T_RH^(vis) = 1 EeV contour is denoted in Figure 8 by a 'green dot'. Along this contour, larger values of e_f are excluded by (7.4). The only other contour for which the bound falls within the plotted range is the one with the next-largest T_RH^(vis), for which e_f,max ≈ 10^12. For all other values of T_RH^(vis) considered, the maximum value of e_f is off the right edge of the plots.

During the MD era we have a ∝ H^(-2/3), which combined with H_RH ≈ Γ_Φ gives (7.1). For all three cases, f_RH^(EMD) … Noting that we have m_Φ n_Φ,MD ≈ 3H_MD² M_P² at the onset of EMD, we are left with

n_Φ,F (H_MD / H_F)^(3/2) = H_MD^(3/2) / (σ_Φ v H_F^(1/2)).   (C.1)
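The quoted e_f,max ≈ 10^9 for the 1 EeV contour can be cross-checked at the order-of-magnitude level. The sketch below assumes Γ_Φ ≈ H(T_RH) = √(π² g_*/90) T_RH² / M_P with an illustrative g_* and the reduced Planck mass (both assumptions of this sketch, not values specified in the text), together with e_f,max = (H_I/Γ_Φ)^(2/3) for H_I = 2.5 × 10^(-5) M_P.

```python
import math

# Rough check of e_f,max for T_RH^(vis) = 1 EeV, assuming
# Gamma_Phi ≈ H(T_RH) and a standard-model-like g_*.

M_P = 2.435e18           # reduced Planck mass in GeV (assumed convention)
g_star = 106.75          # illustrative value
T_RH = 1.0e9             # 1 EeV in GeV

Gamma_Phi = math.sqrt(math.pi ** 2 * g_star / 90.0) * T_RH ** 2 / M_P
H_I = 2.5e-5 * M_P       # Planck upper limit quoted in the text
e_f_max = (H_I / Gamma_Phi) ** (2.0 / 3.0)
# e_f_max comes out at the 1e9 scale, consistent with the bound in the text.
```

Since Γ_Φ ∝ T_RH², lowering T_RH by roughly two orders of magnitude raises e_f,max by about three, in line with the e_f,max ≈ 10^12 quoted for the next contour.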
For simplicity, we assume that σ_Φ v is independent of velocity, so that σ_Φ v is independent of temperature, as the details of the Φ field and its interactions are not the focus of this work. However, more general forms can and should be considered in a realistic model. In the Boltzmann equations we do not include the possibility of Φ decay to the HS, though one can easily include it by introducing branching fractions for both sectors. One can consider a non-negligible initial energy density for Φ, which will depend on the details of specific models, and we do not consider it any further here. We obtain these expressions by setting x_F ≲ 1 in the usual freeze-out condition, using the relativistic expression for the equilibrium number density of Φ (see Appendix C for more on freeze-out decoupling). As e_f approaches 1, which corresponds to shorter and shorter EMD periods until EMD is no longer well defined, the power-law behavior of the contours in Figure 8 breaks down. This can be seen in the slight curvature of the contours near e_f = 1, and one must be more careful when using approximate expressions for e_f in this region. However, because this deviation is quite small, and only occurs for poorly defined EMD periods, approximations based on large e_f are sufficient when considering our EMD scenarios. One has to be careful about possible modifications to the m_M − T_C relation in such cases as well. For the purposes of this discussion, we will assume the direct proportionality of a classical phase transition, though this can be generalized without too much effort.

Acknowledgements

The authors thank Lukasz Cincio, Jacek Dziarmaga

A Table of notation

In this Appendix we provide a table of notation. Subscripts generally label the time at which a quantity is evaluated, while superscripts generally label the sector to which a quantity belongs, unless stated otherwise.

is determined by the duration of the EMD phase, and we can approximate it in the following way.
At the end of EMD, as Φ completes its decay and reheats the visible sector, the ratio of the radiation energy densities of the two sectors becomes fixed, where the additional subscript 'RH' on the energy densities indicates their value at reheating. At the onset of EMD, the energy densities of Φ and radiation are close to equal and we have ρ_Φ,MD ≈ ρ_r,MD ≈ 3H_MD² M_P², while at the end of EMD we have ρ_Φ,RH ≈ ρ_r,RH ≈ 3Γ_Φ² M_P². In the case of initial HS domination, ρ_r,MD is dominated by ρ_r,MD^(hid), while for initial visible sector domination it is dominated by ρ_r,MD^(vis). The energy density at reheating in both cases is dominated by the visible sector because of our decay requirement. Therefore, the ratio of the visible sector

References

[1] H. Abdallah et al., "Search for γ-Ray Line Signals from Dark Matter Annihilations in the Inner Galactic Halo from 10 Years of Observations with H.E.S.S.," Phys. Rev. Lett., vol. 120, no. 20, p. 201101, 2018.
[2] M. Baumgart, T. Cohen, I. Moult, N. L. Rodd, T. R. Slatyer, M. P. Solon, I. W. Stewart, and V. Vaidya, "Resummed Photon Spectra for WIMP Annihilation," JHEP, vol. 03, p. 117, 2018.
[3] H. Baer, K.-Y. Choi, J. E. Kim, and L. Roszkowski, "Dark matter production in the early Universe: beyond the thermal WIMP paradigm," Phys. Rept., vol. 555, pp. 1-60, 2015.
[4] W. H. Zurek, "Cosmological Experiments in Superfluid Helium?," Nature, vol. 317, pp. 505-508, 1985.
[5] W. H. Zurek, "Cosmic strings in laboratory superfluids and the topological remnants of other phase transitions," Acta Phys. Polon., vol. B24, pp. 1301-1311, 1993.
[6] W. H. Zurek, "Cosmological experiments in condensed matter systems," Phys. Rept., vol. 276, pp. 177-221, 1996.
[7] S.-Z. Lin, X. Wang, Y. Kamiya, G.-W. Chern, F. Fan, D. Fan, B. Casas, Y. Liu, V. Kiryukhin, W. H. Zurek, C. D. Batista, and S.-W. Cheong, "Topological defects as relics of emergent continuous symmetry and higgs condensation of disorder in ferroelectrics," Nature Physics, vol. 10, no. 12, pp. 970-977, 2014.
[8] N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, "Critical dynamics of spontaneous symmetry breaking in a homogeneous bose gas," Science, vol. 347, no. 6218, pp. 167-170, 2015.
[9] L. Chomaz, L. Corman, T. Bienaimé, R. Desbuquois, C. Weitenberg, S. Nascimbène, J. Beugnon, and J. Dalibard, "Emergence of coherence via transverse condensation in a uniform quasi-two-dimensional bose gas," Nature Communications, vol. 6, no. 1, p. 6162, 2015.
[10] J. Beugnon and N. Navon, "Exploring the kibble-zurek mechanism with homogeneous bose gases," Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 50, p. 022002, 2017.
[11] Q. N. Meier, M. Lilienblum, S. M. Griffin, K. Conder, E. Pomjakushina, Z. Yan, E. Bourret, D. Meier, F. Lichtenberg, E. K. H. Salje, N. A. Spaldin, M. Fiebig, and A. Cano, "Global formation of topological defects in the multiferroic hexagonal manganites," Phys. Rev. X, vol. 7, p. 041014, 2017.
[12] H. Murayama and J. Shu, "Topological Dark Matter," Phys. Lett., vol. B686, pp. 162-165, 2010.
[13] J. E. Kim, "Effects of decay of scalar partner of axion on cosmological bounds of axion supermultiplet properties," Phys. Rev. Lett., vol. 67, pp. 3465-3468, 1991.
[14] M. Kawasaki, T. Moroi, and T. Yanagida, "Can decaying particles raise the upper bound on the Peccei-Quinn scale?," Phys. Lett. B, vol. 383, pp. 313-316, 1996.
[15] T. Banks and M. Dine, "The Cosmology of string theoretic axions," Nucl. Phys. B, vol. 505, pp. 445-460, 1997.
[16] J. E. Kim, "Can strong QCD action in the early universe raise the axion decay constant?," in 28th International Conference on High-energy Physics, pp. 1537-1540, 1996.
[17] M. Hashimoto, K. Izawa, M. Yamaguchi, and T. Yanagida, "Axion cosmology with its scalar superpartner," Phys. Lett. B, vol. 437, pp. 44-50, 1998.
[18] T. Asaka and M. Yamaguchi, "Hadronic axion model in gauge mediated supersymmetry breaking and cosmology of saxion," Phys. Rev. D, vol. 59, p. 125003, 1999.
[19] T. Banks, M. Dine, and M. Graesser, "Supersymmetry, axions and cosmology," Phys. Rev., vol. D68, p. 075011, 2003.
[20] G. Kane, K. Sinha, and S. Watson, "Cosmological Moduli and the Post-Inflationary Universe: A Critical Review," Int. J. Mod. Phys., vol. D24, no. 08, p. 1530022, 2015.
[21] E. N. Parker, "The Origin of Magnetic Fields," Astrophys. J., vol. 160, p. 383, 1970.
[22] M. S. Turner, E. N. Parker, and T. J. Bogdan, "Magnetic Monopoles and the Survival of Galactic Magnetic Fields," Phys. Rev., vol. D26, p. 1296, 1982.
[23] C. Gomez Sanchez and B. Holdom, "Monopoles, strings and dark matter," Phys. Rev., vol. D83, p. 123524, 2011.
[24] A. Hook and J. Huang, "Bounding millimagnetically charged particles with magnetars," Phys. Rev., vol. D96, no. 5, p. 055010, 2017.
[25] J. Terning and C. B. Verhaaren, "Detecting Dark Matter with Aharonov-Bohm," JHEP, vol. 12, p. 152, 2019.
[26] A. del Campo and W. H. Zurek, "Universality of phase transition dynamics: Topological Defects from Symmetry Breaking," Int. J. Mod. Phys. A, vol. 29, no. 8, p. 1430018, 2014.
[27] W. H. Zurek, "Topological relics of symmetry breaking: Winding numbers and scaling tilts from random vortex-antivortex pairs," J. Phys. Condens. Matter, vol. 25, no. 40, p. 404209, 2013.
[28] L. Dolan and R. Jackiw, "Symmetry Behavior at Finite Temperature," Phys. Rev., vol. D9, pp. 3320-3341, 1974.
[29] W. H. Zurek, U. Dorner, and P. Zoller, "Dynamics of a quantum phase transition," Phys. Rev. Lett., vol. 95, p. 105701, 2005.
[30] J. Dziarmaga, "Dynamics of a quantum phase transition: Exact solution of the quantum ising model," Physical Review Letters, vol. 95, 2005.
[31] J. Preskill, "Cosmological Production of Superheavy Magnetic Monopoles," Phys. Rev. Lett., vol. 43, p. 1365, 1979.
[32] N. Seiberg and E. Witten, "Electric-magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang-Mills theory," Nucl. Phys. B, vol. 426, pp. 19-52, 1994. [Erratum: Nucl. Phys. B 430, 485-486 (1994)].
Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD. N Seiberg, E Witten, Nucl. Phys. B. 431N. Seiberg and E. Witten, "Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD," Nucl. Phys. B, vol. 431, pp. 484-550, 1994. Thermodynamics of SU(2) N=2 supersymmetric Yang-Mills theory. S Paik, L G Yaffe, JHEP. 0159S. Paik and L. G. Yaffe, "Thermodynamics of SU(2) N=2 supersymmetric Yang-Mills theory," JHEP, vol. 01, p. 059, 2010. Largest temperature of the radiation era and its cosmological implications. G F Giudice, E W Kolb, A Riotto, Phys. Rev. 6423508G. F. Giudice, E. W. Kolb, and A. Riotto, "Largest temperature of the radiation era and its cosmological implications," Phys. Rev., vol. D64, p. 023508, 2001. Dark Matter Production in an Early Matter Dominated Era. M Drees, F Hajkarim, JCAP. 18020257M. Drees and F. Hajkarim, "Dark Matter Production in an Early Matter Dominated Era," JCAP, vol. 1802, no. 02, p. 057, 2018. Freeze-in Production of Dark Matter Prior to Early Matter Domination. R Allahverdi, J K Osiński, Phys. Rev. 101663503R. Allahverdi and J. K. Osiński, "Freeze-in Production of Dark Matter Prior to Early Matter Domination," Phys. Rev., vol. D101, no. 6, p. 063503, 2020. Review of Particle Physics. M Tanabashi, Phys. Rev. 98330001M. Tanabashi et al., "Review of Particle Physics," Phys. Rev., vol. D98, no. 3, p. 030001, 2018. Planck 2018 results. VI. Cosmological parameters. N Aghanim, N. Aghanim et al., "Planck 2018 results. VI. Cosmological parameters," 2018. The Dark Matter Annihilation Boost from Low-Temperature Reheating. A L Erickcek, Phys. Rev. 9210103505A. L. Erickcek, "The Dark Matter Annihilation Boost from Low-Temperature Reheating," Phys. Rev., vol. D92, no. 10, p. 103505, 2015. Freeze-In Production of FIMP Dark Matter. L J Hall, K Jedamzik, J March-Russell, S M West, JHEP. 0380L. J. Hall, K. Jedamzik, J. March-Russell, and S. M. West, "Freeze-In Production of FIMP Dark Matter," JHEP, vol. 03, p. 
080, 2010. Y Akrami, Planck 2018 results. X. Constraints on inflation. 7Y. Akrami et al., "Planck 2018 results. X. Constraints on inflation," 7 2018. Dark Matter and Dark Radiation. L Ackerman, M R Buckley, S M Carroll, M Kamionkowski, Physical Review D. 10L. Ackerman, M. R. Buckley, S. M. Carroll, and M. Kamionkowski, "Dark Matter and Dark Radiation," Physical Review D, pp. 277-286, 10 2008.
ABELIAN FUNCTIONS FOR TRIGONAL CURVES OF GENUS THREE

J. C. Eilbeck, V. Z. Enolski, S. Matsutani, Y. Ônishi, E. Previato

Abstract. We develop the theory of generalized Weierstrass σ- and ℘-functions defined on a trigonal curve of genus three. In particular we give a list of the associated partial differential equations satisfied by the ℘-functions, a proof that the coefficients of the power series expansion of the σ-function are polynomials of moduli parameters, and the derivation of two addition formulae.

DOI: 10.1093/imrn/rnm140
arXiv: math/0610019v2 [math.AG], 23 Oct 2007
PDF: https://export.arxiv.org/pdf/math/0610019v2.pdf
Semantic Scholar Corpus ID: 114630
Introduction

Constructive theories of Abelian and modular functions associated with algebraic curves have seen an upsurge of interest in recent times. These classical functions have been of crucial importance in mathematics since their definition at the hands of Abel, Jacobi, Poincaré and Riemann, but their relevance in physics and applied mathematics has greatly developed over the past three decades. Algebraic curves are here intended as Riemann surfaces, unless specified to be singular. The study of the simplest hyperelliptic curves, namely curves of genus two, goes back to the beginning of the 20th century, and these are treated in much detail in advanced textbooks; see for example Baker (1907) [4] and more recently Cassels and Flynn (1996) [17]. Not so much is known about the simplest trigonal curves, which have genus three. The study of modular functions of these curves was originated by Picard, and reprised recently by Shiga [27] and his school. In this paper we study Abelian functions associated with the simplest general type of curve, the general (3,4) curve; this is an (n, m)-curve in the sense of Burchnall-Chaundy [7].

Our work is based on the realization of Abelian functions as logarithmic derivatives of the multi-dimensional σ-function.
This approach is due to Weierstrass and Klein and was developed by Baker [1]; for recent developments of the theory of multi-dimensional σ-functions, see Grant [19], Buchstaber, Enolskii, and Leykin [8], Buchstaber and Leykin [10], [11], Eilbeck, Enolskii and Previato [16], and Baldwin and Gibbons [5], among others. We shall adopt as a template the Weierstrass theory of elliptic functions, trying to extend as far as possible these results to the case of the trigonal genus-three curve. Let σ(u) and ℘(u) be the standard functions in Weierstrass elliptic function theory. They satisfy the well-known formulae

(0.1)  ℘(u) = −(d²/du²) log σ(u),   (℘′)² = 4℘³ − g₂℘ − g₃,   ℘″ = 6℘² − (1/2)g₂,

and the addition formula, which is a basic formula of the theory,

(0.2)  − σ(u + v) σ(u − v) / (σ(u)² σ(v)²) = ℘(u) − ℘(v).

We present here two addition formulae (Theorems 8.1 and 9.1). The first of these is for the general trigonal curve of degree four, whereas the second is restricted to a "purely trigonal" curve of degree four (see (2.1)). The first main Theorem 8.1 is the natural generalization of (0.2). The authors realized the existence of the formula of the second main Theorem 9.1 from [26]. However, we were not able to use that paper to establish our result, instead working from results by Cho and Nakayashiki [12], Grant's paper [19], p. 100, (1.6), or a calculation using [9]. The crucial part is to identify the coefficients of the right-hand sides of these two formulae. To calculate these, we used a power-series expansion of the σ-function, stimulated by the works of Buchstaber and Leykin [10] for the hyperelliptic case and of Baldwin and Gibbons [5] for a purely trigonal curve of genus four. The σ-functional realization of Abelian functions of a trigonal curve of arbitrary genus g was previously developed in [9] and [15].
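As a purely numerical sanity check of the differential equation in (0.1) (our own illustration, not part of the paper), one can approximate ℘ and ℘′ by truncated lattice sums and g₂, g₃ by truncated Eisenstein series; the period generators and the test point below are arbitrary choices:

```python
# Check (wp')^2 = 4*wp^3 - g2*wp - g3 by truncated lattice sums.
# Symmetric truncation cancels the odd-power tail terms, so the
# error is O(1/N^2); periods and z are arbitrary (not from the paper).
def lattice(N, w1, w2):
    return [m * w1 + n * w2
            for m in range(-N, N + 1) for n in range(-N, N + 1)
            if (m, n) != (0, 0)]

ws = lattice(60, 2.0 + 0.0j, 1.0 + 2.0j)

g2 = 60 * sum(w**-4 for w in ws)    # Eisenstein series E4 (truncated)
g3 = 140 * sum(w**-6 for w in ws)   # Eisenstein series E6 (truncated)

z = 0.5 + 0.3j
wp  = z**-2 + sum((z - w)**-2 - w**-2 for w in ws)   # Weierstrass wp
wpp = -2 * z**-3 - 2 * sum((z - w)**-3 for w in ws)  # its derivative

lhs, rhs = wpp**2, 4 * wp**3 - g2 * wp - g3
assert abs(lhs - rhs) / abs(rhs) < 1e-2
```

The tolerance reflects the truncation of the lattice sums, not the identity itself, which is exact.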
Using these results in the case of g = 3 we present explicit formulae for 6 canonical meromorphic differentials and the symmetric bidifferential which allow us to derive a complete set of relations for trigonal ℘-functions, generalizing the above relations for the Weierstrass ℘-function. We note that we have recently developed a parallel, but more limited theory, for purely trigonal curves of genus four in [6], a paper which draws heavily on the results presented here. It is perhaps useful to compare and contrast these two cases. As demonstrated in Schilling's generalization of the Neumann system [28], there are basically two cases of trigonal cyclic covers, the order of a related linear differential operator that commutes with the given one of order three being congruent to 1 or 2 modulo 3, respectively. In each case, the action variables of the integrable system parametrize a family of curves of the same type, thus the family of curves in the (3, 4)-case cannot be obtained as a limit of that in the (3, 5)-case, as they have different dimensions. In the present paper, we develop the method and prove the addition formulae, together with the characterising differential equations, for the former case, in that the highest power of x appearing in the equation of the curve is 4 (≡ 1 modulo 3); this corresponds to the 'base' case of the Boussinesq equation, the smallest-genus spectral curve of an algebro-geometric third-order operator. In [6], the case where the highest power of x appearing in the equation of the curve is 5 (≡ 2 modulo 3) is addressed. The differences in the two cases manifest themselves in a number of ways, for example the parity of the σ function is different in the two cases, and the two-term addition formulae are antisymmetric in the genus 3 case and symmetric in the genus 4 case. Also the results are given for the general (3, 4)-curve here, whereas only for the purely trigonal (3, 5)-case in [6]. 
It may be possible with some work to relate the (3,5)-case to the (3,4)-case, but this would not be straightforward and we have not yet attempted this. Our study is far from complete, and a number of questions still remain. One of the first problems still to be considered should be the explicit recursive construction of the σ-series generalizing the one given by Weierstrass; for a hyperelliptic curve of genus two, this result was found by Buchstaber and Leykin [10], who also devised a procedure to derive such recursions for the whole family of (n, m)-curves [10], [11]. Another problem is the deeper understanding of the algebraic structure of the addition theorems developed here, in order to generalize the results to higher genera. As a pattern one can consider the addition formula of [8] for hyperelliptic σ-functions of arbitrary genera, written in terms of certain Pfaffians. Also, the description of Jacobi and Kummer varieties as projective varieties, whose coordinates are given in terms of (derivatives of) trigonal ℘-functions, is far from complete. We hope the results we present to be the first steps towards a general theory of trigonal curves of arbitrary genus, as well as a tool in the study of projective varieties which are images of Jacobians.

The paper is organized as follows. We first discuss the basic properties of the general (3,4)-curve in Section 1, and define a restricted version of this curve, the "purely trigonal" case, in Section 2. In Section 3 we introduce the σ-function for the general curve, and in Section 4 the Abelian functions ℘_ij and their derivatives. Section 5 of the paper is devoted to the various differential relations satisfied by these Abelian functions, and the series expansion of the σ-function is discussed in Section 6; the main result there (Theorem 6.1) is new, is proved quite constructively, and is the key for the rest of the paper.
Let Θ^[2] be the standard theta divisor, namely the image of the Abelian map of the symmetric square of the curve that we consider, in its Jacobian variety J. The bases of the spaces Γ(J, O(nΘ^[2])) of functions on J whose poles are at most of order n along Θ^[2] are discussed in Section 7, as a preliminary to the two main addition Theorems in Sections 8 and 9, respectively. The first addition theorem is a two-term relation for the general (3,4)-curve, and the second a three-term relation for the "purely trigonal" (3,4)-curve. Appendix A has some formulae for the fundamental bi-differential, and Appendix B has a list of quadratic three-index relations for the "purely trigonal" case only, as the full relations would require too much space. The web site [13] contains more details of the relations omitted through lack of space. While Sections 1 and 2 overlap somewhat with material in [26], we believe that the results are useful to make the present paper reasonably self-contained.

The general trigonal curve of degree four

We consider the curve C defined by f(x, y) = 0, where

(1.1)  f(x, y) = y³ + (μ_1 x + μ_4)y² + (μ_2 x² + μ_5 x + μ_8)y − (x⁴ + μ_3 x³ + μ_6 x² + μ_9 x + μ_12)

(the μ_j are constants), with the unique point ∞ at infinity. This curve is of genus 3 if it is non-singular. We consider the set of 1-forms

(1.2)  ω_1 = dx / f_y(x, y),   ω_2 = x dx / f_y(x, y),   ω_3 = y dx / f_y(x, y),

where f_y(x, y) = (∂/∂y) f(x, y). This is a basis of the space of differentials of the first kind on C. We denote the vector consisting of the forms (1.2) by

(1.3)  ω = (ω_1, ω_2, ω_3).

We know, by the general theory, that for three variable points (x_1, y_1), (x_2, y_2), and (x_3, y_3) on C, the sum of integrals from ∞ to these three points

(1.4)  u = (u_1, u_2, u_3) = ∫_∞^(x_1,y_1) ω + ∫_∞^(x_2,y_2) ω + ∫_∞^(x_3,y_3) ω

fills the whole space C³. We denote the points in C³ by u and v etc., and their natural coordinates in C³ by the subscripts (u_1, u_2, u_3), (v_1, v_2, v_3).
We denote the lattice generated by the integrals of the basis (1.2) along any closed paths on C by Λ. We denote the manifold C³/Λ by J, the Jacobian variety over C of C. We denote by κ the natural map to the quotient group,

(1.5)  κ : C³ → C³/Λ = J.

We define, for k = 1, 2, 3, …, the map

(1.6)  ι : Sym^k(C) → J,   (P_1, …, P_k) ↦ ∫_∞^(P_1) ω + … + ∫_∞^(P_k) ω mod Λ,

and denote its image by W^[k]. (W^[k] = J for k ≥ 3 by the Abel-Jacobi theorem.) We will use the same symbol u = (u_1, u_2, u_3) for a point u ∈ C³ in κ^{-1}(W^[k]). Let

(1.7)  [−1](u_1, u_2, u_3) = (−u_1, −u_2, −u_3),

and

(1.8)  Θ^[k] := W^[k] ∪ [−1]W^[k].

We call this Θ^[k] the k-th standard theta subset. In particular, if k = 1, then (1.6) gives an embedding of C:

(1.9)  ι : C → J,   P ↦ ∫_∞^P ω mod Λ.

We note that

(1.10)  Θ^[2] = W^[2],   Θ^[1] = W^[1],

differing from the genus-3 hyperelliptic case in a suitable normalization [8]. If u = (u_1, u_2, u_3) varies on the inverse image κ^{-1}ι(C) = κ^{-1}(W^[1]) of the embedded curve, we can take u_3 as a local parameter at the origin (0, 0, 0). Then we have (see [26], e.g.) Laurent expansions with respect to u_3 as follows:

(1.11)  u_1 = (1/5) u_3⁵ + …,   u_2 = (1/2) u_3² + …,

and

(1.12)  x(u) = 1/u_3³ + …,   y(u) = 1/u_3⁴ + ….

We introduce a weight for several variables as follows:

Definition 1.1. We define a weight for constants and variables appearing in our relations as follows. The weights of the variables u_1, u_2, u_3, for every u = (u_1, u_2, u_3) of W^[k] (k = 1, 2, …), are 5, 2, 1, respectively; the weight of each coefficient μ_j in (1.1) is −j; and the weights of x and y of each point (x, y) of C are −3 and −4, respectively. So the weights of the variables are nothing but the order of zero at ∞, while the weight assigned to the coefficients is a device to render f(x, y) homogeneous.
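Definition 1.1 can be checked mechanically: with x, y of weights −3, −4 and μ_j of weight −j, every monomial of f in (1.1) has weight −12. A small sympy sketch of this check (our own illustration, not from the paper):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
mu = {j: sp.Symbol(f'mu{j}') for j in (1, 2, 3, 4, 5, 6, 8, 9, 12)}

# The curve (1.1).
f = (y**3 + (mu[1]*x + mu[4])*y**2 + (mu[2]*x**2 + mu[5]*x + mu[8])*y
     - (x**4 + mu[3]*x**3 + mu[6]*x**2 + mu[9]*x + mu[12]))

# Rescale each quantity by t^(its weight): x -> t^-3 x, y -> t^-4 y,
# mu_j -> t^-j mu_j.
scaled = f.subs({x: t**-3 * x, y: t**-4 * y}, simultaneous=True)
scaled = scaled.subs({mu[j]: t**-j * mu[j] for j in mu}, simultaneous=True)

# Homogeneity of weight -12: the rescaled polynomial is t^-12 * f.
assert sp.simplify(scaled - t**-12 * f) == 0
```

The same substitution with μ_7, μ_10 or μ_11 present would break homogeneity, which is why those coefficients are absent from (1.1).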
This is the reason why μ_7, μ_10, μ_11 are absent. We remark that the weights of the variables u_k are precisely the Weierstrass gap numbers of the Weierstrass gap sequence at ∞, whilst the weights of monomials of x(u) and y(u) correspond to the Weierstrass non-gap numbers in the sequence. In particular, in the case considered, the orders of existing functions of the form x^p y^q, p, q ∈ N ∪ {0}, are the non-gaps 0, 3, 4, 6, 7, 8, 9, 10, …, so the gaps of the sequence are 1, 2, 5. The definition above is compatible, for instance, with the Laurent expansions of x(u) and y(u) with respect to u_3, etc., for u ∈ W^[1]. Moreover, all the equalities in this paper are homogeneous with respect to this weight.

In the next section, we use the discriminant of C. Axiomatically, the discriminant D of C is defined as (one of) the simplest polynomial(s) in the μ_j's such that D = 0 if and only if C has a singular point. Here we are regarding C as a family of curves over Z. While no concrete expression of the discriminant is necessary for the main results in this paper, we put forward a conjecture based on the results of experimentation on special cases of C using computer algebra.

Conjecture 1.2. Let

(1.13)  R_1 = rslt_x( rslt_y(f(x,y), f_x(x,y)), rslt_y(f(x,y), f_y(x,y)) ),
        R_2 = rslt_y( rslt_x(f(x,y), f_x(x,y)), rslt_x(f(x,y), f_y(x,y)) ),
        R_3 = gcd(R_1, R_2),

where rslt_z represents the resultant, namely the determinant of the Sylvester matrix with respect to the variable z. Then R_3 is of weight 144 and a perfect square in the ring Z[μ_1, μ_4, μ_2, μ_5, μ_8, μ_3, μ_6, μ_9, μ_12].

Unfortunately, checking this condition directly is a computing task presenting considerable difficulties due to the size of the intermediate expressions involved. We leave this as a conjecture and remark only that work on a full calculation is continuing. This result is not crucial to this paper, but we will adopt it as a working hypothesis (see Remark 6.2).
To continue, we define here the discriminant D of C as a square root of R_3:

(1.14)  D = √R_3.

We comment on the choice of this root in 6.2. If the conjecture is true, D is of weight 72. For the convenience of the reader we give R_3 for the special case of μ_1 = μ_2 = μ_4 = μ_5 = μ_8 = 0 (see Section 2):

R_3 = (256μ_12³ − 27μ_12²μ_3⁴ − 128μ_12²μ_6² + 144μ_12²μ_6μ_3² − 192μ_12²μ_9μ_3 + 16μ_12μ_6⁴ − 80μ_12μ_9μ_6²μ_3 − 4μ_12μ_3²μ_6³ + 18μ_12μ_9μ_3³μ_6 + 144μ_12μ_9²μ_6 − 6μ_12μ_9²μ_3² − 4μ_9²μ_6³ − 4μ_9³μ_3³ + μ_9²μ_3²μ_6² + 18μ_9³μ_6μ_3 − 27μ_9⁴)⁶.

Definition 1.3. The 2-form Ω((x, y), (z, w)) on C × C is called the fundamental 2-form of the second kind (or fundamental second-kind bi-differential) if it is symmetric, namely

(1.15)  Ω((x, y), (z, w)) = Ω((z, w), (x, y)),

if it has its only pole (of second order) along the diagonal of C × C, and if in the vicinity of each point (x, y) it is expanded in power series as

(1.16)  Ω((x, y), (z, w)) = ( 1/(ξ − ξ′)² + O(1) ) dξ dξ′   (as (x, y) → (z, w)),

where ξ and ξ′ are local coordinates of the points (x, y) and (z, w). We shall look for a realization of Ω((x, y), (z, w)) in the form

(1.17)  Ω((x, y), (z, w)) = F((x, y), (z, w)) dx dz / ( (x − z)² f_y(x, y) f_w(z, w) ),

where F((x, y), (z, w)) is a polynomial in its variables.

Lemma 1.4. Let

(1.18)  Σ((x, y), (z, w)) = 1/( (x − z) f_y(x, y) ) · Σ_{k=1}^{3} y^{3−k} [ f(Z, W)/W^{3−k+1} ]_W |_{(Z,W)=(z,w)},

where [ ]_W means removing the terms of negative powers with respect to W. Then there exist differentials η_j = η_j(x, y) (j = 1, 2, 3) of the second kind that have their only pole at ∞, such that the fundamental 2-form of the second kind is given as

(1.19)  Ω((x, y), (z, w)) = ( (d/dx) Σ((z, w), (x, y)) + Σ_{k=1}^{3} (ω_k(z, w)/dz)(η_k(x, y)/dx) ) dx dz.

The set of differentials {η_1, η_2, η_3} is determined modulo the space spanned by the ω_j's of (1.2).

Proof.
The 2-form

(1.20)  (d/dz) Σ((x, y), (z, w)) dx dz

satisfies the condition on the poles as a function of (x, y); indeed one can check that (1.20) has only a second-order pole at (x, y) = (z, w) whenever (z, w) is an ordinary point or a Weierstrass point; at infinity the expansion (1.12) should be used. However, the form (1.20) has unwanted poles at infinity as a form in the (z, w)-variables. To restore the symmetry given in (1.15) we complement (1.20) by the second term to obtain (1.19) with polynomials η_j(x, y), which should be found from (1.15). That results in a system of linear equations for the coefficients of the η_j(x, y) which is always solvable. As a result, the polynomials η_i(x, y) as well as F((x, y), (z, w)) are obtained explicitly.

Remark 1.5. The 1-form

Π^{(z_2,w_2)}_{(z_1,w_1)}(x, y) = Σ((x, y), (z_1, w_1)) dx − Σ((x, y), (z_2, w_2)) dx

is the differential of the third kind, with first-order poles at the points (x, y) = (z_1, w_1) and (x, y) = (z_2, w_2), and residues +1 and −1 correspondingly.

Remark 1.6. The realization of the fundamental 2-form in terms of the Schottky-Klein prime-form and θ-functions is given in [1], no. 272, and the theory based on the θ-functional representation is developed in [18]. Here we deal with an equivalent algebraic representation of the fundamental 2-form which goes back to Klein, and exhibit an algebraic expression for it, which is also mentioned by Fay in [18], where the prime-form was defined. The above derivation of the fundamental 2-form is done in [1], around p. 194, and it was reconsidered in [15] for a large family of algebraic curves. The case of a trigonal curve of genus four was developed in [5], pp. 3617-3618. It is easily seen that the η_j above are written as

(1.21)  η_j(x, y) = ( h_j(x, y) / f_y(x, y) ) dx,   j = 1, 2, 3,

where h_j(x, y) ∈ Q[μ_1, μ_2, μ_4, μ_5, μ_8, μ_3, μ_6, μ_9, μ_12][x, y], and h_j is of homogeneous weight.
The differentials η_j are defined modulo the space of holomorphic differentials with the same weight, but it is possible to choose the standard η_j's uniquely by requiring that, for each j = 1, 2, 3, the polynomial h_j(x, y) does not contain monomials corresponding to non-gaps with bigger j. Moreover, there exist precisely 2g = 6 monomials defining standard differentials; for more details see [11], Chapter 4. In particular, straightforward calculations lead to the following expressions:

(1.22)
h_3(x, y) = −x²,
h_2(x, y) = −2xy + μ_1 x²,
h_1(x, y) = −(5x² + (μ_1μ_2 − 3μ_3)x + μ_2μ_4 + μ_6)y + μ_2 y² + 3μ_1 x³ − (μ_2² + 2μ_3μ_1 − 2μ_4)x² − (μ_5μ_2 + μ_6μ_1 + μ_3μ_4)x + (3/4)μ_1 f_x(x, y) − ((1/3)μ_2 − (1/4)μ_1²) f_y.

The orders of the monomials defining standard differentials are the non-gaps in the sequence 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, …; these can be written as 3i + 4j, 0 ≤ i ≤ 2, 0 ≤ j ≤ 1. We remark that the last two terms in the definition of h_1(x, y) are chosen to provide the standard differentials described above. The polynomial F((x, y), (z, w)) in (1.17) is of homogeneous weight (weight −8), and is given explicitly in Appendix A.

Purely trigonal curve of degree four

In Section 9 of this paper, we restrict ourselves to the curve

(2.1)  C : y³ = x⁴ + μ_3 x³ + μ_6 x² + μ_9 x + μ_12,

specialized from (1.1). We also restrict the results given in Appendix B to this case, to save space. This curve is called a purely trigonal curve of degree four. Equivalently, we can represent the curve (2.1) in the form

(2.2)  C : y³ = ∏_{k=1}^{4} (x − a_k),

and evaluate the discriminant D according to (1.14) as

(2.3)  D = ∏_{1 ≤ i < j ≤ 4} (a_i − a_j)⁴.

The curve C is smooth if and only if a_i ≠ a_j for all i ≠ j, i, j = 1, …, 4. While we assume this to be the case, results in the singular cases are obtained by a suitable limiting process.
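Formula (2.3) is consistent with the standard quartic discriminant: for a monic quartic q(x) = ∏(x − a_k), one has disc(q) = ∏_{i<j}(a_i − a_j)², so the D of (2.3) is disc(q)². A sympy sketch of this check (our own illustration, not from the paper):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a1:5')   # roots a1, a2, a3, a4 of the quartic in (2.2)

q = sp.prod((x - ak) for ak in a)        # monic quartic, right side of (2.2)
disc = sp.discriminant(sp.expand(q), x)

# Squared Vandermonde product prod_{i<j} (a_i - a_j)^2.
vandermonde_sq = sp.prod((a[i] - a[j])**2
                         for i in range(4) for j in range(i + 1, 4))

# disc(q) = prod_{i<j} (a_i - a_j)^2 for monic q ...
assert sp.simplify(disc - vandermonde_sq) == 0

# ... so D of (2.3), i.e. prod (a_i - a_j)^4, equals disc(q)^2.
D = vandermonde_sq**2
assert sp.simplify(D - disc**2) == 0
```

In particular D vanishes exactly when two of the a_k coincide, i.e. exactly when the curve (2.2) is singular.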
For the curve (2.1), the basis (1.2) of differential forms of the first kind and the function Σ in (1.18) can be written explicitly as

(2.4)  ω_1 = dx/(3y²),   ω_2 = x dx/(3y²),   ω_3 = y dx/(3y²) = dx/(3y),

and

(2.5)  Σ((x, y), (z, w)) = (y² + yw + w²) / (3(x − z)y²),

respectively. The function σ(u) is defined by using these. Let ζ = e^{2π√−1/3}. The curve C has an automorphism (x, y) → (x, ζy), and for u = (u_1, u_2, u_3) ∈ κ^{-1}ι(C), ζ^j acts by

(2.6)  [ζ^j]u = (ζ^j u_1, ζ^j u_2, ζ^{2j} u_3) = ∫_∞^{(x, ζ^j y)} (du_1, du_2, du_3).

This action naturally induces an action on κ^{-1}Θ^[k] (k = 2, 3, …), implying that the set Θ^[k] is stable under the action of [ζ^j].

The σ-function

We construct here the σ-function

(3.1)  σ(u) = σ(u_1, u_2, u_3)

associated with C for u ∈ C³ (see also [8], Chap. 1). We choose closed paths

(3.2)  α_i, β_j (1 ≦ i, j ≦ 3)

on C which generate H_1(C, Z), such that their intersection numbers are α_i·α_j = β_i·β_j = 0, α_i·β_j = δ_ij. Define the period matrices by

(3.3)  [ ω′  ω″ ] = [ ∮_{α_i} ω_j   ∮_{β_i} ω_j ]_{i,j=1,2,3},   [ η′  η″ ] = [ ∮_{α_i} η_j   ∮_{β_i} η_j ]_{i,j=1,2,3}.

We can combine these two matrices into

(3.4)  M = [ ω′  ω″ ; η′  η″ ].

Then M satisfies

(3.5)  M [ 0  −1_3 ; 1_3  0 ] ᵗM = 2π√−1 [ 0  −1_3 ; 1_3  0 ].

This is the generalized Legendre relation (see (1.14) on p. 11 of [8]). In particular, ω′^{-1}ω″ is a symmetric matrix. We know also that

(3.6)  Im(ω′^{-1}ω″) is positive definite.

By looking at (1.2), we see that the canonical divisor class of C is given by 4∞; since we are taking ∞ as the base point, the Riemann constant is an element of (1/2)Z⁶ (see [22], Coroll. 3.11, p. 166).
Let

(3.7)  δ := [ δ′ ; δ″ ] ∈ (1/2)Z⁶

be the theta characteristic which gives the Riemann constant with respect to the base point ∞ and the period matrix [ω′ ω″]. Note that we use δ′, δ″, as well as n in (3.8), as columns, to keep the notation a bit simpler. We define

(3.8)  σ(u) = σ(u; M) = σ(u_1, u_2, u_3; M)
          = c exp(−(1/2) u η′ω′^{-1} ᵗu) ϑ[δ](ω′^{-1} ᵗu; ω′^{-1}ω″)
          = c exp(−(1/2) u η′ω′^{-1} ᵗu) × Σ_{n∈Z³} exp{ 2πi [ (1/2) ᵗ(n + δ′) ω′^{-1}ω″ (n + δ′) + ᵗ(n + δ′)(ω′^{-1} ᵗu + δ″) ] },

where

(3.9)  c = (π³/|ω′|)^{1/2} / D^{1/8},

with D from (1.14). Here the choice of the root in (3.9) is explained in Remark 6.2 below. The series (3.8) converges because of (3.6). In what follows, for a given u ∈ C³, we denote by u′ and u″ the unique elements in R³ such that

(3.10)  u = u′ω′ + u″ω″.

Then for u, v ∈ C³ and ℓ (= ℓ′ω′ + ℓ″ω″) ∈ Λ, we define

(3.11)  L(u, v) := u(η′ ᵗv′ + η″ ᵗv″),
        χ(ℓ) := exp[ π√−1 ( 2(ℓ′δ″ − ℓ″δ′) + ℓ′ ᵗℓ″ ) ]  (∈ {1, −1}).

In this situation the most important properties of σ(u; M) are as follows:

Lemma 3.1. The function σ(u) is an entire function. For all u ∈ C³, ℓ ∈ Λ and γ ∈ Sp(6, Z), we have

(3.12)  σ(u + ℓ; M) = χ(ℓ) σ(u; M) exp L(u + (1/2)ℓ, ℓ),
(3.13)  σ(u; γM) = σ(u; M),
(3.14)  u ↦ σ(u; M) has zeroes of order 1 along Θ^[2],
(3.15)  σ(u; M) = 0 ⇔ u ∈ Θ^[2].

Proof. The function σ is clearly entire from its definition and from the known properties of theta series. The formula (3.12) is a special case of the equation from [1] (p. 286 in the 1995 reprint, ℓ. 22). The statement (3.13) is easily shown by using the definition of σ(u), since γ corresponds to changing the choice of the paths of integration given in (3.3). The statements (3.14) and (3.15) are explained in [1] (p. 252). These facts are partially described also in [8] (p. 12, Th. 1.1 and p. 15).

Lemma 3.2. The function σ(u) is either odd or even, i.e.

(3.16)  σ([−1]u) = −σ(u)  or  σ([−1]u) = σ(u).

Proof. We fix a matrix M satisfying (3.5) and (3.6). Therefore the bilinear form L(·,·) is fixed. Then the space of solutions of (3.12) is one-dimensional over C, because the Pfaffian of the Riemann form attached to L(·,·) is 1 (see [24], Lemma 3.1.2 and [20], p. 93, Th. 3.1).
Hence, such non-trivial solutions automatically satisfy (3.13) and (3.15); while (3.14) requires the constant factor to be the same, this is guaranteed by the definition of σ and the fact that (3.9) is independent of γ. In this sense, (3.12) characterizes the function σ(u) up to a constant, which depends only on the μ_j's. Now, considering the loop integrals for ω in the reverse direction, we see that [−1]Λ = Λ. Hence u ↦ σ([−1]u) satisfies (3.12) also. So there exists a constant K such that σ([−1]u) = K σ(u). Since [−1]² is trivial, it must be that K² = 1.

Remark 3.3. In fact, σ(u) is an odd function, as we see in Theorem 6.1. We need the power series expansion of σ(u) with respect to u_1, u_2, u_3. To get the expansion, first of all, we need to investigate the Abelian functions given by logarithmic (higher) derivatives of σ(u). We shall examine this in the next Section.

Standard Abelian functions

Definition 4.1. A meromorphic function u ↦ P(u) on C³ is called a standard Abelian function if it is holomorphic outside κ^{-1}(Θ^[2]) and is multi-periodic, namely, if it satisfies

(4.1)  P(u + ω′n + ω″m) = P(u)

for all integer vectors n, m ∈ Z³ and all u outside κ^{-1}(Θ^[2]).

To realize the standard Abelian functions in terms of the σ-function, we first let

(4.2)  Δ_i = ∂/∂u_i − ∂/∂v_i

for u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3). This operator occurs in what is now known as Hirota's bilinear operator, but in fact was introduced much earlier in the PDE case by Baker ([3], p. 151, [4], p. 49) (see also [14]). We define fundamental Abelian functions on J by

(4.3)  ℘_ij(u) = −(1/(2σ(u)²)) Δ_i Δ_j σ(u)σ(v)|_{v=u} = −(∂²/∂u_i ∂u_j) log σ(u).

It follows from (3.16) that these functions are even. For the benefit of the reader familiar with the genus-one case, we should point out that the Weierstrass function ℘(u) described in eqn. (0.1) would be written as ℘_11(u) in this notation. It is clear that they belong to Γ(J, O(2Θ^[2])).
Moreover, we define

(4.4)  ℘_ijk(u) = (∂/∂u_k) ℘_ij(u),   ℘_ijkℓ(u) = (∂/∂u_ℓ) ℘_ijk(u).

The three-index ℘-functions are odd and the four-index ℘-functions are even. The functions (4.3) and (4.4) are standard Abelian functions by Lemma 3.1. Following (and generalizing) Baker ([3], p. 151, [4], pp. 49-50) (see also [9], pp. 18-19, or [12]), we define

(4.5)  Q_ijkℓ(u) = −(1/(2σ(u)²)) Δ_i Δ_j Δ_k Δ_ℓ σ(u)σ(v)|_{v=u} = ℘_ijkℓ(u) − 2(℘_ij℘_kℓ + ℘_ik℘_jℓ + ℘_iℓ℘_jk)(u),

which specializes to

Q_ijkk = ℘_ijkk − 2℘_ij℘_kk − 4℘_ik℘_jk,
Q_iikk = ℘_iikk − 2℘_ii℘_kk − 4℘_ik²,
Q_ikkk = ℘_ikkk − 6℘_ik℘_kk,
Q_kkkk = ℘_kkkk − 6℘_kk².

A short calculation shows that Q_ijkℓ belongs to Γ(J, O(2Θ^[2])), whereas ℘_ijkℓ belongs to Γ(J, O(4Θ^[2])). In particular, Q_1333 plays a key role in what follows. Note that although the subscripts in ℘_ijkℓ do denote differentiation, the subscripts in Q_ijkℓ do not, and the latter notation is introduced for convenience only. This is important to bear in mind when we use cross-differentiation: for example, the ℘_ijkℓ satisfy (∂/∂u_m) ℘_ijkℓ(u) = (∂/∂u_ℓ) ℘_ijkm(u), whereas the Q_ijkℓ do not.

The following useful formula (4.6), involving the fundamental Kleinian functions, for the case of the general curve (1.1), was derived in [9]. It would be helpful for the reader to read [2], p. 377, for the case of hyperelliptic curves. The formula (4.6) below is proved similarly.

Theorem 4.2. Let u = ∫_∞^(x_1,y_1) ω + ∫_∞^(x_2,y_2) ω + ∫_∞^(x_3,y_3) ω with appropriate paths of the integrals. Let (x, y) be an arbitrary point on the curve C. Then, for each k = 1, 2, 3, the following formula holds:

(4.6)  [1  x  y] [ ℘_ij( ∫_∞^(x,y) ω − u ) ]_{i,j=1,2,3} ᵗ[1  x_k  y_k] = F((x, y), (x_k, y_k)) / (x − x_k)²,

where F((x, y), (z, w)) is the polynomial defined by (1.17) or (A.3).

Proof.
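The diagonal specializations of (4.5), e.g. Q_kkkk = ℘_kkkk − 6℘_kk², hold as identities of the Hirota-type operator for an arbitrary smooth function in place of σ. A sympy sketch in the one-variable model (our own illustration; `s` is a generic stand-in for σ, not the genuine σ-function):

```python
import sympy as sp

u, v = sp.symbols('u v')
s = sp.Function('s')            # generic stand-in for sigma (one variable)

# Apply Delta = d/du - d/dv four times to s(u) s(v), then set v = u,
# as in the definition (4.5).
expr = s(u) * s(v)
for _ in range(4):
    expr = sp.diff(expr, u) - sp.diff(expr, v)
Q = (-expr / (2 * s(u)**2)).subs(v, u)

# wp_11 = -(log s)''; the Q_kkkk specialization says
# Q = wp_1111 - 6 wp_11^2, where wp_1111 = (wp_11)''.
wp = -sp.diff(sp.log(s(u)), u, 2)
lhs = Q
rhs = sp.diff(wp, u, 2) - 6 * wp**2
assert sp.simplify(lhs - rhs) == 0
```

Expanding by hand, both sides reduce to −(s s⁗ − 4 s′s‴ + 3 s″²)/s², which is why the identity needs no special property of σ.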
Using (3.14) and the relations of the differentials of the second kind on C with those of the third kind (see [1], p. 22, ℓ. 15 and p. 22, ℓ. 11), we have an equation connecting the theta series appearing in (3.8) and differentials of the third kind (see [1], p. 275, ℓ. −11, for example). Then this equation is modified to a form suitable for σ(u) and the 2-form Ω((x, y), (z, w)) of (1.19). Finally, after taking the logarithm of the modified equation, operating

Corollary 4.3. Let (x, y) be any one of the (x_i, y_i)'s. Then we have infinitely many relations, of homogeneous weight, linear in ℘_ij(u), ℘_ijk(u), ℘_ijkℓ(u), … (i, j, k = 1, 2, 3), whose coefficients are polynomials in x, y and the μ_j's. We list the first three of them, of lowest weights, as follows:

(4.7)  ℘_33(u) y + ℘_23(u) x + ℘_13(u) = x²,

(4.8)  (℘_23(u) + (1/3)μ_1℘_33(u) − ℘_333(u)) y + (℘_22(u) − ℘_233(u) + (1/3)μ_1℘_23(u)) x + (1/3)μ_1℘_13(u) + ℘_12(u) − ℘_133(u) = 2xy − (2/3)μ_1 x²,

(4.9)  −3y² + ( (1/3)℘_33μ_2 + (1/2)℘_3333 − (1/2)μ_1℘_333 + (1/9)μ_1²℘_33 + 2μ_1 x − (3/2)℘_233 + 2μ_4 ) y + ( (2/3)μ_2 − (1/9)μ_1² ) x² + ( −(1/2)μ_1℘_233 + μ_5 + (1/2)℘_2333 + (1/3)℘_23μ_2 + (1/9)μ_1²℘_23 − (3/2)℘_223 ) x + (1/2)℘_1333 + (1/3)μ_2℘_13 + μ_8 + (1/9)μ_1²℘_13 − (3/2)℘_123 − (1/2)μ_1℘_133 = 0.

More equations of this type are available at [13].

Proof. These relations are derived from (4.6) by expanding it, with respect to a local parameter t = x^{-1/3}, in the vicinity of the point at infinity; comparing the principal parts of the poles on both sides of (4.6), we find the solution of the Jacobi inversion problem.

Remark 4.4. (1) In the case of trigonal curves, a formula of this type was first given explicitly for a particular case of the curve (1.1) in [15]. (2) We use in the proof of Lemma 5.1 below the first seven relations in 4.3, namely those of weight from −6 to −12.
The first two relations in Proposition 4.3 give the solution of the Jacobi inversion problem (see also [9]):
Corollary 4.5. Suppose the (xi, yi)s and u are related as in Proposition 4.2. Then the solution of the Jacobi inversion problem is given by (x1, y1), (x2, y2) and (x3, y3), where these points are the common zeros of the equations (4.7), (4.8) in (x, y).
We remark that the right hand sides of equations (4.7), (4.8) are related to the polynomials h3(z, w) and h2(z, w) defining the canonical meromorphic differentials η3(z, w) and η2(z, w). Further, equation (4.8) is directly related to the determinant of the matrices constructed in [26], using the algebraic approach developed in [21]. If we take the resultant of (4.7) and (4.8) with respect to y, we find a cubic equation in x which can be used to substitute for x³ in terms of lower powers of x:
x³ = (1/2)(3℘23 + µ1℘33 − ℘333)x² + (1/2)(℘33℘22 + 2℘13 + ℘23℘333 − ℘33℘233 − ℘23²)x + (1/2)℘33℘12 − (1/2)℘33℘133 − (1/2)℘13℘23 + (1/2)℘13℘333. (4.10)
If we now take the resultant of (4.7) and (4.9) with respect to y, we get a quartic in x which can be reduced to a quadratic by repeated use of (4.10). This quadratic in x is not further reducible. A quadratic equation in x has at most two solutions, whereas u has three free variables; hence the coefficients of 1, x, x² in the quadratic must all vanish identically. Furthermore, each coefficient can be split into two parts, even and odd under the reflection (1.7), and each of these parts must vanish. So each term of order higher than two in the expansion of (4.6) can give up to six separate equations involving the ℘-functions. The simplest two, arising from the resultant of (4.7) and (4.9), are
℘222 − 2℘33℘233 + 2℘23℘333 − µ2℘233 + µ3℘333 + µ1℘223 = 0, (4.11)
℘23℘233 − 2℘33℘223 + ℘333℘22 + 2℘133 + µ1(℘23℘333 − ℘33℘233) = 0, (4.12)
where ℘ij = ℘ij(u) and ℘ijk = ℘ijk(u).

Equations satisfied by the Abelian functions for the general trigonal case

We can use the expansion of (4.6), as described in the discussion following Proposition 4.2, to derive various equations which the Abelian functions defined by (4.4) and (4.5) must satisfy.
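The elimination of y between (4.7) and (4.8) described above is a routine resultant computation, and can be sketched in sympy (again with the values ℘ij(u), ℘ijk(u) replaced by free symbols, an assumption of this illustration):

```python
import sympy as sp

x, y, mu1 = sp.symbols('x y mu1')
# 2-index and 3-index Kleinian function values, treated as free symbols
p33, p23, p13, p22, p12 = sp.symbols('p33 p23 p13 p22 p12')
p333, p233, p133 = sp.symbols('p333 p233 p133')

# equation (4.7): p33*y + p23*x + p13 = x^2
eq47 = p33*y + p23*x + p13 - x**2
# equation (4.8)
eq48 = ((p23 + mu1/3*p33 - p333)*y
        + (p22 - p233 + mu1/3*p23)*x
        + (mu1/3*p13 + p12 - p133)
        - (2*x*y - sp.Rational(2, 3)*mu1*x**2))

# eliminate y: the resultant is a cubic in x, as used for (4.10)
cubic = sp.Poly(sp.expand(sp.resultant(eq47, eq48, y)), x)
print(cubic.degree())  # 3
print(cubic.LC())      # -2
```

The resultant is −2 times the monic cubic of (4.10); checked here for the two leading coefficients, with the x² coefficient reproducing (1/2)(3℘23 + µ1℘33 − ℘333) up to that factor.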
We consider first the 4-index equations, the generalizations of ℘ ′′ = 6℘ 2 − 1 2 g 2 in the cubic (genus 1) case. Lemma 5.1. The 4-index functions ℘ ijkℓ associated with (2.1) satisfy the following relations : ℘ 3333 = 6℘ 33 2 + µ 1 2 ℘ 33 − 3℘ 22 + 2µ 1 ℘ 23 − 4µ 2 ℘ 33 − 2µ 4 , ℘ 2333 = 6℘ 23 ℘ 33 + µ 1 2 ℘ 23 + 3µ 3 ℘ 33 − µ 2 ℘ 23 − µ 5 − µ 1 ℘ 22 , ℘ 2233 = 4℘ 23 2 + 2℘ 33 ℘ 22 + µ 1 µ 3 ℘ 33 − µ 2 ℘ 22 + 2µ 6 + 3µ 3 ℘ 23 + µ 1 µ 2 ℘ 23 + 4℘ 13 , ℘ 2223 = 6℘ 22 ℘ 23 + 4µ 1 ℘ 13 + µ 1 µ 3 ℘ 23 + µ 2 µ 3 ℘ 33 + 2µ 3 µ 4 + µ 2 2 ℘ 23 + 4µ 4 ℘ 23 + 3µ 3 ℘ 22 + 2µ 1 µ 6 + µ 2 µ 5 − 2µ 5 ℘ 33 , ℘ 2222 = 6℘ 22 2 − 2µ 2 µ 3 ℘ 23 + µ 1 µ 2 µ 5 + 2µ 1 µ 3 µ 4 + 24℘ 13 ℘ 33 + 4µ 1 2 ℘ 13 − 4µ 2 ℘ 13 − 4℘ 1333 + 4µ 5 ℘ 23 + 2µ 1 2 µ 6 − 2µ 2 µ 6 + µ 3 µ 5 − 3µ 3 2 ℘ 33 + 12µ 6 ℘ 33 + 4µ 4 ℘ 22 + µ 2 2 ℘ 22 + 4µ 1 µ 3 ℘ 22 , ℘ 1233 = 4℘ 13 ℘ 23 + 2℘ 33 ℘ 12 − 2µ 1 ℘ 33 ℘ 13 − 1 3 µ 1 3 ℘ 13 + 1 3 µ 1 ℘ 1333 + 1 3 µ 1 2 ℘ 12 + 3µ 3 ℘ 13 + 1 3 µ 1 µ 8 + 4 3 µ 1 µ 2 ℘ 13 − µ 2 ℘ 12 + µ 9 , ℘ 1223 = 4℘ 23 ℘ 12 + 2℘ 13 ℘ 22 − 2µ 2 ℘ 33 ℘ 13 − 2µ 8 ℘ 33 − 2 3 µ 8 µ 2 + 1 3 µ 2 ℘ 1333 + 3µ 3 ℘ 12 + 4µ 4 ℘ 13 + 4 3 µ 2 2 ℘ 13 − 2℘ 11 − 1 3 µ 1 2 µ 2 ℘ 13 + 1 3 µ 1 µ 2 ℘ 12 + µ 1 µ 3 ℘ 13 , ℘ 1222 = 6℘ 22 ℘ 12 + 6µ 9 ℘ 33 − µ 3 ℘ 1333 + 4µ 5 ℘ 13 + µ 2 2 ℘ 12 − µ 2 µ 9 + 4µ 4 ℘ 12 − 2µ 1 ℘ 11 + 6µ 3 ℘ 33 ℘ 13 − 3µ 2 µ 3 ℘ 13 + µ 1 2 µ 3 ℘ 13 + 3µ 1 µ 3 ℘ 12 − µ 1 µ 2 µ 8 , ℘ 1133 = 4℘ 13 2 + 2℘ 33 ℘ 11 − µ 9 ℘ 23 + 2µ 6 ℘ 13 + µ 8 ℘ 22 − µ 5 ℘ 12 + 2 3 µ 4 ℘ 1333 + 2 3 µ 4 µ 8 + 2µ 2 µ 8 ℘ 33 − 4µ 4 ℘ 13 ℘ 33 + 2 3 µ 2 µ 4 ℘ 13 + µ 1 µ 9 ℘ 33 − µ 1 µ 8 ℘ 23 + µ 1 µ 5 ℘ 13 − 2 3 µ 1 2 µ 4 ℘ 13 + 2 3 µ 1 µ 4 ℘ 12 , ℘ 1123 = 4℘ 12 ℘ 13 + 2℘ 23 ℘ 11 + 2µ 3 µ 4 ℘ 13 − µ 3 µ 8 ℘ 33 − 2µ 5 ℘ 13 ℘ 33 + µ 2 µ 8 ℘ 23 + 4 3 µ 2 µ 5 ℘ 13 − µ 9 ℘ 22 + 2µ 6 ℘ 12 + 1 3 µ 5 ℘ 1333 + 1 3 µ 5 µ 8 + µ 1 µ 9 ℘ 23 − 1 3 µ 1 2 µ 5 ℘ 13 + 1 3 µ 1 µ 5 ℘ 12 , ℘ 1122 = 4℘ 12 2 + 2℘ 11 ℘ 22 + 2 3 µ 1 2 µ 6 ℘ 13 + 4 3 µ 1 µ 6 ℘ 12 + µ 3 µ 9 ℘ 33 + µ 2 µ 9 ℘ 23 + 8µ 12 ℘ 33 + 2µ 3 µ 4 ℘ 12 − 2 3 µ 6 ℘ 1333 + 
4µ 8 ℘ 13 − 2 3 µ 6 µ 8 + 4µ 6 ℘ 33 ℘ 13 − µ 3 µ 8 ℘ 23 + µ 3 µ 5 ℘ 13 − 8 3 µ 2 µ 6 ℘ 13 + µ 2 µ 8 ℘ 22 + µ 2 µ 5 ℘ 12 , ℘ 1113 = 6℘ 13 ℘ 11 + 6µ 2 µ 8 ℘ 13 − 2µ 2 µ 12 ℘ 33 − µ 1 2 µ 8 ℘ 13 + 4µ 1 µ 12 ℘ 23 + µ 1 µ 8 ℘ 12 + µ 5 µ 9 ℘ 33 + µ 5 2 ℘ 13 − 2µ 4 µ 9 ℘ 23 + µ 1 µ 9 ℘ 13 − 6µ 8 ℘ 33 ℘ 13 − 2µ 6 µ 8 ℘ 33 + µ 8 ℘ 1333 − 4µ 4 µ 12 + 3µ 9 ℘ 12 − 6µ 12 ℘ 22 − µ 5 µ 8 ℘ 23 + 4µ 4 µ 6 ℘ 13 , ℘ 1112 = 6℘ 12 ℘ 11 + 6µ 3 µ 12 ℘ 33 + 3µ 3 µ 8 ℘ 13 − 2µ 6 µ 8 ℘ 23 − µ 1 µ 8 2 + 5µ 2 µ 8 ℘ 12 + 4µ 2 µ 12 ℘ 23 − 2µ 1 µ 12 ℘ 22 + 4µ 4 µ 6 ℘ 12 − µ 5 µ 8 ℘ 22 + µ 5 2 ℘ 12 + 4µ 5 µ 12 − µ 9 ℘ 1333 − 4µ 1 µ 12 µ 4 + µ 1 2 µ 9 ℘ 13 + 3µ 1 µ 9 ℘ 12 − 2µ 4 µ 9 ℘ 22 + µ 5 µ 9 ℘ 23 − 4µ 2 µ 9 ℘ 13 + 6µ 9 ℘ 13 ℘ 33 − 3µ 8 µ 9 , ℘ 1111 = 6℘ 11 2 + 4µ 4 µ 9 ℘ 12 − 8µ 4 2 µ 12 − 2µ 2 2 µ 4 µ 12 − 3µ 8 2 ℘ 22 − 2µ 4 µ 8 2 + µ 5 2 ℘ 11 − 3µ 9 2 ℘ 33 − 4µ 12 ℘ 1333 + 24µ 12 ℘ 33 ℘ 13 + 12µ 5 µ 12 ℘ 23 + µ 2 µ 4 µ 5 µ 9 − 6µ 1 µ 3 µ 4 µ 12 + µ 1 µ 2 µ 5 µ 12 + 2µ 6 2 µ 8 + 2µ 2 2 µ 8 2 − µ 5 µ 6 µ 9 − 2µ 5 µ 9 ℘ 13 + 4µ 4 µ 6 ℘ 11 + 4µ 6 µ 8 ℘ 13 + 8µ 2 µ 8 ℘ 11 − 6µ 2 µ 6 µ 12 − 12µ 2 µ 12 ℘ 13 + 4µ 1 2 µ 12 ℘ 13 + 2µ 1 2 µ 6 µ 12 + 2µ 8 µ 5 ℘ 12 − 6µ 8 µ 9 ℘ 23 − 12µ 4 µ 12 ℘ 22 + µ 2 µ 5 2 µ 8 + 2µ 1 µ 4 µ 6 µ 9 + µ 1 µ 5 µ 6 µ 8 + 12µ 6 µ 12 ℘ 33 + 4µ 1 µ 9 ℘ 11 + 2µ 3 µ 4 2 µ 9 + 9µ 3 µ 5 µ 12 − 2µ 1 µ 3 µ 8 2 − 6µ 3 µ 8 µ 9 + 2µ 1 µ 2 µ 8 µ 9 + µ 3 µ 4 µ 5 µ 8 + 2µ 2 µ 4 µ 6 µ 8 + 2µ 2 µ 9 2 . Proof. Many of these relations follow from the sets of equations generated from the first seven terms of the expansion of (4.6) as indicated in Proposition 4.3 by a similar argument as that explained at the end of the previous Section. Others can be derived making use of derivatives of the equations in Lemma 5.5, or products of these equations with three index expressions ℘ ijk , working in a self-consistent way from higher to lower weights. The calculations are somewhat long and tedious and much facilitated by heavy use of Maple. 
Full Maple worksheets are available on request from the authors.
Remark 5.2. The complete set of four-index relations for ℘-functions of genus three was derived by Baker [3] in the hyperelliptic case only. As far as we know, the above relations are new, and a comparison with Baker's relations is of interest.
Remark 5.3. With the use of (4.5), these equations can be written in a slightly more compact form involving the Qijkℓ functions. For example, the sixth equation (for ℘2222) becomes
Q2222 = −2µ2µ3℘23 + µ1µ2µ5 + 2µ1µ3µ4 + 4µ1²℘13 − 4µ2℘13 − 4Q1333 + 4µ5℘23 + 2µ1²µ6 − 2µ2µ6 + µ3µ5 − 3µ3²℘33 + 12µ6℘33 + 4µ4℘22 + µ2²℘22 + 4µ1µ3℘22.
The importance of this switch to the Q variables is that the equations become linear in the Qijkℓ and the 2-index ℘ij. An alternative way of looking at this is that the equations in Lemma 5.1 have only second-order poles in σ.
Remark 5.4. The first relation in Lemma 5.1, after differentiating twice with respect to u3, becomes the Boussinesq equation for the function ℘33 (see [9,15]).
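As a sanity check on the shape of these relations, the genus-1 prototype ℘′′ = 6℘² − (1/2)g2 can be verified directly from the Laurent expansion of the classical Weierstrass ℘-function; this standard computation is independent of the trigonal results above:

```python
import sympy as sp

z, g2, g3 = sp.symbols('z g2 g3')

# Laurent coefficients of the genus-1 Weierstrass function:
# wp(z) = z**-2 + sum_{k>=2} c[k]*z**(2k-2), with the classical recursion
K = 6
c = {2: g2/20, 3: g3/28}
for k in range(4, K + 1):
    c[k] = sp.Rational(3, (2*k + 1)*(k - 3)) * sum(c[m]*c[k - m] for m in range(2, k - 1))

wp = z**-2 + sum(c[k]*z**(2*k - 2) for k in range(2, K + 1))

# genus-1 analogue of the 4-index relations: wp'' = 6 wp^2 - g2/2
residual = sp.expand(sp.diff(wp, z, 2) - 6*wp**2 + g2/2)
# the truncation is exact through order z^(2K-4)
ok = all(residual.coeff(z, n) == 0 for n in range(-4, 2*K - 3))
print(ok)  # True
```

In the trigonal case the analogous verification is exactly what the Maple worksheets carry out, with the σ-expansion of Theorem 6.1 in place of the Laurent series.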
Lemma 5.5. The 3-index functions ℘ijk associated with (2.1) satisfy a number of bilinear relations (linear in both 3-index and 2-index functions). These have no analogue in the genus 1 case. For example, in decreasing weight, starting at −6, we have the relations (4.11) and (4.12) above (of weights −6 and −7), followed by
4µ4℘233 − 2µ5℘333 − 2℘33℘222 + µ2℘222 − µ1µ2℘223 − 4℘123 = 0,   [−8]
3µ1µ3℘223 − 3µ2µ3℘233 − 24℘33℘133 + 24℘13℘333 + 12℘122 − 12µ1℘123 + 12µ2℘133 − 3µ3℘222 + 6µ5℘233 − 3µ3²℘333 + 12µ6℘333 − 6℘23℘222 + 6℘22℘223 = 0,   [−9]
2℘33℘123 − µ1℘33℘133 + µ1℘13℘333 + ℘23℘133 − ℘12℘333 − 2℘13℘233 = 0,   [−10]
℘113 + ℘13℘223 − 2µ4℘133 + ℘33℘122 − ℘22℘133 − ℘12℘233 + µ8℘333 − µ2℘33℘133 − µ1℘13℘233 + µ2℘13℘333 + µ1℘23℘133 = 0,   [−11]
−℘112 − 3µ9℘333 + ℘13℘222 − ℘12℘223 − 2℘22℘123 − 2µ5℘133 + µ1℘113 + 2℘23℘122 − µ8℘233 − µ2℘13℘233 + 3µ3℘33℘133 − 3µ3℘13℘333 + µ2℘23℘133 = 0,   [−12]
8µ4℘33℘133 − 8µ4℘13℘333 − 4µ2µ4℘133 + 2µ1µ9℘333 − 2µ1µ8℘233 + 2µ1µ5℘133 + 4µ1µ4℘123 + 4µ2µ8℘333 + 3µ3℘13℘233 − 3µ3℘23℘133 − µ1℘112 + 3℘12℘222 + 4℘11℘333 − 2µ6℘133 − 3℘22℘122 − 4µ4℘122 + µ9℘233 + 2µ8℘223 − 8℘33℘113 + 4℘13℘133 − 2µ1²℘113 + 2µ2℘113 − 2µ5℘123 = 0,   [−13]
4℘13℘123 + 4µ4℘23℘133 + µ3µ8℘333 − 2µ5℘33℘133 + 2µ5℘13℘333 + µ2µ8℘233 + µ8℘222 − 4℘12℘133 − 2℘23℘113 + 2℘33℘112 − 4µ4℘13℘233 − µ1µ8℘223 = 0,   [−14]
−µ9℘222 + µ1µ9℘223 + 4℘13℘122 + 2℘23℘112 − 2℘22℘113 − µ3µ9℘333 − µ2µ9℘233 + 2µ5℘23℘133 − 8µ12℘333 − 4µ8℘133 − 4µ6℘13℘333 + 4µ6℘33℘133 − 4℘12℘123 − 2µ5℘13℘233 = 0,   [−15]
where the number in brackets [ ] indicates the weight.
Proof. We have already given the first two of these equations in the discussion following Proposition 4.2. Some of the others follow in the same way from the expansion of (4.6).
Alternatively, some can be calculated directly by expressing the equations in Lemma 5.1 in terms of ℘ ijkℓ and ℘ mn functions, then using cross differentiation on suitably chosen pairs of equations. For example the first relation above for ℘ 222 can be derived from ∂ ∂u 2 ℘ 3333 − ∂ ∂u 3 ℘ 2333 = 0. Remark 5.6. For a fixed weight, these relations are not always unique, for example at weight −11 we also have the relation ℘ 33 ℘ 122 +2℘ 23 ℘ 123 +3℘ 113 +µ 2 ℘ 13 ℘ 333 −µ 2 ℘ 33 ℘ 133 +µ 8 ℘ 333 −2℘ 12 ℘ 233 −2µ 4 ℘ 133 −℘ 13 ℘ 223 = 0 These dual relations arise because in some cases the cross differentiation can be done in two different ways. In deriving the results in this section, it is sometimes required to make use of both bilinear relations at a given weight to provide enough equations to solve for the unknowns. A full list of the known bilinear relations is given at [13]. Lemma 5.7. The quadratic expressions in the 3-index functions ℘ ijk associated with (2.1) down to weight −23 can be expressed in terms of (at most cubic) relations in the ℘ mn and ℘ 1333 . 
For example we have the following five relations down to weight −8 : ℘ 333 2 = ℘ 33 2 µ 1 2 + 2µ 1 ℘ 23 ℘ 33 + ℘ 23 2 + 4℘ 13 − 4℘ 33 ℘ 22 + 4℘ 33 3 − 4µ 2 ℘ 33 2 − 4µ 4 ℘ 33 , ℘ 233 ℘ 333 = 2µ 3 ℘ 33 2 + 4℘ 33 2 ℘ 23 − µ 1 ℘ 33 ℘ 22 − 2µ 5 ℘ 33 − 2µ 2 ℘ 33 ℘ 23 + µ 1 2 ℘ 33 ℘ 23 − 2℘ 12 − ℘ 22 ℘ 23 + µ 1 ℘ 23 2 + 2µ 1 ℘ 13 , ℘ 133 ℘ 333 = − 1 3 µ 1 ℘ 33 ℘ 12 + 1 3 µ 1 2 ℘ 33 ℘ 13 − 4 3 µ 2 ℘ 33 ℘ 13 + 2 3 ℘ 33 ℘ 1333 − 4 3 µ 8 ℘ 33 + ℘ 23 ℘ 12 + µ 1 ℘ 13 ℘ 23 − 2℘ 13 ℘ 22 , ℘ 223 ℘ 333 = 2µ 1 ℘ 23 ℘ 22 − 2µ 2 ℘ 33 ℘ 22 + 2µ 1 µ 4 ℘ 23 − µ 1 µ 5 ℘ 33 + 2℘ 33 2 ℘ 22 − 2µ 4 ℘ 22 + 2℘ 33 ℘ 23 2 + 4 3 µ 1 2 ℘ 13 − 4 3 µ 2 ℘ 13 − 4 3 µ 1 ℘ 12 − 4 3 µ 8 − 2℘ 22 2 + µ 1 µ 2 ℘ 33 ℘ 23 + 2 3 ℘ 1333 + ℘ 23 ℘ 33 µ 3 + µ 1 µ 3 ℘ 33 2 − µ 2 ℘ 23 2 − µ 5 ℘ 23 , ℘ 233 2 = 4℘ 33 ℘ 23 2 + 8℘ 13 ℘ 33 + 4µ 3 ℘ 33 ℘ 23 − 2µ 1 ℘ 23 ℘ 22 + 4 3 µ 1 2 ℘ 13 − 4 3 µ 2 ℘ 13 + 4µ 6 ℘ 33 + µ 1 2 ℘ 23 2 − 4 3 µ 8 + ℘ 22 2 − 4 3 ℘ 1333 − 4 3 µ 1 ℘ 12 . The expressions at lower weight quickly become very lengthy. For the purely trigonal case we give a list of the known quadratic expressions in the 3-index functions up to weight −15 in Appendix B. The full list for the general (3, 4)-curve down to weight −23 is available at [13]. Proof. The relations can be found using a combination of three types of intermediate relations. One type is from terms in the expansion of (4.6). Another is to multiply one of the linear three-index ℘ ijk relations above by another ℘ ijk and substitute for previously calculated ℘ ijk ℘ ℓmn relations of higher weight. Yet another is to take a derivative of one of the bilinear three-index ℘ ijk relations above and to substitute the known linear four-index ℘ ijkℓ and previously calculated ℘ ijk ℘ ℓmn relations. Again, we work in a self-consistent way from higher to lower weights. 
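The genus-1 prototype of these quadratic relations is (℘′)² = 4℘³ − g2℘ − g3, which can be checked on the Laurent expansion of the classical ℘-function in the same spirit (an independent sanity check, not part of the trigonal computation):

```python
import sympy as sp

z, g2, g3 = sp.symbols('z g2 g3')

# Laurent series of the genus-1 Weierstrass function, via the classical recursion
K = 6
c = {2: g2/20, 3: g3/28}
for k in range(4, K + 1):
    c[k] = sp.Rational(3, (2*k + 1)*(k - 3)) * sum(c[m]*c[k - m] for m in range(2, k - 1))

wp = z**-2 + sum(c[k]*z**(2*k - 2) for k in range(2, K + 1))

# genus-1 analogue of the quadratic 3-index relations
residual = sp.expand(sp.diff(wp, z)**2 - 4*wp**3 + g2*wp + g3)
# the truncation is exact through order z^(2K-6)
ok = all(residual.coeff(z, n) == 0 for n in range(-6, 2*K - 5))
print(ok)  # True
```

In the trigonal case the left hand sides ℘ijk℘ℓmn are quadratic in the odd functions, so, just as (℘′)² closes on a cubic in ℘, they close on (at most cubic) expressions in the even functions ℘mn and ℘1333.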
The strategy for all the results in this section is to proceed down one weight at a time and to derive all three types of relations (4-index ℘ijkℓ, bilinear 2- and 3-index, and quadratic 3-index) at a given weight before moving down to the next. An extra complication is that at certain weights some of the intermediate calculations can involve quartic terms in the ℘mn and ℘1333; it is always possible to find enough relations to eliminate the quartic terms up to weight −23.
Remark 5.8. (1) These relations are the generalizations of the familiar relation (℘′)² = 4℘³ − g2℘ − g3 in the genus 1 theory. (2) For equations of weight below −23, we have not been able to find cubic expressions for the ℘ijk℘ℓmn terms. We believe it should be possible to explain this using the results of Cho and Nakayashiki [12], and we are currently investigating this possibility. (3) The calculations in this section make no use of the expansion of the σ-function, which is given in the next section.

Expansion of the σ-function

This section is devoted to showing that the coefficients of the power series expansion of σ(u) are polynomials in the µj. In the Weierstrass formulation of the theory of elliptic functions, the σ-function is defined by its power series expansion in the Abelian variable u, with coefficients depending on the Weierstrass parameters g2, g3 and related by certain recursive relations. The extension of Weierstrass' theory to arbitrary algebraic curves was intensively developed in the 19th century and later, its development being attached to names such as Baker, Bolza, Brioschi, Burkhardt, Klein, and Wiltheiss. Some important modern developments of this theory are due to Buchstaber and Leykin [10,11], who give a construction of linear differential (heat-like) operators that annihilate the σ-function for any (m, n)-curve. In the hyperelliptic case the operators are sufficient to find the recursion defining the whole series expansion. The exact analogue of the Weierstrass recursive series formula is known only for genus two, see [11], p. 68.
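For orientation, the genus-1 version of this recursive determination is easily reproduced: inserting an odd Taylor ansatz for σ into (℘′)² = 4℘³ − g2℘ − g3, written via ℘ = (σ′² − σσ′′)/σ², and solving order by order recovers the classical coefficients of the Weierstrass expansion σ(u) = u − g2u⁵/240 − g3u⁷/840 − g2²u⁹/161280 − · · ·. A sympy sketch (the names a5, a7, a9 are ours, introduced only for this illustration):

```python
import sympy as sp

u, g2, g3 = sp.symbols('u g2 g3')
a5, a7, a9 = sp.symbols('a5 a7 a9')

# odd Taylor ansatz for the genus-1 sigma function
sigma = u + a5*u**5 + a7*u**7 + a9*u**9
P = sp.diff(sigma, u)**2 - sigma*sp.diff(sigma, u, 2)   # so that wp = P/sigma^2

# (wp')^2 = 4 wp^3 - g2 wp - g3, cleared of denominators (multiplied by sigma^6)
lhs = (sp.diff(P, u)*sigma - 2*sp.diff(sigma, u)*P)**2
rhs = 4*P**3 - g2*P*sigma**4 - g3*sigma**6
residual = sp.expand(lhs - rhs)

# the truncation is exact through u^8; solve order by order for a5, a7, a9
eqs = [residual.coeff(u, n) for n in (4, 6, 8)]
sol = sp.solve(eqs, (a5, a7, a9), dict=True)[0]
# a5 = -g2/240, a7 = -g3/840, a9 = -g2**2/161280
print(sol[a5], sol[a7], sol[a9])
```

In the trigonal case the same undetermined-coefficient strategy is used in Step 3 of the proof of Theorem 6.1 below, with the 4-index PDEs of Lemma 5.1 in place of the single genus-1 relation.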
In other cases the detailed results have not yet been developed, although the general method is provided in the publications mentioned above. Here we shall give the few first terms of the power series expansion, obtained by finding the coefficients of the Taylor series by using the PDEs given in Lemma 5.1. Theorem 6.1. The function σ(u) associated with the general trigonal curve (1.1) of genus three has an expansion of the following form : (6.1) σ(u 1 , u 2 , u 3 ) = ε · C 5 (u 1 , u 2 , u 3 ) + C 6 (u 1 , u 2 , u 3 ) + C 7 (u 1 , u 2 , u 3 ) + · · · , where ε is a non-zero constant and each C j is a polynomial composed of sums of monomials in u i of odd total degree and of total weight j with polynomial coefficient in µ i s of total weight (5 − j). Especially, σ(u) is an odd function (see 3.2). The first few C j s are C 5 = u 1 − u 3 u 2 2 + 1 20 u 3 5 , C 6 = 1 12 µ 1 u 3 4 u 2 − 1 3 µ 1 u 2 3 , C 7 = 1 504 µ 1 2 − 3 µ 2 u 3 7 + 1 6 µ 2 u 3 3 u 2 2 , C 8 = 1 360 µ 1 3 + 9 µ 3 − 2 µ 1 µ 2 u 3 6 u 2 − 1 2 µ 3 u 3 2 u 2 3 , C 9 = 1 25920 µ 1 2 − 3 µ 2 2 u 3 9 + 1 120 2 µ 4 − µ 2 2 + µ 1 2 µ 2 + 6 µ 1 µ 3 u 3 5 u 2 2 − 1 12 4 µ 1 µ 3 + 4 µ 4 + µ 2 2 u 3 u 2 4 + 1 12 µ 4 u 3 4 u 1 , C 10 = 1 20160 8 µ 1 µ 4 − 54 µ 2 µ 3 + 3 µ 1 µ 2 2 + 18 µ 1 2 µ 3 + µ 1 5 − 12 µ 5 − 4 µ 1 3 µ 2 u 3 8 u 2 + 1 72 6 µ 2 µ 3 + 2 µ 1 µ 4 + µ 1 µ 2 2 + µ 1 2 µ 3 u 3 4 u 2 3 − 1 60 4 µ 1 2 µ 3 + µ 1 µ 2 2 + 4 µ 5 + 4 µ 1 µ 4 − 2 µ 2 µ 3 u 2 5 + 1 6 µ 5 u 3 3 u 2 u 1 , C 11 = − 1 6652800 18 µ 1 µ 2 µ 3 + 27 µ 1 4 µ 2 − 72 µ 6 − 3 µ 1 6 − 24 µ 2 µ 4 + 16 µ 1 2 µ 4 − 24 µ 1 µ 5 + 27 µ 3 2 + 85 µ 2 3 − 4 µ 1 3 µ 3 − 82 µ 1 2 µ 2 2 u 3 11 + 1 5040 27 µ 3 2 + µ 2 3 − 6 µ 2 µ 4 − 18 µ 1 µ 2 µ 3 + 8 µ 1 3 µ 3 − 4 µ 1 µ 5 + 6 µ 1 2 µ 4 + 12 µ 6 + µ 1 4 µ 2 − 3 µ 1 2 µ 2 2 u 3 7 u 2 2 − 1 72 9 µ 3 2 − µ 2 3 − 4 µ 2 µ 4 − 2 µ 1 µ 2 µ 3 u 3 3 u 2 4 + 1 360 µ 1 µ 5 − 4 µ 2 µ 4 + µ 1 2 µ 4 + 3 µ 6 u 3 6 u 1 − 1 2 µ 6 u 3 2 u 2 2 u 1 , C 12 = − 1 1814400 27 µ 1 µ 3 2 − 243 µ 2 2 µ 3 − µ 1 7 + 72 µ 1 µ 2 µ 
4 − 31 µ 1 4 µ 3 − 144 µ 2 µ 5 − 16 µ 1 3 µ 4 + 6 µ 1 5 µ 2 − 10 µ 1 3 µ 2 2 + 24 µ 1 2 µ 5 + 4 µ 1 µ 2 3 − 72 µ 1 µ 6 + 180 µ 1 2 µ 2 µ 3 u 3 10 u 2 + 1 2160 18 µ 3 µ 4 − 2 µ 1 µ 2 3 + 27 µ 1 µ 3 2 − 9 µ 2 2 µ 3 + µ 1 3 µ 2 2 + µ 1 4 µ 3 + 6 µ 1 2 µ 2 µ 3 + 2 µ 1 3 µ 4 + 12 µ 1 µ 6 u 3 6 u 2 3 − 1 24 µ 3 3 µ 1 µ 3 + 4 µ 4 + µ 2 2 u 3 2 u 2 5 + 1 120 6 µ 3 µ 4 + 2 µ 1 µ 6 − µ 2 µ 5 + µ 1 2 µ 5 u 3 5 u 2 u 1 − 1 6 2 µ 1 µ 6 + 2 µ 3 µ 4 + µ 2 µ 5 u 3 u 2 3 u 1 . Proof. We divide the proof into four parts. Step 1. We have already shown in 3.2, that all the terms are of total odd degree or even degree. We first show that the expansion contains a term linear in u 1 , so the expansion must be odd. Let B(D) be the Brill-Noether matrix for an effective divisor D of C. Then it is well known that (see for example [24] or [25]) dim Γ (C, O(D)) = deg D + 1 − rankB(D), where Γ (C, O(D)) is the space of functions on C whose divisor are larger than or equal to −D. Moreover, for two points P 1 , P 2 on C, dim Γ (C, O(P 1 + P 2 )) > 1 if and only if the point ι(P 1 , P 2 ) ∈ Θ [2] is a non-singular point of Θ [2] (note that C is of genus 3). By checking the Brill-Noether matrix B(P 1 + P 2 ), we see Θ [2] is non-singular everywhere, Especially κ −1 (Θ [2] ) is non-singular at the origin (0, 0, 0). On the other hand, let u and v be two variables on κ −1 (Θ [1] ). Then we have an expansion with respect to v 3 : 0 = σ(u + v) = σ 3 (u)v 3 + 1 2 (σ 2 (u) + σ 33 (u)) v 3 2 + · · · , where σ i = ∂σ/∂u i , etc. Hence σ 3 (u) = 0 σ 2 (u) + σ 33 (u) = 0. Again by expansion 0 = σ 3 (u) = σ 33 (0)v 3 + · · · , we see that σ 33 (0) = 0. In summary, σ 3 (0) = σ 2 (0) = 0, so from the above arguments and (3.14), we must have σ 1 (0) = 0. Hence the σ-expansion must be odd. Step 2. Next we show that the terms of weight less than 5 vanish and C 5 (u) is non-trivial. We write all the possible odd terms up to and including terms of weight 5. 
Using the first two equations in Lemma 5.1, we can show that the coefficients of the terms of weight four and less are zero, and that the coefficients of weight 5 are given by those in C5 up to multiplication by a constant. We know from Step 1 that this constant is non-zero, and we absorb it into ε.
Step 3. We now calculate the coefficients Ci, i > 5. The proof of this step is by construction (with heavy use of Maple), using the PDEs given in Lemma 5.1. We expand σ(u1, u2, u3) in a Taylor series with undetermined coefficients, keeping only odd terms. We do not assume that the coefficients of the expansion are polynomial in the µi, only that they are independent of the ui. We then insert the expansion into the 4-index PDEs for the ℘, and truncate to successive orders in the weights of the ui. These give a series of linear equations for the coefficients, and by using a sufficient number of the PDEs we can always find unique solutions, as listed above. We have carried out this calculation down to C18. We have omitted the details of the expressions for C13, . . . , C18, as they are rather lengthy, but they are available at [13].
Step 4. Now consider the general term in the expansion. Let A u1^p u2^q u3^r, A ∈ Q(µi), be the lowest-weight unknown term. Since we have already shown by construction that the coefficients for all weights down to −29 with respect to the uj are polynomials, we may assume that p + q + r ≥ 4. We consider the set (♯) of quadratic equations in σ(u) and its (higher) derivatives obtained from the above, by multiplying the equations in Lemma 5.1 by σ(u)². We take an equation
(6.2) σ(u)² Qijkℓ(u) = · · ·
from (♯) such that u1^p u2^q u3^r is divisible by ui uj uk uℓ. We have at least one such equation.
Differentiating (6.2), we obtain an equation of the form
(6.3) σ(u) ∂^{p+q+r}σ/(∂u1^p ∂u2^q ∂u3^r)(u) + · · · = 0
in which every term is polynomial in σ(u) and its higher derivatives, and ∂^{p+q+r}σ/(∂u1^p ∂u2^q ∂u3^r)(u) is the highest derivative occurring in (6.3). By looking at the coefficient of the term u1, we obtain a linear equation of the form A + · · · = 0 over Q[µ1, · · · , µ12]. Since the terms other than A in this equation come from terms of σ(u) whose weight is less than the weight of u1^p u2^q u3^r, we see that A is a polynomial in the µj by the induction hypothesis.
Remark 6.2. (1) In Theorem 6.1, the constant ε might be unity, another 8th root of 1, or some other constant. We have not been able to narrow down this result. If the case ε = 1 is true, then the determination of ε reduces to the choice of roots in (1.14) and (3.9). The remaining results in this paper do not depend on this choice, or on the possibility that ε = 1. (2) The weight of σ(u) is inferred from (3.9), since the weight of |ω′| is 5 + 2 + 1 and the conjectured weight of D is 72. The weights of the terms in the exponentials are all 0, and the weight of c is 72/8 − (5 + 2 + 1)/2 = 5, which coincides with that of the terms in the expansion of Theorem 6.1 if the weight of ε is 0.
We shall need later on the following special property of the σ-function in the purely trigonal case.
Lemma 6.3. The σ-function associated with the purely trigonal curve (2.1) satisfies σ([−ζ]u) = −ζσ(u) for u ∈ C³, in the notation of (2.1).
Proof. Since Λ is stable under the action of [ζ] and [−1], we can check the statement by Lemma 3.1 and Remark 3.7.

Basis of the space Γ(J, O(nΘ[2]))

For notational simplicity, we denote
(7.1) ∂j = ∂/∂uj.
We also define
(7.2) ℘[ij] = the determinant of the (i, j)-(complementary) minor of [℘ij]3×3.
We have explicit bases of the vector spaces Γ(J, O(2Θ[2])) and Γ(J, O(3Θ[2])) as follows (see also [12], Example in Section 9).
Lemma 7.1. We have the following:
Γ(J, O(2Θ[2])) = C1 ⊕ C℘11 ⊕ C℘12 ⊕ C℘13 ⊕ C℘22 ⊕ C℘23 ⊕ C℘33 ⊕ CQ1333,
Γ(J, O(3Θ[2])) = Γ(J, O(2Θ[2])) ⊕ C℘111 ⊕ C℘112 ⊕ C℘113 ⊕ C℘122 ⊕ C℘123 ⊕ C℘133 ⊕ C℘222 ⊕ C℘223 ⊕ C℘233 ⊕ C℘333 ⊕ C∂1Q1333 ⊕ C∂2Q1333 ⊕ C∂3Q1333 ⊕ C℘[11] ⊕ C℘[12] ⊕ C℘[13] ⊕ C℘[22] ⊕ C℘[23] ⊕ C℘[33].
Proof. We know that the dimensions of the spaces above are 2³ = 8 and 3³ = 27, respectively, by the Riemann-Roch theorem for Abelian varieties (see for example [23], pp. 150-155, or [20], p. 99, Th. 4.1). Moreover, (3.14) shows that the functions on the right hand sides belong to the spaces on the left hand sides, respectively. For the space Γ(J, O(2Θ[2])), the ℘ij and Qijkℓ span the space by Definition 4.1, Lemma 3.1, and the arguments in the previous section. However, these are not all linearly independent, since there are connecting relations, such as those given in Lemma 5.1, and the number of these relations is greater than the dimension of the space. Thus the problem is reduced to picking out a linearly independent basis of the function space. Such independence does not depend on the coefficients of the curve, as one sees by considering the expansions of these functions around the origin of C³. Hence, by multiplying each function on the right hand side by σ(u)², expanding with respect to u1, u2, u3, and putting all the µj equal to zero, we see that the functions on the right hand side are linearly independent; the authors used a computer to check this. Similarly, for the space Γ(J, O(3Θ[2])), the 27 functions obtained by multiplying by σ(u)³ were checked by computer to be linearly independent, expanding the given functions in the Abelian variables (cf. Theorem 6.1) to a sufficiently high power that independence could be checked. Both decompositions in Lemma 7.1 also appear in the Example in Section 9 of [12].

The first main addition theorem

Theorem 8.1.
The σ-function associated with (2.1) satisfies the following addition formula on J × J : − σ(u + v)σ(u − v) σ(u) 2 σ(v) 2 = ℘ 11 (u) − ℘ 11 (v) + ℘ 12 (u)℘ 23 (v) − ℘ 12 (v)℘ 23 (u) + ℘ 13 (u)℘ 22 (v) − ℘ 13 (v)℘ 22 (u) + 1 3 (℘ 33 (u)Q 1333 (v) − ℘ 33 (v)Q 1333 (u)) − 1 3 µ 1 (℘ 12 (u)℘ 33 (v) − ℘ 12 (v)℘ 33 (u)) − µ 1 (℘ 13 (u)℘ 23 (v) − ℘ 13 (v)℘ 23 (u)) + 1 3 µ 1 2 − µ 2 (℘ 13 (u)℘ 33 (v) − ℘ 13 (v)℘ 33 (u)) + 1 3 µ 8 (℘ 33 (u) − ℘ 33 (v)) (8.1) Proof. Firstly, we notice that the left hand side is an odd function with respect to (u, v) → ([−1]u, [−1]v), and that it has poles of order 2 along (Θ [2] ×J)∪(J ×Θ [2] ) but nowhere else. Moreover it is of weight −10. Therefore, by Lemma 7.1, the left hand side is expressed by a finite sum of the form (8.2) j A j X j (u)Y j (v) − X j (v)Y j (u) , where the A j are rational functions of the µ i s with homogeneous weight, and the X j and Y j are functions chosen from the right hand side of the first equality in Lemma 7.1. We claim that all the A j are polynomial in the µ i s. Suppose all the A j s are reduced fractional expressions, and at least one of the A j s is not a polynomial. Take the least common multiple B of all the denominators of the A j s. Note that there is a set of special values of the µ i s such that B vanishes and the numerator of at least one A j does not vanish. After multiplying the equation "lhs"= (8.2) by B σ(u) 2 σ(v) 2 , and taking the µ i s to be such a zero of B, we have a contradiction, by using the linear independency of Lemma 7.1 twice with respect to the variables u and v for the corresponding curve of (1.1). Hence, all the A j must be polynomials. 
Hence, we see that the desired right hand side must be expressible using constants a, b, c, d, e, f, g1, g2, h1, h2, i1, i2, j, k1, k2, k3, which are polynomials in the µi and independent of the ui and vi, as follows:
(8.3) a[℘11(u) − ℘11(v)] + b[℘12(u)℘23(v) − ℘12(v)℘23(u)] + c[℘13(u)℘22(v) − ℘13(v)℘22(u)] + d[Q1333(u)℘33(v) − Q1333(v)℘33(u)] + e[℘12(u)℘33(v) − ℘12(v)℘33(u)] + f[℘13(u)℘23(v) − ℘13(v)℘23(u)] + g1[℘13(u)℘33(v) − ℘13(v)℘33(u)] + g2[Q1333(u) − Q1333(v)] + h1[℘23(u)℘22(v) − ℘23(v)℘22(u)] + h2[℘12(u) − ℘12(v)] + i1[℘22(u)℘33(v) − ℘22(v)℘33(u)] + i2[℘13(u) − ℘13(v)] + j[℘23(u)℘33(v) − ℘23(v)℘33(u)] + k1[℘22(u) − ℘22(v)] + k2[℘23(u) − ℘23(v)] + k3[℘33(u) − ℘33(v)].
We find by computer, using Maple, on substituting the expansion (6.1) of σ(u) up to the C13 terms into (8.3) and truncating at weight 18 in the ui and vi, that
(8.4) a = b = c = −1, d = 1/3, e = −(1/3)µ1, f = −µ1, g1 = (1/3)(µ1² − µ2), g2 = h1 = h2 = i1 = i2 = j = k1 = k2 = 0, k3 = (1/3)µ8,
as asserted. In the Maple calculation, it is not necessary to assume the polynomial nature of the coefficients as functions of the µj.
Remark 8.2. By applying the operator
(8.5) (1/2)(∂/∂ui)(∂/∂uj + ∂/∂vj) log
to (8.1), we obtain −℘ij(u + v) + ℘ij(u) from the left hand side and a rational expression in the various ℘ij···ℓ(u) and ℘ij···ℓ(v) on the right hand side. Hence we obtain algebraic addition formulae for the ℘ij(u).
Remark 8.3. By putting v = u − (δ, 0, 0) and letting δ → 0, we can get a "double-angle" σ-formula
(8.6) σ(2u)/σ(u)⁴ = −℘111(u) − ℘112(u)℘23(u) + ℘12(u)℘123(u) − ℘113(u)℘22(u) + ℘13(u)℘122(u) − (1/3)℘133(u)Q1333(u) + (1/3)℘33(u)∂1Q1333(u) + (1/3)µ1(℘112(u)℘33(u) − ℘12(u)℘133(u)) + µ1(℘113(u)℘23(u) − ℘13(u)℘123(u)) − (1/3)(µ1² − µ2)(℘113(u)℘33(u) − ℘13(u)℘133(u)) − (1/3)µ8℘133(u).
In the case of the elliptic curve, the corresponding relation is σ(2u) = −℘′(u)σ⁴(u), whilst the corresponding formula for the hyperelliptic genus two curve is given in [4], p. 129.

The second main addition theorem

The second main addition result applies only in the purely trigonal case (2.1), using the results of Lemma 6.3. The formula is as follows.
Theorem 9.1.
The σ-function associated with (2.1) satisfies the following addition formula on J × J:
(9.1) σ(u + v)σ(u + [ζ]v)σ(u + [ζ²]v) / (σ(u)³σ(v)³) = R(u, v) + R(v, u),
where
R(u, v) = (1/2)℘13(u)℘122(v) − (1/3)℘13(u)∂3Q1333(v) − (3/4)℘23(u)℘112(v) − (1/2)℘111(u) + (1/4)℘122(u)℘[11](v) − (1/4)℘222(u)℘[12](v) + (1/12)∂3Q1333(u)℘[11](v) + (1/2)℘333(u)℘[22](v) − (1/4)µ3℘333(u)℘[12](v) + (1/2)µ6℘13(u)℘333(v) − (1/4)µ9℘23(u)℘333(v) − (1/2)µ12℘333(u).
Proof. Our goal is to express
(9.2) σ(u + v)σ(u + [ζ]v)σ(u + [ζ²]v) / (σ(u)³σ(v)³)
using several ℘-functions. Because (9.2) belongs to Γ(J × J, O(3((Θ[2] × J) ∪ (J × Θ[2])))), a similar argument to that at the beginning of the proof of Theorem 8.1 shows that it must be a finite sum of multilinear forms in the 27 functions of Lemma 7.1, namely of the form
(9.3) Σj Cj Xj(u)Yj(v) (a finite sum),
where Xj and Yj are any of the functions appearing in the right hand side of the description of Γ(J, O(3Θ[2])) in Lemma 7.1, and the Cj are polynomial in the µi. Moreover, (9.2) has the following properties:
L1. As a function on J × J, its weight is (−5) × 3 = −15;
L2. It is invariant under u → [ζ]u (resp. v → [ζ]v);
L3. It has a pole of order 3 on (Θ[2] × J) ∪ (J × Θ[2]);
L4. It is invariant under the exchange u ↔ v (by Lemma 6.3).
Hence, (9.3) has the same properties. Thus, we may consider only the functions in our basis of Γ(J, O(3Θ[2])) that have the following corresponding properties:
R1. The weight is greater than or equal to (−5) × 3 = −15;
R2. They are invariant under u → [ζ]u;
R3. They have poles of order at most 3 on Θ[2].
There are 12 such functions, listed as follows:
℘111 (weight −15), ℘112 (weight −12), ℘122 (weight −9), ℘222 (weight −6), ℘333 (weight −3), ℘[22] (weight −12), ℘[12] (weight −9), ℘[11] (weight −6), ∂3Q1333 = −6(℘13℘333 − ℘133℘33) − 3℘122 (weight −9), ℘13 (weight −6), ℘23 (weight −3), 1 (weight 0),
where the ℘[ij] are defined in (7.2). Here the expression for ∂3Q1333 is obtained by cross-differentiation from ∂1Q3333, using the first of the relations in Lemma 5.1 with µ1 = µ2 = µ4 = 0.
Since (9.2) is an even function, it must be of the form
(9.4) σ(u + v)σ(u + [ζ]v)σ(u + [ζ²]v) / (σ(u)³σ(v)³) = R̃(u, v) + R̃(v, u),
where
R̃(u, v) = a1℘13(u)℘122(v) + a2℘13(u)∂3Q1333(v) + a3℘23(u)℘112(v) + a4℘111(u) + a5℘122(u)℘[11](v) + a6℘222(u)℘[12](v) + a7∂3Q1333(u)℘[11](v) + a8℘333(u)℘[22](v) + b1℘13(u)℘222(v) + b2℘23(u)℘122(v) + b3℘23(u)∂3Q1333(v) + b4℘112(u) + b5℘222(u)℘[11](v) + b6℘333(u)℘[12](v) + c1℘13(u)℘333(v) + c2℘23(u)℘222(v) + c3℘122(u) + c4℘333(u)℘[11](v) + c5∂3Q1333(u) + d1℘23(u)℘333(v) + d2℘222(u) + e1℘333(u).
By substituting (6.1) into (9.4), and comparing coefficients of the various monomials in ui, vj, we can find the constants a1, · · · , e1 as polynomials in the µk. Again, in this lengthy Maple calculation, it is not necessary to assume that the coefficients are polynomial in the µi.
Remark 9.2. By applying
(9.5) (1/3)(∂²/∂ui∂uj + ∂²/∂ui∂vj + ∂²/∂vi∂vj) log
to (9.1), we obtain algebraic addition formulae for the standard Abelian functions, which it would be interesting to compare with those of Remark 8.2.
Remark 9.3. By putting v = −u + (δ, 0, 0) into (9.1), dividing through by δ and letting δ → 0, we can get an unusual "shifted" σ-formula of the form
(9.6) −σ(u − [ζ]u)σ(u − [ζ²]u)/σ(u)⁶ = Σ_{i=1}^{12} ci[gi(u)∂1fi(u) − fi(u)∂1gi(u)],
where the fi and gi are the even and odd derivative components, respectively, of the formula in (9.1), i.e. the triples (ci, fi, gi) are
(1/2, ℘13, ℘122), (−1/3, ℘13, ∂3Q1333), (−3/4, ℘23, ℘112), (−1/2, 1, ℘111), (1/4, ℘[11], ℘122), (−1/4, ℘[12], ℘222), (1/12, ℘[11], ∂3Q1333), (1/2, ℘[22], ℘333), (−(1/4)µ3, ℘[12], ℘333), ((1/2)µ6, ℘13, ℘333), (−(1/4)µ9, ℘23, ℘333), (−(1/2)µ12, 1, ℘333).
Remark 9.4.
In the general elliptic case, there appear to be no formulae corresponding to (9.1) and (9.6). However, for the specialized equianharmonic case, where ℘ satisfies (℘′)² = 4℘³ − g3, it is straightforward to show that
σ(u + v)σ(u + ζv)σ(u + ζ²v) / (σ³(u)σ³(v)) = −(1/2)(℘′(u) + ℘′(v)),
and
σ((1 − ζ)u)σ((1 − ζ²)u)/σ⁶(u) = 3℘²(u).
These seem to be just the first of a family of multi-term addition formulae on special curves with automorphisms, which will be discussed in more detail elsewhere.
We are grateful to John Gibbons for pointing out the possibility of the relations described in Remarks 8.3 and 9.3, and to Mr Matthew England for pointing out a number of typos in various versions of this manuscript. Some of the calculations described in this paper were carried out using Distributed Maple [29], and we are grateful to the author of this package, Professor Wolfgang Schreiner of RISC-Linz, for help and advice. Finally, we would like to express special thanks to the referees for constructive suggestions to improve the paper, in particular for pointing out some crucial gaps in the main theorems and for giving hints on how to fill them.

Appendix A: The fundamental bi-differential

We write the polynomial f(x, y) in (1.1) that defines the trigonal curve C as
(A.1) f(x, y) = y³ + p(x)y² + q(x)y − r(x)
with p(x) = µ1x + µ4, q(x) = µ2x² + µ5x + µ8, r(x) = x⁴ + µ3x³ + µ6x² + µ9x + µ12.
We describe explicitly the fundamental non-normalized bi-differential (Klein's fundamental 2-form of the second kind) Σ((x, y), (z, w)) of (1.18) for points (x, y), (z, w) on the curve C defined by f(x, y) = 0.
Following the scheme described in [1] and applied to trigonal curves in [15], [9] and the present paper, one can realize Σ((x, y), (z, w)) explicitly as

(A.2)  Ω((x, y), (z, w)) = F((x, y), (z, w)) dx dz / [(x − z)^2 f_y(x, y) f_w(z, w)],

with the polynomial F((x, y), (z, w)) given by the formula (A.3) below; it is built from the factor (wy + Q(x, z))(wy + Q(z, x)) and the polynomial

(A.4)  T(x, z) = 3µ_12 + (z + 2x)µ_9 + x(x + 2z)µ_6 + 3µ_3 x^2 z + p(z)q(x) + x^2 z^2 + 2x^3 z.

Lemma 1.4 (Fundamental 2-form of the second kind). Let Σ((x, y), (z, w)) be the meromorphic function on C × C ...

Proposition 4.2. Let u ∈ C^3 and let (x_1, y_1), (x_2, y_2), (x_3, y_3) be Abelian preimages of u, i.e. u = Σ_{i=1}^{3} ∫^{(x_i, y_i)} du ... applying ∂^2/∂u_i∂u_j to it gives the desired equation.

Proposition 4.3. Suppose the (x_i, y_i)s and u are related as in Proposition 4.2 ...

Corollary 4.5. Suppose the (x_i, y_i)s and u are related as in Theorem 4.2. The solution of the Jacobi inversion problem is given by (x_1, y_1), (x_2, y_2), and (x_3, y_3), where these points are the set of zeros of the equations (4.7), (4.8) for (x, y).

Lemma 5.5. The 3-index functions ℘_ijk associated with (2.1) satisfy a number of bilinear relations (linear in both 3-index and 2-index functions). These have no analogue in the genus 1 case. For example, in decreasing weight, starting at −6, we have ...

Remark 5.8. (1) These relations are the generalizations of the familiar relation (℘′)^2 = 4℘^3 − g_2℘ − g_3 in the genus 1 theory.

Lemma 6.3. The σ function associated with the purely trigonal curve (2.1) satisfies σ([−ζ]u) = −ζσ(u) for u ∈ C^3, under the notation of (2.1).

Proof.
Since Λ is stable under the action of [ζ] and [−1], we can check the statement by Lemma 3.1 and Remark 3.7.

Basis of the space Γ(J, O(nΘ[2])): ℘_111 (weight −15), ℘_112 (weight −12), ℘_122 (weight −9), ℘_222 (weight −6), ℘_333 (weight −3), ℘_[22] (weight −12), ...

F((x, y), (z, w)) = (wy + Q(x, z))(wy + Q(z, x)) + T(x, z) + T(z, x) − F_0(x, z),  (A.3)

with Q(x, z) = (µ_1^2 − µ_2)xz + (2µ_1µ_4 − µ_5)x − µ_8 + µ_4^2/4, as asserted. In the Maple calculation, it is not necessary to assume the polynomial nature of the coefficients as functions of the µ_j.

Remark 8.2. By applying

(8.5)  (1/2) (∂/∂u_i)(∂/∂u_j + ∂/∂v_j) log

to (8.1), we have −℘_ij(u + v) + ℘_ij(u) from the left hand side, and a rational expression in several ℘_{ij...ℓ}(u)s and ℘_{ij...ℓ}(v)s on the right hand side. Hence we have algebraic addition formulae for the ℘_ij(u)s.

Remark 8.3. By putting v = u − (δ, 0, 0) and letting δ → 0, we can get a "double-angle" σ-formula. (Since x and y are related, we do not use ∂.)

Acknowledgements

This paper was started during a visit by the authors to Tokyo Metropolitan University in 2005, supported by JSPS grant 16540002. We would like to express our thanks to Prof. M. Guest of TMU, who helped organize this visit. The work continued during a visit by VZE to Heriot-Watt University under the support of the Royal Society. Further work was done whilst three of the authors (JCE, VZE, and EP) were attending the programme in Nonlinear Waves at the Mittag-Leffler Institute in Stockholm in 2005, and we would like to thank Professor H. Holden of Trondheim and the Royal Swedish Academy of Sciences for making this possible [EP, being then on leave from Boston University, is grateful for NSA grant MDA904-03-1-0119, which supported her doctoral students who were performing related research]. The authors are also grateful for a number of useful discussions with Prof. A. Nakayashiki, and Drs. John Gibbons and Sadie Baldwin.
The term F_0(x, z) vanishes at µ_1 = µ_4 = 0 and is given by F_0(x, z) = c_32(x + z)x^2 z^2 + c_22 x^2 z^2 + c_21(x + z)xz + c_11 xz + c_10(x + z) + c_00, with c_32 = −µ_1, c_22 = −2µ_4 − 2µ_1^2 µ_2 + µ_1^4 + 2µ_3µ_1, ... We also remark that the expression (A.3) generalizes the Kleinian 2-polar previously derived in the hyperelliptic case [1].

Appendix B: Quadratic three-index relations

A complete list of the known relations quadratic in the three-index ℘_ijk, up to weight −15, for the "purely trigonal" case is given below. Note that with care we can obtain an expression such that the highest power on the r.h.s. is no more than cubic. The number in square brackets [ ] is the weight. A fuller list for the general (3,4) case is given at [13]. The above equations describe the Jacobi variety as an algebraic variety; see also [9], where a general matrix construction is given.

References

[1] H. F. Baker. Abelian Functions. Cambridge Univ. Press, Cambridge, 1897.
[2] H. F. Baker. On the hyperelliptic sigma functions. Amer. J. of Math., 20:301-384, 1898.
[3] H. F. Baker. On a system of differential equations leading to periodic functions. Acta Math., 26:135-156, 1902.
[4] H. F. Baker. Multiply Periodic Functions. Cambridge Univ. Press, Cambridge, 1907.
[5] Sadie Baldwin and John Gibbons. Genus 4 trigonal reduction of the Benney equations. J. Phys. A, 39:3607-3639, 2006.
[6] Sadie Baldwin, J. C. Eilbeck, John Gibbons, and Y. Ônishi. Abelian functions for purely trigonal curves of genus four. http://arxiv.org/abs/math.AG/0612654, 2006.
[7] J. L. Burchnall and T. W. Chaundy. Commutative ordinary differential operators. Proc. London Math. Soc., 118:420-440, 1923.
[8] V. M. Buchstaber, V. Z. Enolskii, and D. V. Leykin. Kleinian functions, hyperelliptic Jacobians and applications. Reviews in Math. and Math. Physics, 10:1-125, 1997.
[9] V. M. Buchstaber, V. Z. Enolskii, and D. V. Leykin. Uniformization of Jacobi varieties of trigonal curves and nonlinear equations. Functional Anal. Appl., 34:159-171, 2000.
[10] V. M. Buchstaber and D. V. Leykin. Polynomial Lie algebras. Func. Anal. Appl., 36:267-280, 2002.
[11] V. M. Buchstaber and D. V. Leykin. Addition laws on Jacobian varieties of plane algebraic curves. Proceedings of the Steklov Institute of Mathematics, 251:54-126, 2005.
[12] K. Cho and A. Nakayashiki. Differential structure of Abelian functions. http://arxiv.org/abs/math.AG/0604267, 2006, to appear in International Journal of Mathematics.
[13] Weierstrass functions for higher genus curves. http://www.ma.hw.ac.uk/Weierstrass/, maintained by J. C. Eilbeck.
[14] J. C. Eilbeck and V. Z. Enolskii. Bilinear operators and the power series for the Weierstrass σ function. J. Phys. A, 33:791-794, 2000.
[15] J. C. Eilbeck, V. Z. Enolskii, and D. V. Leykin. On the Kleinian construction of Abelian functions of canonical algebraic curves. In D. Levi and O. Ragnisco, editors, Proceedings of the 1998 SIDE III Conference: Symmetries of Integrable Difference Equations, CRM Proceedings and Lecture Notes CRMP/25, pages 121-138, 2000.
[16] J. C. Eilbeck, V. Z. Enolskii, and E. Previato. On a generalized Frobenius-Stickelberger addition formula. Lett. Math. Phys., 65:5-17, 2003.
[17] J. W. S. Cassels and E. V. Flynn. Prolegomena to a Middlebrow Arithmetic of Curves of Genus 2. London Math. Soc. Lect. Notes 230, Cambridge Univ. Press, 1996.
[18] J. D. Fay. Theta Functions on Riemann Surfaces. Lecture Notes in Mathematics 352, Springer-Verlag, Berlin, 1973.
[19] D. Grant. Formal groups in genus two. J. reine angew. Math., 411:96-121, 1990.
[20] S. Lang. Introduction to Algebraic Functions and Abelian Functions. Grad. Texts in Math. 89, Springer-Verlag, 2nd edition, 1982.
[21] S. Matsutani and E. Previato. A generalized Kiepert formula for plane affine curves. Institut Mittag-Leffler preprint, 2005 fall, no. 03, http://www.mittag-leffler.se/preprints/ (ISSN 1103-467X ISRN IM-L-R-03-05/06-SE+fall).
[22] D. Mumford. Tata Lectures on Theta I. Progress in Mathematics 28, Birkhäuser, 1983.
[23] D. Mumford. Abelian Varieties. Oxford Univ. Press, 1985.
[24] Y. Ônishi. Complex multiplication formulae for hyperelliptic curves of genus three. Tokyo J. Math., 21:381-431, 1998. A list of corrections is available from http://web.cc.iwate-u.ac.jp/~onishi/
[25] Y. Ônishi. Determinant expressions for hyperelliptic curves (with an appendix by S. Matsutani). Proc. Edinb. Math. Soc., 48:705-742, 2005.
[26] Y. Ônishi. Abelian functions for trigonal curves of degree four and determinantal formulae in purely trigonal case. http://arxiv.org/abs/math.NT/0503696, 2005.
[27] H. Shiga. On the representation of the Picard modular function by θ constants I-II. Publ. RIMS, Kyoto Univ., 24:311-360, 1988.
[28] R. J. Schilling. Generalizations of the Neumann system. A curve theoretical approach. II. Comm. Pure Appl. Math., 42:409-442, 1989.
[29] Wolfgang Schreiner, Christian Mittermaier, and Karoly Bosa. Distributed Maple: Parallel computer algebra in networked environments. Journal of Symbolic Computation, 35:305-347, 2003.
Title: Coexistence of density wave and superfluid order in a dipolar Fermi gas
Authors: Zhigang Wu, Jens K. Block, and Georg M. Bruun (Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark)
DOI: 10.1103/physrevb.91.224504
arXiv: 1412.2783
PDF: https://arxiv.org/pdf/1412.2783v1.pdf
Coexistence of density wave and superfluid order in a dipolar Fermi gas

Zhigang Wu, Jens K. Block, and Georg M. Bruun
Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
(Dated: December 10, 2014)

We analyse the coexistence of superfluid and density wave (stripe) order in a quasi-two-dimensional gas of dipolar fermions aligned by an external field. Remarkably, the anisotropic nature of the dipolar interaction allows for such a coexistence in a large region of the zero temperature phase diagram. In this region, the repulsive part of the interaction drives the stripe formation and the attractive part induces the pairing, resulting in a supersolid with p-wave Cooper pairs aligned along the stripes. From a momentum space perspective, the stability of the supersolid phase is due to the fact that the stripe order renders the Fermi surface only partially gapped, leaving gapless regions that are most important for p-wave pairing. We finally demonstrate how this supersolid phase can be detected in time-of-flight experiments.

PACS numbers: 03.75.Ss, 67.85.Lm, 67.10.Db, 67.80.kb

Since the prediction of superfluidity in solid helium several decades ago [1][2][3][4], the intriguing possibility of coexisting diagonal (density) and off-diagonal (superfluid) order forming a supersolid has been subject to intense investigations. However, the supersolid phase has not been observed unequivocally, as interpretations of state-of-the-art experiments in helium are still debated [5,6]. The recent experiments on cold dipolar gases [7-14] may finally allow an observation of this conceptually important phase.
Supersolidity has been predicted to exist for dipolar bosons in an optical lattice [15][16][17], dipolar bosons with three-body forces [18], and spinor Bose condensates with spin-orbit coupling [19,20]. For fermions, however, relevant studies are fewer and limited to the case of an optical lattice [21,22]. In this paper, we expand the scope of study concerning supersolidity of dipolar Fermi gases and show that a supersolid phase is in fact the ground state in a large region of the phase diagram of a two-dimensional (2D) Fermi gas of dipoles aligned by an external field.

We consider spinless fermions with a dipole moment d at zero temperature, confined in the xy plane by a harmonic trapping potential V_tr(r) = mω_z^2 z^2/2 along the z direction. We take ω_z ≫ ε_F^0 (ħ = 1), where ε_F^0 = (k_F^0)^2/2m is the Fermi energy of a 2D non-interacting gas with areal density n_0 and k_F^0 = √(4πn_0). In this limit, the fermions are "frozen" in the harmonic oscillator ground state in the z direction and the system is effectively 2D. The dipole moments are aligned by an external field E into a direction which is perpendicular to the y-axis and forms an angle Θ with respect to the z-axis, as illustrated in Fig. 1. The interaction between the dipoles is V_d(r) = D^2(1 − 3cos^2 θ_rd)/r^3, where D^2 = d^2/4πε_0 for electric dipoles and θ_rd is the angle between the relative displacement vector of the two dipoles, r = (ρ, z), and the dipole moment d. For ω_z ≫ ε_F^0, the Fourier transform of the effective 2D interaction is given by (up to an irrelevant constant term) [23]

V(q) = −2πD^2 F(q) ξ(Θ, ϕ).   (1)

Here F(q) = q exp[(qw)^2/2] erfc(qw/√2) and ξ(Θ, ϕ) = cos^2 Θ − sin^2 Θ cos^2 ϕ, where w = 1/√(mω_z) is the trapping length in the z direction and ϕ is the polar angle of q. We note that F(q) saturates in the limit of large q. This of course is not physical, since any true molecular potential has a strong repulsive core which effectively provides a high-momentum cut-off for the potential.
Such a cut-off can also be introduced by considering the two-body scattering problem, as we will discuss below. With a cut-off in mind, we can use the fact that the 2D limit ω_z ≫ ε_F^0 is equivalent to k_F^0 w ≪ 1, and make the approximation F(q) ≈ q + O(qw) in (1). The strength of this interaction is measured by the ratio of the typical interaction and kinetic energy, g = 4mD^2 k_F^0/3π. In addition to g, the system is characterised by the dipole tilting angle Θ, which controls the degree of anisotropy of the interaction in the xy plane. For weak to moderate interaction strengths and small tilting angles, the system is well described by Landau Fermi liquid theory [18,25]. For Θ > 0, the repulsion between two dipoles in the xy plane is strongest when their relative displacement vector is along the y direction and weakest along the x direction. The anisotropy is predicted to give rise to stripe formation for interaction strengths beyond a critical value g_c(Θ) [1,25-28]. In this phase the dipoles form stripes parallel to the x-axis to minimise the repulsion, corresponding to a density modulation with wave vector q_c = q_c ŷ, as illustrated in Fig. 1. As Θ increases, the dipolar interaction eventually becomes partially attractive. A Fermi liquid to p-wave superfluid phase transition is predicted to occur for tilting angles greater than a critical angle Θ_s ≈ arcsin(2/3) ≈ 0.23π, due to the attractive part of the dipolar interaction [30]. The zero temperature mean-field phase diagram shown in Fig. 2 summarises the discussion given above. In this phase diagram, the critical coupling strength g_c(Θ) for stripe formation is obtained from a Hartree-Fock (HF) calculation [1,26] and the normal Fermi liquid to superfluid transition critical angle is determined by BCS theory (see below).
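The two regimes of the effective interaction (1) quoted above are easy to check numerically. The sketch below is our own illustration, not code from the paper (units ħ = 1, and the function names are ours); it evaluates F(q) and ξ(Θ, ϕ) and exhibits both F(q) ≈ q in the 2D limit qw ≪ 1 and the saturation of F(q) at √(2/π)/w for qw ≫ 1.

```python
import math

def xi(theta, phi):
    # Angular factor of Eq. (1): cos^2(Theta) - sin^2(Theta) cos^2(phi).
    return math.cos(theta) ** 2 - math.sin(theta) ** 2 * math.cos(phi) ** 2

def F(q, w):
    # F(q) = q exp[(qw)^2/2] erfc(qw/sqrt(2)); w is the transverse oscillator length.
    x = q * w
    return q * math.exp(x * x / 2.0) * math.erfc(x / math.sqrt(2.0))

def V(q, phi, theta, D2, w):
    # Effective 2D dipolar interaction of Eq. (1): V(q) = -2 pi D^2 F(q) xi(Theta, phi).
    return -2.0 * math.pi * D2 * F(q, w) * xi(theta, phi)

if __name__ == "__main__":
    w = 1.0
    print(F(0.01, w))        # ~ 0.01: F(q) ~ q when qw << 1 (2D limit)
    print(F(20.0, w) * w)    # ~ sqrt(2/pi): saturation at large qw
    print(xi(0.0, 0.3))      # 1: isotropic in-plane repulsion for perpendicular dipoles
```

Note how the saturation of F(q) follows from the asymptotics erfc(x) ~ e^(-x^2)/(x√π), which cancels the Gaussian growth; this is the unphysical large-q behaviour that the cut-off discussion above addresses.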
Although each of these three phases has been studied extensively, an interesting question remains unaddressed: what is the nature of the ground state in the region of the phase diagram where the superfluid and stripe phases overlap? In this paper we provide an answer to this question. We demonstrate that the system in the density wave phase eventually becomes unstable towards pairing as the tilting angle increases and the interaction becomes more attractive. Importantly, the resulting superfluid order does not exclude the stripe order, thus making the system a supersolid as understood in the sense mentioned above.

We use self-consistent HF theory to analyse the stripe phase, as it is the only theory so far that allows us to determine quantitatively the properties of this phase. In the stripe phase, the translational symmetry is broken in the y direction, and ⟨ĉ_k ĉ†_{k±q_c}⟩ ≠ 0, where ĉ_k annihilates a dipole with momentum k. This yields a modulated density n(ρ) = n_0 + n_1 cos(q_c·ρ), where n_1 is the stripe order parameter. As described in detail in Ref. [1], the resulting mean-field Hamiltonian can be diagonalised by a unitary transformation γ̂_{jk̃} = Σ_G U_{j,k̃+G} ĉ_{k̃+G}, where k̃ is restricted to the first Brillouin zone (BZ) −q_c/2 < k̃·q̂_c ≤ q_c/2, G = l q_c (l = 0, ±1, ...) are the reciprocal lattice vectors, and j is a band index. The diagonalised Hamiltonian takes the form Ĥ_MF = Σ_{jk̃} ε_{jk̃} γ̂†_{jk̃} γ̂_{jk̃}, where γ̂_{jk̃} annihilates a quasiparticle ψ_{jk̃}(ρ) with energy ε_{jk̃}. The most salient outcome of the HF analysis is that the Fermi surface k_F(φ) contains gapped regions around φ = ±π/2, as well as gapless regions around φ = 0 and φ = π, see Fig. 4. The gapped regions are a manifestation of the stripe order, and the gap magnitude is roughly proportional to ε_F^0 n_1/n_0. The key fact for the present purpose is that the stripe order still leaves gapless regions on the Fermi surface, which opens up the intriguing possibility of superfluid pairing.
To explore this, we use BCS theory with the Hamiltonian Ĥ_BCS = Ĥ_MF + Ĥ_P. Here,

Ĥ_P = Σ_{jj'} Σ_{k̃k̃'} [V_{j'j}(k̃', −k̃)/2] γ̂†_{j'k̃'} γ̂†_{j',−k̃'} γ̂_{j,−k̃} γ̂_{jk̃} + h.c.

describes pairing between time-reversed states, where V_{j'j}(k̃', −k̃) is the interaction between the quasiparticles [31]. To derive a gap equation that is amenable to a partial wave expansion, we switch to the "extended zone scheme", whereby a single-particle state ψ_{jk̃}(ρ) in the j-th band in the first BZ is mapped onto a state ψ_k(ρ) in the j-th BZ in the standard way [31], where the vector k is now unrestricted. The effective pairing interaction V_{j'j}(k̃', −k̃) shall be denoted by V(k, −k'). Pairing between time-reversed quasiparticles gives rise to the gap parameter ∆_k ≡ Σ_{k'} V(k, −k') ⟨γ̂_{−k'} γ̂_{k'}⟩, which satisfies the gap equation

∆_k = −∫ d^2k'/(2π)^2 V(k, −k') ∆_{k'} [1/(2E_{k'}) − P/(2ξ_{k'})].   (2)

Here ξ_k = ε_k − µ and E_k = √(ξ_k^2 + |∆_k|^2), where the chemical potential µ is approximated by its value in the stripe phase. The Cauchy principal value term P/(2ξ_{k'}) in (2) renders the gap equation well defined, with no need for a high-momentum cut-off. Such a term can be introduced by renormalising the gap equation in terms of the scattering amplitude of two dipoles in vacuum [25,32,33]. In the absence of experimental data for dipole-dipole scattering in 2D, one can simply regard it as a specific procedure to provide a cut-off. To solve the gap equation, we expand the gap parameter as ∆_k = Σ'_{n=1} ∆_n(k) cos nφ, where Σ' restricts the summation to odd indices, since ∆_{−k} = −∆_k for spinless fermions. A more general expansion contains both cos nφ and sin nφ terms. However, the cos nφ terms are favoured by the attractive part of the potential, and the gap parameter given by the previous expression maximises the pairing [31].
Using the expansion in (2), we obtain a system of equations

∆_n(k) = Σ_{n'=1}^∞ ∫_0^∞ dk' K_{nn'}(k, k') ∆_{n'}(k'),   (3)

where

K_{nn'}(k, k') = −(1/8π^2) Σ_{l=1}^∞ k' V^cc_{nl}(k, k') ∫_0^{2π} dφ' cos lφ' cos n'φ' [1/E_{k'} − P/ξ_{k'}]   (4)

and

V^cc_{nn'}(k, k') = ∫_0^{2π} (dφ/π) ∫_0^{2π} (dφ'/π) cos nφ cos n'φ' V(k, −k').   (5)

Equations (3)-(4) with (5) are the fundamental equations to be solved numerically, and they form the basis of the results presented in the rest of this paper. Before we turn to fully numerical solutions of (3), it is important to understand under what conditions the gap equation admits a solution. To do so, we shall first examine the Fourier components V^cc_{nn'}(k, k'). As shown in the Supplementary Material [31], these Fourier components calculated numerically from (5) differ very little from those between the bare particles, which are obtained by replacing V(k, −k') in (5) by the bare interaction V(k − k'). The latter components, denoted by V̄^cc_{nn'}(k, k'), can be determined analytically and obey the selection rule that V̄^cc_{nn'}(k, k') ≠ 0 only if n' = n, n ± 2. In addition, the lowest component V̄^cc_{11}(k, k') is in general dominant over the higher components. We find that the Fourier components V^cc_{nn'}(k, k') possess all the above properties to a very good approximation. The agreement between V^cc_{nn'}(k, k') and V̄^cc_{nn'}(k, k') holds even deep into the stripe phase, which seems initially surprising, since a large stripe amplitude gives rise to extended gapped regions around the Fermi surface. The reason is that the quasiparticle interaction V(k, −k') is altered from V(k − k') only in the gapped regions centred at φ = ±π/2, which are precisely the regions of integration in (5) suppressed by the cos nφ cos n'φ' factor.
In light of earlier work on the superfluid transition [30], the fact that the quasiparticle pairing interaction is approximately the same as that between the bare particles strongly suggests that pairing in the stripe phase is possible, provided that the Fermi surface is not fully gapped. From these results it can be shown that the dominant component of the gap parameter is in fact the first harmonic. Thus the simplest approximation is the momentum-independent ansatz ∆_k ≈ ∆_1 cos φ. Since the integrand in (3) is peaked around the partially gapped Fermi surface, a good estimate for when the gap equation admits a finite solution simply follows from the requirement that the effective p-wave interaction in the vicinity of the Fermi surface is attractive, i.e.,

V^cc_11 ≡ V^cc_11(k_F(0), k_F(0)) ≈ (4πg/m)[1 − (9/4) sin^2 Θ] < 0,   (6)

where k_F(0) is the Fermi wave vector at φ = 0. The critical angle is therefore Θ_s = arcsin(2/3), which is the same as that obtained for a normal Fermi liquid to superfluid transition at the same level of approximation [30]. With an estimate of the critical angle, we now solve the gap equation self-consistently, including higher harmonics and retaining the full momentum dependence of ∆_n(k). The quasiparticle energies ξ_k and the effective interactions are calculated from the HF theory for the stripe phase and are then used as input to the gap equation (3)-(4). This approach assumes that the pairing has a negligible effect on the stripes, which we will demonstrate is correct. As an example of the calculations, we show in Fig. 3 (left) the amplitudes of the first three partial wave components of the gap parameter for g = 1 and Θ = 0.28π. For these parameters, the system is deep in the stripe phase, with n_1/n_0 ≈ 0.26 in the absence of pairing. We see that the pairing occurs dominantly in the p-wave channel cos φ, but also has a noticeable f-wave (cos 3φ) component; all the higher partial wave components are completely negligible.
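Both the selection rule for the bare Fourier components and the estimate (6) can be checked directly by quadrature. The sketch below is our own consistency check, not code from the paper: it uses the bare interaction in the w → 0 limit, V(q) = −2πD^2 q ξ(Θ, ϕ_q), puts both momenta on a circle of radius k_F, and sets D^2 = k_F = 1 (ignoring the distinction between k_F(0) and k_F^0), so that (6) predicts V^cc_11 = (16/3)[1 − (9/4) sin^2 Θ], vanishing at Θ_s = arcsin(2/3).

```python
import math

def v_bare(qx, qy, theta):
    # Bare 2D dipolar interaction in the w -> 0 limit:
    # V(q) = -2 pi q [cos^2(Theta) - sin^2(Theta) cos^2(phi_q)], with D^2 = 1.
    q = math.hypot(qx, qy)
    if q == 0.0:
        return 0.0
    cos2 = (qx / q) ** 2
    return -2.0 * math.pi * q * (math.cos(theta) ** 2 - math.sin(theta) ** 2 * cos2)

def vcc(n, m, theta, kf=1.0, N=200):
    # Fourier component of Eq. (5) with both momenta on the Fermi circle
    # |k| = |k'| = kf, evaluated with a midpoint rule on an N x N grid.
    h = 2.0 * math.pi / N
    total = 0.0
    for i in range(N):
        p = (i + 0.5) * h
        cn, kx, ky = math.cos(n * p), kf * math.cos(p), kf * math.sin(p)
        for j in range(N):
            pp = (j + 0.5) * h
            total += cn * math.cos(m * pp) * v_bare(
                kx - kf * math.cos(pp), ky - kf * math.sin(pp), theta)
    return total * h * h / math.pi ** 2

if __name__ == "__main__":
    th = 0.3 * math.pi
    print(vcc(1, 1, 0.0))                   # ~ 16/3: repulsive p-wave channel at Theta = 0
    print(vcc(1, 1, math.asin(2.0 / 3.0)))  # ~ 0: sign change at Theta_s = arcsin(2/3)
    print(vcc(1, 3, th), vcc(1, 5, th))     # n' = n + 2 allowed; n' = n + 4 vanishes
```

The same quadrature also exhibits the selection rule discussed above: vcc(1, 3, Θ) is finite once the interaction is tilted, while vcc(1, 5, Θ) vanishes to numerical accuracy.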
This feature is in fact typical of solutions to the gap equation. With the solutions of the gap equation, we can further determine the redistribution of the quasiparticles and hence the change of the stripe amplitude as a result of pairing. We find very small relative changes in the stripe amplitude in all our calculations. This demonstrates that our approach is consistent and that the stripe and superfluid orders can indeed coexist, forming a type of supersolid. Figure 3 (right) depicts the gap parameter at the tip of the Fermi surface, ∆(k_F(0)x̂), as a function of Θ for various values of g. For a negative but small effective p-wave interaction, the behaviour of ∆(k_F(0)x̂) is well described by the weak pairing approximation ∆(k_F(0)x̂)/ε_F^0 ~ exp[4π/(m V^cc_11)], which follows from the ansatz mentioned earlier. To understand the coexistence of stripe and superfluid orders, we examine the bare-particle pair correlation function C_P(k, −k) = |⟨ĉ_k ĉ_{−k}⟩|^2, which is plotted in Fig. 4 (left) for g = 1 and Θ = 0.28π. It clearly shows that pairing is concentrated in the gapless regions of the underlying Fermi surface of the stripe phase. Consequently, it does not affect the particle distribution in the gapped regions, which is responsible for the stripe formation. We analyse this further by determining the pair wave function in real space, ψ_pair(ρ, ρ') ≡ ⟨ψ̂(ρ)ψ̂(ρ')⟩, where ψ̂(ρ) is the field operator of the dipoles. In Fig. 4 (right) we show |ψ_pair(ρ, ρ')|^2 for a Cooper pair with its centre of mass at the origin of the coordinates. The p-wave nature of the pairing is clearly visible, with |ψ_pair(ρ, −ρ)|^2 strongly peaked along the x-axis, where the dipole-dipole interaction is most attractive. In addition, we plot in Fig. 5 the relative probability density of finding the centre of mass of a Cooper pair at a specific location.
This is given by Pr(Y) ≡ ∫ d(ρ − ρ') |ψ_pair(ρ, ρ')|^2, which depends only on the y-coordinate of the Cooper pair centre of mass due to the translational symmetry in the x direction. We see that the probability density varies in phase with that of the density of the dipoles, such that the p-wave pairing has a maximum on the stripes and a minimum in between. Thus, from the real space perspective, the stripe and superfluid orders coexist due to the anisotropy of the dipolar interaction. Namely, the repulsive part induces the stripe formation while the attractive part induces pairing, resulting in Cooper pairs with p-wave symmetry along the stripes, as illustrated in Fig. 1. In the limit of strong interaction, the density between the stripes presumably vanishes and the stripes become well separated. This raises the interesting possibility of realising an array of 1D p-wave superconductors, which have topological properties [34]. The study of this strong coupling limit is beyond the scope of the present paper. We argued earlier that Θ_s ≈ arcsin(2/3) is a good approximation for the boundary separating the stripe and the supersolid phases for g > g_c(Θ). We now obtain a more accurate result by varying the tilting angle and determining the critical angle Θ_s(g) below which the gap equation ceases to admit a finite solution. Such calculations can also be carried out for the normal Fermi liquid to superfluid transition. The overall phase boundary obtained this way is shown in Fig. 2. We see that our initial estimate is in fact remarkably accurate and that the phase boundary has a rather weak dependence on the interaction strength g. This suggests that the onset of pairing is primarily determined by the degree of anisotropy of the dipolar potential. The identification of the supersolid region in the phase diagram, bounded by this boundary, g_c(Θ) and the collapse line, and our elucidation of the nature of this phase, are the main results of this paper.
Finally, we discuss how the supersolid phase can be detected in time-of-flight (TOF) experiments, which have been used to probe a variety of phases and correlations [35][36][37][38]. As shown in Ref. [1], the density wave order can be detected by measuring the correlation function C_D(k, k + q_c) ≡ |⟨ĉ†_k ĉ_{k+q_c}⟩|^2 in TOF experiments. Similarly, the superfluid order can be detected by a measurement of the pair correlation function C_P(k, −k), which can then be compared to theoretical results such as that shown in Fig. 4 (left). However, we need to bear in mind that in a standard experiment the imaging system introduces a smoothening of the absorption images in the xy plane, which can be modelled by convolution of the absorption density with a Gaussian [36,41]. This reduces the magnitude of the correlation peak considerably from the theoretical maximum value of approximately 1/4 shown in Fig. 4. Nevertheless, we expect that TOF experiments can be used to detect the supersolid phase. In conclusion, we demonstrate that a 2D gas of fermionic dipoles aligned by an external field allows for a coexistence of stripe and superfluid order in a large region of the zero temperature phase diagram. This occurs as a result of the anisotropic nature of the dipolar interaction, where the repulsive part drives the stripe formation and the attractive part induces the formation of p-wave Cooper pairs along the stripes. In momentum space, the existence of the supersolid phase can be understood from the fact that the stripe order renders the Fermi surface partially gapped, leaving gapless the regions most important for p-wave pairing. We finally discuss how the supersolid phase can be detected in TOF experiments. Our results point to several interesting future research directions.
This includes realising an array of 1D topological superconductors in the limit of strong interaction, and investigating parallels to the high-$T_c$ cuprates, where the coexistence of charge-density-wave order and superconductivity was recently observed [42].

Supplemental Material

1: The density wave phase

The mean-field Hamiltonian used to describe this phase is given by [1]

$\hat H_{\rm MF} = \sum_{\mathbf k} \epsilon_{\mathbf k}\,\hat c^\dagger_{\mathbf k}\hat c_{\mathbf k} + \sum_{\mathbf k}\big[h_{\mathbf k}\,\hat c^\dagger_{\mathbf k+\mathbf q_c}\hat c_{\mathbf k} + \text{h.c.}\big],$  (7)

where $\mathbf q_c = q_c\hat{\mathbf y}$, $\epsilon_{\mathbf k}$ is the single-particle Hartree-Fock energy

$\epsilon_{\mathbf k} = \frac{k^2}{2m} + \frac{1}{A}\sum_{\mathbf k'}\big[V(0) - V(\mathbf k-\mathbf k')\big]\,\langle\hat c^\dagger_{\mathbf k'}\hat c_{\mathbf k'}\rangle,$  (8)

and $h_{\mathbf k}$ is a real off-diagonal element defined by

$h_{\mathbf k} = \frac{1}{A}\sum_{\mathbf k'}\big[V(\mathbf q_c) - V(\mathbf k-\mathbf k')\big]\,\langle\hat c^\dagger_{\mathbf k'}\hat c_{\mathbf k'+\mathbf q_c}\rangle.$  (9)

The inclusion of the second term in Eq. (7) accounts for the possibility of the formation of a density wave along the $y$ direction. The Hamiltonian in Eq. (7) resembles (although is not identical to) that of non-interacting particles in a potential periodic in the $y$ direction with periodicity $2\pi/q_c$. Consequently, the quasiparticle eigenlevels $\varepsilon_{j\bar{\mathbf k}}$ of Eq. (7) exhibit a band-like structure along the $y$ direction of the wave vector, where $j = 1, 2, \cdots$ is the band index and $\bar{\mathbf k}$ is restricted to the first Brillouin zone. The corresponding quasiparticle wave function can be expressed as $\psi_{j\bar{\mathbf k}}(\boldsymbol\rho) = \sum_{\mathbf G} U_{j,\bar{\mathbf k}+\mathbf G}\, e^{i(\bar{\mathbf k}+\mathbf G)\cdot\boldsymbol\rho}/\sqrt{A}$, where $\mathbf G = l\,\mathbf q_c$, $l = 0, \pm 1, \cdots$, is the reciprocal lattice vector and the expansion coefficients $U_{j,\bar{\mathbf k}+\mathbf G}$ are determined by the Schrödinger equation

$\big(\epsilon_{\bar{\mathbf k}+\mathbf G} - \varepsilon_{j\bar{\mathbf k}}\big)\,U_{j,\bar{\mathbf k}+\mathbf G} + \sum_{\mathbf G' = \mathbf G \pm \mathbf q_c} h_{\bar{\mathbf k}+\mathbf G'}\,U_{j,\bar{\mathbf k}+\mathbf G'} = 0.$  (10)

Equation (10) is analogous to the Schrödinger equation of a particle in a periodic lattice, where $h_{\bar{\mathbf k}+\mathbf G'}$ plays the role of the Fourier components of a "periodic potential". Unlike a true periodic potential, however, $h_{\bar{\mathbf k}+\mathbf G'}$ depends explicitly on $\bar{\mathbf k}$ due to the inclusion of the exchange interaction. In terms of the quasiparticle wave function basis, the Hamiltonian in (7) can now be brought into the diagonalised form $\hat H_{\rm MF} = \sum_{j\bar{\mathbf k}} \varepsilon_{j\bar{\mathbf k}}\,\hat\gamma^\dagger_{j\bar{\mathbf k}}\hat\gamma_{j\bar{\mathbf k}}$, where $\hat\gamma_{j\bar{\mathbf k}} = \sum_{\mathbf G} U_{j,\bar{\mathbf k}+\mathbf G}\,\hat c_{\bar{\mathbf k}+\mathbf G}$ is the annihilation operator of the quasiparticle.
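Equation (10) can be illustrated with a small numerical sketch. The toy model below is our own illustration, not the self-consistent calculation of Ref. [1]: it replaces the $\bar{\mathbf k}$-dependent Hartree-Fock elements by a free dispersion $(k+G)^2/2$ and a constant coupling h between plane waves differing by $\pm q_c$, and diagonalizes the resulting matrix. As stated in the text, a band gap of order $2h$ opens at the Brillouin-zone boundary and closes as $h \to 0$.

```python
import numpy as np

def quasiparticle_bands(kbar, qc=1.0, h=0.05, nG=11):
    """Toy version of Eq. (10): free dispersion (k+G)^2/2 on the diagonal,
    a constant off-diagonal element h coupling G and G +/- qc."""
    G = qc * (np.arange(nG) - nG // 2)
    H = np.diag((kbar + G) ** 2 / 2.0)
    for i in range(nG - 1):
        H[i, i + 1] = H[i + 1, i] = h
    return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# At the zone boundary kbar = qc/2 the plane waves G = 0 and G = -qc are
# degenerate, and the coupling splits the two lowest levels by about 2h.
edge = quasiparticle_bands(0.5)
gap = edge[1] - edge[0]
print(gap)  # close to 2*h = 0.1
```

Away from the zone boundary the spectrum stays close to the parabolic form, consistent with the statement that the off-diagonal terms act only perturbatively when $h$ is small compared to the Fermi energy.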
The quasiparticle occupation number in the ground state, $N_{j\bar{\mathbf k}} = \langle\hat\gamma^\dagger_{j\bar{\mathbf k}}\hat\gamma_{j\bar{\mathbf k}}\rangle = \theta(\mu - \varepsilon_{j\bar{\mathbf k}})$, is specified by the chemical potential $\mu$ of the density wave phase, which in turn is determined by the density of the gas as

$n_0 = \frac{1}{A}\sum_{j\bar{\mathbf k}} \theta(\mu - \varepsilon_{j\bar{\mathbf k}}).$  (11)

The quasiparticle energy $\varepsilon_{j\bar{\mathbf k}}$ and the expansion coefficients $U_{j,\bar{\mathbf k}+\mathbf G}$ are implicit functions of the Hartree-Fock elements $\epsilon_{\mathbf k}$ and $h_{\mathbf k}$. Therefore these quantities, as well as the chemical potential $\mu$, are determined self-consistently through Eqs. (8)-(11). The reader is referred to Ref. [1] for a detailed account of their numerical calculation.

It turns out that the effects of the off-diagonal terms in Eq. (7) on the quasiparticle dispersion are only perturbative, due to the fact that the magnitudes of $h_{\mathbf k}$ are generally small compared to the Fermi energy $\epsilon_F^0$ [1]. Consequently, the quasiparticle dispersion does not in fact deviate significantly from the usual parabolic form, except in regions close to the Brillouin zone boundaries where band gaps open up. It is thus meaningful to use the "extended zone scheme" [2] instead of the "reduced zone scheme" in labelling the quasiparticle energy levels. More specifically, each of the physical quantities associated with the single-particle state $\psi_{j\bar{\mathbf k}}(\boldsymbol\rho)$ can be labelled by a single wave vector in the $j$-th Brillouin zone, $\mathbf k = \mathbf k_j$, which is defined as

$\mathbf k_j = \begin{cases} \bar{\mathbf k} + \frac{j}{2}\,\mathbf q_c, & -q_c/2 < \bar{\mathbf k}\cdot\hat{\mathbf q}_c \le 0 \\ \bar{\mathbf k} - \frac{j}{2}\,\mathbf q_c, & 0 < \bar{\mathbf k}\cdot\hat{\mathbf q}_c \le q_c/2 \end{cases}$  (12)

for $j = 2, 4, \cdots$, and

$\mathbf k_j = \begin{cases} \bar{\mathbf k} - \frac{j-1}{2}\,\mathbf q_c, & -q_c/2 < \bar{\mathbf k}\cdot\hat{\mathbf q}_c \le 0 \\ \bar{\mathbf k} + \frac{j-1}{2}\,\mathbf q_c, & 0 < \bar{\mathbf k}\cdot\hat{\mathbf q}_c \le q_c/2 \end{cases}$  (13)

for $j = 1, 3, \cdots$. Likewise, the physical quantities associated with the time-reversed state $\psi_{j,-\bar{\mathbf k}}(\boldsymbol\rho)$ can be labelled by the vector $-\mathbf k$. The effective pairing interaction, given by

$\mathcal V_{j'j}(\mathbf k', -\mathbf k) = \sum_{\mathbf G\mathbf G'}\sum_{\mathbf G''\mathbf G'''} \delta_{\mathbf G-\mathbf G',\,\mathbf G''-\mathbf G'''}\; U^*_{j',\bar{\mathbf k}'+\mathbf G}\, U^*_{j',-\bar{\mathbf k}'+\mathbf G'}\, U_{j,-\bar{\mathbf k}+\mathbf G''}\, U_{j,\bar{\mathbf k}+\mathbf G'''}\; V\big(\bar{\mathbf k}'-\bar{\mathbf k}+\mathbf G-\mathbf G'''\big),$  (14)

shall be denoted by $\mathcal V(\mathbf k, -\mathbf k')$. These correspondences are made clear if one considers the limit of vanishing off-diagonal Hartree-Fock elements $h_{\mathbf k}$.
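The extended-zone labelling of Eqs. (12)-(13) is straightforward to implement directly. The sketch below (our own illustration, restricted to the $\hat{\mathbf q}_c$-component of the wave vector, with $q_c = 1$ for concreteness) checks that the $j$-th band is mapped into the $j$-th Brillouin zone, i.e. $|k_j| \in ((j-1)\,q_c/2,\; j\,q_c/2]$.

```python
import numpy as np

def extended_zone_k(kbar, j, qc=1.0):
    """Map a reduced wave-vector component kbar in (-qc/2, qc/2] and a band
    index j = 1, 2, ... to the extended-zone label k_j of Eqs. (12)-(13)."""
    if j % 2 == 0:  # even bands, Eq. (12)
        return kbar + (j // 2) * qc if kbar <= 0 else kbar - (j // 2) * qc
    # odd bands, Eq. (13)
    return kbar - ((j - 1) // 2) * qc if kbar <= 0 else kbar + ((j - 1) // 2) * qc

# Each band j lands in the j-th Brillouin zone: |k_j| in ((j-1) qc/2, j qc/2].
for j in (1, 2, 3, 4):
    k = extended_zone_k(-0.1, j)
    print(j, k, (j - 1) * 0.5 < abs(k) <= j * 0.5 + 1e-12)
```

For $j = 1$ the mapping is the identity, so the lowest band keeps its first-zone label, in line with the statement that the dispersion stays nearly parabolic away from the zone boundaries.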
In this limit the Bloch state $\psi_{j\bar{\mathbf k}}(\boldsymbol\rho)$ simply approaches the plane-wave state $e^{i\mathbf k_j\cdot\boldsymbol\rho}/\sqrt{A}$ and the effective pairing interaction $\mathcal V(\mathbf k, -\mathbf k')$ approaches the bare interaction $V(\mathbf k - \mathbf k')$.

2: The pairing symmetry

Here we provide a rationale for the choice of a gap parameter $\Delta(\mathbf k)$ with even parity with respect to $\phi$, as expressed in the main Letter. The gap equation (4) in the main Letter can be written as

$\Delta_{\mathbf k} = -\frac{1}{2}\int\frac{d\mathbf k'}{(2\pi)^2}\,\tilde{\mathcal V}(\mathbf k, -\mathbf k')\,\Delta_{\mathbf k'}\left[\frac{1}{E_{\mathbf k'}} + \frac{P}{\mu - k'^2/2m}\right],$  (15)

where $\tilde{\mathcal V}(\mathbf k, -\mathbf k')$ is the anti-symmetrised interaction matrix

$\tilde{\mathcal V}(\mathbf k, -\mathbf k') = \frac{1}{2}\big[\mathcal V(\mathbf k, -\mathbf k') - \mathcal V(\mathbf k, \mathbf k')\big].$  (16)

It can be shown that the interaction matrix $\tilde{\mathcal V}(\mathbf k, -\mathbf k')$ has the expansion

$\tilde{\mathcal V}(\mathbf k, -\mathbf k') = {\sum_{n,n'=1}^{\infty}}'\big[V^{cc}_{nn'}(k,k')\cos n\phi\cos n'\phi' + V^{ss}_{nn'}(k,k')\sin n\phi\sin n'\phi'\big],$  (17)

where the prime restricts the summation to odd indices,

$V^{cc}_{nn'}(k,k') = \int_0^{2\pi}\frac{d\phi}{\pi}\int_0^{2\pi}\frac{d\phi'}{\pi}\,\cos n\phi\,\cos n'\phi'\,\mathcal V(\mathbf k, -\mathbf k'),$  (18)

and

$V^{ss}_{nn'}(k,k') = \int_0^{2\pi}\frac{d\phi}{\pi}\int_0^{2\pi}\frac{d\phi'}{\pi}\,\sin n\phi\,\sin n'\phi'\,\mathcal V(\mathbf k, -\mathbf k').$  (19)

The sine and cosine terms in the expansion (17) are not coupled and, as a consequence, the gap equation (15) admits solutions with either even or odd parity with respect to the variable $\phi$. For even solutions only the first part of the potential in (17) contributes to the integral in Eq. (15), and for odd solutions only the second part does. In order to see which part of the potential favours Cooper pairing, we express $V^{cc}_{nn'}(k,k')$ and $V^{ss}_{nn'}(k,k')$ in terms of the interaction potential in real space, $V(\boldsymbol\rho) = V(\rho, \theta)$. This can be done by approximating $\mathcal V(\mathbf k, -\mathbf k')$ by the bare interaction matrix

$V(\mathbf k - \mathbf k') = \int d\boldsymbol\rho\, V(\rho,\theta)\, e^{-ik\rho\cos(\phi-\theta)}\, e^{ik'\rho\cos(\phi'-\theta)}$  (20)

in Eqs. (18) and (19). Using the expansion $e^{ix\cos\theta} = \sum_{n=-\infty}^{\infty} i^n J_n(x)\,e^{in\theta}$, where $J_n(x)$ is the Bessel function of the first kind, and performing the integrals with respect to $\phi$ and $\phi'$, we find that, up to numerical prefactors,

$V^{cc}_{nn'}(k,k') \propto \int d\boldsymbol\rho\,\cos n\theta\,\cos n'\theta\; J_n(k\rho)\, J_{n'}(k'\rho)\, V(\rho,\theta),$  (21)

$V^{ss}_{nn'}(k,k') \propto \int d\boldsymbol\rho\,\sin n\theta\,\sin n'\theta\; J_n(k\rho)\, J_{n'}(k'\rho)\, V(\rho,\theta).$  (22)

From these expressions we see that $V^{cc}_{nn'}(k,k')$ mostly samples the attractive part of the potential in real space (the sliver around the $x$ axis), while $V^{ss}_{nn'}(k,k')$ mostly samples the repulsive part.
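The Jacobi-Anger expansion used above can be verified numerically. The sketch below computes $J_n(x)$ from its standard integral representation over a full period (so no special-function library is needed) and compares a truncated sum against $e^{ix\cos\theta}$ directly; the values of x, theta and the truncation order are arbitrary illustrative choices.

```python
import numpy as np

def bessel_j(n, x, npts=512):
    """J_n(x) from the integral representation
    J_n(x) = (1/2pi) * Int_0^{2pi} cos(n*t - x*sin(t)) dt (integer n),
    evaluated by the midpoint rule, which is spectrally accurate here."""
    t = (np.arange(npts) + 0.5) * 2.0 * np.pi / npts
    return np.mean(np.cos(n * t - x * np.sin(t)))

def jacobi_anger(x, theta, nmax=20):
    """Partial sum of e^{i x cos(theta)} = sum_n i^n J_n(x) e^{i n theta}."""
    terms = [(1j) ** m * bessel_j(m, x) * np.exp(1j * m * theta)
             for m in range(-nmax, nmax + 1)]
    return np.sum(terms)

x, theta = 2.3, 0.7
approx = jacobi_anger(x, theta)
exact = np.exp(1j * x * np.cos(theta))
print(abs(approx - exact))  # essentially zero: terms decay rapidly with |n|
```

The rapid decay of $J_n(x)$ with $|n|$ at fixed $x$ is also what makes the low-order Fourier components $V^{cc}_{11}$ and $V^{ss}_{11}$ dominate in the analysis above.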
Let us take the first diagonal elements $V^{cc}_{11}(k,k')$ and $V^{ss}_{11}(k,k')$ as an example. These are in fact the most dominant matrix elements of $V^{cc}_{nn'}(k,k')$ and $V^{ss}_{nn'}(k,k')$, respectively. As the attractive sliver of the potential $V(\boldsymbol\rho)$ expands from the $x$ axis with increasing tilting angle $\Theta$, $V^{cc}_{11}(k,k')$ can potentially become negative, due to the fact that the attractive part of the potential is more significantly weighted in the integral. The matrix element $V^{ss}_{11}(k,k')$, on the other hand, remains positive for all tilting angles. This analysis motivates us to look for solutions to the gap equation with even parity with respect to $\phi$.

3: Comparison between the quasiparticle and bare-interaction components $V^{cc}_{nn'}(k,k')$

The Fourier components $V^{cc}_{nn'}(k,k')$ for the bare interaction (defined by Eq. (18) with $\mathcal V(\mathbf k, -\mathbf k')$ replaced by $V(\mathbf k - \mathbf k')$) can be evaluated analytically. Using Eq. (2) of the main Letter in Eq. (18), we find that the only non-vanishing matrix elements are those whose indices differ by $0$ or $\pm 2$. That is, the matrix $V^{cc}_{nn'}(k,k')$ has the tridiagonal structure

$V^{cc}_{nn'}(k,k') = \delta_{n,n'}\,V^{cc}_{nn}(k,k') + \delta_{n,n'+2}\,V^{cc}_{n'+2,n'}(k,k') + \delta_{n',n+2}\,V^{cc}_{n,n+2}(k,k').$  (23)

For odd $n$ we find

$V^{cc}_{nn}(k,k') = \frac{3\pi g}{m}\,\frac{\sqrt{kk'}}{k_F^0\,(k+k')}\,\frac{1}{n}\,\big[I_{n-1}(k,k') - I_{n+1}(k,k')\big]\left[1 - \left(\frac{3}{2} + \delta_{n,1}\,\frac{3}{4}\right)\sin^2\Theta\right],$  (24)

$V^{cc}_{n+2,n}(k,k') = -\frac{3\pi g}{2m}\,\frac{\sqrt{kk'}}{k_F^0\,(k+k')}\left[\sqrt{\frac{k'}{k}}\,I_n(k,k') + \sqrt{\frac{k}{k'}}\,I_{n+2}(k,k') + 2 I_{n+1}(k,k')\right]\sin^2\Theta,$  (25)

and

$V^{cc}_{n,n+2}(k,k') = -\frac{3\pi g}{2m}\,\frac{\sqrt{kk'}}{k_F^0\,(k+k')}\left[\sqrt{\frac{k}{k'}}\,I_n(k,k') + \sqrt{\frac{k'}{k}}\,I_{n+2}(k,k') + 2 I_{n+1}(k,k')\right]\sin^2\Theta,$  (26)

where

$I_m(k,k') = \int_0^{\pi/2} \frac{\cos 2m\phi}{\sqrt{1 - x^2\sin^2\phi}}\,d\phi$  (27)

with $x^2 \equiv 4kk'/(k+k')^2$. The integral $I_m(k,k')$ can generally be expressed in terms of the complete elliptic integrals.
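For a numerical check of Eq. (27), the complete elliptic integral can be evaluated both by direct quadrature and via the arithmetic-geometric mean. The sketch below (our own illustration) uses the parameter convention $K(m) = \int_0^{\pi/2} d\phi\,(1 - m\sin^2\phi)^{-1/2}$, so that, reading Eq. (27) in the modulus convention, $I_0(k,k')$ corresponds to $m = x^2 = 4kk'/(k+k')^2$; conventions for the argument of $K$ differ between references, and the values of $k$ and $k'$ are arbitrary illustrative choices.

```python
import numpy as np

def K_quad(m, npts=200000):
    """Complete elliptic integral of the first kind (parameter convention),
    by midpoint quadrature of its defining integral."""
    phi = (np.arange(npts) + 0.5) * (np.pi / 2) / npts
    return np.sum(1.0 / np.sqrt(1.0 - m * np.sin(phi) ** 2)) * (np.pi / 2) / npts

def K_agm(m):
    """Same integral via the arithmetic-geometric mean:
    K(m) = pi / (2 * AGM(1, sqrt(1 - m)))."""
    a, b = 1.0, np.sqrt(1.0 - m)
    for _ in range(60):
        a, b = (a + b) / 2.0, np.sqrt(a * b)
    return np.pi / (2.0 * a)

x2 = 4 * 0.8 * 1.2 / (0.8 + 1.2) ** 2  # x^2 of Eq. (27) with k = 0.8, k' = 1.2
print(K_quad(x2), K_agm(x2))  # the two evaluations agree
```

The AGM form converges quadratically and is the standard way to evaluate $K$ to machine precision, which matters near $k = k'$ where $x \to 1$ and $K$ diverges logarithmically.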
As a few examples, $I_m(k,k')$ for $m = 0, 1, 2$ are

$I_0(k,k') = K(x),$  (28)

$I_1(k,k') = \frac{1}{x^2}\big[K(x)\,x^2 - 2K(x) + 2E(x)\big],$  (29)

and

$I_2(k,k') = \frac{1}{3x^4}\big[3K(x)\,x^4 - 16K(x)\,x^2 + 8E(x)\,x^2 + 16K(x) - 16E(x)\big],$  (30)

where $K(x)$ and $E(x)$ are the complete elliptic integrals of the first and second kind, respectively. As a useful gauge of the relative importance of the matrix elements for various $n$, we consider the special case $k = k'$. In this case the expressions in (24)-(26) simplify considerably; in particular,

$V^{cc}_{n+2,n}(k,k) = V^{cc}_{n,n+2}(k,k) = \frac{3\pi g}{(2n+1)(2n+3)\,m}\,\frac{k}{k_F^0}\,\sin^2\Theta.$  (32)

We see that the magnitudes of these matrix elements decrease rapidly, as $1/n^2$. The Fourier components $V^{cc}_{nn'}(k,k')$ for the quasiparticles are calculated numerically using Eq. (18). We find that the formation of the density wave has minimal effects on the effective pairing interaction; namely, the quasiparticle components agree very well with their bare-interaction counterparts (see Figs. 6-7 for an example) for a wide range of $g$ and $\Theta$.

Figure 6. The matrix elements (in units of $g/m$) $V^{cc}_{11}(k_F^0, k_F^0)$ (blue circles), $V^{cc}_{13}(k_F^0, k_F^0)$ (red circles) and $V^{cc}_{33}(k_F^0, k_F^0)$ (green circles) as a function of $\Theta$ for $g = 0.95$. The solid lines are the corresponding bare-interaction components in the same colours.

Figure 7. The matrix elements (in units of $g/m$) $V^{cc}_{11}(k, k_F^0)$ (blue circles), $V^{cc}_{13}(k, k_F^0)$ (red circles), $V^{cc}_{31}(k, k_F^0)$ (black circles) and $V^{cc}_{33}(k, k_F^0)$ (green circles) as a function of $k/k_F^0$ for $g = 0.95$ and $\Theta = 0.27\pi$. The solid lines are the corresponding bare-interaction components in the same colours.

Figure 1. Illustration of a 2D Fermi gas with dipoles aligned by an external field $\mathbf E$ in the supersolid phase. Stripes with high density are indicated with a dark colour, and the $p$-wave nature of the pair wave function is indicated by green regions.

Figure 2. Mean-field phase diagram of the 2D dipolar Fermi gas. The dashed line is $\Theta = \arcsin(2/3)$ and the solid line just above it is obtained from a more accurate calculation.

Figure 3. Left: Amplitudes of the gap parameter as a function of $k/k_F^0$ for $g = 1$ and $\Theta = 0.28\pi$. Right: The gap parameter at the tip of the Fermi surface as a function of the tilting angle $\Theta$.

Figure 4. Left: The pair correlation function for $g = 1.0$ and $\Theta = 0.28\pi$. The Fermi surface $k_F(\phi)$ in the stripe phase is shown by a green line. Right: $|\psi_{\rm pair}(\boldsymbol\rho, -\boldsymbol\rho)|^2$ (normalised to its maximum value) as a function of $\boldsymbol\rho$ for the same parameters. Here the lengths are in units of $2\pi/q_c$.

Figure 5. The dotted line is $\mathrm{Pr}(Y)$ for $g = 1$ and $\Theta = 0.28\pi$, and the solid curve is the dipole density variation $n(Y) = n_0 + n_1\cos(q_c Y)$ along the $y$ direction. Both quantities are normalised to their respective maximum values. The lengths are in units of $2\pi/q_c$.

Acknowledgments. GMB would like to acknowledge the support of the Hartmann Foundation via grant A21352 and the Villum Foundation via grant VKR023163.

References

[1] A. F. Andreev and I. M. Lifshitz, Sov. Phys. JETP 29, 1107 (1969).
[2] A. J. Leggett, Phys. Rev. Lett. 25, 1543 (1970).
[3] G. V. Chester, Phys. Rev. A 2, 256 (1970).
[4] M. Boninsegni and N. V. Prokof'ev, Rev. Mod. Phys. 84, 759 (2012).
[5] E. Kim and M. H. W. Chan, Nature (London) 427, 225 (2004).
[6] S. Balibar, Nature (London) 464, 176 (2010).
[7] K.-K. Ni, S. Ospelkaus, D. Wang, G. Quéméner, B. Neyenhuis, M. H. G. de Miranda, J. L. Bohn, J. Ye, and D. S. Jin, Nature (London) 464, 1324 (2010).
[8] B. Deh, W. Gunton, B. G. Klappauf, Z. Li, M. Semczuk, J. Van Dongen, and K. W. Madison, Phys. Rev. A 82, 020701 (2010).
[9] M.-S. Heo, T. T. Wang, C. A. Christensen, T. M. Rvachov, D. A. Cotta, J.-H. Choi, Y.-R. Lee, and W. Ketterle, Phys. Rev. A 86, 021602 (2012).
[10] M. Lu, N. Q. Burdick, and B. L. Lev, Phys. Rev. Lett. 108, 215301 (2012).
[11] C.-H. Wu, J. W. Park, P. Ahmadi, S. Will, and M. W. Zwierlein, Phys. Rev. Lett. 109, 085301 (2012).
[12] T. A. Schulze, I. I. Temelkov, M. W. Gempel, T. Hartmann, H. Knöckel, S. Ospelkaus, and E. Tiemann, Phys. Rev. A 88, 023401 (2013).
[13] S.-K. Tung, C. Parker, J. Johansen, C. Chin, Y. Wang, and P. S. Julienne, Phys. Rev. A 87, 010702 (2013).
[14] M. Repp, R. Pires, J. Ulmanis, R. Heck, E. D. Kuhnle, M. Weidemüller, and E. Tiemann, Phys. Rev. A 87, 010701 (2013).
[15] B. Capogrosso-Sansone, C. Trefzger, M. Lewenstein, P. Zoller, and G. Pupillo, Phys. Rev. Lett. 104, 125301 (2010).
[16] L. Pollet, J. D. Picon, H. P. Büchler, and M. Troyer, Phys. Rev. Lett. 104, 125302 (2010).
[17] F. Cinti, P. Jain, M. Boninsegni, A. Micheli, P. Zoller, and G. Pupillo, Phys. Rev. Lett. 105, 135301 (2010).
[18] Z.-K. Lu, D. S. Petrov, and G. V. Shlyapnikov, arXiv:1409.7737.
[19] Y. Li, L. P. Pitaevskii, and S. Stringari, Phys. Rev. Lett. 108, 225301 (2012).
[20] Y. Li, G. I. Martone, L. P. Pitaevskii, and S. Stringari, Phys. Rev. Lett. 110, 235302 (2013).
[21] A.-L. Gadsbolle and G. M. Bruun, Phys. Rev. A 85, 021604(R) (2012).
[22] T. S. Zeng and L. Yin, Phys. Rev. B 89, 174511 (2014).
[23] U. R. Fischer, Phys. Rev. A 73, 031602 (2006).
[24] Y. Yamaguchi, T. Sogo, T. Ito, and T. Miyakawa, Phys. Rev. A 82, 013643 (2010).
[25] L. M. Sieberer and M. A. Baranov, Phys. Rev. A 84, 063633 (2011).
[26] J. K. Block, N. T. Zinner, and G. M. Bruun, New J. Phys. 14, 105006 (2012).
[27] M. Babadi and E. Demler, Phys. Rev. B 84, 235124 (2011).
[28] M. M. Parish and F. M. Marchetti, Phys. Rev. Lett. 108, 145304 (2012).
[29] J. K. Block and G. M. Bruun, Phys. Rev. B 90, 155102 (2014).
[30] G. M. Bruun and E. Taylor, Phys. Rev. Lett. 101, 245301 (2008).
[31] See Supplemental Material for a description of the "extended zone scheme", the pairing symmetry, and a comparison of the bare and quasiparticle pairing interactions.
[32] J. Levinsen, N. R. Cooper, and G. V. Shlyapnikov, Phys. Rev. A 84, 013603 (2011).
[33] N. R. Cooper and G. V. Shlyapnikov, Phys. Rev. Lett. 103, 155302 (2009).
[34] J. Alicea, Rep. Prog. Phys. 75, 076501 (2012).
[35] M. Greiner, C. A. Regal, J. T. Stewart, and D. S. Jin, Phys. Rev. Lett. 94, 110401 (2005).
[36] S. Fölling, F. Gerbier, A. Widera, O. Mandel, T. Gericke, and I. Bloch, Nature (London) 434, 481 (2005).
[37] T. Rom, T. Best, D. van Oosten, U. Schneider, S. Fölling, B. Paredes, and I. Bloch, Nature (London) 444, 733 (2006).
[38] I. B. Spielman, W. D. Phillips, and J. V. Porto, Phys. Rev. Lett. 98, 080404 (2007).
[39] T. Sogo, L. He, T. Miyakawa, S. Yi, H. Lu, and H. Pu, New J. Phys. 11, 055017 (2009).
[40] E. Altman, E. Demler, and M. D. Lukin, Phys. Rev. A 70, 013603 (2004).
[41] G. M. Bruun, O. F. Syljuåsen, K. G. L. Pedersen, B. M. Andersen, E. Demler, and A. S. Sørensen, Phys. Rev. A 80, 033622 (2009).
[42] L. E. Hayward, D. G. Hawthorn, R. E. Melko, and S. Sachdev, Science 343, 1336 (2014).

Supplemental References

[1] J. K. Block and G. M. Bruun, Phys. Rev. B 90, 155102 (2014).
[2] N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt, Rinehart and Winston, New York, 1976).
DOI: 10.1016/j.media.2023.102794 · arXiv:2210.04227 · PDF: https://export.arxiv.org/pdf/2210.04227v3.pdf
Dual-distribution discrepancy with self-supervised refinement for anomaly detection in medical images

Yu Cai (a), Hao Chen (b,c), Xin Yang (d), Yu Zhou (d), Kwang-Ting Cheng (a,b)

(a) Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
(b) Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
(c) Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
(d) School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China

Medical Image Analysis (2023). Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/media

ARTICLE INFO
Article history: Received 22 August 2022; Received in final form 13 February 2023; Accepted 6 March 2023.
Keywords: Anomaly detection; Reconstruction networks; Self-supervised learning; Benchmark

ABSTRACT

Medical anomaly detection is a crucial yet challenging task aimed at recognizing abnormal images to assist in diagnosis. Due to the high-cost annotations of abnormal images, most methods utilize only known normal images during training and identify samples deviating from the normal profile as anomalies in the testing phase. Many readily available unlabeled images containing anomalies are thus ignored in the training phase, restricting the performance.
To solve this problem, we introduce one-class semi-supervised learning (OC-SSL) to utilize known normal and unlabeled images for training, and propose Dual-distribution Discrepancy for Anomaly Detection (DDAD) based on this setting. Ensembles of reconstruction networks are designed to model the distribution of normal images and the distribution of both normal and unlabeled images, deriving the normative distribution module (NDM) and unknown distribution module (UDM). Subsequently, the intra-discrepancy of NDM and inter-discrepancy between the two modules are designed as anomaly scores. Furthermore, we propose a new perspective on self-supervised learning, which is designed to refine the anomaly scores rather than directly detect anomalies. Five medical datasets, including chest X-rays, brain MRIs and retinal fundus images, are organized as benchmarks for evaluation. Experiments on these benchmarks comprehensively compare a wide range of anomaly detection methods and demonstrate that our method achieves significant gains and outperforms the state-of-the-art. Code and organized benchmarks are available at https://github.com/caiyu6666/DDAD-ASR.

Introduction

Medical imaging is of vital importance to the diagnosis of a wide variety of pathologies. Take the case of chest X-rays (CXRs), which are the most commonly performed radiological exam (Çallı et al., 2021) and are widely applied for diagnosis.

As annotations of normal images from healthy subjects are relatively easy to obtain, while those of anomalies are complex, various and usually difficult to collect, most existing methods consider anomaly detection as a one-class classification (OCC) problem (Ruff et al., 2018), where only normal images are utilized for training and samples not conforming to the normal profile are identified as anomalies in the testing phase; thus there is no need for annotation of abnormal images during training.
This setting has been extensively studied in anomaly detection for both computer vision tasks (Ruff et al., 2021) and medical image analysis (Baur et al., 2021). Nevertheless, due to the lack of training on real abnormal images, the discriminative capability of these methods is limited. Meanwhile, an important fact is ignored in medical image analysis: different from the application scenarios in computer vision tasks, like industrial defect detection (Bergmann et al., 2019) and video anomaly detection (Sultani et al., 2018; Li et al., 2013), where abnormal cases are rare, medical clinical practice provides plenty of readily available unlabeled images with a certain anomaly rate (AR). These unlabeled images, containing rich anomalous features, are wasted by methods based on the OCC setting, which restricts the performance of anomaly detection.

Although some works have explored the utilization of unlabeled samples, the unlabeled abnormal samples have yet to be exploited successfully. Deep SAD (Ruff et al., 2019) introduced semi-supervised anomaly detection; however, it works under the condition that both labeled normal and abnormal samples are available, while the unlabeled data is mostly normal. This condition is difficult to achieve in practice, and anomalies in unlabeled data are not exploited. One-class SVM (OC-SVM) (Schölkopf et al., 1999) and Support Vector Data Description (SVDD) (Tax and Duin, 2004) utilize nonzero slack variables to penalize the objective function and learn soft margins, and thus tolerate a small number of outliers in the training set. However, they essentially try to reduce the effect of unlabeled abnormal samples so as to train on normal data, similar to the OCC setting, rather than capture useful information from the unlabeled abnormal samples. It has been demonstrated that their performance decreases consistently as the number of abnormal samples in the unlabeled data increases (Yoon et al., 2022).
Up to now, there is still no notable work leveraging unlabeled images for anomaly detection effectively. Therefore, a question naturally arises: can unlabeled images provide effective information about abnormalities, as a complement to normal images, to improve the performance of anomaly detection? Motivated by this question, in this work we introduce and explore one-class semi-supervised learning (OC-SSL) to train the model on known normal and unlabeled images. A comparison of the OC-SSL with existing settings is shown in Fig. 1. As mentioned above, the OCC mode (Fig. 1(a)) has been extensively studied in most existing anomaly detection works, but plenty of unlabeled images are ignored. Existing semi-supervised anomaly detection methods (Fig. 1(b)) (Ruff et al., 2019) require both labeled normal and abnormal samples, while the unlabeled data should be mostly normal. This is intractable in practice, and unlabeled abnormal samples are not exploited. The introduced OC-SSL mode (Fig. 1(c)) is capable of utilizing normal and unlabeled images with arbitrary ARs, while there is no need for labeled abnormal images. Therefore, the OC-SSL is more reasonable and consistent with medical clinical practice.

Based on the OC-SSL mode, we propose Dual-distribution Discrepancy for Anomaly Detection (DDAD), as shown in Fig. 3. To capture information from both known normal images and unlabeled images, we utilize ensembles of reconstruction networks to model the distribution of normal images and the distribution of both normal and unlabeled images, deriving the normative distribution module (NDM) and unknown distribution module (UDM). Subsequently, the intra-discrepancy of NDM and the inter-discrepancy between the two modules are designed as anomaly scores (ASs). To further refine the two ASs, we design an Anomaly Score Refinement Net (ASR-Net), which is trained via self-supervised learning. Fig. 2 depicts our comparison with standard self-supervised anomaly detection.
Instead of learning to directly detect the synthetic abnormal patterns, the proposed ASR-Net learns to map the original AS to the final accurate abnormal regions, thereby avoiding overfitting and leading to better performance. Considering the lack of publicly available benchmarks for medical anomaly detection, we for the first time collect and organize five medical datasets, including CXRs, brain MRIs and retinal fundus images, for evaluation, and release them to facilitate other researchers evaluating their methods fairly.

Fig. 3. Overview of the proposed DDAD. In Stage 1, NDM and UDM model the distribution of known normal images and the distribution of known normal and unlabeled images, respectively. Then the intra-discrepancy inside NDM and the inter-discrepancy between the two modules are designed as anomaly scores. In Stage 2, the two anomaly scores are refined and fused by the ASR-Net F(·), deriving the final prediction R_dual.

Experiments on these five datasets demonstrate that the proposed DDAD outperforms existing state-of-the-art methods, even without unlabeled images, while unlabeled images can be utilized to further improve our performance by a large margin. Evaluation on unseen diseases further demonstrates the potential of our method for recognition of rare diseases, whose samples are inaccessible in the unlabeled data. A comprehensive comparison of a wide range of anomaly detection methods is also provided on the five datasets, revealing the performance of different families of methods and potential trends.

Our main contributions are summarized as follows:

• One-class semi-supervised learning (OC-SSL) is introduced. It utilizes known normal and unlabeled images with arbitrary ARs for anomaly detection, and is reasonable and consistent with clinical practice.
• Based on the OC-SSL setting, ensembles of reconstruction networks are used to model the distribution of training data in an unsupervised fashion. Specifically, the NDM and UDM are designed to model the distribution of known normal images and the distribution of known normal and unlabeled images, respectively. It is the first time that unlabeled images are utilized to improve the performance of anomaly detection.

• Two novel and powerful ASs, the intra-discrepancy inside NDM and the inter-discrepancy between the NDM and UDM, are proposed to indicate anomalies.

• An Anomaly Score Refinement Net (ASR-Net), trained via self-supervised learning, is proposed to refine and fuse the two ASs. Different from existing self-supervised anomaly detection methods that learn to detect synthetic abnormal patterns, it provides a new perspective on self-supervised learning, i.e., learning to map the original AS to the final accurate abnormal regions. It avoids overfitting and achieves better performance.

• Five medical datasets that include three modalities are collected and organized, and released as benchmarks for medical anomaly detection. These facilitate a fair comparison with other methods, as there are few related existing benchmarks.

• Extensive experiments on the five medical datasets demonstrate that the proposed method achieves consistent, significant gains and outperforms state-of-the-art methods in anomaly detection. A comprehensive comparison of a wide range of anomaly detection methods is provided to reveal the performance of different families of methods and potential trends.

A preliminary version of this work was early accepted for MICCAI 2022 (Cai et al., 2022).
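As a toy illustration of the two anomaly scores (not the trained networks of DDAD), the sketch below mimics an NDM and a UDM by small ensembles of hypothetical per-pixel reconstructions of a 1D "image": the intra-discrepancy is taken as the pixel-wise standard deviation within the NDM ensemble, and the inter-discrepancy as the pixel-wise difference between the mean NDM and mean UDM reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)

def intra_discrepancy(ndm_recons):
    """Pixel-wise spread among the NDM reconstructions (toy stand-in)."""
    return np.std(ndm_recons, axis=0)

def inter_discrepancy(ndm_recons, udm_recons):
    """Pixel-wise difference between mean NDM and mean UDM reconstructions
    (toy stand-in)."""
    return np.abs(ndm_recons.mean(axis=0) - udm_recons.mean(axis=0))

# Hypothetical per-pixel reconstructions of one 10-pixel "image" by K = 4
# networks per module.  All pixels are normal except pixel 5, where the
# NDM members disagree with each other (high uncertainty on unseen
# patterns), while the UDM, having seen unlabeled abnormal data,
# consistently reconstructs a different value.
ndm = 0.01 * rng.standard_normal((4, 10))
udm = 0.01 * rng.standard_normal((4, 10))
ndm[:, 5] += np.array([-0.8, 0.9, -0.5, 0.6])  # NDM members disagree here
udm[:, 5] += 1.0                               # UDM reconstructs the anomaly

intra = intra_discrepancy(ndm)
inter = inter_discrepancy(ndm, udm)
print(intra.argmax(), inter.argmax())  # both scores peak at pixel 5
```

This captures the intuition behind the two scores: disagreement within the normal-only ensemble flags regions outside the normal distribution, while disagreement between the two ensembles flags regions where the unlabeled data carries extra information.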
In this paper, the major extensions include designing a new module, namely the ASR-Net, which provides a new perspective on self-supervised learning in anomaly detection and improves the performance and robustness significantly; adding many more experiments on more datasets covering different modalities; elaborating the analysis; and providing a more comprehensive literature review. The rest of this paper is organized as follows: Section 2 presents related works. Section 3 describes in detail the proposed DDAD methods with our ASR-Net. In Section 4, extensive experiments on five datasets are conducted to demonstrate the effectiveness of our proposed method. Section 5 discusses the advantages and limitations of the proposed method, and analyzes a wide variety of methods to reveal future directions and trends. We conclude our work in Section 6.

2. Related works

Anomaly detection aims at finding patterns in data that do not conform to expected behavior (Chandola et al., 2009). It is a promising field that has been widely applied in a variety of domains. Due to the difficulty of collecting abundant annotated abnormal samples, almost all existing works utilize only normal images during training, which is the well-known OCC setting (Ruff et al., 2018). Classical anomaly detection methods, such as OC-SVM (Schölkopf et al., 1999) and SVDD (Tax and Duin, 2004), often fail on high-dimensional data due to poor computational scalability and the curse of dimensionality. Their deep extension, Deep SVDD (Ruff et al., 2018), utilizes neural networks to constrain the normal samples in a hypersphere with minimum volume, handling high-dimensional data better but suffering from mode collapse. Most recent state-of-the-art anomaly detection methods focus on reconstruction and self-supervised learning. As techniques highly related to our work, ensemble-based uncertainty estimates and semi-supervised learning for anomaly detection are also described in this section.
2.1. Reconstruction-based Anomaly Detection

Reconstruction-based methods are one of the most popular families in anomaly detection, especially for medical images (Baur et al., 2021). They usually utilize generative models, such as generative adversarial networks (GANs) (Goodfellow et al., 2014), auto-encoders (AEs) or their variants, to learn a mapping function to reconstruct normal images, while unseen abnormal images are assumed to be poorly reconstructed by these models trained with only normal images, and in turn yield high reconstruction errors. Schlegl et al. (2017) were the first to use GANs for anomaly detection. They proposed AnoGAN to learn the manifold of normal images. For a query image, a latent feature is found via an iterative process to generate the image most similar to the query image. The query image is identified as abnormal if there is a large difference from the best generated image. To replace the time-consuming iterative process in the testing phase, Schlegl et al. (2019) further utilized an encoder to learn the mapping from the retinal OCT image to the latent space, deriving a fast version of AnoGAN named f-AnoGAN. However, these GAN-based methods can suffer from memorization pitfalls, causing reconstructions to differ anatomically from the actual input. Various approaches also used variants of AEs for anomaly detection, including the Variational AE (VAE) (Zimmerer et al., 2018), Adversarial AE (AAE) (Chen and Konukoglu, 2018), Vector Quantized VAE (VQ-VAE) (Marimont and Tarroni, 2021), etc. To avoid abnormal images being well reconstructed, Gong et al. (2019) proposed to augment the AE with a memory module, which stores the latent features of normal training samples. The reconstruction is obtained from a few of the most relevant memory records, thus tending to be close to a normal image and enlarging the reconstruction errors of abnormal images.
Compared with GAN-based methods, AE-based methods can preserve more anatomical coherence, but usually generate blurry reconstructions (Baur et al., 2021), leading to false positive detections around high-frequency regions (e.g., boundaries). To mitigate this problem, Mao et al. (2020) proposed to automatically estimate the pixel-level uncertainty of reconstruction using an AE, which is used to normalize the reconstruction error and significantly suppress false positive detections in CXRs. Recently, incorporating adversarial training into AEs has become popular, as it combines the advantages of both. Baur et al. (2018) demonstrated that AEs with spatial bottlenecks can reconstruct important fine details better than those with dense bottlenecks, and combined the spatial VAE with a GAN to improve the realism of reconstructed normal samples for anomaly detection in brain MRIs. In addition to adversarial training, Akcay et al. (2018) used an extra encoder to map the reconstructed image to the latent space again, and minimized reconstruction errors in both the image space and latent space during training to aid in learning the data distribution of the normal samples. Zaheer et al. (2020) proposed to transform the fundamental role of the discriminator from identifying real and fake data to distinguishing between good- and bad-quality reconstructions, which is highly desirable in anomaly detection, as a trained AE will not produce reconstructions for abnormal images as good as those for normal images conforming to the learned representations.

2.2. Self-Supervised Learning-based Anomaly Detection

Self-supervised learning (Jing and Tian, 2020), referring to learning methods in which networks are explicitly trained using pretext tasks with generated pseudo labels, has also been extensively studied for anomaly detection. Sohn et al. (2020) proposed to first learn self-supervised representations from one-class data and then build one-class classifiers on the learned representations.
Based on their proposed framework, they applied distribution augmentation (Jun et al., 2020) for one-class contrastive learning to reduce the uniformity of representations. Further, Tian et al. (2021) combined distribution-augmented contrastive learning (Sohn et al., 2020), augmentation prediction (Golan and El-Yaniv, 2018), and position prediction (Doersch et al., 2015) to learn feature representations for anomaly-sensitive detection models. Moreover, Li et al. (2021) proposed to learn representations by classifying normal data against their designed CutPaste augmentations, and then build a Gaussian density estimator on the learned representations. In addition to the aforementioned representation-based methods, some works (Tan et al., 2020, 2021; Schlüter et al., 2022) proposed to manually synthesize defects to train models to detect irregularities. Various image processing approaches have been designed to synthesize abnormal images, including CutPaste (Li et al., 2021), Foreign Patch Interpolation (FPI) (Tan et al., 2020), Poisson Image Interpolation (PII) (Tan et al., 2021), etc. Recently, Schlüter et al. (2022) integrated Poisson image editing with rescaling, shifting and a new Gamma-distribution-based patch shape sampling strategy to synthesize natural and diverse anomalies. Background constraints and pixel-level labels derived from the resulting difference to the normal image were designed to make the results more relevant to the task. However, these methods may not generalize well due to their inherent reliance on the similarity between synthetic abnormal patterns and real anomalies. Also, Zavrtanik et al. (2021) proposed to combine a reconstruction network with a self-supervised network. It feeds the concatenation of the original image and the reconstruction result to a segmentation network trained via self-supervised learning, which is expected to learn a distance function between the original and reconstructed anomaly appearance.
However, the self-supervised network can learn a shortcut to directly segment the synthesized anomalies, which is easier than learning the distance function. As a result, it still suffers from overfitting.

2.3. Ensemble-based Uncertainty Estimates

Deep Ensemble (Lakshminarayanan et al., 2017) is a simple but effective method for uncertainty estimation in deep neural networks, where high uncertainty is expressed on out-of-distribution (OOD) samples. It has been successfully applied in the fields of open-set recognition and active learning (Beluch et al., 2018). However, supervised training, such as semantic segmentation or classification, is required in these methods, which is always undesirable in anomaly detection. Recently, Bergmann et al. (2020) proposed to utilize the feature vectors of pretrained networks on normal regions as surrogate labels for the training of an ensemble of student networks, whose predictive variance was used as an AS to segment anomalous regions. They designed this ensemble-based method for industrial anomaly detection with no demand for labels, but it requires a powerful pretrained model, such as networks trained on ImageNet (Krizhevsky et al., 2012).

2.4. Semi-Supervised Learning for Anomaly Detection

Semi-supervised learning (Chapelle et al., 2009) is a learning paradigm in which the algorithm is provided with some labeled samples as well as unlabeled samples to improve the performance. Due to the advantage of leveraging unlabeled data, it is especially widely used in medical image analysis, where annotations are expensive and the amount of unlabeled data is huge. However, semi-supervised learning has not been successfully employed for medical anomaly detection due to two challenges. The first is that in anomaly detection, only normal images comprise the labeled data, which is inadequate for existing semi-supervised methods.
Secondly, there are thousands of rare diseases, meaning that even though the unlabeled data may contain some types of anomalies, the testing data may contain many unseen types. It has been demonstrated that this mismatch can cause drastic performance degradation in semi-supervised learning (Oliver et al., 2018). Several attempts have been made to study semi-supervised learning for anomaly detection, but the two challenges remain unresolved. Bauman and Bauman (2018) proposed a semi-supervised learning algorithm for one-class classification. However, their setting is essentially transductive learning, where the model is directly tested on the unlabeled set. This is undesirable as, in practice, the trained model needs to be capable of finding anomalies in new data. Recently, Ruff et al. (2019) introduced Deep SAD for general semi-supervised anomaly detection. However, it works under the condition that there are a few labeled normal and abnormal samples, while the unlabeled data is mostly normal. This condition is difficult to achieve in practice, and anomalies in the unlabeled data are not exploited. Some works (Akcay et al., 2018) refer to methods that train on only normal samples as "semi-supervised". Considering that only normal data is used for training, they are more precisely instances of one-class classification. Therefore, how to design an effective semi-supervised method, or a variant thereof, to exploit unlabeled data for anomaly detection is still an open question.

2.5. Summary

In summary, most previous anomaly detection methods use only normal images for training. Thus, the plentiful unlabeled images in clinical practice are ignored. Although several works have tried to explore semi-supervised learning for anomaly detection, they work under strict conditions that do not meet clinical needs, and meanwhile no useful information is mined from the unlabeled data. To solve this problem, we introduce OC-SSL to train the model on known normal and unlabeled images.
We design the NDM and UDM, which are ensembles of several reconstruction networks, to model the normative distribution of normal images and the unknown distribution of known normal and unlabeled images, respectively. Then the intra-discrepancy inside the NDM and the inter-discrepancy between the NDM and UDM are used as the AS. Compared with previous reconstruction-based methods (Baur et al., 2021), our scores are the discrepancies among the outputs of network ensembles, rather than the discrepancy between the input and output. Therefore, more information can be captured, while the high reconstruction errors in normal regions, caused by reconstruction ambiguity or memorization pitfalls, can be mitigated to some extent. Compared with existing ensemble-based methods (Bergmann et al., 2020), we innovatively use reconstruction networks as the basic models for the ensemble. They can be trained in an unsupervised fashion on the images themselves, i.e., by reconstruction. Therefore, neither labels nor pretrained models are required, meaning our method can be applied more easily in various scenarios, including but not limited to medical anomaly detection. Compared with previous attempts related to semi-supervised learning for anomaly detection, our OC-SSL setting requires only known normal and unlabeled images with arbitrary ARs for training, which greatly meets clinical needs. Also, by computing the inter-discrepancy between the NDM and UDM, the unlabeled data can help the recognition of seen anomalies while no harm is caused to unseen anomalies, and thereby no performance degradation is caused by class distribution mismatch in the unlabeled data (Oliver et al., 2018). We further propose the ASR-Net, trained via self-supervised learning, to refine and fuse the two designed ASs.
Different from existing self-supervised anomaly detection methods that require realistic pseudo abnormal images, it learns to map the original AS to the final accurate abnormal regions, and is thus insensitive to the synthetic abnormal images, yielding better generalization.

3. Method

3.1. Problem Statement

In this section, we first formulate the anomaly detection problem. The differences between our setting and the previous setting are also clarified. Most existing works formulate anomaly detection as an OCC problem. That is, given a normal dataset $D_n = \{x_{ni}\}_{i=1}^{N}$ with $N$ normal images, and a test dataset $D_t = \{(x_{ti}, y_i)\}_{i=1}^{T}$ with $T$ annotated normal or abnormal images, where $y_i \in \{0, 1\}$ is the image label (0 for a normal image and 1 for an abnormal image), the goal is to train a model based on the normal image set $D_n$ which can identify anomalies in the test dataset $D_t$ during inference. Different from previous works, our proposed DDAD, based on the OC-SSL setting, makes full use of the unlabeled images available in clinical practice. Specifically, in addition to the normal dataset $D_n$, we also utilize a readily available unlabeled dataset $D_u = \{x_{ui}\}_{i=1}^{M}$ with $M$ unlabeled images, which includes both normal and abnormal images, to improve the performance of anomaly detection.

3.2. Dual-distribution Modeling

As shown in Fig. 3, we use two modules, the NDM and UDM, in Stage 1 to model the dual-distribution. The training process is illustrated in Fig. 4. Each module is an ensemble of $K$ reconstruction networks with the same architecture but different random initialization of parameters and random shuffling of training samples, and is trained with the mean squared error (MSE) loss to minimize reconstruction errors on the training set.

Fig. 4. Training of the NDM (an ensemble of K AEs on the normal dataset) and the UDM (an ensemble of K AEs on the normal and unlabeled datasets).
Specifically, the NDM is trained on only the normal dataset $D_n$ as

$$\mathcal{L}_{NDM} = \frac{1}{N} \sum_{x_A \in D_n} \sum_{i=1}^{K} \left\| x_A - \hat{x}_{Ai} \right\|^2, \quad (1)$$

where $N$ is the size of the normal dataset $D_n$, $x_A$ is the input training image of the NDM, and $\hat{x}_{Ai}$ is the reconstruction of $x_A$ from the $i$-th network in the NDM. Similarly, the loss function of the UDM, trained on both the normal dataset $D_n$ and the unlabeled dataset $D_u$, can be written as

$$\mathcal{L}_{UDM} = \frac{1}{N+M} \sum_{x_B \in D_n \cup D_u} \sum_{i=1}^{K} \left\| x_B - \hat{x}_{Bi} \right\|^2. \quad (2)$$

In this way, the NDM models the distribution of known normal images, while the UDM captures effective information of abnormalities from the unlabeled dataset as a complement to the normal images.

3.3. Dual-distribution Discrepancy-based Anomaly Scores

Given a testing image, the pixel-wise reconstruction error has been widely used as the anomaly score (AS). In this work, we design two innovative and effective ASs based on the proposed ensemble modules. Previous ensemble-based methods train the ensemble networks via supervised tasks like classification or regression, then utilize their output variance to identify OOD samples (Lakshminarayanan et al., 2017; Bergmann et al., 2020). In our DDAD, reconstruction networks are regarded as regressors that regress the gray value at each pixel. Therefore, based on Deep Ensemble (Lakshminarayanan et al., 2017), the standard deviation of the reconstructions can be used to estimate the samples' uncertainty. Specifically, as the networks in the NDM are trained on only normal images, they will express a high difference on their OOD samples, i.e., abnormal regions. We propose to use this intra-discrepancy inside the NDM as an AS:

$$\mathcal{A}^p_{intra} = \sqrt{\frac{1}{K} \sum_{i=1}^{K} \left( \hat{\mu}^p_A - \hat{x}^p_{Ai} \right)^2}, \quad (3)$$

where $p$ is the index of pixels and $\hat{\mu}_A = \frac{1}{K} \sum_{i=1}^{K} \hat{x}_{Ai}$ is the average map of the reconstructions from the NDM. Meanwhile, as the UDM captures some anomalous features from unlabeled images that the NDM never sees, a high discrepancy between their outputs will also appear in these abnormal regions.
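The ensemble losses of Eqs. (1)-(2) and the intra-discrepancy of Eq. (3) are straightforward to compute once the K reconstructions are available. A minimal NumPy sketch (toy arrays stand in for network outputs; the function names are illustrative, not from the released code):

```python
import numpy as np

def ensemble_mse_loss(x, recons):
    """Eq. (1)/(2): reconstruction loss of one ensemble module.

    x:      (B, H, W) batch of input images
    recons: (K, B, H, W) reconstructions from the K ensemble networks
    Returns the squared error summed over pixels and networks,
    averaged over the batch.
    """
    sq_err = (recons - x[None]) ** 2        # (K, B, H, W)
    per_image = sq_err.sum(axis=(0, 2, 3))  # sum over K and pixels -> (B,)
    return per_image.mean()                 # average over the batch

def intra_discrepancy(recons):
    """Eq. (3): pixel-wise standard deviation of the K reconstructions."""
    mu = recons.mean(axis=0)                # (B, H, W) average map
    return np.sqrt(((recons - mu[None]) ** 2).mean(axis=0))
```

For identical reconstructions the intra-discrepancy is zero everywhere; it grows where the ensemble members disagree, i.e., on out-of-distribution (abnormal) regions.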
We further propose to use this inter-discrepancy between the two modules as another AS:

$$\mathcal{A}^p_{inter} = \left| \hat{\mu}^p_A - \hat{\mu}^p_B \right|, \quad (4)$$

where $\hat{\mu}_B = \frac{1}{K} \sum_{i=1}^{K} \hat{x}_{Bi}$ is the average map of the reconstructions from the UDM. As shown in Fig. 3, our discrepancy maps can indicate potential abnormal regions based on the pixel-wise AS. The image-level AS is obtained by averaging the pixel-level scores in each image. Compared with the reconstruction error A_rec, our ASs consider the discrepancy between different distributions, leading to stronger discriminative capability. To understand why A_inter works, we can consider three situations: (1) When the testing input is a normal image, the NDM and UDM will have consistent reconstructions, as both are well trained to reconstruct it, resulting in a small inter-discrepancy. (2) When the testing input is an abnormal image containing a disease that appears in the unlabeled dataset, the UDM will tend to produce a different reconstruction from the NDM, as the UDM has been trained to reconstruct this type of anomaly, which the NDM never sees, leading to a high inter-discrepancy. (3) When the testing input is an abnormal image containing only diseases that never appear in the unlabeled dataset, it can be considered an OOD sample for both the NDM and UDM; therefore, A_inter performs similarly to A_intra in this case. Intuitively, seen diseases (situation (2)) can be distinguished better than unseen diseases (situation (3)), as the UDM has captured their information. Based on this hypothesis, a higher AR in the unlabeled data will increase the number of seen abnormal samples and lead to a more competitive A_inter. Therefore, our method is able to improve the performance on seen anomalies while causing no harm on unseen anomalies, i.e., there is no performance degradation caused by class distribution mismatch (Oliver et al., 2018). Experiments in Section 4.4 validate this hypothesis.
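Eq. (4) and the image-level score reduce to a few array operations. A NumPy sketch (array shape (K, B, H, W) for each module's ensemble reconstructions; names are illustrative):

```python
import numpy as np

def inter_discrepancy(recons_ndm, recons_udm):
    """Eq. (4): pixel-wise |mu_A - mu_B| between the NDM and UDM.

    recons_ndm, recons_udm: (K, B, H, W) reconstructions from each module.
    Returns a (B, H, W) pixel-level anomaly score map.
    """
    mu_a = recons_ndm.mean(axis=0)  # average map of the NDM
    mu_b = recons_udm.mean(axis=0)  # average map of the UDM
    return np.abs(mu_a - mu_b)

def image_level_score(pixel_scores):
    """Image-level AS: average of the pixel-level scores of each image."""
    return pixel_scores.mean(axis=(1, 2))
```

When the two modules agree (e.g., on a normal image, situation (1) above), the map is near zero; where the UDM reconstructs an anomaly the NDM never saw (situation (2)), the map lights up.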
In addition, the proposed method achieves a consistent improvement over the reconstruction baseline even if the AR is 0, while even a low AR can lead to a significant boost in performance. Our discrepancies are also all computed among reconstructions, rather than between the input and the reconstruction as with A_rec. This can reduce the false positive detections caused by the reconstruction ambiguity of the AE around high-frequency regions (Baur et al., 2021; Mao et al., 2020).

3.4. Uncertainty-refined Anomaly Scores

Due to the reconstruction ambiguity of the AE, high reconstruction errors often appear in high-frequency regions, e.g., around the boundaries of normal regions, leading to false positives. To address this problem, AE-U (Mao et al., 2020) was proposed to refine A_rec using the estimated pixel-wise uncertainty. It generates the reconstruction $\hat{x}_i$ and the corresponding uncertainty $\sigma^2(x_i)$ for each input $x_i$, and is trained by

$$\mathcal{L} = \frac{1}{NP} \sum_{i=1}^{N} \sum_{p=1}^{P} \left\{ \frac{\left( x^p_i - \hat{x}^p_i \right)^2}{\sigma^2_p(x_i)} + \log \sigma^2_p(x_i) \right\}. \quad (5)$$

Training on normal images, the numerator of the first term is an MSE loss to minimize the reconstruction error, while the $\sigma^2_p(x_i)$ in the denominator is learned automatically to be large at pixels with high reconstruction errors in order to minimize the first term. Additionally, the second term drives the predicted uncertainty to be small in other regions. The two loss terms together ensure that the predicted uncertainty is larger only at normal regions with high reconstruction errors. Thus, it can be used to refine the AS at the pixel level. In this work, we design a strategy similar to that of AE-U while adapting it well to DDAD.
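The AE-U objective of Eq. (5) can be sketched as follows; a simplified NumPy version in which, as is common in practice, the network is assumed to predict the log-variance for numerical stability (our choice here, not necessarily that of the original implementation):

```python
import numpy as np

def aeu_loss(x, recon, log_var):
    """Eq. (5): uncertainty-weighted reconstruction loss of AE-U (sketch).

    x, recon, log_var: (N, P) flattened images, reconstructions and
    predicted log-variances. The first term down-weights pixels with
    large predicted variance; the second term penalizes large variance
    everywhere else.
    """
    return np.mean((x - recon) ** 2 / np.exp(log_var) + log_var)
```

Setting the per-pixel derivative with respect to the variance to zero gives the optimum $\sigma^2_p = (x^p - \hat{x}^p)^2$, i.e., the learned uncertainty tracks the typical reconstruction error at each pixel of normal images, exactly the behavior the text describes.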
We use AE-U as the backbone of DDAD, and utilize the uncertainty predicted by our NDM, which is trained on only the normal dataset, to refine our intra- and inter-discrepancy at the $p$-th pixel as follows:

$$\mathcal{A}^p_{intra} = \frac{1}{\sigma^p} \sqrt{\frac{1}{K} \sum_{i=1}^{K} \left( \hat{\mu}^p_A - \hat{x}^p_{Ai} \right)^2}, \quad (6)$$

$$\mathcal{A}^p_{inter} = \frac{\left| \hat{\mu}^p_A - \hat{\mu}^p_B \right|}{\sigma^p}, \quad (7)$$

where $\sigma^p$ is the average uncertainty predicted by the AE-Us in the NDM.

3.5. Self-supervised Learning-based Anomaly Score Refinement Net

As shown in Fig. 3, the proposed A_intra and A_inter can overall express high values on abnormal regions, but some false positives and false negatives still appear. Based on these observations, we hypothesize that score maps can provide not only score values, but also spatial information to assist in the recognition of true positives. For example, false positives can be found around boundaries or noisy pixels. In this case, the discrepancy map in these regions shows patterns such as thin bright lines or small bright points, which are different from the patterns in real abnormal regions. Similarly, although the discrepancy value is low on false negatives, it can have spatial patterns that differ from those of real normal regions. Therefore, we argue that the false positive and false negative patterns in the score map can be recognized, based on which the score map can be further refined by eliminating false positives and recovering false negatives. To validate this hypothesis, we design an ASR-Net, denoted as F(·), to capture the spatial information in the raw discrepancy maps and refine them accordingly. Specifically, the network can be formulated as

$$R_{dual} = F([\mathcal{A}_{intra}, \mathcal{A}_{inter}]), \quad (8)$$

where the network F(·) takes the original dual-distribution discrepancy maps, A_intra and A_inter, as inputs, and then predicts the final accurate AS map R_dual accordingly.

Fig. 5. Synthesis of abnormal images for training the ASR-Net: x and x_f denote two normal images, x_s denotes the synthetic abnormal image and y_s is the corresponding binary pseudo label.
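The uncertainty refinement of Eqs. (6)-(7) is a pixel-wise division of the raw scores by the predicted uncertainty. A NumPy sketch (illustrative names; in DDAD, sigma would come from the trained AE-Us in the NDM):

```python
import numpy as np

def refined_scores(recons_ndm, recons_udm, sigma):
    """Eqs. (6)-(7): intra-/inter-discrepancy divided by the uncertainty.

    recons_ndm, recons_udm: (K, B, H, W) ensemble reconstructions
    sigma: (B, H, W) average predicted uncertainty from the NDM
    Returns the refined (A_intra, A_inter) pixel-level score maps.
    """
    mu_a = recons_ndm.mean(axis=0)
    mu_b = recons_udm.mean(axis=0)
    a_intra = np.sqrt(((recons_ndm - mu_a[None]) ** 2).mean(axis=0)) / sigma
    a_inter = np.abs(mu_a - mu_b) / sigma
    return a_intra, a_inter
```

Dividing by sigma suppresses the scores in normal high-frequency regions, where the AE-U predicts large uncertainty, while leaving genuinely abnormal regions high.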
To obtain an effective F(·), we design a self-supervised task, where pseudo abnormal images with corresponding pixel-level binary labels are synthesized to train F(·). Specifically, we employ a simple approach for the synthesis of abnormal images, with reference to FPI (Tan et al., 2020). As shown in Fig. 5, for each normal image $x$, we assign a random patch $h$ and fuse $x$ with another normal image $x_f$ inside the region $h$ with the interpolation coefficient $\alpha$, deriving the synthetic abnormal image $x_s$. The operation is formulated as

$$x^p_s = (1 - \alpha) x^p + \alpha x^p_f, \quad \forall p \in h, \quad (9)$$

where $p$ is the index of pixels and the interpolation coefficient $\alpha \sim U(0, 1)$. The random patch $h$ is restricted by

$$h_c \sim U(0.1d, 0.9d), \quad h_s \sim U(0.1d, 0.4d), \quad (10)$$

where $d$ is the image width, $h_c$ is the patch center coordinate and $h_s$ is the patch size. After obtaining the synthetic abnormal image $x_s$, we feed it through our well-trained NDM and UDM (i.e., Stage 1 in Fig. 3), and compute its A_intra and A_inter. With the supervision of the corresponding pseudo label $y_s$, F(·) is then trained with the Focal Loss (Lin et al., 2017) as

$$\mathcal{L}_R = FL(F([\mathcal{A}_{intra}, \mathcal{A}_{inter}]), y_s), \quad (11)$$

where FL(·) is the Focal Loss function. For each pixel with prediction probability $p_t$ for the ground-truth class, the focal loss is computed as

$$\mathcal{L}_{focal}(p_t) = -(1 - p_t)^{\gamma} \log(p_t), \quad (12)$$

where $\gamma$ is the tunable focusing parameter. In this way, the ASR-Net F(·) can automatically learn to predict the final accurate abnormal regions based on the patterns in the original score maps, as shown in Stage 2 of Fig. 3. Different from previous self-supervised anomaly detection methods, the ASR-Net learns the mapping function from the raw score maps to the final accurate abnormal regions, rather than learning to detect the synthetic abnormal patterns, achieving better generalization and less sensitivity to the quality of the synthetic images.
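The synthesis of Eqs. (9)-(10) and the focal loss of Eq. (12) can be sketched in NumPy as below. This is a simplified single-channel version; the variable names follow the text, but the clipping of the patch at the image border is our own choice:

```python
import numpy as np

def synthesize(x, x_f, rng):
    """Eqs. (9)-(10): blend a random patch of x_f into x (FPI-style).

    x, x_f: (d, d) normal images. Returns the synthetic image x_s,
    the binary pseudo label y_s and the sampled interpolation alpha.
    """
    d = x.shape[0]
    cy, cx = rng.uniform(0.1 * d, 0.9 * d, size=2).astype(int)  # center h_c
    hs = int(rng.uniform(0.1 * d, 0.4 * d))                     # size h_s
    alpha = rng.uniform(0.0, 1.0)                               # interpolation
    r0, r1 = max(cy - hs // 2, 0), min(cy + hs // 2 + 1, d)     # clip rows
    c0, c1 = max(cx - hs // 2, 0), min(cx + hs // 2 + 1, d)     # clip cols
    x_s, y_s = x.copy(), np.zeros_like(x)
    x_s[r0:r1, c0:c1] = (1 - alpha) * x[r0:r1, c0:c1] + alpha * x_f[r0:r1, c0:c1]
    y_s[r0:r1, c0:c1] = 1.0
    return x_s, y_s, alpha

def focal_loss(p_t, gamma=2.0):
    """Eq. (12): focal loss for the probability of the ground-truth class."""
    return -(1.0 - p_t) ** gamma * np.log(p_t)
```

The `(1 - p_t)**gamma` factor down-weights well-classified pixels, focusing the training of F(·) on the hard pixels of the synthetic score maps.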
In addition, for the case where no unlabeled images are acquired, we also explore the performance of using only A_intra, under the same setting as the OCC problem. The score map predicted by F(·) from only A_intra is denoted as R_intra:

$$R_{intra} = F(\mathcal{A}_{intra}). \quad (13)$$

4. Experiments

4.1. Datasets

We conduct extensive experiments on three CXR datasets, one brain MRI dataset, and one retinal fundus image dataset: 1) the RSNA Pneumonia Detection Challenge dataset, 2) the VinBigData Chest X-ray Abnormalities Detection dataset (VinDr-CXR) (Nguyen et al., 2022), 3) the Chest X-ray Anomaly Detection (CXAD) dataset, 4) the Brain Tumor MRI dataset, and 5) the Large-scale Attention-based Glaucoma (LAG) dataset (Li et al., 2019).

RSNA dataset: The dataset contains 8851 normal and 6012 lung opacity CXRs. In experiments, we use 3851 normal images as the normal dataset D_n, 4000 images with different ARs as the unlabeled dataset D_u, and 1000 normal and 1000 lung opacity images as the test dataset D_t.

VinDr-CXR dataset: The dataset contains 10606 normal and 4394 abnormal CXRs that include 14 categories of anomalies in total. In experiments, we use 4000 normal images as D_n, 4000 images as D_u, and 1000 normal and 1000 abnormal images as D_t.

CXAD dataset: The dataset was collected by us for this study, and contains 3299 normal and 1701 abnormal CXRs that include 18 categories of anomalies in total. In experiments, we use 2000 normal images as D_n, 2000 images as D_u, and 499 normal and 501 abnormal images as D_t.

Brain Tumor MRI dataset: The dataset contains 2000 MRI slices with no tumors, 1621 with glioma, and 1645 with meningioma. Glioma and meningioma are regarded as anomalies. In experiments, we use 1000 normal images (with no tumors) as D_n, 1000 images as D_u, and 600 normal and 600 abnormal images (300 with glioma and 300 with meningioma) as D_t.

LAG dataset: The dataset contains 3143 normal retinal fundus images and 1711 abnormal retinal fundus images with glaucoma.
In experiments, we use 1500 normal images as D_n, 1500 images as D_u, and 811 normal and 811 abnormal images as D_t. A summary of the dataset repartitions is shown in Table 1. For the OCC setting, only D_n is used during training. For our proposed training mode, both D_n and D_u are utilized. Except for our CXAD, the reorganized benchmarks and corresponding repartition files have been released for reproducibility. As publicly available benchmarks for anomaly detection in medical images are rare, our released benchmarks will contribute significantly to a fair comparison of studies.

4.2. Implementation Details

The AE in our experiments contains an encoder and a decoder. The encoder contains four convolutional layers with kernel size 4 and stride 2, whose channel sizes are 16-32-64-64. The decoder contains four deconvolutional layers with the same kernel size and stride as the encoder, and channel sizes of 64-32-16-1. The encoder and decoder are connected by three fully connected layers. All layers except the output layer are followed by batch normalization (BN) and ReLU. For a fair comparison, MemAE (Gong et al., 2019) and AE-U (Mao et al., 2020) are modified in our experiments based on this AE. All input images are resized to 64 × 64, K is set to 3, and all reconstruction models are trained for 250 epochs using the Adam optimizer with a learning rate of 5e-4. The proposed ASR-Net consists of three cascaded convolutional layers, connected by BN and ReLU. It is trained for 100 epochs with a learning rate of 1e-4 and a weight decay of 1e-4 to ensure convergence. All experiments are implemented using PyTorch. The performance is assessed with the area under the ROC curve (AUC) and average precision (AP).

Table 2. Comparison with SOTA methods. For methods that do not use unlabeled images, the two best results are marked in bold and underlined. For methods that use unlabeled images, the best results are marked in underlined bold.
Note that "IN-Pretr." refers to "ImageNet-pretrained", "Scrat." refers to "trained-from-scratch", "e2e" refers to end-to-end, and "*" refers to incorporating unlabeled data to synthesize anomalies in self-supervised methods.

4.3. Comparison with State-of-the-art Methods

In Table 2, we compare our proposed method with a wide range of state-of-the-art (SOTA) methods, including MemAE (Gong et al., 2019), Ganomaly (Akcay et al., 2018), DRAEM (Zavrtanik et al., 2021), CutPaste (including ImageNet-pretrained and trained-from-scratch versions) (Li et al., 2021), CutPaste (e2e) (Schlüter et al., 2022), FPI (Tan et al., 2020), PII (Tan et al., 2021), NSA (Schlüter et al., 2022), f-AnoGAN (Schlegl et al., 2019), IGD, and AE-U (Mao et al., 2020). Note that the official code of CutPaste (Li et al., 2021) has not been released; thus, we use a public implementation from https://github.com/Runinho/pytorch-cutpaste. For a fair comparison among standard self-supervised methods, we use the unified implementation provided by NSA (Schlüter et al., 2022) for CutPaste (e2e), FPI, and PII. All other methods used in the experiments are implemented using their official code.

Performance under the OCC setting

We compare our DDAD-R_intra with the others under the same OCC setting for fairness; i.e., only the normal dataset D_n is used during training, without the use of unlabeled images. Under the OCC setting, the two best results are marked in bold and underlined in Table 2. The results show that our DDAD, built on AE-U and using R_intra as the AS, achieves SOTA results on almost all five benchmarks, which comprise three different medical image modalities (CXR, brain MRI and retinal fundus images), demonstrating the effectiveness and generality of our proposed method. Our method also outperforms other SOTA self-supervised methods (e.g., NSA (Schlüter et al., 2022)). However, FPI (Tan et al., 2020), which uses the same synthesis approach as ours, performs poorly on the five datasets.
The reason is that FPI (Tan et al., 2020) and other similar self-supervised methods overfit to the synthetic anomalies. In contrast, our ASR-Net never sees the synthetic anomalies; instead, it takes the anomaly score maps as input to learn the refinement, avoiding the overfitting problem. Specifically, standard self-supervised methods achieve satisfactory performance on the Brain Tumor MRI dataset, where the anomalies (tumors) present a notable intensity discrepancy from the normal regions, similar to the synthetic abnormal patterns. However, the predominant manifestation of abnormal (glaucoma) images in the LAG dataset (Li et al., 2019) is alterations in the optic disk appearance and vasculature, which differ significantly from the synthetic abnormal patterns. As a result, standard self-supervised methods fail to detect these anomalies, whereas in our proposed method the anomaly cues are effectively captured by DDAD and refined by the ASR-Net, resulting in accurately predicted abnormal regions. Another surprising observation is that MemAE (Gong et al., 2019) often performs worse than the plain AE. The reason could be that the difference between normal and abnormal medical images is significantly smaller than that between the natural images used in the original MemAE paper (Gong et al., 2019). In the medical domain, abnormal images contain only subtle lesions that differentiate them from normal images, so their features can easily be approximated by combining normal features, as the images are similar overall.

Performance under the OC-SSL setting

We evaluate the proposed method in the situation where the unlabeled image dataset D_u is utilized, i.e., using R_dual as the AS. Referencing the ARs of several public medical image datasets (e.g., 71% in RSNA, 46% in ChestX-ray8 (Wang et al., 2017) and 62% in Brain Tumor MRI), we generally assume an AR of 60% for D_u in the experiments. For a fair comparison, we also incorporate the unlabeled dataset into the other self-supervised methods to synthesize more diverse anomalies during training. Under this setting, the best results are marked in underlined bold in Table 2. While our DDAD (AE-U) using R_intra achieves SOTA results, our R_dual further improves the performance with the help of unlabeled images, outperforming the previous methods by a larger margin. For the other self-supervised methods, including CutPaste (e2e) (Schlüter et al., 2022), FPI (Tan et al., 2020), PII (Tan et al., 2021) and NSA (Schlüter et al., 2022), some performance improvement is obtained from the unlabeled data, but it is overall limited. These results indicate that our proposed method captures useful information from unlabeled images for anomaly detection more effectively.

Ablation Study

DDAD with different ARs

In clinical practice, the AR of the unlabeled dataset D_u is unknown. To simulate various real scenarios, we evaluate the proposed DDAD on the RSNA dataset with the AR of D_u varying from 0 to 100%. We use the reconstruction method as the baseline for comparison. For fairness, all of these methods use AE as the backbone. The results of the proposed DDAD method using R_dual, R_intra, A_inter and A_intra, together with those of the reconstruction baseline, are shown in Fig. 6. They clearly demonstrate the effectiveness of our proposed anomaly scores and ASR-Net. Firstly, DDAD using the original A_intra and A_inter achieves consistent and significant improvement over the reconstruction baseline, suggesting that the two proposed ASs are more discriminative than the previous reconstruction error. Moreover, our A_inter is better than A_intra, and it performs better as the AR of D_u increases, consistent with our hypothesis in Section 3.3 that a higher AR of D_u results in a more competitive A_inter.

Fig. 6. Performance of DDAD and the reconstruction baseline on the RSNA dataset with a varying AR of D_u, using AE as the backbone.
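The two discrepancy scores compared in this ablation can be sketched as follows. This is one plausible NumPy reading of the pixel-wise intra-/inter-discrepancy over two ensembles of K reconstructions (from the NDM and UDM, respectively); the exact weighting in the official implementation may differ:

```python
import numpy as np

def discrepancy_scores(ndm_recs, udm_recs):
    """Pixel-wise anomaly scores from two reconstruction ensembles.
    ndm_recs, udm_recs: arrays of shape (K, H, W). A sketch, not the
    official formulation."""
    mu_ndm = ndm_recs.mean(axis=0)
    mu_udm = udm_recs.mean(axis=0)
    # Intra-discrepancy: disagreement among NDM members only, so it is
    # independent of the unlabeled set (and hence of the AR).
    a_intra = np.abs(ndm_recs - mu_ndm).mean(axis=0)
    # Inter-discrepancy: disagreement between the normal-only (NDM) and
    # unlabeled (UDM) distribution modules.
    a_inter = np.abs(mu_ndm - mu_udm)
    return a_intra, a_inter
```

On abnormal inputs the UDM, having seen unlabeled anomalies, reconstructs differently from the NDM, so `a_inter` grows with the AR of D_u, while `a_intra` does not depend on it at all.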
Because A_intra is computed inside the NDM, it is independent of the AR. It is worth noting that even in the extreme situation where the AR is 0, our DDAD-A_inter still achieves better performance than the baseline. That is to say, we can apply DDAD in any situation and obtain an improvement, regardless of the AR. Intuitively, when the AR is 0, the dataset D_n ∪ D_u contains only normal images, and thus the UDM degenerates to the same model as the NDM. However, in this situation the UDM is trained on a larger normal dataset than the baseline, which leads to more robust models and explains the consistent improvement. Meanwhile, even if the AR is low (e.g., 20%), DDAD achieves a significant improvement (7.9% AUC higher than when the AR is 0). This means the proposed DDAD can improve performance considerably in clinical practice, where there are always some abnormal cases. Secondly, refined by the proposed ASR-Net, our R_dual and R_intra show a further significant gain over the original A_inter and A_intra. Specifically, when using only normal images, our ASR-Net F(·) refines A_intra to derive R_intra, which improves the AUC of A_intra by a large margin of 16.9% (from 69.4% to 86.3%). Incorporating the unlabeled images, we can derive A_inter as a complement to A_intra. The two ASs are refined and fused by F(·) to derive R_dual, which achieves an AUC of 87.0%-89.6% with the AR of D_u varying from 0 to 100%, outperforming all the aforementioned methods. More importantly, while our R_dual utilizes unlabeled images and achieves advanced performance, it is insensitive to the AR of D_u. Even if the AR is 0, it achieves an AUC of 87.0%, outperforming A_inter in all situations. Therefore, we conclude that with the help of the ASR-Net, DDAD is more robust and can handle the various complex situations of clinical practice well.

DDAD with different backbones

Our proposed DDAD method can use any variant of AE as the backbone.
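As an example of such a backbone, AE-U augments the plain reconstruction error with a predicted pixel-wise uncertainty map. A sketch of the commonly used uncertainty-normalized score and NLL-style training objective (the exact forms may differ from the official AE-U implementation of Mao et al. (2020)):

```python
import numpy as np

def aeu_pixel_score(x, x_hat, log_var):
    """Uncertainty-normalized anomaly score: squared reconstruction error
    divided by the pixel-wise variance predicted by the network.
    A sketch; details are illustrative assumptions."""
    var = np.exp(log_var)
    return (x - x_hat) ** 2 / var

def aeu_training_loss(x, x_hat, log_var):
    """NLL-style objective: errors are down-weighted where the predicted
    variance is high, with a log-variance penalty to prevent the network
    from inflating variance everywhere."""
    var = np.exp(log_var)
    return float(np.mean((x - x_hat) ** 2 / var + log_var))
```

Normalizing by the learned variance suppresses scores in regions that are intrinsically hard to reconstruct (e.g., complex textures), which is one reason AE-U is the strongest backbone in our comparison.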
To further prove its superiority, DDAD built on different backbones is compared with the corresponding reconstruction baselines (Rec.) in Table 3. The two best results for each backbone are marked in underlined bold and bold. Consistent with Section 4.3, we also assume an AR of 60% for D_u in the experiments. The results show that DDAD based on AE, MemAE (Gong et al., 2019) and AE-U (Mao et al., 2020) outperforms the corresponding baselines on all five datasets by a large margin. Specifically, our original A_intra and A_inter, as well as the refined R_intra and R_dual, all perform competitively on the three CXR datasets (RSNA, VinDr-CXR and CXAD). In terms of AUC, DDAD-A_intra improves on the baselines AE, MemAE and AE-U by 2.5%, 4.9% and 0.6% on the RSNA dataset, 4.2%, 3.7% and 0.5% on the VinDr-CXR dataset, and 4.2%, 3.4% and 2.8% on the CXAD dataset. DDAD-A_inter improves on the same baselines by 14.6%, 10.8% and 4.3% on the RSNA dataset, 15.1%, 13.2% and 12.1% on the VinDr-CXR dataset, and 6.5%, 3.9% and 5.0% on the CXAD dataset. With the help of our ASR-Net, DDAD-R_intra improves on the baselines AE, MemAE and AE-U by 19.4%, 19.2% and 1.6% on the RSNA dataset, 21.3%, 18.1% and 4.4% on the VinDr-CXR dataset, and 8.2%, 6.4% and 3.0% on the CXAD dataset, while for DDAD-R_dual the improvement is 22.4%, 20.5% and 4.6% on the RSNA dataset, 21.5%, 19.5% and 12.1% on the VinDr-CXR dataset, and 9.4%, 7.5% and 4.6% on the CXAD dataset. On the Brain Tumor MRI and LAG datasets, the proposed original A_intra performs worse than the corresponding reconstruction baseline. However, with the aid of our ASR-Net, R_intra significantly improves the performance of A_intra and outperforms the corresponding baseline by a large margin. The reason could be that, although the original A_intra contains noise and works unsatisfactorily, it does encode useful information for anomaly detection, which is successfully extracted by our ASR-Net to derive R_intra.
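All comparisons above are image-level AUCs. For reference, AUC equals the Mann-Whitney statistic and can be computed directly from the two score sets without any library (a minimal sketch):

```python
import numpy as np

def auroc(scores_normal, scores_abnormal):
    """Image-level AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen abnormal image scores higher than a randomly
    chosen normal one (ties count half)."""
    s_n = np.asarray(scores_normal, dtype=float)
    s_a = np.asarray(scores_abnormal, dtype=float)
    greater = (s_a[:, None] > s_n[None, :]).sum()
    ties = (s_a[:, None] == s_n[None, :]).sum()
    return (greater + 0.5 * ties) / (len(s_a) * len(s_n))
```

An AUC of 0.5 corresponds to a score with no discriminative power, which is why gains of a few percent near 90% AUC are substantial.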
Finally, consistent with the results on the three CXR datasets, our refined R_intra and R_dual outperform the original A_intra and A_inter on the Brain Tumor MRI and LAG datasets, while showing their superiority over the reconstruction baselines. We also test an ensemble of K reconstruction models using A_rec, shown as "Rec. (ensemble)" in Table 3, demonstrating that a simple ensemble brings no significant improvement. The reason why some ensembles result in slightly worse performance could be that the average reconstruction of the ensemble may generalize better than a single network in some abnormal regions, causing the reconstruction errors there to be indistinguishable from those of normal regions.

Performance on seen and unseen pathologies

In clinical practice, the recognition of rare diseases is an important but very intractable task, where even unlabeled samples containing certain rare diseases are infeasible to acquire. It is therefore meaningful to explore our method's performance when the unlabeled dataset D_u contains multiple diseases while the testing set contains different, unseen types of diseases. To simulate this situation and evaluate our method on seen and unseen pathologies, we utilize the VinDr-CXR dataset, which contains various types of pathologies, as shown in Fig. 7. We define a set of pathologies, P_A = {aortic enlargement, cardiomegaly, lung opacity, pleural thickening, pulmonary fibrosis}, containing the five most common pathologies in the dataset, as the seen pathologies used to build the unlabeled dataset D_u for training. For the unseen pathologies, we use the set of remaining, less frequent pathologies, P_B = {atelectasis, calcification, consolidation, ILD, infiltration, nodule/mass, pleural effusion, pneumothorax, other lesion}. We incorporate 1588 abnormal images containing a subset of diseases in P_A and 2412 normal images as D_u.
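Assembling an unlabeled set with a controlled composition, as done here and in the AR ablation, can be sketched as follows (names are illustrative; the resulting labels are discarded once D_u is built, since D_u is treated as unlabeled):

```python
import random

def build_unlabeled_set(normal_pool, abnormal_pool, size, anomaly_rate, seed=0):
    """Simulate an unlabeled set D_u with a given anomaly rate (AR) by
    sampling from pools of normal and abnormal image identifiers.
    A sketch of the simulation protocol, not the official data code."""
    rng = random.Random(seed)
    n_abnormal = round(size * anomaly_rate)
    n_normal = size - n_abnormal
    d_u = rng.sample(abnormal_pool, n_abnormal) + rng.sample(normal_pool, n_normal)
    rng.shuffle(d_u)  # mix; from here on the origin of each image is unknown
    return d_u
```

For the seen/unseen experiment, `abnormal_pool` would contain only images whose pathologies lie in P_A, so that P_B remains entirely unseen during training.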
For testing, we utilize 100 normal images, along with either 100 abnormal images containing a subset of diseases in P_A to evaluate the improvement on seen pathologies (Setting A), or 101 abnormal images containing a subset of diseases in P_B to evaluate the improvement on unseen pathologies (Setting B). As the control group, A_inter trained on an unlabeled dataset D_u containing only normal images is also evaluated. The results are shown in Table 4. They indicate that when abnormal images are incorporated into the unlabeled set D_u, DDAD-A_inter gains 10.2% AUC and 10.8% AP on the seen pathologies (Setting A), while an improvement of 4.0% AUC and 4.7% AP is also achieved even on the unseen pathologies (Setting B). This reveals the tremendous potential of DDAD for improving the recognition of rare diseases, even if samples containing such diseases are unavailable in the unlabeled dataset.

Table 4. Performance of DDAD on seen and unseen pathologies. Setting A indicates that the testing set contains only pathologies in P_A, which could appear in D_u. Setting B indicates that the testing set contains only pathologies in P_B, which are unseen in D_u.

Qualitative Analysis

To further illustrate the superiority of the proposed method, we conduct a qualitative analysis on the RSNA dataset using AS histograms and heat maps.

AS histograms

To show the discriminative capability of different methods, we visualize the histograms of their ASs for normal and abnormal images in Fig. 8, using AE as the backbone. The overlap of the normal and abnormal histograms indicates samples with the same AS but different categories, which are therefore indistinguishable. The χ²-distance shown in the figure measures the difference between the histograms of normal and abnormal images.
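Concretely, this χ²-distance between the two normalized AS histograms can be computed as follows (the bin count and the [0, 1] score range are illustrative choices):

```python
import numpy as np

def chi2_histogram_distance(scores_a, scores_b, bins=50, eps=1e-12):
    """Chi-squared distance between the normalized histograms of two score
    groups; 0 for identical distributions, 1 for fully disjoint ones.
    Scores are assumed to be normalized to [0, 1]."""
    h_a, edges = np.histogram(scores_a, bins=bins, range=(0.0, 1.0))
    h_b, _ = np.histogram(scores_b, bins=edges)
    p = h_a / max(h_a.sum(), 1)
    q = h_b / max(h_b.sum(), 1)
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q + eps)))
```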
Therefore, a larger difference between the ASs of normal and abnormal images results in less overlap and a larger χ²-distance, indicating stronger discriminative capability. Based on these analyses and observations, we can conclude that the proposed DDAD is superior to previous reconstruction methods and that our ASR-Net is effective. The performance of the different methods (ASs) can be ranked from better to worse as R_dual and R_intra > A_inter and A_intra > A_rec, which is consistent with our experimental results.

Heat maps

We visualize heat maps of A_rec, A_intra, A_inter and R_dual on CXRs, brain MRIs and retinal fundus images for comparison. In Fig. 9, the previous reconstruction method (second row) cannot identify subtle lesions well, while it always produces false positives around the boundaries of normal regions. The two proposed discrepancy scores (third and fourth rows), especially A_inter (fourth row), show better discriminative capability and recognize most abnormal regions. With the ASR-Net, our R_dual further removes the false positives of A_intra and A_inter in normal images, while its responses in abnormal regions are enhanced. It can thus serve as a rough localization result for radiologists to reference.

Discussion

Impact of the ensemble size K

To analyze the impact of the ensemble size K in DDAD, a series of experiments is conducted on the RSNA dataset. As shown in Table 5, the results suggest that A_intra is insensitive to K, while the performance of A_inter first increases and then gradually becomes stable as K increases. Considering that a small K is sufficient to demonstrate the effectiveness of our method, and that achieving better performance via a larger ensemble is not our main purpose, we simply choose K = 3 as a compromise between computational cost and performance.

Table 5. Impact of the ensemble size K. The performance is shown in the format AUC(%)/AP(%).
Uncertainty estimates

Besides Deep Ensemble, well-known methods for uncertainty estimation include Monte Carlo (MC) Dropout (Gal and Ghahramani, 2016), which is also simple and widely used. MC Dropout has a lower computational cost than Deep Ensemble, but the standard training and testing process must be modified by randomly deactivating some neurons. Deep Ensemble, in contrast, has better scalability and better performance, requiring few or no modifications to the standard learning process of the network (Lakshminarayanan et al., 2017). The performance of DDAD using Deep Ensemble or MC Dropout is shown in Table 6. The results indicate that Deep Ensemble consistently outperforms MC Dropout on both AUC and AP. More importantly, benefiting from the good scalability of Deep Ensemble, the powerful AE-U can easily be applied as our backbone. In contrast, AE-U does not work well when MC Dropout is used. The reason could be that random dropout disturbs the prediction of the automatically learned pixel-level uncertainty map in AE-U, leading to serious performance deterioration.

Self-supervised learning for anomaly detection

Self-supervised learning-based methods have become very popular for anomaly detection (Li et al., 2021; Zavrtanik et al., 2021; Tan et al., 2020, 2021; Schlüter et al., 2022), and some achieve extremely high performance in industrial or medical applications. However, in the industrial domain, most methods are evaluated only on the MVTec AD dataset (Bergmann et al., 2019), which could be insufficient: it is quite possible to synthesize defects in specific patterns that are very helpful for recognizing the anomalies of a specific test set but not useful for other anomalies. In the medical domain, due to the lack of publicly available benchmarks, previous methods are evaluated on different datasets, hindering comprehensive and fair comparison.
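For reference, the MC Dropout alternative compared above keeps dropout active at test time and aggregates many stochastic forward passes; a minimal sketch with an illustrative `forward` interface, not tied to any specific backbone:

```python
import numpy as np

def dropout_layer(h, rng, p=0.5):
    """Inverted dropout, applied at inference time for MC sampling."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

def mc_dropout_predict(forward, x, n_samples=256, seed=0):
    """MC Dropout sketch: average n_samples stochastic passes; the
    per-pixel spread of the samples serves as the uncertainty estimate.
    `forward(x, rng)` is any stochastic reconstruction function."""
    rng = np.random.default_rng(seed)
    samples = np.stack([forward(x, rng) for _ in range(n_samples)])
    mean = samples.mean(axis=0)          # reconstruction estimate
    uncertainty = samples.std(axis=0)    # epistemic uncertainty proxy
    return mean, uncertainty
```

This is why MC Dropout needs no ensemble of weights, but, as noted above, the injected randomness can interfere with backbones like AE-U that themselves predict a pixel-level uncertainty map.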
These evaluation practices hinder the reasonable analysis of self-supervised methods and restrict the development of anomaly detection. To analyze these methods better and reveal future trends, we compare various methods comprehensively and fairly on five medical datasets, as shown in Table 2. Surprisingly, our comparison reveals that, although self-supervised methods can perform well on specific datasets, they always fail on other datasets. For example, DRAEM (Zavrtanik et al., 2021) achieves an image-level AUC of 98.0% on the MVTec AD dataset, yet it performs even worse than the vanilla AE on four of the five medical datasets. NSA (Schlüter et al., 2022), the SOTA self-supervised method, also performs worse than the vanilla AE on the LAG dataset. Meanwhile, several reconstruction-based methods (e.g., AE-U (Mao et al., 2020) and f-AnoGAN (Schlegl et al., 2019)) show more competitive results than the self-supervised methods on all five datasets. The reason is that most self-supervised methods essentially try to synthesize anomalies inherently similar to the real anomalies of specific datasets. They overfit to the synthetic anomalies and cannot recognize real anomalies that are inherently different from their synthetic ones. Although NSA (Schlüter et al., 2022) is designed with strategies to synthesize more natural and relevant anomalies and outperforms other self-supervised methods, it does not solve this problem and still performs poorly on the LAG dataset. In contrast, reconstruction-based methods recognize deviations from the normal pattern as anomalies, treating different anomalies equivalently, and thus perform robustly across datasets. Therefore, in situations where the abnormal patterns are unknown, reconstruction-based methods may be a better choice than self-supervised ones.
Although standard self-supervised methods suffer from overfitting, the results in Section 4 reveal that using self-supervised learning for refinement or representation learning can achieve better performance. Tables 2 and 3 show that our ASR-Net for self-supervised refinement significantly improves the performance on the five benchmarks with all three backbones. However, FPI (Tan et al., 2020), using the same synthesis approach as ours, performs worse than our method on all five datasets. This phenomenon is highly related to what networks learn through self-supervised learning. Standard self-supervised methods directly learn to detect the synthetic abnormal patterns, and thus easily overfit. In contrast, our ASR-Net learns the mapping function from the original ASs to the final accurate abnormal regions, which is unrelated to the abnormal patterns, and thus generalizes well to anomalies in various scenarios. Moreover, CutPaste Scrat. (Li et al., 2021), which builds a Gaussian density estimator (GDE) (Rippel et al., 2021) on the learned representations, outperforms CutPaste (e2e) (Schlüter et al., 2022) by a large margin on all five datasets. This reveals that, although the synthetic abnormal patterns are not a perfect simulation of real anomalies, training the network to classify them learns representations that can distinguish normality from real abnormality. Therefore, using self-supervised representations is more promising than using a network trained via self-supervised learning to directly detect anomalies. In summary, compared with standard self-supervised methods that train the network to directly detect anomalies, designing self-supervised tasks such as refinement and representation learning that are insensitive to abnormal patterns is more generalizable, promising and competitive in complex real scenarios.

Limitations

Currently, our ASR-Net does have limitations.
In the experiments, it shows only a small improvement when the original dual-distribution discrepancy, refined by the uncertainty from AE-U, has already achieved high performance (i.e., DDAD (AE-U) in Table 3). The reason could be that our refinement strategy operates on the discrepancy maps of ensembles of reconstruction networks, so the upper bound of performance is limited by the distribution-modeling capability of these reconstruction networks. As a result, subtle abnormal regions that are reconstructed consistently by the different networks in the ensemble cannot be recognized, regardless of the subsequent refinement. In future work, we intend to explore a single network that models the distribution of the training data explicitly, to improve the distribution-modeling capability and raise the upper bound of performance. Additionally, although our approach makes successful use of unlabeled images, a number of normal images are still required for training, which can also be time-consuming to collect in practice. Recently, Zaheer et al. (2022) proposed the generative cooperative learning (GCL) approach for anomaly detection, which is trained using only unlabeled images in which normal samples are the majority. They designed a co-training strategy of an AE and a classifier to generate pseudo labels for unlabeled images. Inspired by this, we intend to explore a more effective pseudo-label generation approach, with reference to methods for learning with noisy labels (Wei et al., 2020; Jiang et al., 2018; Han et al., 2018), to develop a powerful anomaly detection framework without the requirement of any training annotations.
Future directions and challenges

Considering the current limitations, we summarize several promising emerging directions for anomaly detection: (1) unsupervised anomaly detection (Zaheer et al., 2022), which uses only unlabeled images for training; (2) open-set supervised anomaly detection (Ding et al., 2022), which uses a few labeled abnormal images together with normal images for training to detect both seen and unseen anomalies; and (3) few-shot anomaly detection (Huang et al., 2022), which uses only a limited number of normal images for training. In fact, the first step in task (1), unsupervised anomaly detection, is to generate reasonable pseudo labels for the unlabeled training images. Once pseudo normal or abnormal labels for the training data have been obtained, task (1) decomposes into tasks (2) and (3). Several challenges must be studied to explore these three emerging directions. Firstly, abnormal medical images differ only subtly from normal ones. This makes it difficult to assign accurate pseudo labels using current methods for learning with noisy labels (Wei et al., 2020), where predictions are made by vanilla classification networks based on the whole image. Another challenge is that the classes of anomalies are inexhaustible. Even if some abnormal images are labeled accurately, incorporating them into training can render models ineffective at generalizing to unseen anomaly classes. In summary, fine-grained models able to recognize subtle lesions, and a new training paradigm for utilizing limited labeled images, are in high demand for anomaly detection.

Conclusion

In this paper, we introduce one-class semi-supervised learning (OC-SSL) to utilize known normal and unlabeled data for training, and propose Dual-distribution Discrepancy for Anomaly Detection (DDAD) based on this setting.
Two new anomaly scores, intra- and inter-discrepancy, are designed based on DDAD for identifying anomalies. In addition, an Anomaly Score Refinement Net (ASR-Net), trained via self-supervised learning, is designed to refine the two anomaly scores. It provides a new perspective on using self-supervised learning to improve anomaly detection and shows better robustness and performance than previous self-supervised methods on various datasets. To facilitate the fair and comprehensive comparison of different methods, we collect and organize five medical datasets covering three modalities and release them as benchmarks for medical anomaly detection. Experiments on the five benchmarks demonstrate that the proposed DDAD with ASR-Net is effective and generalizable, achieving state-of-the-art performance on a wide variety of medical images. Evaluation on unseen diseases further demonstrates the potential of our method for the recognition of rare diseases, whose samples are unavailable in the unlabeled data. The results also reveal how to use self-supervised learning for better anomaly detection: compared with training the network to directly detect anomalies, indirect strategies such as self-supervised refinement and self-supervised representations are more promising. As this work presents the first method that utilizes readily available unlabeled images to improve the performance of anomaly detection, together with a comprehensive comparison of various methods on various datasets, we hope it will inspire researchers to explore anomaly detection in a more effective way. We also hope our released benchmarks for medical anomaly detection will facilitate the fair comparison of related works and contribute to the development of this area.

Fig. 1. Different training modes for medical anomaly detection. (a) One-class classification mode, utilizing only normal images, is the most popular, but wastes unlabeled images.
(b) Semi-supervised mode requires labeled normal and abnormal images, plus mostly-normal unlabeled images, and is thus infeasible in clinical practice. (c) The introduced OC-SSL mode utilizes normal and unlabeled images with arbitrary anomaly rates.

Fig. 2. Comparison of (a) standard self-supervised anomaly detection and (b) the proposed self-supervised anomaly score refinement. (a) trains the network to directly detect the synthetic abnormal patterns from the input image, while (b) learns to refine the original anomaly score maps into the final accurate abnormal regions.

Fig. 4. Illustration of training the NDM and UDM.

Fig. 5. Illustration of the synthesis of abnormal images.

Fig. 7. Class distribution of the VinDr-CXR dataset. Each abnormal image could contain multiple categories of diseases.

Fig. 8. Histograms of anomaly score for normal (blue) and abnormal (red) images in the test set of RSNA. The backbone is AE. Scores are normalized to [0, 1]. The χ²-distance measures the difference between the histograms of normal and abnormal images.

Fig. 9. Visualization of heat maps on medical datasets. From top to bottom: original images, heat maps of A_rec, heat maps of A_intra, heat maps of A_inter, heat maps of R_dual. The green bounding boxes indicate abnormal regions.

Table 1. Summary of dataset repartitions. Note that D_u is built using data selected from the images presented in parentheses, without the use of their annotations.

Table 3. Performance of different methods using three backbones on five datasets (AUC, %). The best two results for each backbone are marked in underlined bold and bold.

Method  AS       RSNA              VinDr-CXR         CXAD              Brain MRI         LAG
                 AE   MemAE AE-U   AE   MemAE AE-U   AE   MemAE AE-U   AE   MemAE AE-U   AE   MemAE AE-U
Rec.    A_rec    66.9 68.0  86.7   55.9 55.8  73.8   55.6 56.0  66.4   79.7 77.4  94.0   79.3 78.5  81.3
Rec. (ensemble)  A_rec    66.9 67.0  86.6   55.5 55.3  73.1   55.0 55.2  65.9   81.3 79.2  93.3   78.8 79.2  82.1
DDAD    A_intra  69.4 72.9  87.3   60.1 59.5  74.3   59.8 59.4  69.2   55.9 52.6  84.5   72.1 71.3  75.3
DDAD    A_inter  81.5 78.8  91.0   71.0 69.0  85.9   62.1 59.9  71.4   84.4 83.2  97.1   87.2 88.5  80.6
DDAD    R_intra  86.3 87.2  88.3   77.2 73.9  78.2   63.8 62.4  69.4   85.0 82.9  94.2   79.5 80.1  86.0
DDAD    R_dual   89.3 88.5  91.3   77.4 75.3  85.9   65.0 64.5  71.0   93.0 91.4  97.2   89.0 88.7  93.1

Table 5 data, in the format AUC(%)/AP(%):

Backbone  AS       K=2        K=3        K=5        K=7        K=11       K=15
AE        A_intra  69.5/69.3  69.4/68.5  69.5/68.9  69.7/69.2  69.0/68.4  69.1/68.5
AE        A_inter  79.6/79.3  81.5/81.0  84.2/83.4  84.8/83.9  85.4/84.6  86.0/85.1

Table 6. Comparison of Deep Ensemble and MC Dropout for uncertainty estimates in DDAD. Here Deep Ensemble uses an ensemble of three networks, while MC Dropout executes the forward pass 256 times with random dropout for Monte Carlo estimates.

Backbone  AS       Uncertainty Estimates  AUC (%)  AP (%)
AE        A_intra  Deep Ensemble          69.4     68.5
AE        A_intra  MC Dropout             69.5     67.8
AE        A_inter  Deep Ensemble          81.5     81.0
AE        A_inter  MC Dropout             78.5     77.0
AE-U      A_intra  Deep Ensemble          87.3     86.3
AE-U      A_intra  MC Dropout             63.1     63.4
AE-U      A_inter  Deep Ensemble          91.0     91.3
AE-U      A_inter  MC Dropout             79.8     81.2

1. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
2. https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection
3. https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset

Acknowledgments

This work was supported in part by Hong Kong Innovation and Technology Fund (No. ITS/028/21FP), National Natural Science Foundation of China (61872417, 62061160490, 62176098, 61703049) and Natural Science Foundation of Hubei Province of China (2019CFA022).

References

Akcay, S., Atapour-Abarghouei, A., Breckon, T.P., 2018. Ganomaly: Semi-supervised anomaly detection via adversarial training, in: Asian Conference on Computer Vision, Springer. pp. 622-637.
Bauman, E., Bauman, K., 2018. One-class semi-supervised learning, in: Braverman Readings in Machine Learning: Key Ideas from Inception to Current State. Springer, pp. 189-200.

Baur, C., Denner, S., Wiestler, B., Navab, N., Albarqouni, S., 2021. Autoencoders for unsupervised anomaly segmentation in brain MR images: a comparative study. Medical Image Analysis 69, 101952.

Baur, C., Wiestler, B., Albarqouni, S., Navab, N., 2018. Deep autoencoding models for unsupervised anomaly segmentation in brain MR images, in: International MICCAI Brainlesion Workshop, Springer. pp. 161-169.

Beluch, W.H., Genewein, T., Nürnberger, A., Köhler, J.M., 2018. The power of ensembles for active learning in image classification, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9368-9377.

Bergmann, P., Fauser, M., Sattlegger, D., Steger, C., 2019.
MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9592-9600.

Bergmann, P., Fauser, M., Sattlegger, D., Steger, C., 2020. Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4183-4192.

Cai, Y., Chen, H., Yang, X., Zhou, Y., Cheng, K.T., 2022. Dual-distribution discrepancy for anomaly detection in chest x-rays, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 584-593.

Çallı, E., Sogancioglu, E., van Ginneken, B., van Leeuwen, K.G., Murphy, K., 2021. Deep learning for chest x-ray analysis: A survey. Medical Image Analysis 72, 102125.

Chandola, V., Banerjee, A., Kumar, V., 2009. Anomaly detection: A survey. ACM Computing Surveys (CSUR) 41, 1-58.

Chapelle, O., Scholkopf, B., Zien, A., 2009. Semi-supervised learning (Chapelle, O. et al., eds.; 2006) [book reviews].
IEEE Transactions on Neural Networks 20, 542-542.

Chen, X., Konukoglu, E., 2018. Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders. arXiv preprint arXiv:1806.04972.

Chen, Y., Tian, Y., Pang, G., Carneiro, G., 2022. Deep one-class classification via interpolated Gaussian descriptor, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 383-392.

Ding, C., Pang, G., Shen, C., 2022. Catching both gray and black swans: Open-set supervised anomaly detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7388-7398.

Doersch, C., Gupta, A., Efros, A.A., 2015. Unsupervised visual representation learning by context prediction, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430.

Gal, Y., Ghahramani, Z., 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in: International Conference on Machine Learning, PMLR. pp. 1050-1059.
Deep anomaly detection using geometric transformations. I Golan, R El-Yaniv, Advances in neural information processing systems. 31Golan, I., El-Yaniv, R., 2018. Deep anomaly detection using geometric trans- formations. Advances in neural information processing systems 31. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. D Gong, L Liu, V Le, B Saha, M R Mansour, S Venkatesh, A V Hengel, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionGong, D., Liu, L., Le, V., Saha, B., Mansour, M.R., Venkatesh, S., Hengel, A.v.d., 2019. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1705- 1714. Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in neural information processing systems 27. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., 2014. Generative adversarial nets. Advances in neural information processing systems 27. Co-teaching: Robust training of deep neural networks with extremely noisy labels. B Han, Q Yao, X Yu, G Niu, M Xu, W Hu, I Tsang, M Sugiyama, Advances in neural information processing systems. 31Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I., Sugiyama, M., 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems 31. Registration based few-shot anomaly detection. C Huang, H Guan, A Jiang, Y Zhang, M Spratling, Y F Wang, European Conference on Computer Vision. SpringerHuang, C., Guan, H., Jiang, A., Zhang, Y., Spratling, M., Wang, Y.F., 2022. Registration based few-shot anomaly detection, in: European Conference on Computer Vision, Springer. pp. 
303-319. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. L Jiang, Z Zhou, T Leung, L J Li, L Fei-Fei, PMLRInternational conference on machine learning. Jiang, L., Zhou, Z., Leung, T., Li, L.J., Fei-Fei, L., 2018. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels, in: International conference on machine learning, PMLR. pp. 2304-2313. Self-supervised visual feature learning with deep neural networks: A survey. L Jing, Y Tian, 43Jing, L., Tian, Y., 2020. Self-supervised visual feature learning with deep neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence 43, 4037-4058. Distribution augmentation for generative modeling. H Jun, R Child, M Chen, J Schulman, A Ramesh, A Radford, I Sutskever, PMLRInternational Conference on Machine Learning. Jun, H., Child, R., Chen, M., Schulman, J., Ramesh, A., Radford, A., Sutskever, I., 2020. Distribution augmentation for generative modeling, in: Interna- tional Conference on Machine Learning, PMLR. pp. 5006-5019. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25. A Krizhevsky, I Sutskever, G E Hinton, Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information pro- cessing systems 25. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems 30. B Lakshminarayanan, A Pritzel, C Blundell, Lakshminarayanan, B., Pritzel, A., Blundell, C., 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems 30. Cutpaste: Self-supervised learning for anomaly detection and localization. C L Li, K Sohn, J Yoon, T Pfister, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
the IEEE/CVF Conference on Computer Vision and Pattern RecognitionLi, C.L., Sohn, K., Yoon, J., Pfister, T., 2021. Cutpaste: Self-supervised learning for anomaly detection and localization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9664-9674. Attention based glaucoma detection: a large-scale database and cnn model. L Li, M Xu, X Wang, L Jiang, H Liu, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionLi, L., Xu, M., Wang, X., Jiang, L., Liu, H., 2019. Attention based glau- coma detection: a large-scale database and cnn model, in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10571-10580. Anomaly detection and localization in crowded scenes. W Li, V Mahadevan, N Vasconcelos, IEEE transactions. 36Li, W., Mahadevan, V., Vasconcelos, N., 2013. Anomaly detection and localiza- tion in crowded scenes. IEEE transactions on pattern analysis and machine intelligence 36, 18-32. Focal loss for dense object detection. T Y Lin, P Goyal, R Girshick, K He, P Dollár, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionLin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P., 2017. Focal loss for dense object detection, in: Proceedings of the IEEE international conference on computer vision, pp. 2980-2988. Rethinking annotation granularity for overcoming shortcuts in deep learning-based radiograph diagnosis: A multicenter study. L Luo, H Chen, Y Xiao, Y Zhou, X Wang, V Vardhanabhuti, M Wu, C Han, Z Liu, X H B Fang, International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer4Oxnet: Deep omnisupervised thoracic disease detection from chest x-raysLuo, L., Chen, H., Xiao, Y., Zhou, Y., Wang, X., Vardhanabhuti, V., Wu, M., Han, C., Liu, Z., Fang, X.H.B., et al., 2022a. 
Rethinking annotation granu- larity for overcoming shortcuts in deep learning-based radiograph diagnosis: A multicenter study. Radiology: Artificial Intelligence 4, e210299. Luo, L., Chen, H., Zhou, Y., Lin, H., Heng, P.A., 2021. Oxnet: Deep omni- supervised thoracic disease detection from chest x-rays, in: International Conference on Medical Image Computing and Computer-Assisted Interven- tion, Springer. pp. 537-548. Pseudo biasbalanced learning for debiased chest x-ray classification. L Luo, D Xu, H Chen, T T Wong, P A Heng, International Conference on Medical Image Computing and Computer-Assisted Intervention. SpringerLuo, L., Xu, D., Chen, H., Wong, T.T., Heng, P.A., 2022b. Pseudo bias- balanced learning for debiased chest x-ray classification, in: International Conference on Medical Image Computing and Computer-Assisted Interven- tion, Springer. pp. 621-631. Deep mining external imperfect data for chest x-ray disease screening. L Luo, L Yu, H Chen, Q Liu, X Wang, J Xu, P A Heng, IEEE transactions on medical imaging. 39Luo, L., Yu, L., Chen, H., Liu, Q., Wang, X., Xu, J., Heng, P.A., 2020. Deep mining external imperfect data for chest x-ray disease screening. IEEE trans- actions on medical imaging 39, 3583-3594. Abnormality detection in chest x-ray images using uncertainty prediction autoencoders. Y Mao, F F Xue, R Wang, J Zhang, W S Zheng, H Liu, International Conference on Medical Image Computing and Computer-Assisted Intervention. SpringerMao, Y., Xue, F.F., Wang, R., Zhang, J., Zheng, W.S., Liu, H., 2020. Ab- normality detection in chest x-ray images using uncertainty prediction au- toencoders, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 529-538. Anomaly detection through latent space restoration using vector quantized variational autoencoders. S N Marimont, G Tarroni, 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), IEEE. Marimont, S.N., Tarroni, G., 2021. 
Anomaly detection through latent space restoration using vector quantized variational autoencoders, in: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), IEEE. pp. 1764-1767. Vindr-cxr: An open dataset of chest x-rays with radiologist's annotations. H Q Nguyen, K Lam, L T Le, H H Pham, D Q Tran, D B Nguyen, D D Le, C M Pham, H T Tong, D H Dinh, Scientific Data. 9Nguyen, H.Q., Lam, K., Le, L.T., Pham, H.H., Tran, D.Q., Nguyen, D.B., Le, D.D., Pham, C.M., Tong, H.T., Dinh, D.H., et al., 2022. Vindr-cxr: An open dataset of chest x-rays with radiologist's annotations. Scientific Data 9, 1-7. Realistic evaluation of deep semi-supervised learning algorithms. A Oliver, A Odena, C A Raffel, E D Cubuk, I Goodfellow, Advances in neural information processing systems. 31Oliver, A., Odena, A., Raffel, C.A., Cubuk, E.D., Goodfellow, I., 2018. Real- istic evaluation of deep semi-supervised learning algorithms. Advances in neural information processing systems 31. Modeling the distribution of normal data in pre-trained deep features for anomaly detection. O Rippel, P Mertens, D Merhof, 2020 25th International Conference on Pattern Recognition (ICPR). IEEERippel, O., Mertens, P., Merhof, D., 2021. Modeling the distribution of normal data in pre-trained deep features for anomaly detection, in: 2020 25th Inter- national Conference on Pattern Recognition (ICPR), IEEE. pp. 6726-6733. A unifying review of deep and shallow anomaly detection. L Ruff, J R Kauffmann, R A Vandermeulen, G Montavon, W Samek, M Kloft, T G Dietterich, K R Müller, Proceedings of the IEEE. Ruff, L., Kauffmann, J.R., Vandermeulen, R.A., Montavon, G., Samek, W., Kloft, M., Dietterich, T.G., Müller, K.R., 2021. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE . Deep one-class classification. L Ruff, R Vandermeulen, N Goernitz, L Deecke, S A Siddiqui, A Binder, E Müller, M Kloft, PMLRInternational conference on machine learning. 
Ruff, L., Vandermeulen, R., Goernitz, N., Deecke, L., Siddiqui, S.A., Binder, A., Müller, E., Kloft, M., 2018. Deep one-class classification, in: Interna- tional conference on machine learning, PMLR. pp. 4393-4402. Deep semi-supervised anomaly detection. L Ruff, R A Vandermeulen, N Görnitz, A Binder, E Müller, K R Müller, M Kloft, arXiv:1906.02694arXiv preprintRuff, L., Vandermeulen, R.A., Görnitz, N., Binder, A., Müller, E., Müller, K.R., Kloft, M., 2019. Deep semi-supervised anomaly detection. arXiv preprint arXiv:1906.02694 . f-anogan: Fast unsupervised anomaly detection with generative adversarial networks. T Schlegl, P Seeböck, S M Waldstein, G Langs, U Schmidt-Erfurth, Medical image analysis. 54Schlegl, T., Seeböck, P., Waldstein, S.M., Langs, G., Schmidt-Erfurth, U., 2019. f-anogan: Fast unsupervised anomaly detection with generative adversarial networks. Medical image analysis 54, 30-44. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. T Schlegl, P Seeböck, S M Waldstein, U Schmidt-Erfurth, G Langs, International conference on information processing in medical imaging. SpringerSchlegl, T., Seeböck, P., Waldstein, S.M., Schmidt-Erfurth, U., Langs, G., 2017. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery, in: International conference on information pro- cessing in medical imaging, Springer. pp. 146-157. Natural synthetic anomalies for self-supervised anomaly detection and localization, in: European Conference on Computer Vision. H M Schlüter, J Tan, B Hou, B Kainz, SpringerSchlüter, H.M., Tan, J., Hou, B., Kainz, B., 2022. Natural synthetic anomalies for self-supervised anomaly detection and localization, in: European Con- ference on Computer Vision, Springer. pp. 474-489. Support vector method for novelty detection. B Schölkopf, R C Williamson, A Smola, J Shawe-Taylor, J Platt, Advances in neural information processing systems 12. 
Schölkopf, B., Williamson, R.C., Smola, A., Shawe-Taylor, J., Platt, J., 1999. Support vector method for novelty detection. Advances in neural informa- tion processing systems 12. Learning and evaluating representations for deep one-class classification. K Sohn, C L Li, J Yoon, M Jin, T Pfister, arXiv:2011.02578arXiv preprintSohn, K., Li, C.L., Yoon, J., Jin, M., Pfister, T., 2020. Learning and eval- uating representations for deep one-class classification. arXiv preprint arXiv:2011.02578 . Real-world anomaly detection in surveillance videos. W Sultani, C Chen, M Shah, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSultani, W., Chen, C., Shah, M., 2018. Real-world anomaly detection in surveil- lance videos, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6479-6488. Detecting outliers with foreign patch interpolation. J Tan, B Hou, J Batten, H Qiu, B Kainz, arXiv:2011.04197arXiv preprintTan, J., Hou, B., Batten, J., Qiu, H., Kainz, B., 2020. Detecting outliers with foreign patch interpolation. arXiv preprint arXiv:2011.04197 . Detecting outliers with poisson image interpolation. J Tan, B Hou, T Day, J Simpson, D Rueckert, B Kainz, International Conference on Medical Image Computing and Computer-Assisted Intervention. SpringerTan, J., Hou, B., Day, T., Simpson, J., Rueckert, D., Kainz, B., 2021. Detecting outliers with poisson image interpolation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 581-591. Support vector data description. D M Tax, R P Duin, Machine learning. 54Tax, D.M., Duin, R.P., 2004. Support vector data description. Machine learning 54, 45-66. Constrained contrastive distribution learning for unsupervised anomaly detection and localisation in medical images. 
Y Tian, G Pang, F Liu, Y Chen, S H Shin, J W Verjans, R Singh, G Carneiro, International Conference on Medical Image Computing and Computer-Assisted Intervention. SpringerTian, Y., Pang, G., Liu, F., Chen, Y., Shin, S.H., Verjans, J.W., Singh, R., Carneiro, G., 2021. Constrained contrastive distribution learning for un- supervised anomaly detection and localisation in medical images, in: Inter- national Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 128-140. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weaklysupervised classification and localization of common thorax diseases. X Wang, Y Peng, L Lu, Z Lu, M Bagheri, R M Summers, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionWang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M., 2017. Chestx- ray8: Hospital-scale chest x-ray database and benchmarks on weakly- supervised classification and localization of common thorax diseases, in: Proceedings of the IEEE conference on computer vision and pattern recog- nition, pp. 2097-2106. Combating noisy labels by agreement: A joint training method with co-regularization. H Wei, L Feng, X Chen, B An, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionWei, H., Feng, L., Chen, X., An, B., 2020. Combating noisy labels by agree- ment: A joint training method with co-regularization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13726-13735. Selfsupervise, refine, repeat: Improving unsupervised anomaly detection. J Yoon, K Sohn, C L Li, S O Arik, C Y Lee, T Pfister, Transactions on Machine Learning Research. Yoon, J., Sohn, K., Li, C.L., Arik, S.O., Lee, C.Y., Pfister, T., 2022. Self- supervise, refine, repeat: Improving unsupervised anomaly detection. 
Trans- actions on Machine Learning Research . Old is gold: Redefining the adversarially learned one-class classifier training paradigm. M Z Zaheer, J H Lee, M Astrid, S I Lee, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZaheer, M.Z., Lee, J.h., Astrid, M., Lee, S.I., 2020. Old is gold: Redefining the adversarially learned one-class classifier training paradigm, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14183-14193. Generative cooperative learning for unsupervised video anomaly detection. M Z Zaheer, A Mahmood, M H Khan, M Segu, F Yu, S I Lee, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZaheer, M.Z., Mahmood, A., Khan, M.H., Segu, M., Yu, F., Lee, S.I., 2022. Generative cooperative learning for unsupervised video anomaly detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pp. 14744-14754. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection. V Zavrtanik, M Kristan, D Skočaj, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionZavrtanik, V., Kristan, M., Skočaj, D., 2021. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8330- 8339. Context-encoding variational autoencoder for unsupervised anomaly detection. D Zimmerer, S A Kohl, J Petersen, F Isensee, K H Maier-Hein, arXiv:1812.05941arXiv preprintZimmerer, D., Kohl, S.A., Petersen, J., Isensee, F., Maier-Hein, K.H., 2018. Context-encoding variational autoencoder for unsupervised anomaly detec- tion. arXiv preprint arXiv:1812.05941 .
[ "https://github.com/caiyu6666/DDAD-ASR.", "https://github.com/Runinho/" ]
Optimizing Rydberg Gates for Logical Qubit Performance

Sven Jandura (CNRS, CESQ and ISIS (UMR 7006), University of Strasbourg, 67000 Strasbourg, France)
Jeff D. Thompson (Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA)
Guido Pupillo (CNRS, CESQ and ISIS (UMR 7006), University of Strasbourg, 67000 Strasbourg, France)

arXiv:2210.06879 [quant-ph] | DOI: 10.1103/prxquantum.4.020336 | https://export.arxiv.org/pdf/2210.06879v3.pdf

Abstract: Robust gate sequences are widely used to reduce the sensitivity of gate operations to experimental imperfections. Typically, the optimization minimizes the average gate error; however, recent work in quantum error correction has demonstrated that the performance of encoded logical qubits is sensitive not only to the average error rate, but also to the type of errors that occur. Here, we present a family of Rydberg blockade gates for neutral atom qubits that are robust against two common, major imperfections: intensity inhomogeneity and Doppler shifts. These gates outperform existing gates for moderate or large imperfections. We also consider the logical performance of these gates in the context of an erasure-biased qubit based on metastable 171Yb. In this case, we observe that the robust gates outperform existing gates for even very small values of the imperfections, because they maintain the native large bias towards erasure errors for these qubits. These results significantly reduce the laser stability and atomic temperature requirements to achieve fault-tolerant quantum computing with neutral atoms. The approach of optimizing gates for logical qubit performance may be applied to other qubit platforms.
(Dated: April 3, 2023)

I. INTRODUCTION

Neutral atoms have emerged as a leading platform for digital quantum simulations and quantum computing [1-5]. Optical tweezer arrays enable hundreds of atoms to be configured in arbitrary, reconfigurable geometries [6-10].
Laser excitation to Rydberg states allows strong, controllable interactions via the Rydberg blockade [11], which enables studies of many-body dynamics [3] or quantum logic operations [12-18]. Recent work has demonstrated the execution of programmable quantum circuits on multi-qubit systems [19,20].

One of the most important directions for continued development of neutral atom processors is improving the fidelity of two-qubit Rydberg blockade gates [21]. The best demonstrated Bell state fidelity using hyperfine qubits is around F = 0.98 [14,22], and Bell state fidelities of F = 0.991 have been demonstrated between ground and Rydberg states [16]. While the achievable gate fidelity is fundamentally limited by the Rydberg state lifetime and the achievable laser intensity, currently demonstrated fidelities are dominated by technical imperfections, including Doppler shifts from finite atomic temperature, spatial inhomogeneity of the laser intensity, and frequency and intensity noise inherent to the laser [15,21,23,24].

With the recent demonstration of rudimentary fault-tolerant quantum operations in several qubit platforms [25-30], there is significant interest in predicting and optimizing the performance of not only physical qubit operations, but also logical qubit operations. The performance of quantum error correction depends strongly on the types of errors that occur. For example, qubits strongly biased towards certain Pauli errors [31-35], or towards erasure errors [36-38], can have significantly higher threshold error rates than unbiased qubits. However, maintaining this improved performance in a realistic architecture requires that the noise structure is preserved in a quantum circuit including realistic physical gate operations and the influence of imperfections [35,39].
In this work, we present a family of two-qubit Rydberg blockade gates that offer significantly improved performance in the presence of experimental imperfections. These gates are derived by combining analytic reasoning and quantum optimal control techniques [40,41] to produce experimentally realizable pulse shapes that are robust against laser amplitude inhomogeneity, Doppler shifts from finite atomic temperature, or both [42,43]. These pulses do not require individual addressability of the atoms and can be implemented using only a smooth modulation of the laser phase in time. In comparison to existing gates, these pulses reduce the error from intensity inhomogeneity and Doppler shifts by more than two orders of magnitude, at the expense of a longer gate duration. Including the finite lifetime of the Rydberg state, we find that the robust pulses significantly outperform existing gates over a very wide parameter range.

We also evaluate the performance of the robust gates in the context of a logical qubit. For this purpose, we adopt the XZZX surface code [44] and the strongly erasure-biased metastable 171Yb qubit [37]. Surprisingly, we identify a large and experimentally relevant range of imperfections where the robust gates decrease the average gate fidelity at the physical qubit level (compared to existing gates), but improve the logical qubit performance by many orders of magnitude. This arises because the robust gates constrain imperfections to only cause Rydberg leakage, and not errors in the computational space. Since Rydberg leakage can be converted into erasure errors, these gates allow the predicted high erasure bias and threshold error rate of the metastable 171Yb qubit to be extended to errors from laser amplitude fluctuations and Doppler shifts.

There are two main implications of this work.
First, we show that robust Rydberg blockade gates give rise to significantly improved physical and logical operation fidelity in the presence of technical imperfections, which significantly enhances the prospects for fault-tolerant quantum computing (FTQC) with neutral atoms. Second, we demonstrate that physical and logical error rates can differ significantly in the context of qubits with biased noise, and that optimizing gates specifically for logical-level performance can yield dramatic improvements. This insight may be applied to other qubit platforms with biased noise [31-35].

The remainder of this work is structured as follows. In Sec. II we introduce the Hamiltonian that we assume and the error sources that we consider. In Sec. III A we identify a gate which is robust against laser amplitude inhomogeneity, and prove in Sec. III B that a similar gate robust against laser detuning generally cannot exist. For two special cases of laser detuning we then show that robust gates nevertheless do exist: in Sec. III C we consider a laser detuning which arises due to an AC Stark shift and is thus correlated with the laser amplitude inhomogeneity, and identify robust pulses for this case. In Sec. IV, we propose a detuning-robust gate based on reversing the sign of the detuning halfway through the gate, and observe that this can be realized for detunings arising from Doppler shifts by reversing the laser direction or exploiting the harmonic motion of trapped atoms. We then derive a gate that is robust to both Doppler and amplitude imperfections. In Sec. V we calculate the gate fidelities for the robust pulses using realistic experimental parameters, and in Sec. VI we calculate the logical error rate for our robust pulses in a small surface code, in the context of erasure-biased metastable 171Yb qubits [37]. We use insights from Sec.
VII to design shorter pulses that are optimized for erasure bias but not average gate fidelity, which further improves the logical error rate in certain parameter regimes.

II. LEVEL SCHEME AND HAMILTONIAN

We assume the level scheme typical for Rydberg gates [11,12,14,42,45-47], shown in Fig. 1(a). Each atom is modeled as a three-level system with long-lived computational basis states |0⟩ and |1⟩ and an auxiliary Rydberg state |r⟩ with finite lifetime 1/γ. The |1⟩ and |r⟩ states of each atom are coupled by a single global laser pulse with Rabi frequency Ω(t) = |Ω(t)| exp[iϕ(t)], of amplitude |Ω(t)| and phase ϕ(t). We require that at all times |Ω(t)| ≤ Ω_max, where Ω_max is the maximally achievable Rabi frequency. A van der Waals interaction between the two atoms shifts the energy of the state |rr⟩ by B ≫ |Ω(t)| in the Rydberg blockade limit, such that the state |rr⟩ is never populated.

We parameterize the experimental imperfections that can impact the gate in two ways. First, intensity fluctuations (i.e., from spatial inhomogeneity or power drifts of the laser field) give rise to an uncertain Rabi frequency at atom i (for i ∈ {1, 2}) of (1 + ε_i)Ω(t). Second, laser frequency errors or Doppler shifts can result in a detuning Δ_i from the |1⟩ ↔ |r⟩ transition. We consider ε_i and Δ_i as unknown but constant over the duration of the gate.

FIG. 1. (a) Qubits are stored in computational basis states |0⟩ and |1⟩. The |1⟩ state is coupled to a Rydberg state |r⟩ with lifetime 1/γ by a laser with Rabi frequency Ω(t). The amplitude of the laser has an unknown relative deviation of ε_1 and ε_2, and an unknown detuning Δ_1 and Δ_2, for atoms 1 and 2, respectively. The van der Waals interaction leads to an energy shift B ≫ |Ω| if both atoms are in |r⟩, preventing the simultaneous excitation of both atoms. (b) The laser phase as a function of time for the time-optimal (TO) and amplitude-robust (AR) pulse implementing the CZ gate. The amplitude of the pulses is always given by |Ω(t)| = Ω_max.
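The blockade limit B ≫ |Ω(t)| can be illustrated with a small numerical sketch (not from the paper; it assumes unit Rabi frequency and B = 100Ω). It propagates the symmetric part of the |11⟩ sector, including the doubly excited state |rr⟩, and checks that the |rr⟩ population remains negligible:

```python
import numpy as np
from scipy.linalg import expm

# Symmetric |11> sector in the basis {|11>, |W> = (|r1>+|1r>)/sqrt(2), |rr>}.
# The laser couples |11> <-> |W> and |W> <-> |rr> with strength sqrt(2)*Omega/2;
# the van der Waals interaction shifts |rr> by B.
Omega = 1.0          # Rabi frequency (sets the unit of time)
B = 100.0 * Omega    # blockade shift, B >> Omega

g = np.sqrt(2) * Omega / 2
H = np.array([[0, g, 0],
              [g, 0, g],
              [0, g, B]], dtype=complex)

psi0 = np.array([1, 0, 0], dtype=complex)   # start in |11>
ts = np.linspace(0, 4 * np.pi / Omega, 400)
p_rr = [abs((expm(-1j * H * t) @ psi0)[2]) ** 2 for t in ts]
print(f"max |rr> population: {max(p_rr):.1e}")  # strongly suppressed
```

The residual |rr⟩ population scales as (Ω/B)², which is why the |rr⟩ state can be dropped from the Hamiltonian below.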
We note that the phase of Ω may vary from shot to shot by an uncertain offset, because it depends on the position of the atoms. However, in contrast to work on single-qubit gates [48], this uncertainty does not lead to an error for two-qubit Rydberg blockade gates, as the population of the Rydberg state vanishes at the end of the gate, so that the additional phase can be absorbed into the definition of |r⟩.

The total Hamiltonian reads H = H_{10} ⊕ H_{01} ⊕ H_{11}, with

H_{10} = \frac{(1+\varepsilon_1)\Omega(t)}{2}\,|10\rangle\langle r0| + \mathrm{h.c.} + \Delta_1\,|r0\rangle\langle r0|    (1)

H_{01} = \frac{(1+\varepsilon_2)\Omega(t)}{2}\,|01\rangle\langle 0r| + \mathrm{h.c.} + \Delta_2\,|0r\rangle\langle 0r|    (2)

H_{11} = \frac{(1+\varepsilon_1)\Omega(t)}{2}\,|11\rangle\langle r1| + \mathrm{h.c.} + \Delta_1\,|r1\rangle\langle r1| + \frac{(1+\varepsilon_2)\Omega(t)}{2}\,|11\rangle\langle 1r| + \mathrm{h.c.} + \Delta_2\,|1r\rangle\langle 1r|    (3)

Eq. (3) can be simplified to

H_{11} = \frac{\sqrt{2}\,(1+\varepsilon_+)\,\Omega(t)}{2}\,|11\rangle\langle W_+| + \mathrm{h.c.} + \Delta_-\bigl(|W_+\rangle\langle W_-| + \mathrm{h.c.}\bigr) + \Delta_+\bigl(|W_+\rangle\langle W_+| + |W_-\rangle\langle W_-|\bigr).    (4)

Here, |W_\pm\rangle = [(1+\varepsilon_1)\,|r1\rangle \pm (1+\varepsilon_2)\,|1r\rangle]/\beta, with \beta = \sqrt{(1+\varepsilon_1)^2 + (1+\varepsilon_2)^2}, while \Delta_\pm = (\Delta_1 \pm \Delta_2)/2 and \varepsilon_+ = (\varepsilon_1 + \varepsilon_2)/2. Note that Eq. (3) and Eq. (4) only agree up to terms of second order in ε and Δ. Since ε and Δ are typically small deviations, this suffices for the rest of our analysis.

III. AMPLITUDE- AND DETUNING-ROBUST PULSES

A. Amplitude-Robust Pulses

We start by finding a pulse Ω(t) which is robust against amplitude deviations ε_i ≠ 0, while Δ_i = 0. We expand the Hamiltonians in ε, e.g. H_{10} = H_{10}^{(0)} + \varepsilon_1 H_{10}^{(1)} + O(\varepsilon_1^2), with analogous expansions for the other Hamiltonians and for the states |ψ_q⟩. A pulse Ω(t) of duration τ implements a CZ gate, up to single-qubit rotations, in the deviation-free case if for all q ∈ {10, 01, 11} it holds that |\psi_q^{(0)}(\tau)\rangle = e^{i\theta_q}|q\rangle with θ_{11} − θ_{10} − θ_{01} = (2n + 1)π for an integer n. We measure the fidelity of a gate via the Bell-state fidelity, a commonly used fidelity measure on the Rydberg platform [14,15,49,50],

F = \frac{1}{16}\,\Bigl|\,1 + \sum_{q\in\{10,01,11\}} e^{-i\theta_q}\,\langle q|\psi_q^{(0)}(\tau)\rangle\,\Bigr|^2.    (5)

The differences between the Bell-state fidelity and the average gate fidelity [51], a different common fidelity measure, are discussed in Ref. [40].
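The claim that Eq. (3) and Eq. (4) agree up to second order in ε can be checked numerically. The sketch below (illustrative only, with Ω = 1 assumed) compares the exact |11⟩ ↔ |W_+⟩ coupling from Eq. (3) with the approximate √2(1 + ε_+)Ω/2 of Eq. (4) and confirms that the difference scales quadratically in ε:

```python
import numpy as np

Omega = 1.0  # laser Rabi frequency (taken real for simplicity)

def H11_exact(eps1, eps2, d1, d2):
    # Eq. (3) in the basis {|11>, |r1>, |1r>}
    a1, a2 = (1 + eps1) * Omega / 2, (1 + eps2) * Omega / 2
    return np.array([[0, a1, a2],
                     [a1, d1, 0],
                     [a2, 0, d2]], dtype=complex)

def coupling_exact(eps1, eps2):
    # exact matrix element <W+|H11|11>, with
    # |W+> = [(1+eps1)|r1> + (1+eps2)|1r>]/beta
    beta = np.hypot(1 + eps1, 1 + eps2)
    w_plus = np.array([0, 1 + eps1, 1 + eps2]) / beta
    psi_11 = np.array([1, 0, 0])
    return (w_plus @ H11_exact(eps1, eps2, 0, 0) @ psi_11).real

def coupling_approx(eps1, eps2):
    # Eq. (4): sqrt(2)*(1 + eps_+)*Omega/2
    return np.sqrt(2) * (1 + (eps1 + eps2) / 2) * Omega / 2

# The two expressions should differ only at second order in eps:
errs = [abs(coupling_exact(e, -e) - coupling_approx(e, -e)) for e in (1e-2, 1e-3)]
print(errs[0] / errs[1])   # ~100, i.e. quadratic scaling
```

Reducing ε by a factor of 10 reduces the discrepancy by a factor of ~100, consistent with the O(ε²) agreement stated after Eq. (4).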
We say that the pulse is robust against amplitude deviations if |\psi_q^{(1)}\rangle = 0 for all q, such that the leading term of the deviation of |ψ_q⟩ from e^{iθ_q}|q⟩ is quadratic in ε_i. To find the robust pulse we minimize the cost function

J = 1 - F + \sum_q \langle\psi_q^{(1)}|\psi_q^{(1)}\rangle    (6)

using a quantum optimal control method [52]. Following Ref. [40], we choose the numerical GRAPE algorithm [53], which assumes that Ω(t) is a piecewise-constant pulse of duration τ described by N ≫ 1 parameters Ω_1, ..., Ω_N as Ω(t) = Ω_j if t ∈ [(j − 1)τ/N, jτ/N]. For a given set of parameters the cost function J can be found by solving the coupled differential equations |\dot\psi_q^{(0)}\rangle = -iH^{(0)}|\psi_q^{(0)}\rangle and |\dot\psi_q^{(1)}\rangle = -iH^{(0)}|\psi_q^{(1)}\rangle - iH^{(1)}|\psi_q^{(0)}\rangle [54]. The pulse minimizing J is found by optimizing over Ω_1, ..., Ω_N using a gradient descent optimizer, where GRAPE provides an efficient algorithm to calculate the gradient of J.

We find that for any pulse duration τ longer than a certain critical τ* ≈ 14.32/Ω_max there exists a pulse with J = 0, i.e. a pulse that implements a CZ gate with fidelity F = 1 and is robust against amplitude deviations. We refer to the shortest possible pulse, with τ = τ*, as the "amplitude-robust" (AR) pulse. The AR pulse is of the form Ω(t) = Ω_max exp[iϕ(t)], i.e. it always has maximal amplitude. The laser phase ϕ(t) of the AR pulse as a function of the dimensionless time tΩ_max is shown in Fig. 1(b), together with the time-optimal (TO) pulse (without any robustness) found in Ref. [40]. We emphasize that the laser phase of the AR pulse is a smooth function of time, which may be easier to implement experimentally than a pulse with discontinuities in the amplitude or phase. The average time spent in the Rydberg state during the AR pulse is τ_R = 4.74/Ω_max, roughly 60% longer than for the TO pulse, which achieves τ_R = 2.96/Ω_max.
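The coupled zeroth- and first-order propagation used to evaluate J can be sketched for a single sector by stacking |ψ^{(0)}⟩ and |ψ^{(1)}⟩ and exponentiating one block generator per piecewise-constant segment. This is an illustrative sketch, not the paper's implementation: for simplicity it evaluates a plain 2π pulse (all phases zero) in the {|10⟩, |r0⟩} sector rather than the optimized AR pulse, and shows that such a pulse returns |10⟩ with phase π but has a nonzero first-order amplitude error:

```python
import numpy as np
from scipy.linalg import expm

# Basis {|10>, |r0>}. For an amplitude error (1+eps)*Omega the first-order
# Hamiltonian is H1 = H0, and the correction obeys
#   d/dt |psi1> = -i H0 |psi1> - i H1 |psi0>.
# Stacking (psi0, psi1) turns each piecewise-constant segment into a single
# 4x4 matrix exponential.

def propagate(phases, tau, Omega_max=1.0):
    dt = tau / len(phases)
    psi = np.array([1, 0, 0, 0], dtype=complex)   # (psi0, psi1) stacked
    for phi in phases:
        O = Omega_max * np.exp(1j * phi)
        H0 = 0.5 * np.array([[0, np.conj(O)], [O, 0]])
        H1 = H0                                   # d/d(eps) of (1+eps)*H0
        G = np.block([[H0, np.zeros((2, 2))], [H1, H0]])
        psi = expm(-1j * G * dt) @ psi
    return psi[:2], psi[2:]

# A plain 2*pi pulse: psi0 returns to |10> with phase pi, but the
# first-order error |psi1| is nonzero -- this pulse is not robust.
psi0, psi1 = propagate(np.zeros(10), 2 * np.pi)
print(abs(psi0[0]), np.linalg.norm(psi1))
```

For this constant pulse one finds ‖ψ^{(1)}‖ = π; the GRAPE optimization described in the text shapes the phases ϕ_j so that this norm (summed over all sectors) vanishes while F = 1.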
To demonstrate the robustness of the AR pulse we first calculate the infidelity 1 − F in the absence of Rydberg decay (γ = 0), as a function of the amplitude error ε_1 = ε_2 = ε. The infidelities are displayed in Fig. 2(a) by the orange dotted (AR pulse) and blue solid (TO pulse) lines. The AR pulse achieves an infidelity 1 − F < 10⁻⁴ even for very large values of |ε| up to 0.05, improving on the TO pulse by several orders of magnitude. A similar robustness is obtained for ε_1 ≠ ε_2 (not shown).

B. Detuning-Robust Pulses

Now we turn to pulses which are robust against a detuning of the laser, but not against deviations of the laser amplitude, i.e. we assume ε_1 = ε_2 = 0. For this setting, we demonstrate analytically that no pulse exists for which the implemented gate is first-order insensitive to ∆_1 and ∆_2. Analogously to the amplitude-robust pulse, we expand the Hamiltonians and quantum states as H_q = H^(0)_q + ∆_1 H^(1,1)_q + ∆_2 H^(1,2)_q and |ψ_q(t)⟩ = |ψ^(0)_q⟩ + ∆_1 |ψ^(1,1)_q⟩ + ∆_2 |ψ^(1,2)_q⟩ + O(∆²). Through perturbation theory we find that for any pulse with |ψ^(0)_q(τ)⟩ = e^{iθ_q}|q⟩ the first-order correction satisfies

⟨q|ψ^(1,j)_q(τ)⟩ = −i e^{iθ_q} ∫_0^τ ⟨ψ^(0)_q(t)|H^(1,j)_q|ψ^(0)_q(t)⟩ dt.   (7)

By using that Σ_j H^(1,j)_q = (I − |q⟩⟨q|) we see that Σ_j ⟨q|ψ^(1,j)_q(τ)⟩ = −i exp(iθ_q) τ^R_q, where τ^R_q = ∫_0^τ dt (1 − |⟨q|ψ^(0)_q(t)⟩|²) is the average time spent outside of the computational subspace (i.e. in the Rydberg state) when starting in state |q⟩. Since τ^R_q > 0 for all pulses which implement a CZ gate, we see that there is no pulse with |ψ^(1)_q(τ)⟩ = 0. Hence there is no pulse such that the implemented gate is to first order insensitive to ∆_1 and ∆_2. This motivates the search for different solutions to dominant detuning errors in experiments in Sec. IV below. Note that the same argument applies even if we restrict the discussion to equal detunings ∆_1 = ∆_2 = ∆.
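The consequence of Eq. (7), ⟨q|ψ^(1)_q(τ)⟩ = −i e^{iθ_q} τ^R_q, can be checked on a toy example (our sketch, not one of the optimized pulses). A resonant 2π pulse with Ω = 1 on the block {|10⟩, |r0⟩} returns the state to |10⟩ with θ = π and τ^R = π, so the first-order response to the detuning perturbation H^(1) = |r0⟩⟨r0| should satisfy ⟨10|ψ^(1)(τ)⟩ = iπ:

```python
import numpy as np

def detuning_response_2pi(n=2000):
    """Resonant 2*pi pulse (Omega = 1) on {|10>, |r0>}: propagate
    psi0' = -i H0 psi0 and psi1' = -i H0 psi1 - i H1 psi0 with the
    detuning perturbation H1 = |r0><r0|, accumulating
    tau_R = int (1 - |<10|psi0>|^2) dt along the way."""
    tau = 2 * np.pi
    dt = tau / n
    h0 = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    h1 = np.diag([0.0, 1.0]).astype(complex)
    p0 = np.array([1, 0], dtype=complex)
    p1 = np.zeros(2, dtype=complex)
    tau_r = 0.0
    def f(a, b):
        return -1j * (h0 @ a), -1j * (h0 @ b + h1 @ a)
    for _ in range(n):  # RK4 on the coupled (psi0, psi1) system
        tau_r += (1 - abs(p0[0]) ** 2) * dt   # Rydberg population
        k1 = f(p0, p1)
        k2 = f(p0 + dt / 2 * k1[0], p1 + dt / 2 * k1[1])
        k3 = f(p0 + dt / 2 * k2[0], p1 + dt / 2 * k2[1])
        k4 = f(p0 + dt * k3[0], p1 + dt * k3[1])
        p0 = p0 + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p1 = p1 + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p0, p1, tau_r
```

Because τ^R > 0, this along-|q⟩ first-order amplitude can never vanish, which is the numerical face of the no-go argument above.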
We note that while no robust pulse exists, it is still possible to minimize the sensitivity to finite ∆. In Appendix A we use a combination of GRAPE and analytical techniques to find the pulse that minimizes 1 − F at small but finite values of ∆, while still achieving F = 1 at ∆ = 0. This optimal pulse improves the infidelity by only 17% compared to the TO pulse, an improvement much less relevant than the several orders of magnitude achieved by, e.g., the AR pulse against amplitude deviations, or by the pulses described below.

C. Stark-Shift Robust Pulses

One important source of detuning is uncompensated AC Stark shifts arising from the off-resonant coupling of the laser to other states. For an uncertain Rabi frequency (1 + ε_i)Ω_max, these Stark shifts are of the form χ(1 + ε_i)²Ω²_max. The known part χΩ²_max of the Stark shift can be compensated by adjusting the laser frequency accordingly, leaving an unknown detuning ∆_i = ζε_iΩ_max + O(ε²), where ζ = 2χΩ_max is a dimensionless quantity measuring the strength of the Stark shift. Crucially, the detuning ∆_i induced by a Stark shift and the amplitude deviation ε_i are correlated. In contrast to Sec. III B, this allows for the existence of pulses which are to first order robust against Stark shifts, so-called Stark-shift robust (SSR) pulses. For SSR pulses we distinguish between the case of identical errors on both atoms (ε_1 = ε_2) and the case of independent errors (ε_1 ≠ ε_2). For the case of identical errors, the SSR pulse (termed SSR1) can be found analogously to the AR pulse (Sec. III A) by changing the first-order contribution H^(1)_q of the Hamiltonian to include the Stark shift [e.g. H^(1)_10 = (Ω/2)|10⟩⟨r0| + h.c. + ζΩ_max|r0⟩⟨r0|]. The SSR1 pulses for ζ = 0.1 and ζ = 1 are shown in Fig. 3(a) together with the AR pulse (ζ = 0).
The shape of the SSR1 pulse is a small perturbation of the AR pulse for small ζ (see ζ = 0.1), while for large ζ its shape is qualitatively different from the AR pulse (see ζ = 1). The SSR1 pulse for ζ = 0.1 (ζ = 1) spends an average time of τ_R = 4.76/Ω_max (τ_R = 4.22/Ω_max) in the Rydberg state, comparable to the AR pulse. For ζ ≳ 2 the optimization procedure fails to find an SSR1 pulse, which is consistent with the fact that for a pure detuning error (ζ → ∞) no robust pulse exists (Sec. III B). To quantify the performance of the SSR1 pulse compared to the AR pulse, Fig. 3(c) shows the infidelity 1 − F at ε_1 = ε_2 = 0.01 for different values of ζ for the AR pulse (orange line) and the SSR1 pulses (brown triangles). While the infidelity of the AR pulse strongly increases with increasing ζ, the infidelity of the SSR pulses stays constant and outperforms the AR pulse at |ζ| = 1 by more than three orders of magnitude. For the case of independent errors, it has to be ensured that the final state |ψ_11⟩ is also robust against amplitude deviations if ε_− = (ε_1 − ε_2)/2 ≠ 0. This is achieved by expanding H_11 = H^(0)_11 + ε_+ H^(1,+)_11 + ε_− H^(1,−)_11, where H^(1,−)_11 = ζΩ_max|W_+⟩⟨W_−| + h.c., and adding the corresponding first-order term to the cost function (6). The resulting SSR pulse (termed SSR2) is displayed in Fig. 3(b) for ζ = 0.1. In contrast to the SSR1 pulse, it is qualitatively different from the AR pulse, due to the additional requirement that |ψ^(1)_11⟩ = 0. The SSR2 pulse for ζ = 0.1 spends an average time of τ_R = 5.87/Ω_max in the Rydberg state, roughly 25% more than the SSR1 pulse. The performance difference between the SSR1 and SSR2 pulses is demonstrated in Fig. 3(d,e). Panel (d) shows the infidelity of the AR (orange), SSR1 (brown) and SSR2 (pink) pulses at ζ = 0.1 as a function of ε_+, with ε_− = (ε_1 − ε_2)/2 = 0. Here the SSR1 and SSR2 pulses show a similar infidelity, and both significantly outperform the AR pulse.
In contrast, panel (e) shows the infidelity of the same pulses as a function of ε_−, with ε_+ = 0. As expected, the SSR2 pulse now significantly outperforms both the AR and the SSR1 pulse.

IV. DOPPLER-ROBUST PULSES

A second practically relevant source of detuning error is the Doppler shift ∆_j = kv_j, where k is the wavevector of the laser and v_j is the velocity of atom j along the direction of the laser. In contrast to the fixed detuning discussed in Sec. III B, the sign of the Doppler shift can be flipped by changing the sign of k or v. In the following we argue that a robust gate can be achieved by splitting the gate into two identical halves applied sequentially, with the sign of ∆_j reversed between the two halves, as illustrated in Fig. 4(a). We start in Sec. IV A by showing how this reversal of ∆_j allows for pulses robust against Doppler shifts. In Sec. IV B we then discuss two potential experimental methods for reversing ∆_j.

A. Design of Doppler-Robust Pulses

In order to be robust against Doppler errors, we use GRAPE to search for a pulse Ω(t) of duration τ that satisfies two conditions. First, it implements a controlled-R_z(π/2) gate when ∆ = 0 [i.e., it satisfies |ψ^(0)_q(τ)⟩ = e^{iθ_q}|q⟩ with θ_11 − θ_10 − θ_01 = (2n + 1/2)π for an integer n]. Second, it achieves a first-order error |ψ^(1,j)_q⟩ which lies entirely along the direction of |q⟩, i.e. (I − |q⟩⟨q|)|ψ^(1,j)_q(τ)⟩ = 0. The state after applying this pulse once is then

|ψ_q(τ)⟩ = [e^{iθ_q} + Σ_j ∆_j ⟨q|ψ^(1,j)_q(τ)⟩] |q⟩ + O(∆²).   (8)

When the pulse Ω(t) is applied a second time, the sign of ∆_j is reversed. This implies |ψ_q(2τ)⟩ = e^{2iθ_q}|q⟩ + O(∆²), and thus the combined pulse is robust against Doppler errors. Crucially, the Rydberg-state population after the first pulse is of order O(∆²), which makes the gate insensitive to the relative phase of the lasers between the two pulses and also allows an arbitrary waiting time between the pulses without incurring errors from Rydberg-state decay.
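The cancellation mechanism can be illustrated with plain resonant 2π pulses, which happen to satisfy the leakage condition (the first-order detuning error of a 2π pulse lies entirely along the initial state). The toy sketch below (our illustration, not the DR pulse itself) shows that the first-order phase error survives when the detuning keeps its sign in both halves, but cancels when the sign is flipped:

```python
import numpy as np

def evolve(delta, omega=1.0, tau=2 * np.pi):
    """Exact propagator of H = (omega/2) sigma_x + delta |e><e| for time tau
    (a 2*pi pulse when delta = 0)."""
    h = np.array([[0, omega / 2], [omega / 2, delta]], dtype=complex)
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * tau)) @ v.conj().T

def phase_error(delta, echo):
    """Phase deviation of <g|psi> from the ideal 2*theta = 2*pi after two
    2*pi pulses; the detuning sign is flipped in the second half iff echo."""
    g = np.array([1, 0], dtype=complex)
    psi = evolve(-delta if echo else delta) @ (evolve(delta) @ g)
    return np.angle(g @ psi)  # ideal value: angle(exp(2i*pi)) = 0
```

Without the echo the phase error is linear in ∆ (here ≈ −2∆τ_R); with the sign reversal only the O(∆²) contribution remains, which is the working principle of the DR pulse.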
FIG. 4. Laser phase for a) the Doppler-robust (DR, green) pulse, b) the amplitude- and Doppler-robust (ADR, red) pulse, and c) the Stark-shift robust (SSR) ADR pulse. Each pulse consists of two identical halves, shown by the grey areas, applied with opposite Doppler shifts ∆ = kv (purple dashed lines), achieved as described in the text. In each of the halves the laser amplitude is maximal (|Ω(t)| = Ω_max), while Ω(t) = 0 outside of the grey areas. d) The population P_R of the Rydberg state (averaged over the four computational basis states as initial states) for the DR and ADR pulses as a function of time. Note that between the two halves of the pulses (shown by the arrows) the population of the Rydberg state is zero.

GRAPE can be applied to this problem analogously to the AR case, with the cost function

J = 1 − F + Σ_{j,q} ⟨ψ^(1,j)_q(τ)|(I − |q⟩⟨q|)|ψ^(1,j)_q(τ)⟩.   (9)

The shortest possible pulse which is robust against Doppler errors, called the "Doppler-robust" (DR) pulse, is shown in Fig. 4(a). The population of the Rydberg state (averaged over the four computational basis states as initial states) is shown in Fig. 4(d). As mentioned above, the population of the Rydberg state vanishes between the two pulses. The average time that the DR pulse spends in the Rydberg state (over the entire gate) is τ_R = 5.56/Ω_max. By simply adding the cost functions for the AR and the DR cases, we can identify the shortest possible pulse which is robust against both imperfections, which we call the "amplitude- and Doppler-robust" (ADR) pulse. The laser phase of the ADR pulse is displayed in Fig. 4(b), and the population of the Rydberg state in Fig. 4(d). The ADR pulse spends an average time τ_R = 10.37/Ω_max in the Rydberg state, and is thus significantly more affected by its decay than the other three pulses.
We remark that the ADR pulse is also robust against amplitude deviations ε_i that differ between the two halves of the pulse, since each half is individually robust against amplitude deviations. The infidelity of all four pulses (TO, AR, DR and ADR) as a function of the detuning ∆_1 of the first atom is shown in Fig. 2(b). For the TO and AR pulses the detuning is kept constant, while for the DR and ADR pulses its sign is switched after the first half of the pulse. The DR and ADR pulses achieve 1 − F < 10⁻⁴ for |∆_1|/Ω_max < 0.05, two to three orders of magnitude better than the TO and AR pulses. We also compare the performance of the DR and ADR pulses to the TO and AR pulses when varying the amplitude deviation in Fig. 2(a). As expected, the DR pulse does not show any robustness to amplitude deviations and behaves similarly to the TO pulse, while the ADR pulse outperforms not only the TO pulse but also the AR pulse. Analogously to the AR pulse, a Stark-shift robust version also exists for the ADR pulse, shown for ζ = 0.1 in Fig. 4(c). As for the AR pulse, the Stark-shift robust ADR pulse is qualitatively similar to the ADR pulse in the absence of a Stark shift. It spends an average time of τ_R = 10.66/Ω_max in the Rydberg state. Note that the ADR pulse is inherently robust against Stark-shift errors with ε_1 = −ε_2, so that in contrast to the AR pulse no distinction between identical and independent errors is necessary. To see this, first note that for the states |ψ_10⟩ and |ψ_01⟩ the distinction between identical and independent errors is irrelevant, because only one of the two atoms is affected by the error. The remaining state |ψ_11⟩ (describing the state of the atoms after the first pulse half when starting in |11⟩) is intrinsically robust against Stark-shift errors with ε_1 = −ε_2, because these errors only result in a nonzero detuning ∆_− = ζε_1Ω_max and a corresponding perturbation H^(1)_11 = ζΩ_max|W_+⟩⟨W_−| + h.c.
By the Doppler robustness of the ADR pulse, it holds that ⟨W_±|ψ^(1)_11⟩ = 0. But because H^(1)_11 only leads to population of |W_−⟩ and otherwise leaves the evolution unchanged, it also holds that ⟨11|ψ^(1)_11⟩ = 0, so that |ψ^(1)_11⟩ = 0. Hence each of the two pulse halves of the ADR pulse is robust against Stark-shift errors with ε_1 = −ε_2, and so is the whole ADR pulse. Note that this holds even if the ε_i are different in each of the pulse halves.

B. Reversing the Doppler Shift

The DR and ADR pulses require that ∆_j is reversed after the first half of the pulse. Here we propose two methods for switching the sign of ∆_j, called the switch method and the wait method. In the switch method, the direction of the laser (i.e. the sign of k) is reversed between the two pulses. Note that the switch method works regardless of the relative phase between the two laser beams, because the Rydberg population in between the two pulse halves vanishes [see Fig. 4(d)]. The wait method instead makes use of the fact that the atom is confined in a potential that is approximately harmonic, so that the velocity along the laser propagation direction is periodic with trap frequency ω_tr. By waiting for a time π/ω_tr between the two pulses, the sign of v, and thus the sign of ∆_j, is reversed. Since the Rydberg state is not populated between the two pulse halves, no additional decay error arises during this wait time, even if ω_tr ≪ γ. The wait method makes several implicit assumptions on the motion of the atoms. First, we assume that the propagation direction of the laser is along one of the normal modes of the trap. Second, we assume that the coherence time of the atomic motion is much longer than one motional period [55]. Finally, we assume that the atomic temperature is low enough for the trap anharmonicity to be negligible.
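The velocity reversal exploited by the wait method is just the half-period antisymmetry of harmonic motion, v(t + π/ω_tr) = −v(t). A minimal sketch (our illustration, with hypothetical trajectory parameters):

```python
import numpy as np

def doppler_shift(t, k, amp, omega_tr, phase):
    """Doppler shift k*v(t) for classical harmonic motion
    x(t) = amp * cos(omega_tr * t + phase), so v(t) = -amp*omega_tr*sin(...)."""
    return -k * amp * omega_tr * np.sin(omega_tr * t + phase)
```

Because sin(x + π) = −sin(x), waiting half a trap period reverses the Doppler shift exactly, independent of the (unknown) motional phase of the atom.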
In Appendix B 1 we estimate the impact of a finite trap anharmonicity and show that for achievable experimental parameters it does not significantly affect the gate performance. Both the switch and the wait method require that the velocity of the atoms is approximately constant during each pulse. The acceleration of the atoms during the pulses is thus a source of error. For the switch method this error can be avoided by abruptly turning off the trapping potential during the pulses, which is already common practice in many experiments to avoid differential light shifts and anti-trapping of the Rydberg state [21]. However, this approach is unsuitable for the wait method, since it affects the velocity reversal; it is also undesirable because it heats the atoms and may prevent the execution of deep circuits with many gates. To mitigate these disadvantages, we propose to modulate the trapping potential sinusoidally in time and to apply the pulses at times where the potential is zero (Appendix C). This gives rise to an approximately constant velocity of the atoms during the pulses, while also eliminating the differential light shift and the heating caused by square-wave modulation [56]. For the wait method, we show that it is possible to apply the two pulse halves at two different times of vanishing potential such that the velocity is reversed between the two pulses. Note that fast sinusoidal trap modulation was demonstrated experimentally in an optical tweezer for the purpose of eliminating light shifts in a cavity QED experiment [56]. For the remainder of this work we assume that the trap modulation is applied for both the switch and the wait method; in Appendix B 2 we discuss the two methods without the trap modulation.
In the last two subsections we have shown how reversing the Doppler shift in the middle of the gate allows us to find a pulse that is robust against errors arising from Doppler shifts, and a pulse which is robust against both amplitude deviations and Doppler shifts. We demonstrated that the infidelity arising from Doppler shifts is reduced by several orders of magnitude by the robust pulses, and provided two methods to switch the sign of the Doppler shift.

V. INFIDELITIES IN A REALISTIC ERROR MODEL

To assess the performance of these pulses in a realistic experiment, we now include the decay of the Rydberg state and atomic motion in a harmonic trap. For this we assume random and uncorrelated initial velocities and positions for both atoms, drawn from normal distributions with standard deviations √(k_B T/m) and √(k_B T/(mω_tr²)), respectively, where T is the temperature and m is the mass of the atoms. We then assume that the atoms follow classical trajectories in the harmonic trap, which we incorporate as a modified Rabi frequency Ω̃(t) = e^{−ikx(t)} Ω(t). In the case of the DR and ADR pulses, we simulate both the switch and the wait method of reversing the detuning between the two pulse halves, applying the modulation of the trapping potential as described in Appendix C for both methods. We sample the laser amplitude error ε from a normal distribution with standard deviation σ_ε. For simplicity we set ε_1 = ε_2 = ε, but note that the same robustness is achieved if ε_1 ≠ ε_2, since the AR and ADR pulses are robust against ε_1 and ε_2 independently. For the Rydberg excitation, we consider parameters recently proposed for metastable ¹⁷¹Yb qubits using a single-photon excitation to the |75 ³S₁, F = 3/2⟩ Rydberg state [37], although we note that these are broadly similar to proposed or achieved values for other alkaline-earth atoms such as Sr [16,41] and ground-state ¹⁷¹Yb qubits [17,57].
The specific numerical values considered here are: Ω_max = 2π × 5.5 MHz, 2π/k = 302 nm, 1/γ = 100 µs, ω_tr = 2π × 50 kHz and m = 171 u. The blockade strength is B ∼ 5 THz·µm⁶/r⁶, so that B ≫ Ω_max for realistic values of r in the range of 3-6 µm [17]. In the following calculations we assume a perfect Rydberg blockade. Similar parameters can be obtained for alkali atoms, but we note that two-photon excitation typically reduces the wavevector associated with Doppler shifts, at the expense of an additional decay error from the decay of the intermediate state. As the Stark-shift strength in metastable ¹⁷¹Yb qubits is unknown, we assume the value measured in ⁸⁸Sr qubits, 2πχ ≈ 10 kHz/MHz² [16]. For the Rabi frequency Ω_max this corresponds to ζ = 0.1. In the following we always use the Stark-shift robust variants of the AR and ADR pulses. Since we restrict ourselves to the ε_1 = ε_2 case, we use the SSR1 pulse as the Stark-shift robust variant of the AR pulse. The results are summarized in Fig. 5. We first consider the performance with only amplitude or Doppler errors. The infidelity as a function of σ_ε with T = 0 is shown in Fig. 5(a). For small values of σ_ε the decay of the Rydberg state is the dominant error, so the fidelities depend only on the time spent in the Rydberg state. Ordered by increasing time spent in the Rydberg state (in our case identical to increasing pulse duration), the pulses are TO, AR, DR, ADR. In contrast, as σ_ε increases, the infidelity of the AR and ADR pulses stays almost constant, while the infidelity of the TO and DR pulses increases quadratically. At σ_ε ≈ 0.010 the AR pulse becomes favorable compared to the TO pulse; at σ_ε ≈ 0.026 the ADR pulse becomes favorable compared to the TO pulse. The AR pulse outperforms the ADR pulse because, while both pulses are robust to deviations of the laser amplitude, the AR pulse spends less time in the Rydberg state.
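With the quoted numbers, the thermal spreads entering this error model can be estimated directly (our back-of-the-envelope sketch: at T = 10 µK the rms Doppler shift comes out to roughly 1.3% of Ω_max, and the rms position spread to roughly 70 nm):

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
U_AMU = 1.66053907e-27   # atomic mass unit, kg

def thermal_spreads(T, m_amu, wavelength, omega_tr):
    """RMS Doppler shift (rad/s) and rms position spread (m) of a thermal
    atom of mass m_amu in a harmonic trap of frequency omega_tr."""
    k = 2 * np.pi / wavelength
    sigma_v = np.sqrt(KB * T / (m_amu * U_AMU))  # thermal velocity spread
    return k * sigma_v, sigma_v / omega_tr
```

This is why elevated temperatures quickly dominate the error budget of the non-robust pulses: a percent-level random detuning (in units of Ω_max) per shot is far above the 10⁻⁴ infidelity target.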
Next we consider the performance as a function of T, in the absence of amplitude errors (σ_ε = 0) [Fig. 5(b)]. At low temperatures the pulses are again ordered by the time they spend in the Rydberg state. With increasing temperature, however, the infidelity of the DR and ADR pulses stays almost constant, while the infidelity of the TO and AR pulses increases linearly with T (quadratically with ∆). For T ≳ 6 µK the DR pulse outperforms the TO pulse; for T ≳ 28 µK the ADR pulse outperforms the TO pulse. These results do not depend on whether the switch or the wait method is used. We note that the infidelity of the TO pulse at elevated temperatures is roughly consistent with previous estimates for various non-robust blockade gates [21,41]. We remark that with increasing temperature, other imperfections not considered here, such as the anharmonicity of the trap, may become increasingly relevant (see Appendix B 1 for a discussion). Finally, we consider the infidelity in the presence of both amplitude and Doppler errors. Fig. 5(c) shows the infidelity of all four pulses over a range of imperfections. Additionally, the region in which each pulse has the lowest infidelity out of the four considered pulses is marked. The results are shown for the wait method; we verified that the switch method gives identical results. As expected, the TO pulse performs best when all imperfections are small, while the ADR pulse is the best pulse for large amplitude uncertainties and large temperatures. The AR and DR pulses are the best choice when either the amplitude uncertainty or the temperature is large while the other imperfection is small.

VI. CONDITIONAL INFIDELITY AND LOGICAL ERROR RATE

In the context of FTQC, both the error probability and the type of error are important.
Recently, it was proposed that using the metastable state in ¹⁷¹Yb to encode qubits ensures that the vast majority of Rydberg decay errors result in transitions out of the computational subspace that can be efficiently detected [37]. This converts decay errors into erasure errors, for which the FTQC threshold is much higher. Maintaining this advantage in the presence of experimental imperfections requires that the fraction of errors converted into erasures, R_e, is close to unity. From the fact that a small fraction of decays of the Rydberg state does lead back to the computational subspace, Ref. [37] estimated that R_e = 0.98 for the case of spontaneous decays from the Rydberg state, which is the only fundamental limitation to the fidelity of multi-qubit Rydberg gates. In this case, the estimated XZZX surface-code threshold can be as high as p_th = 4.15%, compared to p_th = 0.93% for a comparable Pauli error model [37]. Similar estimates have been made for other erasure-biased qubits [38]. To understand the error decomposition of our robust pulses, we start by assuming that all Rydberg decay errors lead to transitions outside of the computational subspace, and compute the conditional fidelity F_c, i.e. the fidelity conditioned on the final state being in the computational subspace. The conditional infidelity 1 − F_c is shown as a function of the amplitude uncertainty σ_ε at T = 0 in Fig. 6(a) and as a function of T at σ_ε = 0 in Fig. 6(b), assuming the same experimental parameters as in Sec. V. The robust pulses always outperform the non-robust pulses by several orders of magnitude in the conditional infidelity. For the rest of this work we show the results from the wait method, but have verified that they do not change significantly when using the switch method instead.
To quantify the logical error rate achievable with the robust pulses for metastable ¹⁷¹Yb qubits, we now include the fact that not all, but only a fraction r = 0.98, of the decay errors are converted into erasures [37]. To calculate the logical error rate we assume, analogously to Ref. [37], a Pauli error channel in which an erasure error occurs with probability p_e and a random Pauli error occurs with probability p_p. We identify p_e and p_p from the probability p_d of a decay error and the conditional infidelity 1 − F_c in the exact error model as p_e = r p_d and p_p ≈ (1 − r)p_d + (1 − p_d)(1 − F_c). Then a fraction R_e = p_e/(p_e + p_p) of all errors is converted into erasures. For the pulses proposed here, we compute a quantity related to R_e, the erasure bias η_e = 1/(1 − R_e). A larger erasure bias implies a larger fraction of erasure errors, and thus a higher threshold error probability. In the ε = ∆ = 0 case the erasure bias is given by η_e = 50, the predicted maximum value for ¹⁷¹Yb [37]. Inspecting Fig. 6(c), it is clear that the TO pulse only achieves a large value of η_e for very small temperatures and amplitude uncertainties, and that η_e drops rapidly to ∼1 in the presence of significant amplitude or Doppler errors. The ADR pulse instead maintains η_e ≈ 50 over the whole considered parameter range, with σ_ε up to 0.05 and T up to 50 µK. The DR pulse achieves η_e ≈ 50 as long as σ_ε is small, but η_e drops rapidly as σ_ε increases, while the AR pulse shows the opposite behavior. Using the total infidelity and η_e, we can estimate the logical error rate for a given error-correcting code. For concreteness, we consider the d = 5 XZZX surface code, for which the logical error rate after a single round of fault-tolerant error correction, p_L, is presented in Ref. [37] for a range of physical error rates p = 1 − F and erasure biases η_e. The estimated value of p_L is shown in Fig. 6(d) for the four pulses studied here.
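The bookkeeping from decay probability and conditional infidelity to erasure bias can be written out explicitly (our sketch of the decomposition described in the text, taking the erasure fraction over the total error p_e + p_p):

```python
def erasure_decomposition(p_d, inf_c, r=0.98):
    """Split the gate error into an erasure part and a Pauli part:
    p_e = r*p_d, p_p = (1-r)*p_d + (1-p_d)*inf_c, and return
    (p_e, p_p, eta_e) with eta_e = 1/(1 - R_e), R_e = p_e/(p_e + p_p)."""
    p_e = r * p_d
    p_p = (1 - r) * p_d + (1 - p_d) * inf_c
    r_e = p_e / (p_e + p_p)
    return p_e, p_p, 1.0 / (1.0 - r_e)
```

With a vanishing conditional infidelity this reproduces η_e = 1/(1 − r) = 50, the maximum quoted for ¹⁷¹Yb; any residual conditional infidelity adds undetected Pauli errors and pulls η_e down, which is why maintaining F_c under imperfections matters as much as the total fidelity.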
Remarkably, the ADR gate outperforms the other three gates by many orders of magnitude unless either σ_ε or T is very small. This is in contrast to Fig. 5(c), which shows that the total infidelity of the ADR gate is higher than that of the other three sequences until σ_ε or T reach modest values. This illustrates a fundamental tradeoff that is one of the central results of this paper: the larger infidelity associated with the longer ADR sequence is more than compensated by the increased η_e.

VII. CONDITIONALLY ROBUST PULSES

In Sec. VI we saw that not only the infidelity, but also the conditional infidelity, plays an important role for the logical error rate. It is thus interesting to design pulses that are not robust, in the sense that the final state is to first order insensitive to imperfections, but only conditionally robust, in the sense that only the projection of the final state onto the computational subspace is to first order insensitive to imperfections. The advantage of these conditionally robust pulses is that they can be shorter and less susceptible to decay errors than their unconditional counterparts, since they have fewer robustness constraints to satisfy. Such conditionally robust pulses can be found with GRAPE by replacing the first-order error |ψ^(1)_q⟩ in the cost function by its component along |q⟩, i.e. by using the cost function

J = 1 − F + Σ_q |⟨q|ψ^(1)_q⟩|²   (10)

for conditionally amplitude-robust pulses, and simply J = 1 − F for conditionally Doppler-robust pulses. In Fig. 7(a) we show the fastest possible conditionally amplitude- and Doppler-robust (CADR) pulse. This pulse spends only a time τ_R = 6.61/Ω_max in the Rydberg state, less than half the time of the ADR pulse and only 2.2 times longer than the TO pulse. Surprisingly, the laser phase ϕ(t) has a simpler shape than for the ADR pulse [see Fig. 4(b)], simplifying the experimental implementation. The logical error rate of the CADR pulse as a function of σ_ε and T is shown in Fig. 7(b).
FIG. 7. a) The laser phase as a function of time for the conditionally amplitude- and Doppler-robust (CADR) pulse (brown, solid line). The laser amplitude is maximal whenever the laser is turned on (marked by the gray areas) and zero in between the two halves of the pulse. The Doppler shift kv (purple, dashed line) has to switch sign between the two pulses. b) The logical error rate of the CADR pulse. The encircled region shows the range of imperfections where the CADR pulse outperforms the TO, AR, DR and ADR pulses. c) The logical error rate of the TO, ADR and CADR pulses along the dashed red line in panel b). Below 10 µK the CADR pulse has a lower logical error rate than the ADR pulse; the improvement is up to one order of magnitude.

We observe that there is a window of moderate temperatures up to 10 µK and moderate amplitude uncertainties up to 2% where the CADR pulse outperforms the TO, AR, DR and ADR pulses. For very low imperfections the TO, AR or DR pulses are better than the CADR pulse because they spend even less time in the Rydberg state; for large imperfections the ADR pulse is better because the logical error rate depends on the conditional as well as the unconditional fidelity, and for large enough imperfections the unconditional infidelity of the CADR pulse is larger than that of the ADR pulse. The logical error rate along a diagonal cut [red, dashed line in Fig. 7(b)] is shown in Fig. 7(c). We observe that the CADR pulse can improve the logical error rate by up to an order of magnitude over the ADR pulse. We note that the same approach can be used to produce conditionally amplitude-robust or conditionally Doppler-robust pulses. However, these pulses only improve the logical error rate over a vanishingly small region in parameter space.

VIII.
CONCLUSION We have presented several new laser pulses that implement a CZ gate using a global laser and are robust against amplitude deviations of the laser and Doppler shifts, where the latter is achieved by reversing the sign of the Doppler shift between two halves of the pulse. Our robust pulses strongly suppress errors from amplitude deviations and Doppler shifts, at the cost of a slightly larger error due to decay of the Rydberg state. Additionally we estimated the logical qubit performance in the context of the erasure-biased metastable 171 Yb qubit, and found that two of the new pulses (ADR and CADR) outperform all other pulses unless the imperfections are very small, because they maintain the erasure bias even in the presence of imperfections. Robust pulses enable significant gains from quantum error correction even for significantly elevated temperatures and amplitude deviations. This work significantly relaxes the technical requirements for FTQC with neutral atoms [21] by extending the erasure conversion concept to amplitude and Doppler shift errors [37]. The most important of these is the constraint to have near-ground-state atomic temperatures: while this level of cooling has been achieved for a number of neutral atom qubit species [57][58][59][60][61], it is a fundamental challenge to maintain these temperatures over long sequences of gates or atom transport operations. We note that re-cooling after transport is a significant overhead in trapped ion CCD architectures [62,63], and that sympathetic cooling is not straightforward in neutral atoms [64]. Besides laser amplitude inhomogeneities, Doppler shifts, and decay of the Rydberg state, an experimental realization of the proposed gates will be affected by error sources not included in our analysis, including time dependent parameter fluctuations, such as laser phase noise, and uncertainties in the applied pulse shape. 
While our pulses are not explicitly robust to those errors, there is no indication that they are significantly more susceptible to them than previous approaches such as the TO pulse. Including additional error sources in the design of robust pulses will be the subject of further investigation. Additionally, the presence of a finite blockade strength [40] and the effect of extra near-resonant levels, such as additional hyperfine states in ¹⁷¹Yb [17], can be included in the pulse design using the optimal control techniques applied in this work. Our work allows experimental efforts to be concentrated on error sources against which our protocols are not robust, rather than on those against which they are.

ACKNOWLEDGMENTS

We are grateful to Shannon Whitlock, Hannes Pichler, Shuo Ma, Genyue Liu, Alex Burgers and Bichen Zhang for discussions. This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement number 955479, the Horizon Europe programme HORIZON-CL4-2021-DIGITAL-EMERGING-01-30 via the project "EuRyQa - European infrastructure for Rydberg Quantum Computing", grant agreement number 101070144, and from a state grant managed by the French National Research Agency under the Investments of the Future Program with the reference ANR-21-ESRE-0032. G. P. acknowledges support from the Institut Universitaire de France (IUF) and the University of Strasbourg Institute of Advanced Studies (USIAS). J.D.T acknowledges support from an ARO PECASE (W911NF-18-10215) and the ONR (N00014-20-1-2426).

Note added - While finalizing this work we became aware of related work in Ref. [65].

Appendix A: Pulse Most Robust against Detuning Errors

In Sec. III B we showed that there exists no pulse Ω(t) such that the quantum state after the pulse is to first order insensitive to the detunings ∆_1 and ∆_2.
In this appendix we give a pulse which is nevertheless as robust as possible. For this we assume ∆_1 = ∆_2 = ∆, which is the case when the detuning error arises from frequency noise of the laser and not from Doppler shifts. The fidelity of the pulse can be expanded as F = F^(0) + ∆F^(1) + ∆²F^(2) + O(∆³), with F^(0) given by Eq. (5) and

F^(1) = (√F^(0)/2) Σ_{q∈{10,01,11}} Re[e^{−iθ_q}⟨q|ψ^(1)_q⟩]   (A1)

F^(2) = (1/16) |Σ_{q∈{10,01,11}} e^{−iθ_q}⟨q|ψ^(1)_q⟩|² + (√F^(0)/2) Σ_{q∈{10,01,11}} Re[e^{−iθ_q}⟨q|ψ^(2)_q⟩].   (A2)

We now consider a pulse with F^(0) = 1, i.e. |ψ^(0)_q⟩ = e^{iθ_q}|q⟩. By normalization of |ψ_q⟩ it must hold that Re(e^{−iθ_q}⟨q|ψ^(1)_q⟩) = 0 and Re(e^{−iθ_q}⟨q|ψ^(2)_q⟩) = −⟨ψ^(1)_q|ψ^(1)_q⟩/2, so that

F^(2) = (1/16) |Σ_q e^{−iθ_q}⟨q|ψ^(1)_q⟩|² − (1/4) Σ_q ⟨ψ^(1)_q|ψ^(1)_q⟩.   (A3)

Our goal is now to find the pulse which minimizes −F^(2) while satisfying F^(0) = 1. As a reference, for the time-optimal pulse −F^(2) = 3.45/Ω²_max. We now insert the relation e^{−iθ_q}⟨q|ψ^(1)_q⟩ = −iτ^R_q [see Eq. (7)] into Eq. (A3) and use the fact that, because ∆_1 = ∆_2, |ψ_10⟩ and |ψ_01⟩ are identical up to a relabeling of the states. We obtain −F^(2) = −F^(2)_c − F^(2)_r with

−F^(2)_c = (1/16) [4(τ^R_10)² + 3(τ^R_11)² − 4 τ^R_10 τ^R_11]   (A4)

−F^(2)_r = (1/2) |⟨r0|ψ^(1)_10⟩|² + (1/4) |⟨W_+|ψ^(1)_11⟩|².   (A5)

Here, −F^(2)_c depends only on the times τ^R_q spent in the Rydberg state, while −F^(2)_r captures the leakage out of the computational subspace. We first minimize −F^(2)_c using the GRAPE algorithm with the cost function J = C(1 − F^(0)) − F^(2)_c for a large C = 10⁴. The large value of C ensures that the pulse minimizing J has F^(0) ≈ 1, while the second term in J ensures that the pulse minimizes −F^(2)_c. We find that the minimal value is −F^(2)_c = 2.87/Ω²_max, for a pulse Ω* with duration τ* = 7.70/Ω_max. The pulse Ω* is shown in the shaded area in Fig. 8. With the pulse Ω* the value of −F^(2)_c decreases by 17% compared to the TO pulse. Now we show the following: using the pulse Ω* as a building block, we can construct a new pulse Ω(t) with −F^(2)_r = 0, while still −F^(2)_c = 2.87/Ω²_max, the same value as for Ω*. The amplitude and phase of Ω(t) are shown schematically in Fig. 8.
The pulse is described by 5 parameters τ_on, τ_1, τ_2, φ_1 and φ_2 and consists of seven parts:

\Omega(t) =
\begin{cases}
\Omega_{\max} e^{i\varphi_1} & t \in [0, t_1] \\
0 & t \in (t_1, t_2] \\
\Omega_{\max} e^{i(\varphi_1+\pi)} & t \in (t_2, t_3] \\
\Omega_*(t - t_3) & t \in (t_3, t_4] \\
\Omega_{\max} e^{i\varphi_2} & t \in (t_4, t_5] \\
0 & t \in (t_5, t_6] \\
\Omega_{\max} e^{i(\varphi_2+\pi)} & t \in (t_6, t_7]
\end{cases} \qquad (A6)

with t_1 = τ_on, t_2 − t_1 = τ_1, t_3 − t_2 = τ_on, t_4 − t_3 = τ_*, t_5 − t_4 = τ_on, t_6 − t_5 = τ_2 and t_7 − t_6 = τ_on. The pulse Ω(t) starts by turning on the laser for a time τ_on with phase φ_1, followed by an idle time of τ_1 and another pulse of duration τ_on, but this time with opposite sign of Ω, i.e. with phase φ_1 + π. After these first three parts the pulse Ω_* is applied, followed by the last three parts, which have the same structure as the first three parts, but with laser phase φ_2 and idle time τ_2. The pulse Ω(t) is designed such that for ∆ = 0 it implements the same gate as Ω_*(t), because the second and the sixth part of Ω(t) have no effect and the first and third part, as well as the fifth and seventh part, cancel each other. In the following we consider the limit τ_on → 0, while keeping τ_on τ_j constant (for j = 1, 2).

We calculate how the pulse Ω(t) acts on the relevant computational basis states |10⟩, |01⟩ and |11⟩, starting with |10⟩. We start by calculating the zeroth order contribution of the state, |ψ^(0)_10⟩. Outside of the short parts of the pulse with duration τ_on it is given by

|\psi^{(0)}_{10}(t)\rangle =
\begin{cases}
|10\rangle - i e^{-i\varphi_1}\,(\Omega_{\max}\tau_{\mathrm{on}}/2)\,|r0\rangle & t \in [t_1, t_2] \\
|\psi^{(0)}_{10,*}(t - t_3)\rangle & t \in [t_3, t_4] \\
e^{i\theta_{10}}\left(|10\rangle - i e^{-i\varphi_2}\,(\Omega_{\max}\tau_{\mathrm{on}}/2)\,|r0\rangle\right) & t \in [t_5, t_6]
\end{cases} \qquad (A7)

where |ψ_{10,*}⟩ denotes the state when executing the pulse Ω_*(t). Note that because we work in the limit τ_on → 0, there is no population in the Rydberg state except while the pulse Ω_* is executed, so that ⟨10|ψ^(1)_10(t_7)⟩ = ⟨10|ψ^(1)_{10,*}(τ_*)⟩, i.e. τ^R_10 and hence −F^(2)_c are unchanged compared to Ω_*. To calculate the error along |r0⟩, ⟨r0|ψ^(1)_10(t_7)⟩, we note that the pulse Ω_* maps the state |r0⟩ to e^{−iθ_10}|r0⟩ in the ∆ = 0 case.
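The seven-part waveform of Eq. (A6) is straightforward to generate programmatically. The sketch below uses placeholder parameter values and a trivial stand-in for the central pulse Ω_*; it only illustrates the segment structure and the sign cancellation between parts one/three and five/seven used in the ∆ = 0 argument.

```python
import cmath
import math

def composite_pulse(omega_max, tau_on, tau_1, tau_2, tau_star,
                    phi_1, phi_2, omega_star):
    """Return Omega(t) implementing the seven-part pulse of Eq. (A6).
    omega_star is a callable giving the central pulse Omega_*(s) on [0, tau_star]."""
    # segment boundaries t_1 ... t_7
    t = [tau_on]
    for d in (tau_1, tau_on, tau_star, tau_on, tau_2, tau_on):
        t.append(t[-1] + d)

    def omega(time):
        if time <= t[0]:                 # part 1: phase phi_1
            return omega_max * cmath.exp(1j * phi_1)
        if time <= t[1]:                 # part 2: idle
            return 0.0
        if time <= t[2]:                 # part 3: phase phi_1 + pi
            return omega_max * cmath.exp(1j * (phi_1 + math.pi))
        if time <= t[3]:                 # part 4: central pulse Omega_*
            return omega_star(time - t[2])
        if time <= t[4]:                 # part 5: phase phi_2
            return omega_max * cmath.exp(1j * phi_2)
        if time <= t[5]:                 # part 6: idle
            return 0.0
        if time <= t[6]:                 # part 7: phase phi_2 + pi
            return omega_max * cmath.exp(1j * (phi_2 + math.pi))
        return 0.0

    return omega, t

# placeholder values for illustration only (not the optimized solution)
omega, t = composite_pulse(1.0, 0.01, 1.0, 1.0, 7.70, 2.21, -0.05,
                           lambda s: 1.0)
```

Parts one and three (and five and seven) carry opposite signs, which is why they cancel for ∆ = 0 in the limit τ_on → 0.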
This is due to the symmetry between |10⟩ and |r0⟩ in the Hamiltonian (1). This fact together with Eq. (A7) gives

\langle r0|\psi^{(1)}_{10}(t_7)\rangle = -e^{i(-\theta_{10}-\varphi_1)}\,\tau_1\tau_{\mathrm{on}}\Omega_{\max}/2 + \langle r0|\psi^{(1)}_{10,*}(\tau_*)\rangle - e^{i(\theta_{10}-\varphi_2)}\,\tau_2\tau_{\mathrm{on}}\Omega_{\max}/2 \qquad (A8)

Analogously we find when starting in |11⟩ that

\langle W_+|\psi^{(1)}_{11}(t_7)\rangle = -\sqrt{2}\,e^{i(-\theta_{11}-\varphi_1)}\,\tau_1\tau_{\mathrm{on}}\Omega_{\max}/2 + \langle W_+|\psi^{(1)}_{11,*}(\tau_*)\rangle - \sqrt{2}\,e^{i(\theta_{11}-\varphi_2)}\,\tau_2\tau_{\mathrm{on}}\Omega_{\max}/2 \qquad (A9)

The pulse Ω(t) thus satisfies −F^(2)_r = 0 if

\frac{1}{2}\begin{pmatrix} e^{-i\theta_{10}} & e^{i\theta_{10}} \\ \sqrt{2}\,e^{-i\theta_{11}} & \sqrt{2}\,e^{i\theta_{11}} \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} \langle r0|\psi^{(1)}_{10,*}(\tau_*)\rangle \\ \langle W_+|\psi^{(1)}_{11,*}(\tau_*)\rangle \end{pmatrix} \qquad (A10)

with ξ_j = e^{−iφ_j} τ_j τ_on Ω_max. Now the ξ_j, and thus the τ_j and φ_j, can be found by simply solving the linear system of equations (A10). We find the solutions φ_1 = 2.21, φ_2 = −0.05 and τ_1 = τ_2 = 1.01/(τ_on Ω_max²). We numerically verified that in the limit τ_on → 0 the pulse Ω(t) indeed achieves −F^(2) = 2.87/Ω_max².

Appendix B: Comparing Switch and Wait Method of Doppler Shift Reversal

In Secs. V and VI we saw that the switch and the wait method give identical fidelities and logical error rates. In this appendix we discuss two aspects in which the performance of the switch and the wait method differs. We start in Sec. B 1 by discussing the effects of trap inhomogeneities and anharmonicities, which only affect the wait method. In Sec. B 2 we then show that without the trap modulation the performance of the wait method is unaffected, while the performance of the switch method decreases significantly. We conclude with a comparison between the switch and the wait method in Sec. B 3.

1. Robustness to Trap Inhomogeneities and Anharmonicity

The wait method is sensitive to several non-ideal characteristics that can arise in practice. The first is an imprecise knowledge of the trap frequency, or a difference in frequency across multiple traps. This gives a contribution to the infidelity scaling as T σ_ω², where σ_ω is the standard deviation of the trap frequency ω_tr.
We find that the induced errors are almost completely in the computational subspace, so that the contribution to the conditional infidelity is the same as to the infidelity. In order to keep the relative impact on the infidelity at T = 50 µK below 10%, we require σ_ω/ω_tr < 0.06 (0.04) for the DR (ADR) pulse. However, to maintain an erasure bias above 45 [90% of the value shown in Fig. 6(c)], the trap frequencies must be stabilized to σ_ω/ω_tr < 0.01 (0.005). We note that achieving a 1% frequency uniformity requires only 2% intensity uniformity (assuming equal beam sizes), a number which has been experimentally demonstrated in large-scale tweezer arrays [66]. The wait method is also sensitive to the anharmonicity of the trap, which naturally arises from the Gaussian shape of the optical tweezer and gives rise to a temperature-dependent trap frequency. As before, there is a similar contribution to the infidelity and conditional infidelity, which scales as T³/U_0², where U_0 is the tweezer depth. Considering an optical tweezer with a 1/e² intensity radius w_0 = 500 nm, the trap frequency ω_tr = 2π × 50 kHz assumed in Secs. V and VI is consistent with a tweezer depth of U_0 = 127 µK, for which the anharmonicity will affect the erasure bias in Fig. 6(c) significantly, reducing it to approximately η_e = 27 (17) at T = 10 µK and η_e = 3.0 (1.8) at T = 30 µK for the DR (ADR) pulse. However, the conditional infidelity improves as 1/U_0², allowing for rapid improvement in deeper traps. For example, setting U_0 = 2 mK can recover η_e ≈ 40 for the ADR pulse at temperatures up to 30 µK. At fixed w_0, this will increase the trap frequency to ω_tr = 2π × 200 kHz, which we have verified does not significantly affect the other results presented. The effect is even smaller for lighter atoms or larger w_0.
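The quoted numbers can be checked with the standard harmonic expansion of a Gaussian tweezer, U(r) = −U_0 exp(−2r²/w_0²), which gives a radial frequency ω_tr = √(4U_0/(m w_0²)). The sketch below assumes 171Yb as the atomic species (consistent with the platform of Ref. [17]); that choice is not stated explicitly in this passage.

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
amu = 1.66053906660e-27  # atomic mass unit, kg
m = 171 * amu            # mass of 171Yb (assumed species)
w0 = 500e-9              # 1/e^2 intensity radius of the tweezer, m

def radial_trap_freq(U0_uK):
    """Radial trap frequency (Hz) of the harmonic expansion of a Gaussian
    tweezer of depth U0 (given in micro-Kelvin): omega = sqrt(4 U0 / (m w0^2))."""
    U0 = U0_uK * 1e-6 * kB
    return math.sqrt(4.0 * U0 / (m * w0 ** 2)) / (2.0 * math.pi)

f_shallow = radial_trap_freq(127.0)   # quoted depth of 127 uK -> ~50 kHz
f_deep = radial_trap_freq(2000.0)     # deeper 2 mK trap -> ~200 kHz
```

Since ω_tr ∝ √U_0 ∝ √I at fixed w_0, a 2% intensity uniformity indeed translates into a 1% frequency uniformity, as stated above.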
Note that trap inhomogeneities and anharmonicity affect the wait method regardless of the use of the trap modulation, while the switch method is unaffected by these imperfections.

2. Performance without Trap Modulation

In Fig. 9(a),(b) we show the infidelity and the conditional infidelity without the trap modulation. We observe that for the wait method the infidelity and conditional infidelity are essentially identical to the results including the trap modulation [see Figs. 5(b) and 5(c)]. In contrast, for the switch method without the trap modulation the ADR pulse has a slightly worse fidelity, and the conditional infidelity for the DR and ADR pulses increases by one to two orders of magnitude. We conclude that the switch method is significantly more sensitive to changes in the velocity during each pulse than the wait method. We attribute this to the fact that, for a time-dependent velocity, the wait method gives a more exact reversal of the Doppler shift than the switch method: in the switch method only the Doppler shift at the end of the first pulse is the negative of the Doppler shift at the beginning of the second pulse, while in the wait method the Doppler shift at each point in the first half is the negative of the Doppler shift at the same point in the second half. In Fig. 9(c) the logical error rate of the switch method without the trap modulation is shown. The error rate achievable at 50 µK using the ADR pulse increases by approximately one order of magnitude to 10⁻⁴ compared to the use of the trap modulation. Again, the wait method performs identically regardless of whether the trap modulation is applied (not shown). Note that it is still favourable to apply the trap modulation for the wait method, because it mitigates differential light shifts and the anti-trapping of the Rydberg state, which are not considered above.

3. Comparison between Switch and Wait Method

Combining the results from Secs.
B 1 and B 2, we can summarize the advantages and disadvantages of both methods. The switch method requires a more elaborate experimental setup than the wait method, because the laser direction has to be switched. Additionally, its performance is worse than that of the wait method if the trap modulation is not applied. However, the switch method is more robust to trap inhomogeneities and anharmonicity. On the other hand, the wait method can be implemented with just a single laser beam, and achieves the same performance regardless of whether the trap modulation is applied (neglecting errors from differential light shifts). However, it is affected by trap inhomogeneities and anharmonicity, whose effect can be mitigated by increasing the trap frequency and depth.

Appendix C: Modulation of the Trapping Potential

The wait method proposed in Sec. IV B requires a periodic motion of the atoms in the optical tweezer trap. While this can in principle be achieved simply by keeping the trapping potential constant in time, this approach induces differential light shifts between ground and Rydberg states, and can also lead to anti-trapping of the Rydberg state. In the following we provide a method to achieve a periodic motion of the atoms through a trapping potential which is sinusoidally modulated in time. The pulse halves can then be executed at times where the trap intensity vanishes, so that there is no differential light shift and no anti-trapping of the Rydberg state. Additionally, this method ensures that the velocity of the atoms, and thus the Doppler shift, is constant during each of the pulse halves. This is an improvement because the Doppler-robust pulses are only designed to be robust against a Doppler shift which is constant during each pulse half.

FIG. 10. We propose a sinusoidal modulation of the trapping potential V (blue, solid line) with frequency ν. The exemplary velocity of an atom moving in this potential is shown by the red, dash-dotted line. The two halves of the DR and ADR pulse are executed in adjacent time slots marked by the green, vertical lines. During the time slots the potential is zero, and the velocity is thus constant. The velocity switches sign between adjacent time slots.

We propose to modulate the potential V induced by the optical tweezers trapping the atoms sinusoidally in time with frequency ν, so that it is given by

V(t, x) = V_0 \left(1 - \cos(\nu t)\right) x^2 \qquad (C1)

The evolution of the atoms in the trap is thus governed by ẍ = −(2V_0/m)(1 − cos(νt)) x, which is a rescaled version of the Mathieu differential equation [67]. According to Floquet's theorem, the solutions are of the form x(t) = e^{iω_tr t} y(t) + c.c., where y is a 2π/ν-periodic function and 2ω_tr/ν is called the Mathieu characteristic exponent. While ω_tr can in general be complex, it has been shown that ω_tr ∈ ℝ for sufficiently large ν [68]. For example, in the limit ν → ∞ the potential V(t) can be replaced by its time average and we obtain simply ω_tr = √(2V_0/m). We now apply the two pulse halves that make up the DR and ADR pulse centered at times t_1 = 2πn_1/ν and t_2 = 2πn_2/ν, where n_1 and n_2 are integers. In this way V(t_1) = V(t_2) = 0, so the atoms move with a constant velocity and differential light shifts vanish. Our goal is now to find a modulation frequency ν such that a velocity reversal is achieved between t_1 and t_2. Since we require v(t_1) = −v(t_2), the relation (t_2 − t_1)ω_tr = π has to be satisfied, so ν = 2(n_2 − n_1)ω_tr. For a given value of V_0 and (n_2 − n_1) we can now numerically find ν by first finding the Mathieu characteristic exponent ω_tr. Real solutions for ν exist for n_2 − n_1 ≥ 2. To ensure that the duration of the time slots with almost constant velocity is as long as possible, we take n_2 − n_1 = 2 and find ν = 4.079 √(2V_0/m).
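The value ν = 4.079 √(2V_0/m) can be reproduced numerically without any Mathieu-function machinery: integrate the equation of motion over one modulation period and find the ν at which the monodromy matrix has zero trace, corresponding to a Floquet phase advance of π/2 per period, i.e. ω_tr = ν/4. A sketch in units where 2V_0/m = 1:

```python
import math

def monodromy(nu, k=1.0, steps=2000):
    """RK4-integrate x'' = -k (1 - cos(nu t)) x over one modulation period
    T = 2 pi / nu for two independent initial conditions; return the
    monodromy matrix entries (x1, v1, x2, v2)."""
    T = 2.0 * math.pi / nu
    h = T / steps

    def accel(t, x):
        return -k * (1.0 - math.cos(nu * t)) * x

    def propagate(x, v):
        t = 0.0
        for _ in range(steps):
            k1x, k1v = v, accel(t, x)
            k2x, k2v = v + 0.5 * h * k1v, accel(t + 0.5 * h, x + 0.5 * h * k1x)
            k3x, k3v = v + 0.5 * h * k2v, accel(t + 0.5 * h, x + 0.5 * h * k2x)
            k4x, k4v = v + h * k3v, accel(t + h, x + h * k3x)
            x += (h / 6.0) * (k1x + 2 * k2x + 2 * k3x + k4x)
            v += (h / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return x, v

    x1, v1 = propagate(1.0, 0.0)   # column for initial condition (x, v) = (1, 0)
    x2, v2 = propagate(0.0, 1.0)   # column for initial condition (x, v) = (0, 1)
    return x1, v1, x2, v2

def trace(nu):
    x1, _, _, v2 = monodromy(nu)
    return x1 + v2

# bisection for the root of the trace; trace(3.5) < 0 < trace(5.0)
lo, hi = 3.5, 5.0
f_lo = trace(lo)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    f_mid = trace(mid)
    if f_lo * f_mid <= 0.0:
        hi = mid
    else:
        lo, f_lo = mid, f_mid
nu_root = 0.5 * (lo + hi)          # in units of sqrt(2 V0 / m), since k = 1
```

The bracket [3.5, 5.0] and the claim that the trace changes sign there are assumptions of this sketch, chosen around the quoted solution; the Wronskian (determinant of the monodromy matrix) stays equal to one, which provides a consistency check on the integration.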
In our numerical calculations of the infidelity we took V_0 such that ω_tr = ν/4 stays at the value of 2π × 50 kHz that we assumed before the trap modulation. The maximum potential 2V_0 has to be roughly twice as much as the potential needed for an oscillation of the atoms at the same frequency without the trap modulation. The potential V(t) and the exemplary velocity v of an atom moving in this potential are shown in Fig. 10. The two pulse halves making up the Doppler-robust pulse are to be applied in two adjacent time slots, marked by the vertical green bars. As can be observed from the figure, the velocity is flat at these time slots and changes sign between any two adjacent time slots. Furthermore, the differential light shift is strongly suppressed over the duration of the pulse.

FIG. 2. Gate infidelity 1 − F in the absence of Rydberg decay. (a) Infidelity of the time-optimal (TO), amplitude-robust (AR), Doppler-robust (DR) and amplitude- and Doppler-robust (ADR) pulses as a function of ε = ε_1 = ε_2, with ∆_1 = ∆_2 = 0. (b) Infidelity of the same pulses as a function of ∆_1, with ε = 0 and ∆_2 = 0.

FIG. 3. (a) The Stark-shift-robust pulses at identical errors ε_1 = ε_2 (SSR1) for ζ = 0.0 (identical to the AR pulse), ζ = 0.1 and ζ = 1. (b) The Stark-shift-robust pulse at independent errors ε_1 ≠ ε_2 (SSR2) for ζ = 0.1. (c) The infidelity 1 − F for the AR pulse (solid lines) and the SSR1 pulses (triangles) as a function of ζ at ε_1 = ε_2 = 0.01. (d),(e) The infidelity as a function of ε_+ (ε_−) for the AR, SSR1 and SSR2 pulses at ζ = 0.1.

FIG. 5. The infidelity 1 − F of the TO, AR, DR and ADR pulses at different values of the amplitude uncertainty σ_ε and the atomic temperature T. (a) 1 − F as a function of σ_ε at T = 0. For the DR and ADR pulse, open symbols show the infidelity with the switch method, while filled symbols show the infidelity with the wait method. Sinusoidal modulation of the trap is applied in all cases. (b) 1 − F as a function of T at σ_ε = 0. (c) Color plot of 1 − F as a function of both σ_ε and T. For the DR and ADR pulse we use the wait method including the sinusoidal modulation of the trap. The encircled regions show the range of imperfections where each pulse performs best.

We find 1 − F_c ≪ 1 − F [see Fig. 5(a),(b) and Fig. 6(a),(b)], showing that errors are dominated by transitions out of the computational subspace. This is expected because the robust pulses effectively trade sensitivity to imperfections for Rydberg decay. Note that again the switch and the wait method give identical results in Figs. 6(a),(b).

FIG. 6. (a) Conditional infidelity 1 − F_c of the four pulses as a function of the amplitude uncertainty σ_ε at T = 0. For the DR and ADR pulse, open symbols show results with the switch method, while filled symbols show results with the wait method. For both methods the sinusoidal modulation of the trap is applied. (b) 1 − F_c as a function of T at σ_ε = 0. (c) The erasure bias η_e as a function of T and σ_ε (using the wait method including the trap modulation for the DR and ADR pulse), assuming η_e = 50 in the absence of imperfections. (d) The logical error rate p_L as a function of T and σ_ε. The encircled regions show the range of imperfections where each pulse performs the best.

FIG. 8. Schematic shape of the amplitude (blue, left vertical axis) and phase (red, right vertical axis) of the pulse achieving the highest robustness against the detuning of the laser. As explained in Appendix A, the pulse consists of 7 pieces of durations τ_on, τ_1, τ_on, τ_*, τ_on, τ_2 and τ_on, respectively. The shaded area shows the pulse minimizing −F^(2)_c, while the rest of the pulse compensates the errors in the Rydberg state.

FIG. 9. Performance of the switch and the wait method without the trap modulation. (a) Infidelity at σ_ε = 0. (b) Conditional infidelity at σ_ε = 0. (c) The logical error rate for the switch method. For the ADR pulse an increase of the logical error rate by approximately one order of magnitude is observed compared to the use of the trap modulation [Fig. 6(d)].

REFERENCES

[1] I. Bloch, J. Dalibard, and W. Zwerger, Many-body physics with ultracold gases, Reviews of Modern Physics 80, 885 (2008).
[2] M. Saffman, T. G. Walker, and K. Mølmer, Quantum information with Rydberg atoms, Reviews of Modern Physics 82, 2313 (2010).
[3] A. Browaeys and T. Lahaye, Many-body physics with individually controlled Rydberg atoms, Nature Physics 16, 132 (2020).
[4] L. Henriet, L. Beguin, A. Signoles, T. Lahaye, A. Browaeys, G.-O. Reymond, and C. Jurczak, Quantum computing with neutral atoms, Quantum 4, 327 (2020).
[5] M. Morgado and S. Whitlock, Quantum simulation and computing with Rydberg-interacting qubits, AVS Quantum Science 3, 023501 (2021).
[6] M. Endres, H. Bernien, A. Keesling, H. Levine, E. R. Anschuetz, A. Krajenbrink, C. Senko, V. Vuletic, M. Greiner, and M. D. Lukin, Atom-by-atom assembly of defect-free one-dimensional cold atom arrays, Science 354, 1024 (2016).
[7] D. Barredo, S. de Léséleuc, V. Lienhard, T. Lahaye, and A. Browaeys, An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays, Science 354, 1021 (2016).
[8] D. Ohl de Mello, D. Schäffner, J. Werkmann, T. Preuschoff, L. Kohfahl, M. Schlosser, and G. Birkl, Defect-Free Assembly of 2D Clusters of More Than 100 Single-Atom Quantum Systems, Physical Review Letters 122, 203601 (2019).
[9] D. Barredo, V. Lienhard, S. de Léséleuc, T. Lahaye, and A. Browaeys, Synthetic three-dimensional atomic structures assembled atom by atom, Nature 561, 79 (2018).
[10] M. Schlosser, S. Tichelmann, D. Schäffner, D. O. de Mello, M. Hambach, and G. Birkl, Large-scale multilayer architecture of single-atom arrays with individual addressability, arXiv:1902.05424 (2019).
[11] D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Côté, and M. D. Lukin, Fast Quantum Gates for Neutral Atoms, Physical Review Letters 85, 2208 (2000).
[12] L. Isenhower, M. Saffman, and K. Mølmer, Multibit C_k NOT quantum gates via Rydberg blockade, Quantum Information Processing 10, 755 (2011).
[13] T. Wilk, A. Gaëtan, C. Evellin, J. Wolters, Y. Miroshnychenko, P. Grangier, and A. Browaeys, Entanglement of Two Individual Neutral Atoms Using Rydberg Blockade, Physical Review Letters 104, 010502 (2010).
[14] H. Levine, A. Keesling, G. Semeghini, A. Omran, T. T. Wang, S. Ebadi, H. Bernien, M. Greiner, V. Vuletić, H. Pichler, and M. D. Lukin, Parallel Implementation of High-Fidelity Multiqubit Gates with Neutral Atoms, Physical Review Letters 123, 170503 (2019).
[15] T. M. Graham, M. Kwon, B. Grinkemeyer, Z. Marra, X. Jiang, M. T. Lichtman, Y. Sun, M. Ebert, and M. Saffman, Rydberg-Mediated Entanglement in a Two-Dimensional Neutral Atom Qubit Array, Physical Review Letters 123, 230501 (2019).
[16] I. S. Madjarov, J. P. Covey, A. L. Shaw, J. Choi, A. Kale, A. Cooper, H. Pichler, V. Schkolnik, J. R. Williams, and M. Endres, High-Fidelity Entanglement and Detection of Alkaline-Earth Rydberg Atoms, Nature Physics 16, 857 (2020).
[17] S. Ma, A. P. Burgers, G. Liu, J. Wilson, B. Zhang, and J. D. Thompson, Universal Gate Operations on Nuclear Spin Qubits in an Optical Tweezer Array of 171Yb Atoms, Physical Review X 12, 021028 (2022).
[18] N. Schine, A. W. Young, W. J. Eckner, M. J. Martin, and A. M. Kaufman, Long-lived Bell states in an array of optical clock qubits, Nature Physics 18, 1067 (2022).
[19] D. Bluvstein, H. Levine, G. Semeghini, T. T. Wang, S. Ebadi, M. Kalinowski, A. Keesling, N. Maskara, H. Pichler, M. Greiner, V. Vuletić, and M. D. Lukin, A quantum processor based on coherent transport of entangled atom arrays, Nature 604, 451 (2022).
[20] T. M. Graham, Y. Song, J. Scott, C. Poole, L. Phuttitarn, K. Jooya, P. Eichler, X. Jiang, A. Marra, B. Grinkemeyer, M. Kwon, M. Ebert, J. Cherek, M. T. Lichtman, M. Gillette, J. Gilbert, D. Bowman, T. Ballance, C. Campbell, E. D. Dahl, O. Crawford, N. S. Blunt, B. Rogers, T. Noel, and M. Saffman, Multi-qubit entanglement and algorithms on a neutral-atom quantum computer, Nature 604, 457 (2022).
[21] M. Saffman, Quantum computing with atomic qubits and Rydberg interactions: Progress and challenges, Journal of Physics B: Atomic, Molecular and Optical Physics 49, 202001 (2016).
[22] Z. Fu, P. Xu, Y. Sun, Y.-Y. Liu, X.-D. He, X. Li, M. Liu, R.-B. Li, J. Wang, L. Liu, and M.-S. Zhan, High-fidelity entanglement of neutral atoms via a Rydberg-mediated single-modulated-pulse controlled-phase gate, Physical Review A 105, 042430 (2022).
[23] S. de Léséleuc, D. Barredo, V. Lienhard, A. Browaeys, and T. Lahaye, Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states, Physical Review A 97, 053803 (2018).
[24] H. Levine, A. Keesling, A. Omran, H. Bernien, S. Schwartz, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, High-Fidelity Control and Entanglement of Rydberg-Atom Qubits, Physical Review Letters 121, 123603 (2018).
[25] L. Egan, D. M. Debroy, C. Noel, A. Risinger, D. Zhu, D. Biswas, M. Newman, M. Li, K. R. Brown, M. Cetina, and C. Monroe, Fault-tolerant control of an error-corrected qubit, Nature 598, 281 (2021).
[26] C. Ryan-Anderson, J. G. Bohnet, K. Lee, D. Gresh, A. Hankin, J. P. Gaebler, D. Francois, A. Chernoguzov, D. Lucchetti, N. C. Brown, T. M. Gatterman, S. K. Halit, K. Gilmore, J. A. Gerber, B. Neyenhuis, D. Hayes, and R. P. Stutz, Realization of Real-Time Fault-Tolerant Quantum Error Correction, Physical Review X 11, 041058 (2021).
[27] L. Postler, S. Heußen, I. Pogorelov, M. Rispler, T. Feldker, M. Meth, C. D. Marciniak, R. Stricker, M. Ringbauer, R. Blatt, P. Schindler, M. Müller, and T. Monz, Demonstration of fault-tolerant universal quantum gate operations, Nature 605, 675 (2022).
[28] M. H. Abobeih, Y. Wang, J. Randall, S. J. H. Loenen, C. E. Bradley, M. Markham, D. J. Twitchen, B. M. Terhal, and T. H. Taminiau, Fault-tolerant operation of a logical qubit in a diamond quantum processor, Nature 606, 884 (2022).
[29] S. Krinner, N. Lacroix, A. Remm, A. Di Paolo, E. Genois, C. Leroux, C. Hellings, S. Lazar, F. Swiadek, J. Herrmann, G. J. Norris, C. K. Andersen, M. Müller, A. Blais, C. Eichler, and A. Wallraff, Realizing repeated quantum error correction in a distance-three surface code, Nature 605, 669 (2022).
[30] Y. Zhao, Y. Ye, H.-L. Huang, Y. Zhang, D. Wu, H. Guan, Q. Zhu, Z. Wei, T. He, S. Cao, F. Chen, T.-H. Chung, H. Deng, D. Fan, M. Gong, C. Guo, S. Guo, L. Han, N. Li, S. Li, Y. Li, F. Liang, J. Lin, H. Qian, H. Rong, H. Su, L. Sun, S. Wang, Y. Wu, Y. Xu, C. Ying, J. Yu, C. Zha, K. Zhang, Y.-H. Huo, C.-Y. Lu, C.-Z. Peng, X. Zhu, and J.-W. Pan, Realization of an Error-Correcting Surface Code with Superconducting Qubits, Physical Review Letters 129, 030501 (2022).
[31] P. Aliferis and J. Preskill, Fault-tolerant quantum computation against biased noise, Physical Review A 78, 052331 (2008).
[32] R. Lescanne, M. Villiers, T. Peronnin, A. Sarlette, M. Delbecq, B. Huard, T. Kontos, M. Mirrahimi, and Z. Leghtas, Exponential suppression of bit-flips in a qubit encoded in an oscillator, Nature Physics 16, 509 (2020).
[33] A. Grimm, N. E. Frattini, S. Puri, S. O. Mundhada, S. Touzard, M. Mirrahimi, S. M. Girvin, S. Shankar, and M. H. Devoret, Stabilization and operation of a Kerr-cat qubit, Nature 584, 205 (2020).
[34] A. S. Darmawan, B. J. Brown, A. L. Grimsmo, D. K. Tuckett, and S. Puri, Practical Quantum Error Correction with the XZZX Code and Kerr-Cat Qubits, PRX Quantum 2, 030345 (2021).
[35] I. Cong, H. Levine, A. Keesling, D. Bluvstein, S.-T. Wang, and M. D. Lukin, Hardware-Efficient, Fault-Tolerant Quantum Computation with Rydberg Atoms, Physical Review X 12, 021049 (2022).
[36] T. M. Stace, S. D. Barrett, and A. C. Doherty, Thresholds for Topological Codes in the Presence of Loss, Physical Review Letters 102, 200501 (2009).
[37] Y. Wu, S. Kolkowitz, S. Puri, and J. D. Thompson, Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays, Nature Communications 13, 4657 (2022).
[38] A. Kubica, A. Haim, Y. Vaknin, F. Brandão, and A. Retzker, Erasure qubits: Overcoming the T1 limit in superconducting circuits, arXiv:2208.05461 (2022).
[39] S. Puri, L. St-Jean, J. A. Gross, A. Grimm, N. E. Frattini, P. S. Iyer, A. Krishna, S. Touzard, L. Jiang, A. Blais, S. T. Flammia, and S. M. Girvin, Bias-preserving gates with stabilized cat qubits, Science Advances 6, eaay5901 (2020).
[40] S. Jandura and G. Pupillo, Time-Optimal Two- and Three-Qubit Gates for Rydberg Atoms, Quantum 6, 712 (2022).
[41] A. Pagano, S. Weber, D. Jaschke, T. Pfau, F. Meinert, S. Montangero, and H. P. Büchler, Error budgeting for a controlled-phase gate with strontium-88 Rydberg atoms, Physical Review Research 4, 033019 (2022).
[42] A. Mitra, M. J. Martin, G. W. Biedermann, A. M. Marino, P. M. Poggi, and I. H. Deutsch, Robust Mølmer-Sørensen gate for neutral atoms using rapid adiabatic Rydberg dressing, Physical Review A 101, 030301 (2020).
[43] M. H. Goerz, E. J. Halperin, J. M. Aytac, C. P. Koch, and K. B. Whaley, Robustness of high-fidelity Rydberg gates with single-site addressability, Physical Review A 90, 032329 (2014).
[44] J. P. Bonilla Ataides, D. K. Tuckett, S. D. Bartlett, S. T. Flammia, and B. J. Brown, The XZZX surface code, Nature Communications 12, 2172 (2021).
[45] M. Saffman, I. I. Beterov, A. Dalal, E. J. Paez, and B. C. Sanders, Symmetric Rydberg controlled-Z gates with adiabatic pulses, Physical Review A 101, 062309 (2020).
[46] Y. Sun, P. Xu, P.-X. Chen, and L. Liu, Controlled Phase Gate Protocol for Neutral Atoms via Off-Resonant Modulated Driving, Physical Review Applied 13, 024059 (2020).
[47] C.-P. Shen, J.-L. Wu, S.-L. Su, and E. Liang, Construction of robust Rydberg controlled-phase gates, Optics Letters 44, 2036 (2019).
[48] S. Whitlock, Robust phase-controlled gates for scalable atomic quantum processors using optical standing waves, arXiv:2210.00576 (2022).
[49] L. S. Theis, F. Motzoi, F. K. Wilhelm, and M. Saffman, High-fidelity Rydberg-blockade entangling gate using shaped, analytic pulses, Physical Review A 94, 032306 (2016).
[50] F. Robicheaux, T. M. Graham, and M. Saffman, Photon-recoil and laser-focusing limits to Rydberg gate fidelity, Physical Review A 103, 022424 (2021).
[51] L. H. Pedersen, N. M. Møller, and K. Mølmer, Fidelity of quantum operations, Physics Letters A 367, 47 (2007).
[52] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, and F. K. Wilhelm, Training Schrödinger's cat: Quantum optimal control. Strategic report on current status, visions and goals for research in Europe, The European Physical Journal D 69, 279 (2015).
Wilhelm, Training Schrödinger's cat: Quantum optimal control: Strategic report on current status, visions and goals for research in Europe, The European Physical Journal D 69, 279 (2015). Optimal control of coupled spin dynamics: Design of NMR pulse sequences by gradient ascent algorithms. N Khaneja, T Reiss, C Kehlet, T Schulte-Herbrüggen, S J Glaser, 10.1016/j.jmr.2004.11.004Journal of Magnetic Resonance. 172296N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Optimal control of coupled spin dy- namics: Design of NMR pulse sequences by gradient as- cent algorithms, Journal of Magnetic Resonance 172, 296 (2005). T Propson, B E Jackson, J Koch, Z Manchester, D I Schuster, arXiv:2103.15716Robust Quantum Optimal Control with Trajectory Optimization. T. Propson, B. E. Jackson, J. Koch, Z. Manchester, and D. I. Schuster, Robust Quantum Optimal Control with Trajectory Optimization, arXiv:2103.15716 (2021). Analysis of dephasing mechanisms in a standing-wave dipole trap. S Kuhr, W Alt, D Schrader, I Dotsenko, Y Miroshnychenko, A Rauschenbeutel, D Meschede, 10.1103/PhysRevA.72.023406Physical Review A. 7223406S. Kuhr, W. Alt, D. Schrader, I. Dotsenko, Y. Miroshny- chenko, A. Rauschenbeutel, and D. Meschede, Analysis of dephasing mechanisms in a standing-wave dipole trap, Physical Review A 72, 023406 (2005). Nanophotonic quantum phase switch with a single atom. T G Tiecke, J D Thompson, N P De Leon, L R Liu, V Vuletić, M D Lukin, 10.1038/nature13188Nature. 508241T. G. Tiecke, J. D. Thompson, N. P. de Leon, L. R. Liu, V. Vuletić, and M. D. Lukin, Nanophotonic quantum phase switch with a single atom, Nature 508, 241 (2014). Ytterbium Nuclear-Spin Qubits in an Optical Tweezer Array. A Jenkins, J W Lis, A Senoo, W F Mcgrew, A M Kaufman, 10.1103/PhysRevX.12.021027Physical Review X. 1221027A. Jenkins, J. W. Lis, A. Senoo, W. F. McGrew, and A. M. Kaufman, Ytterbium Nuclear-Spin Qubits in an Optical Tweezer Array, Physical Review X 12, 021027 (2022). 
Cooling a single atom in an optical tweezer to its quantum ground state. A M Kaufman, B J Lester, C A , 10.1103/PhysRevX.2.041014Physical Review X. 241014A. M. Kaufman, B. J. Lester, and C. A. Regal, Cooling a single atom in an optical tweezer to its quantum ground state, Physical Review X 2, 041014 (2012). Coherence and Raman Sideband Cooling of a Single Atom in an Optical Tweezer. J D Thompson, T G Tiecke, A S Zibrov, V Vuletić, M D Lukin, 10.1103/PhysRevLett.110.133001Physical Review Letters. 110133001J. D. Thompson, T. G. Tiecke, A. S. Zibrov, V. Vuletić, and M. D. Lukin, Coherence and Raman Sideband Cool- ing of a Single Atom in an Optical Tweezer, Physical Review Letters 110, 133001 (2013). Alkaline-Earth Atoms in Optical Tweezers. A Cooper, J P Covey, I S Madjarov, S G Porsev, M S Safronova, M Endres, 10.1103/PhysRevX.8.041055Physical Review X. 841055A. Cooper, J. P. Covey, I. S. Madjarov, S. G. Porsev, M. S. Safronova, and M. Endres, Alkaline-Earth Atoms in Optical Tweezers, Physical Review X 8, 041055 (2018). Microscopic Control and Detection of Ultracold Strontium in Optical-Tweezer Arrays. M A Norcia, A W Young, A M Kaufman, 10.1103/PhysRevX.8.041054Physical Review X. 841054M. A. Norcia, A. W. Young, and A. M. Kaufman, Mi- croscopic Control and Detection of Ultracold Strontium in Optical-Tweezer Arrays, Physical Review X 8, 041054 (2018). Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation. A Bermudez, X Xu, R Nigmatullin, J O&apos;gorman, V Negnevitsky, P Schindler, T Monz, U G Poschinger, C Hempel, J Home, F Schmidt-Kaler, M Biercuk, R Blatt, S Benjamin, M Müller, 10.1103/PhysRevX.7.041061Physical Review X. 741061A. Bermudez, X. Xu, R. Nigmatullin, J. O'Gorman, V. Negnevitsky, P. Schindler, T. Monz, U. G. Poschinger, C. Hempel, J. Home, F. Schmidt-Kaler, M. Bier- cuk, R. Blatt, S. Benjamin, and M. 
Müller, Assessing the Progress of Trapped-Ion Processors Towards Fault- Tolerant Quantum Computation, Physical Review X 7, 041061 (2017). Neyenhuis, Demonstration of the trapped-ion quantum CCD computer architecture. J M Pino, J M Dreiling, C Figgatt, J P Gaebler, S A Moses, M S Allman, C H Baldwin, M Foss-Feig, D Hayes, K Mayer, C Ryan-Anderson, B , 10.1038/s41586-021-03318-4Nature. 592209J. M. Pino, J. M. Dreiling, C. Figgatt, J. P. Gaebler, S. A. Moses, M. S. Allman, C. H. Baldwin, M. Foss-Feig, D. Hayes, K. Mayer, C. Ryan-Anderson, and B. Neyen- huis, Demonstration of the trapped-ion quantum CCD computer architecture, Nature 592, 209 (2021). Nondestructive Cooling of an Atomic Quantum Register via State-Insensitive Rydberg Interactions. R Belyansky, J T Young, P Bienias, Z Eldredge, A M Kaufman, P Zoller, A V Gorshkov, 10.1103/PhysRevLett.123.213603Physical Review Letters. 123213603R. Belyansky, J. T. Young, P. Bienias, Z. Eldredge, A. M. Kaufman, P. Zoller, and A. V. Gorshkov, Nondestruc- tive Cooling of an Atomic Quantum Register via State- Insensitive Rydberg Interactions, Physical Review Let- ters 123, 213603 (2019). C Fromonteil, D Bluvstein, H Pichler, arXiv:2210.08824Protocols for Rydberg entangling gates featuring robustness against quasi-static errors. C. Fromonteil, D. Bluvstein, and H. Pichler, Protocols for Rydberg entangling gates featuring robustness against quasi-static errors, arXiv:2210.08824 (2022). Entangling, Controlling, and Detecting Individual Strontium Atoms in Optical Tweezer Arrays. I S Madjarov, California Institute of TechnologyPh.D. thesisI. S. Madjarov, Entangling, Controlling, and Detecting Individual Strontium Atoms in Optical Tweezer Arrays, Ph.D. thesis, California Institute of Technology (2021). Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. M. Abramowitz and I. A. StegunNew York, NYDover Publ9th ed.M. Abramowitz and I. A. 
Stegun, eds., Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables, 9th ed., Dover Books on Mathe- matics (Dover Publ, New York, NY, 2013). T Tamir, 10.1090/S0025-5718-1962-0135739-3Characteristic exponents of Mathieu functions. 16100T. Tamir, Characteristic exponents of Mathieu functions, Mathematics of Computation 16, 100 (1962).
DOI: 10.1109/TMECH.2023.3253250
arXiv: 2305.13928 (https://export.arxiv.org/pdf/2305.13928v1.pdf)
A Physics-Based Hybrid Dynamical Model of Hysteresis in Polycrystalline Shape Memory Alloy Wire Transducers

Michele A. Mandolino, Dominik Scholtes, Francesco Ferrante, Senior Member, IEEE, and Gianluca Rizzello, Member, IEEE

This is an archival version of our paper. Please cite the published version (DOI: 10.1109/TMECH.2023.3253250).

Index Terms — Shape memory alloy, SMA wire actuator, polycrystalline, hysteresis, minor loops, modeling, hybrid systems.

Abstract — Shape Memory Alloys (SMAs) are a class of smart materials that exhibit a macroscopic contraction of up to 5% when heated via an electric current. This effect can be exploited for the development of novel unconventional actuators. Despite having many features such as compactness, lightweight, and high energy density, commercial SMA wires are characterized by a highly nonlinear behavior, which manifests itself as a load-, temperature-, and rate-dependent hysteresis exhibiting a complex shape and minor loops. Accurate modeling and compensation of such hysteresis are fundamental for the development of high-performance SMA applications. In this work, we propose a new dynamical model to describe the complex hysteresis of polycrystalline SMA wires. The approach is based on a reformulation of the Müller-Achenbach-Seelecke model for uniaxial SMA wires within a hybrid dynamical framework. In this way, we can significantly reduce the numerical complexity and computation time without losing accuracy and physical interpretability. After describing the model, an extensive experimental validation campaign is carried out on a 75 µm diameter SMA wire specimen. The new hybrid model will pave the way for the development of hybrid controllers and observers for SMA actuators.

I. INTRODUCTION

Shape memory alloy (SMA) transducers commonly consist of NiTi wires which undergo a contraction in length when heated, e.g., via an electric current [1], [2].
The wire recovers its original shape once the current is removed, provided that it is preloaded with a mechanical biasing mechanism. SMA features include high energy density, high flexibility, bio-compatibility, actuation strain up to 5 %, lightweight, and the ability to simultaneously work as actuators and sensors (self-sensing) [3]. SMAs have been used as mechatronic actuators in many fields, e.g., biomedical systems [4], [5], artificial muscles [6], [7], robotics [8], automotive [9], and aerospace [10]. One of the major challenges encountered when developing SMA actuators lies in the prediction and compensation of their complex temperature-, rate-, and load-dependent hysteresis. Various modeling approaches have been proposed for SMA in the literature [11], [12]. On the one hand, physics-based models offer a thermodynamically-consistent framework to describe SMA hysteresis and related physical phenomena [13], [14], [15], [16], [17]. These constitutive models can be effectively used to predict the structural response of complex structures coupled with SMA elements. However, since they are generally implemented in finite element software, they suffer from high numerical complexity. As a further complication, finite element models are usually expressed in a mathematical formalism that differs from the state-space dynamic representation adopted in control engineering. Therefore, many of these models turn out to be unsuitable for model-based control applications in real-time. On the other hand, phenomenological models, such as those based on Preisach and Prandtl-Ishlinskii operators, have also been used to describe SMA devices. The ability of those models to reproduce the hysteresis minor loops in an accurate and efficient way, as well as the fact that they are naturally formulated in a control-oriented formalism, has made them highly popular in SMA hysteresis compensation applications [18], [19], [20], [21], [22].
However, since those models lack physical interpretation, they are not able to predict how the SMA hysteresis changes in response to a different external temperature or applied mechanical load. With the aim of exploiting the advantages of a physics-based SMA description within a control-oriented framework, in this work we propose a novel lumped-parameter dynamical model for one-dimensional polycrystalline SMA wire actuators. Polycrystalline SMA wires, like the ones commonly available on the market, exhibit a complex hysteresis shape with minor loops upon partial loading/unloading. The envisioned model must provide a numerically efficient and accurate tool to support SMA motion control applications in real-time. Ideally, the model must be formulated as a mechanical power port (i.e., an input-velocity, output-force state-space realization), so that it can be causally coupled with an external mechanical structure and, in turn, allow various types of SMA-driven systems to be described in a control-oriented fashion. To achieve this goal, a valuable baseline is offered by the mesoscopic model for polycrystalline SMA wires presented by Rizzello et al. in [23], which in turn is grounded on the physics-based and control-oriented description of single-crystal SMA provided by Müller-Achenbach-Seelecke (MAS) [24]. In [23], it is shown that such a model accurately reproduces the smooth hysteresis loops, as well as the minor loops, observed in a 508 µm superelastic SMA wire by NDC as well as in a 76 µm quasi-plastic wire by SAES Getters. Despite those advantages, such a model is affected by high numerical stiffness due to strong nonlinearities, which results in large simulation times. A potential way to address this limitation consists of eliminating the stiff dynamics from the model in [23] and replacing it with instantaneous hybrid transitions. This approach has already proven successful when modeling various electro-mechanical systems [25], [26].
The hybrid dynamical framework formalized in [27] offers an ideal set of mathematical tools to obtain a convenient control-oriented model of the SMA. Other than providing advantages in terms of numerical robustness, simulation time, and sound mathematical modeling, this framework opens up the possibility to develop hybrid controllers for hysteresis compensation as well as hybrid observers for self-sensing applications [28], [29]. At the same time, the hybrid framework makes it possible to describe additional effects which are hard to take into account in a conventional modeling setting, i.e., the wire slack occurring upon loss of mechanical tension and the intrinsic two-way effect. This approach is inspired by our previous work in [30], where we developed a hybrid model for idealized single-crystal SMA hysteresis as in Fig. 1(a). Here, the method from [30] is generalized for the first time to polycrystalline SMA exhibiting a smooth hysteresis and minor loops, as in Fig. 1(b). The main challenges arise from the significantly higher complexity of the polycrystalline model compared to the single-crystal one, together with the need to find a suitable mathematical formulation of the slack dynamics. After presenting the hybrid model, an extensive experimental validation campaign is carried out on a 75 µm quasi-plastic SMA wire by DYNALLOY. It is shown how the model predicts both the stress-strain and resistance-strain curves of the wire for different deformation rates and patterns, as well as applied electrical inputs. The new model also outperforms the one in [23] in terms of both simulation time and accuracy. The remainder of this paper is organized as follows. Section II summarizes the physics-based MAS model for polycrystalline SMA wires. The novel hybrid dynamical model is then presented in Section III. In Section IV, model characterization and validation are performed via a dedicated experimental setup. Concluding remarks are finally discussed in Section V.
II. SMA CONSTITUTIVE MODELING

The polycrystalline SMA model previously developed in [23] is briefly summarized in this section. We assume that the crystal lattice of a SMA material can be divided into different phases, or variants, each one associated with a specific geometry. For the particular case of a uniaxial SMA wire, these phases are called austenite and martensite, respectively. The relative amount of each variant within the material depends on the thermo-mechanical loading conditions. To describe the lattice distribution, we introduce phase fraction variables x_A for austenite and x_M for martensite, respectively, such that

x_A + x_M = 1,  x_A ∈ [0, 1],  x_M ∈ [0, 1].  (1)

By exploiting (1), we can define the stress-strain relationship

σ := σ(ε, x_M) = (ε − ε_T x_M) / (E_M^-1 x_M + E_A^-1 (1 − x_M)),  (2)

where σ is the SMA axial stress, ε is the wire axial strain, E_A and E_M are the austenite and martensite Young's moduli, respectively, while ε_T represents the transformation strain. The dependence of σ on the phase fraction x_M introduces a temperature-dependent stress-strain hysteresis. Strain and stress can be related to the wire force f, length l, and deformation rate v via

f = π r_0^2 σ,  l = l_0 (1 + ε),  v = l̇ = l_0 ε̇,  (3)

where r_0 and l_0 represent the cross-sectional radius and the length of the undeformed and fully austenitic SMA wire, respectively. To predict the dynamic evolution of x_M, we define

ẋ_M = −p_MA(σ, T) x_M + p_AM(σ, T) (1 − x_M),  (4)

where p_MA and p_AM depend on the macroscopic stress σ and temperature T, and represent the probability of a mesoscopic martensite layer transforming into austenite and vice-versa.

Fig. 2. An unactuated SMA wire coupled to a mechanical bias structure (a). When we apply electric power to the wire, the Joule effect heats up the material, producing a contraction due to the interaction with the bias system (b). Causal coupling between the SMA model and an external structure (c).
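As an illustration, the constitutive law (2) and the phase-fraction dynamics (4) translate directly into code. The sketch below is a minimal Python rendering; the numerical constants (E_A, E_M, ε_T) and the transition probabilities passed to `x_M_dot` are hypothetical placeholders, not the identified parameters of the paper:

```python
# Illustrative material constants (hypothetical values):
# Young's moduli [MPa] and transformation strain [-].
E_A, E_M, EPS_T = 70e3, 30e3, 0.045

def sigma(eps, x_M):
    """Axial stress from Eq. (2): series (Reuss-type) mixture of phases."""
    return (eps - EPS_T * x_M) / (x_M / E_M + (1.0 - x_M) / E_A)

def x_M_dot(x_M, p_MA, p_AM):
    """Phase-fraction rate from Eq. (4), given the transition
    probabilities p_MA(sigma, T) and p_AM(sigma, T)."""
    return -p_MA * x_M + p_AM * (1.0 - x_M)
```

Note how, for x_M = 0, Eq. (2) collapses to the linear elastic law σ = E_A ε, and for x_M = 1 the stress vanishes exactly at the transformation strain ε = ε_T.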
The generic expression of the transition probabilities is as follows:

p_αβ(σ, T) = τ_x^-1 e^(−(V_L / (k_B T)) Δg_αβ(σ, T)),  (5)

where τ_x is a time constant related to the thermal activation, V_L is the volume of a mesoscopic layer, k_B is the Boltzmann constant, and Δg_αβ(σ, T) represents the energy barrier existing between phases α and β in the Gibbs free-energy density landscape. The evolution of the SMA temperature can be determined via the internal energy balance equation, namely

Ω ρ_V c_V Ṫ = −λ A_S (T − T_E) + J + Ω ρ_V h_M ẋ_M.  (6)

In (6), Ω = π r_0^2 l_0 represents the wire volume, ρ_V is the SMA volumetric density, c_V is the specific heat, λ is the convective cooling coefficient between SMA and environment, and A_S = 2π r_0 l_0 is the heat exchange lateral surface of the wire. This process can be repeated recursively to compute any inner loop, regardless of its hierarchical level. Further policies are also implemented to account for inner loop closure, causing a change from n_l to n_l − 2. A detailed description of the theory behind the scaling policy and bookkeeping algorithm is beyond the scope of this paper; please refer to [23] for more details. Finally, we consider a model for the SMA electrical resistance, which is important for self-sensing applications [3]:

R = [l_0 (1 + ε) / (π r_0^2 (1 − ν ε))] [ρ_eM(T) x_M + ρ_eA(T) (1 − x_M)],  (14)

where ν is the Poisson's ratio, and ρ_eM, ρ_eA are the electrical resistivities of the two phase fractions of the material, given by

ρ_eM = ρ_eM(T_0)[1 + α_M (T − T_0)],  (15)
ρ_eA = ρ_eA(T_0)[1 + α_A (T − T_0)].  (16)

Constants ρ_eM(T_0), ρ_eA(T_0), α_M, and α_A in (15)-(16) represent constitutive material parameters.
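The resistance model (14)-(16) is purely algebraic and is easy to sketch in code. All numerical parameters below (geometry, resistivities, temperature coefficients) are hypothetical values chosen for illustration, not the identified constants of the tested wire:

```python
import math

# Hypothetical parameters, for demonstration only.
L0, R0 = 0.1, 37.5e-6                 # wire length [m] and radius [m]
NU = 0.3                              # Poisson's ratio [-]
RHO_EA0, RHO_EM0 = 1.0e-6, 0.8e-6     # resistivities at T0 [Ohm*m]
ALPHA_A, ALPHA_M = 1e-3, 1e-3         # temperature coefficients [1/K]
T0 = 293.0                            # reference temperature [K]

def resistance(eps, x_M, T):
    """Electrical resistance from Eqs. (14)-(16): a geometry factor
    times the phase-fraction-weighted resistivity."""
    rho_eM = RHO_EM0 * (1.0 + ALPHA_M * (T - T0))   # Eq. (15)
    rho_eA = RHO_EA0 * (1.0 + ALPHA_A * (T - T0))   # Eq. (16)
    geometry = L0 * (1.0 + eps) / (math.pi * R0**2 * (1.0 - NU * eps))
    return geometry * (rho_eM * x_M + rho_eA * (1.0 - x_M))
```

With these placeholder values the resistance increases with strain (the wire gets longer and thinner) and with temperature, and decreases as the martensite fraction grows, which is the qualitative behavior exploited for self-sensing.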
The complete model of the polycrystalline SMA wire can be obtained by collecting (2)-(4), (6), and (14):

ε̇ = l_0^-1 v,
ẋ_M := ϕ_x_M = −p_MA x_M + p_AM (1 − x_M),
Ṫ = [J − λ A_S (T − T_E)] / (Ω ρ_V c_V) + (h_M / c_V) ϕ_x_M,
f = π r_0^2 (ε − ε_T x_M) / (E_M^-1 x_M + E_A^-1 (1 − x_M)),
R = [l_0 (1 + ε) / (π r_0^2 (1 − ν ε))] [ρ_eM(T) x_M + ρ_eA(T) (1 − x_M)].  (17)

The state variables of (17) are ε, x_M, and T, the inputs are v, J, and T_E, and the outputs are f and R. A block diagram depiction of the model is shown in Fig. 2(a). This representation allows us to express the SMA model in impedance form (velocity-input, force-output), so it can be easily coupled in a causal way with a generic mechanical structure naturally expressed in admittance form (e.g., a mass-spring-damper or Euler-Lagrange model, naturally expressed as force-input, velocity-output), see Fig. 2(b). Therefore, model (17) makes it possible to describe mechanical structures actuated by 1D SMA wires in a lumped-parameter and control-oriented fashion.

III. SMA HYBRID DYNAMICAL MODEL

The hybrid reformulation of (17) is discussed in this section.

A. Preliminaries on Hybrid Systems

We consider hybrid systems with state x ∈ R^n and input u ∈ R^m of the form

H : ẋ = F(x, u),  (x, u) ∈ C,
    x⁺ ∈ G(x, u),  (x, u) ∈ D,  (18)

where F : R^(n+m) → R^n is the flow map, C ⊂ R^n is the flow set, D ⊂ R^n is the jump set, and the set-valued map G : R^n ⇒ R^n is the jump map. The symbol ẋ denotes the time derivative of the state x during flows, while x⁺ represents the value of the state x after an instantaneous change. To denote the above hybrid system, we use the shorthand notation H = (C, F, D, G). For more details on hybrid systems, please refer to [27].
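To make the flow/jump semantics of (18) concrete, the sketch below simulates a hybrid system H = (C, F, D, G) with forward-Euler flows and instantaneous jumps. This is only an illustrative time-stepping loop, not the formal solution concept of [27] (which is defined on hybrid time domains); the reset-integrator example is a made-up toy system:

```python
def simulate_hybrid(x0, u, F, C, D, G, dt=1e-3, t_end=1.0, max_jumps=100):
    """Minimal simulator for H = (C, F, D, G) as in Eq. (18):
    jump while (x, u) is in D, flow (Euler step) while (x, u) is in C."""
    x, t, jumps = x0, 0.0, 0
    while t < t_end:
        while D(x, u) and jumps < max_jumps:   # instantaneous transitions
            x = G(x, u)
            jumps += 1
        if C(x, u):                            # continuous evolution
            x = [xi + dt * fi for xi, fi in zip(x, F(x, u))]
        t += dt
    return x

# Toy example: an integrator reset to 0 whenever it reaches 1.
x_final = simulate_hybrid(
    x0=[0.0], u=None,
    F=lambda x, u: [1.0],            # flow map: dx/dt = 1
    C=lambda x, u: x[0] < 1.0,       # flow set
    D=lambda x, u: x[0] >= 1.0,      # jump set
    G=lambda x, u: [0.0],            # jump map: reset
)
```

For the SMA model, F would collect the mode-dependent flow maps derived below, while D and G would encode the branch and slack transitions.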
B. Characterization of the Operative Modes

As pointed out in [32, Assumption 2], if we choose the values of τ_x and V_L in a physically meaningful way, the transition probabilities p_MA and p_AM defined in (5) behave approximately as high-gain threshold functions. As a result, (4) becomes responsible for the high numerical stiffness of model (17). Following the analysis in [32], it can be shown that during phase transformation (i.e., ẋ_M ≠ 0) the following approximations tightly hold for the polycrystalline model (17):

σ(ε, x_M) = σ_A^(n_l)(x_M, T)  if ẋ_M > 0,  (19)
σ(ε, x_M) = σ_M^(n_l)(x_M, T)  if ẋ_M < 0,  (20)

where σ(ε, x_M) is defined in (2), while σ_A^(n_l)(x_M, T) and σ_M^(n_l)(x_M, T) denote the stress boundaries of the current hysteresis loop appearing in (19)-(20). Indeed, by using (19)-(20), we can compute x_M without the need to integrate the stiff equation (4), therefore improving the numerical properties of model (17), as shown in the sequel. As a first step, a set of operative modes needs to be identified. We consider in Fig. 4(a) the qualitative stress-strain hysteresis of a pseudoelastic polycrystalline SMA wire with T higher than the austenite transformation temperature [14]. When moving along the branches of this hysteresis, either (19) or (20) always holds in a mutually exclusive manner. Based on this observation, we can determine three distinct operating modes:

1) AM: Austenite-to-Martensite (or loading) branch;
2) MA: Martensite-to-Austenite (or unloading) branch;
3) M: full Martensite branch.

The finite state machine which defines the transition logic between those modes is sketched in Fig. 4(b). A hypothetical operating sequence of the model is as follows. We assume that the pseudoelastic SMA wire starts in a fully austenitic condition (mode AM). When subject to an increasing mechanical load, the amount of austenitic crystal lattice decreases while the martensitic one increases, i.e., ẋ_M > 0, and thus (19) holds while moving along the outer hysteresis loop.
If the loading state exceeds a certain threshold (dictated by material-specific parameters and external inputs), the SMA wire transforms completely into martensite (mode M). When the material is still working below the above threshold and we start to unload it, it gradually transforms from martensite to austenite (mode MA), i.e., ẋ_M < 0, and thus (20) holds while moving along this inner hysteresis loop. Inner loops appearing upon several partial loading-unloading cycles are handled by the same AM and MA modes. For each of these transitions, a different pair of σ_A^(n_l) and σ_M^(n_l) is generated to describe the current minor loop, parameterized through the variable n_l (with n_l = 1 for the outer loop and n_l > 1 for the minor loops, respectively). At room temperature, a SMA wire does not usually exhibit a stable austenitic phase. Instead, detwinning of the martensite causes a quasi-plastic material behavior, which results in a qualitative hysteresis shape as the blue one in Fig. 5(a). Compared to the pseudoelastic case, quasi-plastic SMA shows a region of zero stress. This corresponds to the wire being slack, and thus not subject to any mechanical tension, while still exhibiting a residual strain. The slack condition is frequently observed when characterizing quasi-plastic SMA wires as well as in agonist-antagonist SMA actuator applications, in which the unactuated wire loses tension during normal operating conditions, only to regain it after being re-activated [33]. This effect is not covered by model (17), which becomes inaccurate when σ < 0. To model the slack, we include two additional modes by duplicating AM and MA. A subscript is attached to each mode to indicate whether the slack is present (1) or not (0). Note that the operative mode M is not duplicated, since it exists only for σ ≥ 0. During slack, the wire is no longer under tension, thus its axial stress becomes zero. By setting σ = 0 in (2), we obtain that ε = x_M ε_T always holds true in slack. To quantify the SMA mechanical state in both tensioned and slacked conditions, we define an effective (i.e., residual) strain ε_eff as

ε_eff := { ε          if the wire is tensioned,
           x_M ε_T    if the wire is slacked.   (21)
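The case distinction in Eq. (21) is a one-line function; the sketch below uses a hypothetical transformation strain value for illustration:

```python
EPS_T = 0.045  # transformation strain (hypothetical illustrative value)

def effective_strain(eps, x_M, slack):
    """Effective (residual) strain from Eq. (21): the geometric strain
    when the wire is tensioned, the stress-free residual strain
    x_M * eps_T when the wire is slack."""
    return x_M * EPS_T if slack else eps
```

In a simulation, the `slack` flag corresponds to the subscript attached to the AM/MA modes, and it is toggled by the tension-loss and tension-recovery conditions stated next in the text.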
To quantify the SMA mechanical state in both tensioned and slacked conditions, we define an effective (i.e., residual) strain ε ef f as ε ef f := ε if the wire is tensioned x M ε T if the wire is slacked .(21) Starting from a tensioned state, we initiate a slack whenever the wire begins losing tension, i.e., σ = 0 andσ < 0. When being in a slacked configuration, the wire recovers the tensioned state whenever ε ef f = ε andε ef f > 0. The resulting finite state machine, updated with the slack modes, is shown in Fig. 5(b). This hybrid automaton is composed by a set of modes Q defined as follows Q = {Q1, Q2, Q3, Q4, Q5} = {AM0, MA0, M0, MA1, AM1},(22) and a set of edges E = (Q a , Q b ) representing pairs such that a transition from Q a to Q b is possible, i.e., E = {E1, E2, E3, E4, E5, E6, E7, E8, E9, E10, E11, E12, E13, E14, E15, E16} = {(AM0, MA0), (AM0, M0), (AM0, AM1), (AM0, AM0), (MA0, AM), (MA0, MA1), (MA0, MA0), (M0, MA0), (M0, MA1), (MA1, MA1), (MA1, MA0), (MA1, AM1), (AM1, AM1), (AM1, AM0), (AM1, M0), (AM1, MA1)}.(23) Next, we need to define the evolution of the model states for each mode in the set Q. We denote byε (i) ,ẋ (i) M , andṪ (i) the time derivatives of strain, phase fraction, and temperature for the generic mode i. For ease of notation, we also definė ε (i) := ϕ (i) ε = ϕ (i) ε (ε, x M , T, v, J, T E ) ,(24) x (i) M := ϕ (i) x M = ϕ (i) x M (ε, x M , T, v, J, T E ) ,(25)T (i) := ϕ (i) T = ϕ (i) T (ε, x M , T, v, J, T E ) ,(26) with i ∈ Q. Based on (17), we can compute ϕ (i) ε and ϕ (i) T for every mode without any loss of generality ϕ (i) ε = l −1 0 v , ∀ i ∈ Q ,(27)ϕ (i) T = Λ + c −1 V h M ϕ (i) x M , ∀ i ∈ Q ,(28) with Λ = (Ωρ V c V ) −1 [J − λA s (T − T E )] .(29) Instead of relying on (17) to also compute ϕ (i) x M , we exploit (19)- (20) to establish useful alternative relationships. When the material is transforming from A to M (ẋ M > 0) and no slack occurs, we are in mode i = Q 1 . 
In this mode, (19) implies

σ̇(ε, x_M) = σ̇_A^(n_l)(x_M, T) ⇒
(∂σ(ε, x_M)/∂ε) ε̇ + (∂σ(ε, x_M)/∂x_M) ẋ_M = (∂σ_A^(n_l)(x_M, T)/∂x_M) ẋ_M + (∂σ_A^(n_l)(x_M, T)/∂T) Ṫ.  (30)

By replacing (24)-(28) in (30) and solving for ϕ_x_M^(Q1), we have

ϕ_x_M^(Q1) = [ (∂σ(ε, x_M)/∂ε) (v/l_0) − (∂σ_A^(n_l)(x_M, T)/∂T) Λ ] / [ ∂σ_A^(n_l)(x_M, T)/∂x_M − ∂σ(ε, x_M)/∂x_M + (∂σ_A^(n_l)(x_M, T)/∂T) (h_M/c_V) ].  (31)

The partial derivatives of σ(ε, x_M) appearing in (31) can be easily computed based on (2). Conversely, σ_A^(n_l)(x_M, T) and σ_M^(n_l)(x_M, T) are not available analytically but rather follow from the inner loop scaling policy. Thus, one must resort to numerical methods to compute the corresponding partial derivatives, since their expression changes at each reversal point. When the SMA transforms from martensite to austenite (ẋ_M < 0) and no slack occurs (i = Q2), (20) similarly implies

σ̇(ε, x_M) = (∂σ_M^(n_l)(x_M, T)/∂x_M) ẋ_M + (∂σ_M^(n_l)(x_M, T)/∂T) Ṫ,  (32)

and, by repeating the previous steps, we obtain

ϕ_x_M^(Q2) = [ (∂σ(ε, x_M)/∂ε) (v/l_0) − (∂σ_M^(n_l)(x_M, T)/∂T) Λ ] / [ ∂σ_M^(n_l)(x_M, T)/∂x_M − ∂σ(ε, x_M)/∂x_M + (∂σ_M^(n_l)(x_M, T)/∂T) (h_M/c_V) ].  (33)

When the material is in mode i = Q3 (full martensite), x_M is constant and equal to 1, therefore we simply have

ϕ_x_M^(Q3) = 0.  (34)

During slack, the identity σ = 0 holds. This condition can be used to characterize ϕ_x_M^(i) analytically for the two slack modes.
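Since the loop boundaries are only available numerically, Eq. (31) is naturally evaluated with finite differences. The sketch below does this for mode Q1 under a made-up linear boundary σ_A (the real boundary follows from the inner-loop scaling policy of [23]); all constants are hypothetical illustration values:

```python
# Hypothetical parameters, for demonstration only.
E_A, E_M, EPS_T = 70e3, 30e3, 0.045   # moduli [MPa], transformation strain
L0, H_M, C_V = 0.1, -8.0, 3.0         # length [m], latent heat / spec. heat

def sigma(eps, x_M):
    """Constitutive stress, Eq. (2)."""
    return (eps - EPS_T * x_M) / (x_M / E_M + (1.0 - x_M) / E_A)

def sigma_A(x_M, T):
    """Made-up loading boundary, linear in x_M and T."""
    return 250.0 + 150.0 * x_M - 2.0 * (T - 293.0)

def d(f, arg, x, h=1e-6):
    """Central finite difference of f along argument index `arg`."""
    xp, xm = list(x), list(x)
    xp[arg] += h
    xm[arg] -= h
    return (f(*xp) - f(*xm)) / (2.0 * h)

def phi_xM_Q1(eps, x_M, T, v, Lam):
    """Phase-fraction rate in mode Q1, Eq. (31)."""
    ds_de = d(sigma, 0, (eps, x_M))       # dsigma/deps
    ds_dx = d(sigma, 1, (eps, x_M))       # dsigma/dx_M
    dA_dx = d(sigma_A, 0, (x_M, T))       # dsigma_A/dx_M
    dA_dT = d(sigma_A, 1, (x_M, T))       # dsigma_A/dT
    num = ds_de * v / L0 - dA_dT * Lam
    den = dA_dx - ds_dx + dA_dT * H_M / C_V
    return num / den
```

As a sanity check, a positive stretching rate v with Λ = 0 should drive the transformation forward (ϕ_x_M > 0), while v = 0 and Λ = 0 should leave the phase fraction unchanged.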
For mode $i = Q_4$, (19) together with $\sigma = 0$ implies

$$\dot\sigma_A^{(n_l)}(x_M, T) = 0 \;\Rightarrow\; \frac{\partial \sigma_A^{(n_l)}(x_M, T)}{\partial x_M}\dot x_M + \frac{\partial \sigma_A^{(n_l)}(x_M, T)}{\partial T}\dot T = 0, \quad (35)$$

and, by replacing $\dot T$ with $\phi_T^{(i)}$ according to (28), we have

$$\phi_{x_M}^{(Q_4)} = -\frac{\dfrac{\partial \sigma_A^{(n_l)}(x_M, T)}{\partial T}\Lambda}{\dfrac{\partial \sigma_A^{(n_l)}(x_M, T)}{\partial x_M} + \dfrac{\partial \sigma_A^{(n_l)}(x_M, T)}{\partial T}\dfrac{h_M}{c_V}}. \quad (36)$$

By using a similar reasoning for mode $i = Q_5$, we have

$$\phi_{x_M}^{(Q_5)} = -\frac{\dfrac{\partial \sigma_M^{(n_l)}(x_M, T)}{\partial T}\Lambda}{\dfrac{\partial \sigma_M^{(n_l)}(x_M, T)}{\partial x_M} + \dfrac{\partial \sigma_M^{(n_l)}(x_M, T)}{\partial T}\dfrac{h_M}{c_V}}. \quad (37)$$

As a last step, we can replace the expressions for $\phi_{x_M}^{(i)}$ into (28) and compute in closed form the time derivative of the temperature for all $i \in Q$.

It is also possible to find alternative algebraic relationships to express $x_M$ as a function of $T$ and $\varepsilon$. We define

$$x_M^{(i)} := \zeta_{x_M}^{(i)} = \zeta_{x_M}^{(i)}(\varepsilon, T), \quad (38)$$

representing the different phase fraction algebraic descriptions $\forall i \in Q$. By recalling (19)-(20), and since $x_M = 1$ for $i = Q_3$ and $\sigma = 0$ for $i = Q_4$ and $i = Q_5$, we have

$$\zeta_{x_M}^{(Q_1)}(\varepsilon, T) = \{x_M \in X_M^{(n_l)} : \sigma_A^{(n_l)}(x_M, T) - \sigma(\varepsilon, x_M) = 0\}, \quad (39)$$
$$\zeta_{x_M}^{(Q_2)}(\varepsilon, T) = \{x_M \in X_M^{(n_l)} : \sigma_M^{(n_l)}(x_M, T) - \sigma(\varepsilon, x_M) = 0\}, \quad (40)$$
$$\zeta_{x_M}^{(Q_3)} = \{1\}, \quad (41)$$
$$\zeta_{x_M}^{(Q_4)}(T) = \{x_M \in X_M^{(n_l)} : \sigma_A^{(n_l)}(x_M, T) = 0\}, \quad (42)$$
$$\zeta_{x_M}^{(Q_5)}(T) = \{x_M \in X_M^{(n_l)} : \sigma_M^{(n_l)}(x_M, T) = 0\}. \quad (43)$$

We can use (38)-(43) to eliminate any explicit dependency of $\phi_T^{(i)}$ on $x_M$, thus making it no longer necessary to include the phase fraction among the continuous states of the hybrid model. The phase-fraction flow maps derived above, in combination with the phase-dependent residual strain (21) occurring when the wire is in slack, represent the main mechanisms which are responsible for describing the shape memory effect.

C. Hybrid Polycrystalline Model Formulation

Based on the discussion in Section III-B, we can reformulate the polycrystalline SMA model (17) as a hybrid system $H$ in the framework of [27] (cf. Section III-A). To this end, we consider as continuous-time state vector

$$x_C := (x_{C1}, x_{C2}) = (\varepsilon, T) \in X_C, \quad (44)$$

with $X_C := \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$, and as discrete-time state vector

$$x_D := (x_{D1}, x_{D2}, x_{D3}) = (q, s, n_l) \in X_D, \quad (45)$$

where $X_D := \{1, -1, 0\} \times \{0, 1\} \times \mathbb{N}_{\ge 1}$.
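Because the algebraic relations (39)-(43) generally admit no analytical solution, $\zeta_{x_M}^{(i)}$ can be evaluated at run time by one-dimensional root-finding over the branch range $X_M^{(n_l)}$. The sketch below uses bisection with toy stand-in stress maps; the functional forms and values are illustrative only, not the paper's interpolators.

```python
def solve_phase_fraction(residual, lo, hi, tol=1e-12, max_iter=200):
    """Bisection solver for the algebraic relations (39)-(43):
    returns x_M in [lo, hi] with residual(x_M) ~= 0 (a sign change is required,
    which proper model calibration is meant to guarantee)."""
    f_lo, f_hi = residual(lo), residual(hi)
    if f_lo == 0.0:
        return lo
    if f_hi == 0.0:
        return hi
    assert f_lo * f_hi < 0.0, "no sign change: check branch limits / calibration"
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = residual(mid)
        if abs(f_mid) <= tol or (hi - lo) < tol:
            return mid
        if f_lo * f_mid < 0.0:
            hi, f_hi = mid, f_mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Toy stand-ins for sigma_A(x_M, T) and sigma(eps, x_M):
sigma_A = lambda x_M, T: 100.0 * x_M + 0.2 * (T - 300.0)
sigma = lambda eps, x_M: 400.0 * (eps - 0.04 * x_M)

# zeta^(Q1): root of sigma_A(x_M, T) - sigma(eps, x_M) = 0 for given (eps, T).
eps, T = 0.02, 310.0
xM = solve_phase_fraction(lambda x: sigma_A(x, T) - sigma(eps, x), 0.0, 1.0)
```

In a real implementation the residual would call the table interpolators; precomputing auxiliary look-up tables, as the paper suggests, amortizes this root-finding cost.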
Henceforth, for visual simplicity, the values of $q$ are denoted symbolically as AM, MA, and M, respectively. While $x_C$ simply accounts for the SMA strain $\varepsilon$ and temperature $T$, $x_D$ holds in memory the branch typology $q$ ($q = 1$ for AM, $q = -1$ for MA, $q = 0$ for M), the slack state $s$ ($s = 0$ for no slack, $s = 1$ for slack), and the current inner-loop identifier $n_l$ (according to the description provided in Section II), which uniquely determine the current operative mode of the system. The full state vector $x$ of the hybrid system is defined as

$$x = (x_C, x_D) \in X, \quad (46)$$

with $X = X_C \times X_D$. The input vector is defined as

$$u := (u_1, u_2, u_3) = (v, J, T_E) \in U, \quad (47)$$

where $U := \mathbb{R} \times \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$.

To simplify the formalization of the four set-valued maps of the SMA hybrid model $H = (C, F, D, G)$, we define some auxiliary objects. These objects, denoted as $D_{x_D}$, represent the jump conditions that may occur when the discrete state $x_D$ assumes a specific value for different operative modes $i \in Q$. We start by defining the possible jump conditions for $q$ as

$$D_{q1}(x, u) := \{q = {\rm AM} : \phi_{x_M}^{(i)} \le 0,\; \varepsilon \le E_M^{-1}\bar\sigma + \varepsilon_T\}, \quad (48)$$
$$D_{q2}(x, u) := \{q = {\rm AM} : \varepsilon \ge E_M^{-1}\bar\sigma + \varepsilon_T,\; l_0^{-1} v \ge E_M^{-1}\bar\sigma_S \Lambda\}, \quad (49)$$
$$D_{q3}(x, u) := \{q = {\rm MA} : \phi_{x_M}^{(i)} \ge 0\}, \quad (50)$$
$$D_{q4}(x, u) := \{q = {\rm M} : \varepsilon \le E_M^{-1}\bar\sigma + \varepsilon_T,\; l_0^{-1} v \le E_M^{-1}\bar\sigma_S \Lambda\}, \quad (51)$$

where $\bar\sigma := \sigma_A^{(1)}(x_M, T)\big|_{x_M=1}$ and $\bar\sigma_S := \sigma_S(x_M)\big|_{x_M=1}$. For instance, if the hybrid model has a discrete state $q = {\rm AM}$, the corresponding jumps are determined depending on whether the conditions given by $D_{q1}$ or $D_{q2}$ are met.
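The state decomposition (44)-(46) maps naturally onto a record type, which keeps the continuous and discrete components explicit in an implementation. A minimal Python sketch (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SMAHybridState:
    """Full state x = (x_C, x_D) of the hybrid model, cf. (44)-(46)."""
    eps: float  # continuous x_C1: wire strain
    T: float    # continuous x_C2: wire temperature [K]
    q: int      # discrete x_D1: branch type, 1 = AM, -1 = MA, 0 = M
    s: int      # discrete x_D2: slack flag, 0 = tensioned, 1 = slack
    n_l: int    # discrete x_D3: inner-loop identifier, n_l >= 1

# Example: fully martensitic, tensioned wire on the outermost branch at 298 K.
x0 = SMAHybridState(eps=0.0, T=298.0, q=0, s=0, n_l=1)
```

During flows only `eps` and `T` evolve; the three discrete fields change only at jumps, mirroring the flow map (80) defined below.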
Similarly, the value of $s$ produces the following slack conditions:

$$D_{s1}(x, u) := \{s = 0,\; n_l \in \mathbb{N}^e_{\ge 1} : \sigma_A^{(n_l)} \le 0,\; \phi_{\sigma_A}^{(n_l)} \le 0\}, \quad (52)$$
$$D_{s2}(x, u) := \{s = 0,\; n_l \in \mathbb{N}^o_{\ge 1} : \sigma_M^{(n_l)} \le 0,\; \phi_{\sigma_M}^{(n_l)} \le 0\}, \quad (53)$$
$$D_{s3}(x, u) := \{s = 1 : \varepsilon \ge \zeta_{x_M}^{(Q_4)} \varepsilon_T,\; l_0^{-1} v \ge \phi_{x_M}^{(Q_4)} \varepsilon_T\}, \quad (54)$$
$$D_{s4}(x, u) := \{s = 1 : \varepsilon \ge \zeta_{x_M}^{(Q_5)} \varepsilon_T,\; l_0^{-1} v \ge \phi_{x_M}^{(Q_5)} \varepsilon_T\}, \quad (55)$$

where $\phi_{\sigma_A}^{(n_l)} := \dot\sigma_A^{(n_l)}$ and $\phi_{\sigma_M}^{(n_l)} := \dot\sigma_M^{(n_l)}$ can be expressed as functions of the state via (24)-(26). Terms $\mathbb{N}^o_{\ge k}$ and $\mathbb{N}^e_{\ge k}$ denote the sets of odd and even integers not smaller than a given $k$, i.e.,

$$\mathbb{N}^o_{\ge k} := \{n \ge k : \exists h \in \mathbb{N}_{\ge 1} : n = 2h - 1\}, \quad (56)$$
$$\mathbb{N}^e_{\ge k} := \{n \ge k : \exists h \in \mathbb{N}_{\ge 1} : n = 2h\}. \quad (57)$$

Lastly, we have jump conditions based on the range limits for $\varepsilon$ and $x_M$ in which the current branch $n_l$ is defined, given by

$$D_{n_l 1}(x, u) := \{n_l \in \mathbb{N}^e_{\ge 3} : \underline{x}_M^{(n_l)} \ge \zeta_{x_M}^{(i)},\; \phi_{x_M}^{(i)} \le 0\} \;\cup \quad (58)$$
$$\{n_l \in \mathbb{N}^e_{\ge 3} : \zeta_{x_M}^{(i)} \ge \bar{x}_M^{(n_l)},\; \phi_{x_M}^{(i)} \ge 0\} \;\cup \quad (59)$$
$$\{n_l \in \mathbb{N}^e_{\ge 3} : \underline{h}_{\varepsilon_A}^{(n_l)} \ge \varepsilon_{\rm eff},\; \phi_{\varepsilon_{\rm eff}} \le 0\} \;\cup \quad (60)$$
$$\{n_l \in \mathbb{N}^e_{\ge 3} : \varepsilon_{\rm eff} \ge \bar{h}_{\varepsilon_A}^{(n_l)},\; \phi_{\varepsilon_{\rm eff}} \ge 0\}, \quad (61)$$

$$D_{n_l 2}(x, u) := \{n_l \in \mathbb{N}^o_{\ge 3} : \underline{x}_M^{(n_l)} \ge \zeta_{x_M}^{(i)},\; \phi_{x_M}^{(i)} \le 0\} \;\cup \quad (62)$$
$$\{n_l \in \mathbb{N}^o_{\ge 3} : \zeta_{x_M}^{(i)} \ge \bar{x}_M^{(n_l)},\; \phi_{x_M}^{(i)} \ge 0\} \;\cup \quad (63)$$
$$\{n_l \in \mathbb{N}^o_{\ge 3} : \underline{h}_{\varepsilon_M}^{(n_l)} \ge \varepsilon_{\rm eff},\; \phi_{\varepsilon_{\rm eff}} \le 0\} \;\cup \quad (64)$$
$$\{n_l \in \mathbb{N}^o_{\ge 3} : \varepsilon_{\rm eff} \ge \bar{h}_{\varepsilon_M}^{(n_l)},\; \phi_{\varepsilon_{\rm eff}} \ge 0\}, \quad (65)$$

where $\underline{h}_{\varepsilon_A}^{(n_l)} = h_{\varepsilon_A}^{(n_l)}\big|_{x_M = \underline{x}_M^{(n_l)}}$, $\bar{h}_{\varepsilon_A}^{(n_l)} = h_{\varepsilon_A}^{(n_l)}\big|_{x_M = \bar{x}_M^{(n_l)}}$, $\underline{h}_{\varepsilon_M}^{(n_l)} = h_{\varepsilon_M}^{(n_l)}\big|_{x_M = \underline{x}_M^{(n_l)}}$, and $\bar{h}_{\varepsilon_M}^{(n_l)} = h_{\varepsilon_M}^{(n_l)}\big|_{x_M = \bar{x}_M^{(n_l)}}$, with

$$h_{\varepsilon_A}^{(n_l)} = \left(E_M^{-1} x_M + E_A^{-1}(1 - x_M)\right)\sigma_A^{(n_l)}(x_M, T) + \varepsilon_T x_M, \quad (66)$$
$$h_{\varepsilon_M}^{(n_l)} = \left(E_M^{-1} x_M + E_A^{-1}(1 - x_M)\right)\sigma_M^{(n_l)}(x_M, T) + \varepsilon_T x_M, \quad (67)$$

while $\varepsilon_{\rm eff}$ is obtained by formalizing (21) as

$$\varepsilon_{\rm eff}(x) := \varepsilon(1 - s) + \zeta_{x_M}^{(i)} \varepsilon_T s, \quad (68)$$

and $\phi_{\varepsilon_{\rm eff}} = \dot\varepsilon_{\rm eff}$ is given by differentiation of (68) with $\dot s = 0$:

$$\phi_{\varepsilon_{\rm eff}} := \dot\varepsilon_{\rm eff}(x) = l_0^{-1} v (1 - s) + \phi_{x_M}^{(i)} \varepsilon_T s. \quad (69)$$

If the state flows outside the phase fraction or strain range limits defined above, the hybrid model will jump into a new operative mode, generating the new hysteresis branch $n_l^+$ and the corresponding state variables.
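Equations (68)-(69) are simple algebraic switches between the tensioned and slack expressions, and transcribe directly into code (arguments below are illustrative):

```python
def eps_eff(eps, s, zeta_xM, eps_T):
    """Effective strain (68): eps when tensioned (s = 0),
    zeta_xM * eps_T when slacked (s = 1)."""
    return eps * (1 - s) + zeta_xM * eps_T * s

def eps_eff_rate(v, s, phi_xM, eps_T, l0):
    """Time derivative (69) of the effective strain,
    taken with the slack flag s held constant during flows."""
    return (v / l0) * (1 - s) + phi_xM * eps_T * s
```

During flows the slack flag cannot change, which is why (69) is a valid derivative of (68) mode-by-mode; the switch between the two branches happens only at jumps.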
According to the previous definitions, we can write the flow conditions, i.e., the ones under which the system persists in its current condition, based on the three discrete states:

$$C_q(x, u) := \{q = {\rm AM} : \phi_{x_M}^{(i)} \ge 0,\; \varepsilon \le E_M^{-1}\bar\sigma + \varepsilon_T\} \cup \{q = {\rm MA} : \phi_{x_M}^{(i)} \le 0\} \cup \{q = {\rm M} : \varepsilon \ge E_M^{-1}\bar\sigma + \varepsilon_T\}, \quad (70)$$

$$C_s(x, u) := \{s = 0,\; n_l \in \mathbb{N}^o_{\ge 1} : \sigma_A^{(n_l)} \ge 0\} \cup \{s = 1 : \varepsilon \le \zeta_{x_M}^{(Q_4)}\varepsilon_T\} \cup \{s = 0,\; n_l \in \mathbb{N}^e_{\ge 1} : \sigma_M^{(n_l)} \ge 0\} \cup \{s = 1 : \varepsilon \le \zeta_{x_M}^{(Q_5)}\varepsilon_T\}, \quad (71)$$

$$C_{n_l}(x, u) := \{n_l < 3\} \cup \{n_l \in \mathbb{N}^e_{\ge 3} : \zeta_{x_M}^{(i)} \in X_M^{(n_l)},\; \varepsilon_{\rm eff} \in [\underline{h}_{\varepsilon_A}^{(n_l)}, \bar{h}_{\varepsilon_A}^{(n_l)}]\} \cup \{n_l \in \mathbb{N}^o_{\ge 3} : \zeta_{x_M}^{(i)} \in X_M^{(n_l)},\; \varepsilon_{\rm eff} \in [\underline{h}_{\varepsilon_M}^{(n_l)}, \bar{h}_{\varepsilon_M}^{(n_l)}]\}. \quad (72)$$

This is an archival version of our paper. Please cite the published version: https://doi.org/10.1109/TMECH.2023.3253250

We can then define $C$ as the intersection of all conditions such that the system persists in the current operative mode:

$$C(x, u) := C_q(x, u) \cap C_s(x, u) \cap C_{n_l}(x, u), \quad (x, u) \in X \times U. \quad (73)$$

For compactness of notation, in what follows the explicit dependency of the subsets $C$ and $D$ on the system states and inputs is omitted. Hence, the flow set can be defined as

$$C := \bigcup_{m=1}^{5} C_m, \quad (74)$$

where

$$C_1 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 0,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in C\}, \quad (75)$$
$$C_2 := \{(x, u) \in X \times U : q = {\rm MA},\; s = 0,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in C\}, \quad (76)$$
$$C_3 := \{(x, u) \in X \times U : q = {\rm M},\; s = 0,\; n_l = 1,\; x \in C\}, \quad (77)$$
$$C_4 := \{(x, u) \in X \times U : q = {\rm MA},\; s = 1,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in C\}, \quad (78)$$
$$C_5 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 1,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in C\}, \quad (79)$$

and, for all $i \in Q$, the flow map is defined as

$$F(x, u) := \left(\phi_\varepsilon^{(i)}, \phi_T^{(i)}, 0, 0, 0\right), \quad \forall (x, u) \in C. \quad (80)$$

According to (80), only the strain $\varepsilon$ and the temperature $T$ can undergo continuous changes during flows.
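The flow/jump semantics of a hybrid system $H = (C, F, D, G)$ can be sketched with a minimal forward-Euler simulator in the spirit of the HyEQ toolbox used later in the paper. The toy example below is a scalar reset system, not the SMA maps; it only illustrates the mechanics of flowing in $C$ and jumping in $D$.

```python
def simulate_hybrid(x0, u, C, F, D, G, dt=0.5, t_end=2.0, max_jumps=50):
    """Minimal forward-Euler simulator for H = (C, F, D, G):
    jump via G while (x, u) is in the jump set D,
    otherwise flow with x' = F(x, u) while (x, u) is in the flow set C."""
    x = list(x0)
    t, jumps = 0.0, 0
    while t < t_end and jumps < max_jumps:
        if D(x, u):
            x = G(x, u)       # discrete transition: state is reset, time stands still
            jumps += 1
        elif C(x, u):
            f = F(x, u)       # continuous evolution: one explicit Euler step
            x = [xi + dt * fi for xi, fi in zip(x, f)]
            t += dt
        else:
            break             # neither flow nor jump is allowed
    return x, t, jumps

# Toy example: a 1-D state flowing with unit rate, reset to zero at x = 1.
C = lambda x, u: x[0] < 1.0
D = lambda x, u: x[0] >= 1.0
F = lambda x, u: [1.0]
G = lambda x, u: [0.0]
x_end, t_reached, n_jumps = simulate_hybrid([0.0], None, C, F, D, G)
```

The same skeleton applies to the SMA model once `C`, `F`, `D`, and `G` are replaced by the sets and maps (73)-(80) and (81)-(98); in practice a production integrator (e.g. an adaptive Runge-Kutta scheme with event detection) replaces the fixed-step Euler loop.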
Transitions between operative modes are triggered by the following jump set:

$$D := \bigcup_{m=1}^{16} D_m, \quad (81)$$

$$D_4 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 0,\; n_l \in \mathbb{N}^e_{\ge 3},\; x \in D_{n_l 1}\}, \quad (85)$$
$$D_5 := \{(x, u) \in X \times U : q = {\rm MA},\; s = 0,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in D_{q3}\}, \quad (86)$$
$$D_6 := \{(x, u) \in X \times U : q = {\rm MA},\; s = 0,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in D_{s2}\}, \quad (87)$$
$$D_7 := \{(x, u) \in X \times U : q = {\rm MA},\; s = 0,\; n_l \in \mathbb{N}^o_{\ge 3},\; x \in D_{n_l 2}\}, \quad (88)$$
$$D_8 := \{(x, u) \in X \times U : q = {\rm M},\; s = 0,\; n_l = 1,\; x \in D_{q4}\}, \quad (89)$$
$$D_9 := \{(x, u) \in X \times U : q = {\rm M},\; s = 0,\; n_l = 1,\; x \in D_{q4} \cup D_{s4}\}, \quad (90)$$
$$D_{10} := \{(x, u) \in X \times U : q = {\rm MA},\; s = 1,\; n_l \in \mathbb{N}^o_{\ge 3},\; x \in D_{n_l 2}\}, \quad (91)$$
$$D_{11} := \{(x, u) \in X \times U : q = {\rm MA},\; s = 1,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in D_{s4}\}, \quad (92)$$
$$D_{12} := \{(x, u) \in X \times U : q = {\rm MA},\; s = 1,\; n_l \in \mathbb{N}^o_{\ge 1},\; x \in D_{q3}\}, \quad (93)$$
$$D_{13} := \{(x, u) \in X \times U : q = {\rm AM},\; s = 1,\; n_l \in \mathbb{N}^e_{\ge 3},\; x \in D_{n_l 1}\}, \quad (94)$$
$$D_{14} := \{(x, u) \in X \times U : q = {\rm AM},\; s = 1,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{s3}\}, \quad (95)$$
$$D_{15} := \{(x, u) \in X \times U : q = {\rm AM},\; s = 1,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{q2}\}, \quad (96)$$
$$D_{16} := \{(x, u) \in X \times U : q = {\rm AM},\; s = 1,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{q1}\}. \quad (97)$$

The jump map is derived so as to enforce the transitions $E$ previously enunciated in (23) and graphically shown in Fig. 5(b). In particular, we define

$$G(x) := \bigcup_{m \in \{h \in E \,:\, x \in D_h\}} g_m(x), \quad x \in D, \quad (98)$$

where each map $g_m$ ($m = 1, \ldots, 16$), defined in (99)-(114), assigns the post-jump value of the state for $x \in D_m$, so that we can designate the new operative mode in which the system will flow. Note that the index variable $m$ between $D_m$ and $g_m$, with $m \in \{1, 2, \ldots, 16\}$, varies following the edges $E$ previously defined. In (103) and (110), we have a univocally determined solution that depends on $\underline{x}_M$, given by

$$x_5^+ = \begin{cases} x_5 + 1 & \underline{x}_M > 0 \\ 1 & \underline{x}_M = 0. \end{cases} \quad (115)$$

This allows us to determine whether the model must create a new hysteresis branch (i.e., a minor loop), or return to the outermost branch of the hysteresis. Based on the defined sets and maps, the polycrystalline SMA wire can be modeled via the hybrid system $H = (C, F, D, G)$.

IV.
EXPERIMENTAL VALIDATION

In this section, the SMA hybrid model is evaluated based on a large set of experimental acquisitions conducted on a quasi-plastic NiTi wire. To validate the model and demonstrate its ability to reproduce hysteresis outer/minor loops and the shape memory effect, an implementation in Matlab/Simulink is carried out thanks to the Hybrid Equation (HyEQ) Toolbox [34]. Each experiment, consisting of three loading/unloading cycles, is preceded by a small (< 0.5 %) stretching test at maximum power (500 mW). This is required to restore the material to a known initial condition, i.e., fully austenitic phase and absence of residual strain from the previous load history. A picture of the experimental setup is shown in Fig. 6. The specimen under investigation is clamped at both ends and deformed with triangular strain-rate profiles by a linear direct drive (Aerotech ANT-25LA). By means of its encoder, the linear drive acts as a displacement sensor to reconstruct the wire length, while a Futek LSB200 load cell is used to measure the SMA force. The electric power sent to the wire is regulated by a control algorithm implemented in LabVIEW. Current and voltage are measured via a NI PXI-7852R and are used to reconstruct the wire resistance. All tests are conducted in a temperature-controlled environment at 298 K. The parameter identification process is carried out on a restricted set of 6 experiments. A meaningful subset of the validation results is shown in Fig. 7 and Fig. 8, where stress-strain and resistance-strain curves are paired together. For ease of clarity, only the final hysteresis loop is shown in those figures. From Fig. 7, it can readily be observed that both models reproduce well all the minor loops obtained experimentally for different power levels, even though calibration was performed based on minimum and maximum strain/power values only.
A closer inspection of the results reveals that the hybrid model has generally higher accuracy than the polycrystalline MAS one. In Fig. 8, it can be seen that the dependence of the hysteresis on the input power is reproduced with high fidelity for different power and strain rate values. This is especially important for SMA applications as actuators. In particular, it can be noted how the zero-stress residual strain shifts to the left for an increasing Joule heating. This phenomenon is a direct consequence of the two-way shape memory effect and is well-predicted by the proposed model. The resistance trend is overall well reproduced, as well. Note how a small hysteresis is observed in the simulated resistances in Fig. 7 and Fig. 8, which is instead much tighter in the corresponding experimental curves. This may be due to oversimplifications in the adopted resistance model, e.g., the fact that R-phase has been neglected [35]. Also here, the hybrid model provides an overall higher accuracy than the MAS, due to the inability of the latter to predict the resistance plateaus in the low-power tests (as it lacks the slack dynamics). The behavior of the experimental stress response during several testing cycles, starting from a zero-stress loading condition, is instead shown in Fig. 9 for a pair of selected experiments, i.e., slower (0.5×10 −3 s −1 ) and faster (5×10 −3 s −1 ) strain rate tests at 310 mW of power and 4.5 % of maximum strain. It is clear how the residual strain causes deviations between the first and subsequent cycles. More specifically, for the slower experiment in Fig. 9(a), the second and third loops close at a fixed residual strain which is larger than the starting value during cycle 1, as a result of the shape memory effect occurring in the material. For the faster experiment in Fig. 9(b), instead, the wire temperature changes continuously during the slack mode, causing in turn a reduction of the residual strain between two subsequent cycles. 
As a result, the second and third loops no longer close at a fixed strain value. These experimental observations are automatically captured by the hybrid model, thanks to the inclusion of slack dynamics. The polycrystalline MAS model, on the other hand, incorrectly predicts that the strain returns to the starting point after each cycle, causing all loops to be identical in each experiment. To let the polycrystalline MAS model correctly describe the changing behavior of the hysteresis for different loops, a preprocessing of the input signals would be required. A quantitative measure of the accuracy of both hybrid and polycrystalline MAS models is shown in Table I. The accuracy of the models is quantified by using the same FIT index as in [23], which represents a normalized root mean squared error expressed as a percentage. Except for the smallest strain cases, in which the FIT values are small due to numerical artifacts, an accuracy higher than 90% for the stress and 80% for the resistance is obtained in most of the cases, thus confirming the accuracy of the hybrid model. Only in some low-power cases, the resistance fitness exhibits poor values, reported as a zero FIT in Table I, and resulting in an inaccurate prediction. The lower resistance accuracy is reasonably due to the presence of the small hysteresis in the simulated results, which is instead less pronounced in the corresponding experiments, see Fig. 7 and Fig. 8. Additionally, for some experiments at low power (0.5 mW) and small strain (0.5 -2.5 %), no force is measured due to the wire always being in slack. Those experiments correspond to the empty cells in Table I. The accuracy of the polycrystalline MAS model is generally similar to the one of the hybrid model, even though its inability to predict the reversal strain during multiple cycling causes a slightly smaller accuracy overall, in agreement with what is observed in Fig. 7-Fig. 9. 
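The paper's exact FIT definition is given in [23]; one common normalized-RMSE fitness of this kind, clipped at zero as the Table I entries suggest, is sketched below (the precise formula is an assumption for illustration):

```python
import math

def fit_index(measured, simulated):
    """Percentage fitness, a common normalized-RMSE form (assumed here):
    FIT = 100 * (1 - ||y - yhat|| / ||y - mean(y)||), clipped at 0."""
    n = len(measured)
    y_bar = sum(measured) / n
    err = math.sqrt(sum((y - yh) ** 2 for y, yh in zip(measured, simulated)))
    spread = math.sqrt(sum((y - y_bar) ** 2 for y in measured))
    return max(0.0, 100.0 * (1.0 - err / spread))

y = [0.0, 1.0, 2.0, 3.0]
perfect = fit_index(y, y)          # exact reproduction -> 100
trivial = fit_index(y, [1.5] * 4)  # constant prediction at the mean -> 0
```

With this convention a model that merely predicts the signal mean scores 0 %, which is consistent with poor resistance predictions being "reported as a zero FIT" in Table I.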
As a further comparison, Table I also reports the simulation speed of both the hybrid model and the polycrystalline MAS model. For this comparison, the latter is integrated in Matlab with the stiff solver ode15s, which yields the shortest simulation time among the solvers tested, including non-stiff ones such as ode45. The new hybrid model requires between 13 and 21 seconds of simulation time per test. Such simulation time mostly depends on the number of jumps, since the numerical integration is fast during flows. The polycrystalline MAS model, on the other hand, exhibits simulation times similar to the hybrid one for short experiments, while this time increases by a factor of up to three for the longer experiments. This is especially visible from the last nine entries in Table I. The behavior can be explained by noting that the polycrystalline MAS model requires the stiff equation (4) to be integrated continuously, thus making the simulation time proportional to the experiment duration.

V. CONCLUSIONS

In this paper, a novel dynamic model is proposed to describe polycrystalline SMA wires in a lumped-parameter and control-oriented fashion for actuators. The approach relies on hybrid system theory to improve numerical robustness and model efficiency, thus reducing simulation time without affecting physical interpretability. Comparative studies showed high accuracy in reproducing both stress-strain and resistance-strain curves for various electro-mechanical loading histories, corresponding to different values of maximum strain, strain rate, and input power. The hybrid model describes well the complex SMA hysteresis and minor loops, and also accounts for the wire behavior during slack in an automatic way. Future studies will include model improvements, such as more accurate descriptions of the electrical resistance, or novel parameterizations of the hysteresis outer loop with the aim of reducing the number of free parameters.
Furthermore, the model will be used for the simulation and dynamic optimization of SMA-driven systems, as well as for developing hybrid controllers for hysteresis compensation and self-sensing observers.

Fig. 1. Example of single-crystal (a) and polycrystalline (b) SMA hysteresis.

Fig. 3. Example of SMA inner hysteresis loops in the stress-phase plane.

Fig. 4. Qualitative sketch of a pseudoelastic stress-strain characteristic of a polycrystalline SMA wire (a), and corresponding finite state machine (b). $\sigma_A^{(n_l)}(x_M, T)$ and $\sigma_M^{(n_l)}(x_M, T)$ describe the current minor loop of the hysteresis. The hybrid reformulation is grounded on the MAS model structural property defined by (19)-(20).

Fig. 5. Different behaviors of a polycrystalline SMA wire at low and high temperatures (a), and the corresponding finite state machine which also includes the slack dynamics (b).

As $\sigma_A^{(n_l)}(x_M, T)$ and $\sigma_M^{(n_l)}(x_M, T)$ are usually available in the form of look-up tables, no analytical solution is expected to be found for $x_M$ in (39)-(43). Therefore, for model implementation purposes, the solution of equations (39)-(43) must be addressed via numerical methods, e.g., by means of auxiliary look-up tables. The existence of a unique solution to (39)-(43) must be ensured by proper calibration of the model parameters. As a final remark, we point out that the thermal flows depend on whether the wire is in slack ($s = 1$) or not ($s = 0$).
where

$$D_1 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 0,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{q1}\}, \quad (82)$$
$$D_2 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 0,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{q2}\}, \quad (83)$$
$$D_3 := \{(x, u) \in X \times U : q = {\rm AM},\; s = 0,\; n_l \in \mathbb{N}^e_{\ge 1},\; x \in D_{s1}\}. \quad (84)$$

Using the Lite HyEQ Simulator available in the HyEQ toolbox library, it is possible to define the $H = (C, F, D, G)$ functions, conditions, and hybrid states. For the numerical integration, the non-stiff Runge-Kutta solver ode45 is chosen.

A total of 60 experiments are performed on a pre-trained¹ commercial SMA wire by DYNALLOY, having a diameter of 75 µm and an austenitic length of 100.5 mm. The tests are conducted by varying the maximum strain, the strain rate, and the applied electrical power, considering all the possible combinations of the following values:

• Strain: 0.5 %, 1.5 %, 2.5 %, 3.5 %, 4.5 %;
• Strain rate: 0.5×10⁻³ s⁻¹, 1×10⁻³ s⁻¹, 5×10⁻³ s⁻¹;
• Power: 0.5 mW, 310 mW, 360 mW, 410 mW.

Fig. 6. Picture of the measurement system for the SMA wire tensile tests.

• Strain 0.5 %, Strain rate 0.5×10⁻³ s⁻¹, Power 410 mW;
• Strain 0.5 %, Strain rate 5×10⁻³ s⁻¹, Power 410 mW;
• Strain 4.5 %, Strain rate 0.5×10⁻³ s⁻¹, Power 0.5 mW;
• Strain 4.5 %, Strain rate 5×10⁻³ s⁻¹, Power 0.5 mW;
• Strain 4.5 %, Strain rate 0.5×10⁻³ s⁻¹, Power 410 mW;
• Strain 4.5 %, Strain rate 5×10⁻³ s⁻¹, Power 410 mW.

The selected experiments capture the fundamental dynamics of the hysteresis. All the remaining 54 experiments are used to validate the prediction of the model. Some of the model parameters are known or can be set a priori, namely r₀ = 37.5×10⁻⁶ m, l₀ = 100×10⁻³ m, ρ_V = 6500 kg/m³, ν = 0.3, and T₀ = 393 K. For the polycrystalline MAS model only, we additionally set τ_x = 0.01 s, V_L = 5×10⁻²³ m³, and k_B = 1.38×10⁻²³ J/K. The remaining parameters are calibrated with the same procedure described in [23], briefly summarized in the following.
First, since it is observed that c_V and h_M have a negligible impact on the model output for slow experiments, all the thermo-mechanical parameters except for those two are identified based on the tests conducted with a strain rate of 0.5×10⁻³ s⁻¹, by means of a nonlinear optimization algorithm built upon the Nelder-Mead simplex method. During this step, structural relationships are exploited to reduce the number of free parameters describing the hysteresis outer-loop interpolators (11)-(13), as in [23]. Subsequently, the faster experiments conducted with a strain rate of 5×10⁻³ s⁻¹ are used to tune c_V and h_M. Finally, the linear dependence of the resistance on the unknown electrical parameters in (14)-(16) is exploited to calibrate them via standard least squares.

Fig. 7. Subset of validation experiments, obtained with a power of 360 mW and a strain rate of 0.5×10⁻³ s⁻¹ for different strains (minor loops), is shown in the left column. Corresponding simulation results of both hybrid and MAS models are reported in the center and right columns, respectively.

The corresponding values of the identified parameters are:

• Mechanical parameters: E_A = 50×10⁹ N/m², E_M = 31×10⁹ N/m², ε_T = 4.07×10⁻²;
• Thermal parameters: λ = 235 W/(K·m²), c_V = 450 J/(K·kg), h_M = 22×10³ J/kg;
• Electrical parameters: ρ_eA = 8.11×10⁻⁷ Ω·m, ρ_eM = 10.02×10⁻⁷ Ω·m, α_eA = 0 1/K, α_eM = 1.4×10⁻³ 1/K;
• Outer hysteresis loop parameters: E_AL = 2.397×10⁸, E_AR = −8.765×10⁸, E_AC = −1.560×10⁹, λ_AL = 120, λ_AR = 4, σ_AB = 1.411×10⁹, E_ML = 5.245×10⁸, E_MR = −1.791×10⁸, E_MC = −1.060×10⁹, λ_ML = 9.5, λ_MR = 100, σ_MB = 8.263×10⁸, E_SL = 7.709×10⁶, E_SR = −3.904×10⁵, E_SC = 3.997×10⁵, λ_SL = 80, λ_SR = 20, x_0SL = −1.000×10⁻³, x_0SR = 1, σ_SB = 3.821×10⁵.

Fig. 8. Subset of validation experiments with maximum strain (4.5 %).
In each column, we have different powers, respectively 0.5 mW, 310 mW, and 410 mW, for stresses first and then resistances. Rows instead differ by strain-rate value, 0.5×10⁻³ s⁻¹ and 5×10⁻³ s⁻¹.

Fig. 9. Comparison between the polycrystalline MAS model and the hybrid reformulation on all three cycles for two validation experiments with slow (0.5×10⁻³ s⁻¹) and fast (5×10⁻³ s⁻¹) strain rate, both corresponding to a 310 mW power.

Received Month Day, 2022; revised Month Day, 2022; accepted Month Day, 2022. Recommended by Technical Editor - and Senior Editor -. Michele A. Mandolino, Dominik Scholtes, and Gianluca Rizzello are with the Department of Systems Engineering, Saarland University, 66123 Saarbrücken, Germany ({michele.mandolino, dominik.scholtes, gianluca.rizzello}@imsl.uni-saarland.de). Francesco Ferrante is with the Department of Engineering, University of Perugia, 06123 Perugia, Italy ([email protected]). Digital Object Identifier (DOI): -.

TABLE I
PERFORMANCE COMPARISON FOR ALL THE EXPERIMENTS.

| Max Strain [%] | Power Level [mW] | Strain Rate [10⁻³/s] | Time HYB [s] | Time MAS [s] | FIT σ HYB [%] | FIT σ MAS [%] | FIT R HYB [%] | FIT R MAS [%] |
| 0.5 | 0.5 | 0.5 | - | - | - | - | - | - |
| 0.5 | 0.5 | 1.0 | - | - | - | - | - | - |
| 0.5 | 0.5 | 5.0 | - | - | - | - | - | - |
| 0.5 | 310 | 0.5 | 13.3 | 15.5 | 40.7 | 33.7 | 22.9 | 0.0 |
| 0.5 | 310 | 1.0 | 13.6 | 15.1 | 7.8 | 9.3 | 0.0 | 0.0 |
| 0.5 | 310 | 5.0 | 13.1 | 15.1 | 22.8 | 51.6 | 0.0 | 0.0 |
| 0.5 | 360 | 0.5 | 14.2 | 16.0 | 83.8 | 23.4 | 73.6 | 40.0 |
| 0.5 | 360 | 1.0 | 14.2 | 24.4 | 57.2 | 67.5 | 40.9 | 0.0 |
| 0.5 | 360 | 5.0 | 14.0 | 15.4 | 68.9 | 40.7 | 54.2 | 23.4 |
| 0.5 | 410 | 0.5 | 15.2 | 16.2 | 96.7 | 66.5 | 92.8 | 63.9 |
| 0.5 | 410 | 1.0 | 14.8 | 16.1 | 83.2 | 32.6 | 76.4 | 0.0 |
| 0.5 | 410 | 5.0 | 14.8 | 19.0 | 88.6 | 68.3 | 80.5 | 65.8 |
| 1.5 | 0.5 | 0.5 | - | - | - | - | - | - |
| 1.5 | 0.5 | 1.0 | - | - | - | - | - | - |
| 1.5 | 0.5 | 5.0 | - | - | - | - | - | - |
| 1.5 | 310 | 0.5 | 18.9 | 31.5 | 97.0 | 68.2 | 81.7 | 67.5 |
| 1.5 | 310 | 1.0 | 18.5 | 32.6 | 86.9 | 82.7 | 72.3 | 53.7 |
| 1.5 | 310 | 5.0 | 17.7 | 30.2 | 90.9 | 75.5 | 73.6 | 57.0 |
| 1.5 | 360 | 0.5 | 19.8 | 34.5 | 96.6 | 71.0 | 89.7 | 80.2 |
| 1.5 | 360 | 1.0 | 18.8 | 34.0 | 90.7 | 81.8 | 81.0 | 69.2 |
| 1.5 | 360 | 5.0 | 18.7 | 28.2 | 95.1 | 74.7 | 84.7 | 75.7 |
| 1.5 | 410 | 0.5 | 19.9 | 30.3 | 93.8 | 79.2 | 94.3 | 79.7 |
| 1.5 | 410 | 1.0 | 19.4 | 30.3 | 90.2 | 71.6 | 84.4 | 60.3 |
| 1.5 | 410 | 5.0 | 19.2 | 22.8 | 94.9 | 77.6 | 91.1 | 78.8 |
| 2.5 | 0.5 | 0.5 | - | - | - | - | - | - |
| 2.5 | 0.5 | 1.0 | - | - | - | - | - | - |
| 2.5 | 0.5 | 5.0 | - | - | - | - | - | - |
| 2.5 | 310 | 0.5 | 19.6 | 46.9 | 90.6 | 76.8 | 80.8 | 79.8 |
| 2.5 | 310 | 1.0 | 19.7 | 40.9 | 94.6 | 84.5 | 77.5 | 75.9 |
| 2.5 | 310 | 5.0 | 19.2 | 38.0 | 94.3 | 81.6 | 78.4 | 72.9 |
| 2.5 | 360 | 0.5 | 20.2 | 45.1 | 90.4 | 80.2 | 86.6 | 84.9 |
| 2.5 | 360 | 1.0 | 20.2 | 40.2 | 95.0 | 86.1 | 83.3 | 81.8 |
| 2.5 | 360 | 5.0 | 20.4 | 39.8 | 93.8 | 81.0 | 84.9 | 81.4 |
| 2.5 | 410 | 0.5 | 21.1 | 48.8 | 92.3 | 86.6 | 91.3 | 86.5 |
| 2.5 | 410 | 1.0 | 20.8 | 41.8 | 95.1 | 85.7 | 88.1 | 81.1 |
| 2.5 | 410 | 5.0 | 20.1 | 36.5 | 93.5 | 84.1 | 89.6 | 84.1 |
| 3.5 | 0.5 | 0.5 | 8.6 | 17.7 | 6.9 | 0.0 | 13.6 | 0.0 |
| 3.5 | 0.5 | 1.0 | 8.8 | 12.5 | 16.4 | 0.0 | 39.0 | 0.0 |
| 3.5 | 0.5 | 5.0 | 7.7 | 13.1 | 1.9 | 0.0 | 25.6 | 0.0 |
| 3.5 | 310 | 0.5 | 20.6 | 52.6 | 89.8 | 82.9 | 80.9 | 84.9 |
| 3.5 | 310 | 1.0 | 20.6 | 48.9 | 93.3 | 87.6 | 77.1 | 83.2 |
| 3.5 | 310 | 5.0 | 19.6 | 41.0 | 94.5 | 86.5 | 78.5 | 79.1 |
| 3.5 | 360 | 0.5 | 20.8 | 52.7 | 91.2 | 85.3 | 86.8 | 87.3 |
| 3.5 | 360 | 1.0 | 20.7 | 49.9 | 94.2 | 89.1 | 82.7 | 86.3 |
| 3.5 | 360 | 5.0 | 21.0 | 46.5 | 94.2 | 85.4 | 85.1 | 84.3 |
| 3.5 | 410 | 0.5 | 22.4 | 53.7 | 93.8 | 90.4 | 90.6 | 88.7 |
| 3.5 | 410 | 1.0 | 21.3 | 51.3 | 93.0 | 90.4 | 85.7 | 86.9 |
| 3.5 | 410 | 5.0 | 21.2 | 48.4 | 94.6 | 87.8 | 89.9 | 86.6 |
| 4.5 | 0.5 | 0.5 | 10.0 | 29.7 | 93.4 | 10.9 | 68.1 | 0.0 |
| 4.5 | 0.5 | 1.0 | 9.1 | 27.0 | 88.6 | 0.0 | 53.7 | 0.0 |
| 4.5 | 0.5 | 5.0 | 9.3 | 20.9 | 92.4 | 25.5 | 31.0 | 0.0 |
| 4.5 | 310 | 0.5 | 21.5 | 62.4 | 89.3 | 86.1 | 83.0 | 87.7 |
| 4.5 | 310 | 1.0 | 20.8 | 57.0 | 91.7 | 89.2 | 79.3 | 86.6 |
| 4.5 | 310 | 5.0 | 20.8 | 51.1 | 95.4 | 89.6 | 81.0 | 83.5 |
| 4.5 | 360 | 0.5 | 23.2 | 65.0 | 92.0 | 88.0 | 89.1 | 89.1 |
| 4.5 | 360 | 1.0 | 21.1 | 61.9 | 93.2 | 90.6 | 82.9 | 88.5 |
| 4.5 | 360 | 5.0 | 21.7 | 52.8 | 94.0 | 87.9 | 87.2 | 86.9 |
| 4.5 | 410 | 0.5 | 21.4 | 62.7 | 94.8 | 91.8 | 91.4 | 90.7 |
| 4.5 | 410 | 1.0 | 21.6 | 60.9 | 91.1 | 91.7 | 84.3 | 88.9 |
| 4.5 | 410 | 5.0 | 21.2 | 55.3 | 94.1 | 89.1 | 91.6 | 88.0 |

¹ A thermo-mechanical training process, consisting of 100 cycles at 5 % max strain with a power of 410 mW, is performed to stabilize the wire hysteresis.

REFERENCES

[1] R. Chaudhari, J. J. Vora, and D. M. Parikh, "A review on applications of nitinol shape memory alloy," in Recent Advances in Mechanical Infrastructure, A. K. Parwani, P. Ramkumar, K. Abhishek, and S. K. Yadav, Eds. Singapore: Springer Singapore, 2021, pp. 123-132.
[2] S. Langbein, T. Sadek, and A. Czechowicz, "Adaptive resetting of SMA-actuators," in Proc. ASME Conf. Smart Mater. Adapt. Struct. Intell. Syst., vol. 44168, 2010, pp. 399-404.
[3] D. J. S. Ruth and K. Dhanalakshmi, "Shape memory alloy wire for self-sensing servo actuation," Mech Syst Signal Process, vol. 83, pp. 36-52, 2017.
[4] Y. Zhong, R. Du, P. Guo, and H. Yu, "Investigation on a new approach for designing articulated soft robots with discrete variable stiffness," IEEE/ASME Trans Mechatron, vol. 26, no. 6, pp. 2998-3009, 2021.
[5] L. Petrini and F. Migliavacca, "Biomedical applications of shape memory alloys," J. metall. mater. sci., 2011.
[6] J. Jeong, K. Hyeon, J. Han, C. H. Park, S.-Y. Ahn, S.-K. Bok, and K.-U. Kyung, "Wrist assisting soft wearable robot with stretchable coolant vessel integrated SMA muscle," IEEE/ASME Trans Mechatron, vol. 27, no. 2, pp. 1046-1058, 2021.
[7] X. Huang, M. Ford, Z. J. Patterson, M. Zarepoor, C. Pan, and C. Majidi, "Shape memory materials for electrically-powered soft machines," J Mater Chem B, vol. 8, no. 21, pp. 4539-4551, 2020.
[8] J. W. Sohn, G.-W. Kim, and S.-B. Choi, "A state-of-the-art review on robots and medical devices using smart fluids and shape memory alloys," Appl. Sci., vol. 8, no. 10, p. 1928, 2018.
[9] A. Sellitto and A. Riccio, "Overview and future advanced engineering applications for morphing surfaces by shape memory alloy materials," Materials, vol. 12, no. 5, p. 708, 2019.
[10] G. Costanza and M. E. Tata, "Shape memory alloys for aerospace, recent developments, and new applications: A short review," Materials, vol. 13, no. 8, p. 1856, 2020.
[11] C. Lexcellent, Shape-Memory Alloys Handbook. John Wiley & Sons, 2013.
[12] C. Cissé, W. Zaki, and T. B. Zineb, "A review of constitutive models and modeling techniques for shape memory alloys," Int. J. Plast., vol. 76, pp. 244-284, 2016.
[13] F. Auricchio and L. Petrini, "Improvements and algorithmical considerations on a recent three-dimensional model describing stress-induced solid phase transformations," Int J Numer Methods Eng, vol. 55, no. 11, pp. 1255-1284, 2002.
[14] D. Lagoudas, Shape Memory Alloys: Modeling and Engineering Applications, ser. Springer ebook collection / Chemistry and Materials Science 2005-2008. Springer US, 2008.
[15] A. Stebner and L. C. Brinson, "Explicit finite element implementation of an improved three dimensional constitutive model for shape memory alloys," Comput Methods Appl Mech Eng, vol. 257, pp. 17-35, 2013.
[16] Y. Chemisky, G. Chatzigeorgiou, P. Kumar, and D. C. Lagoudas, "A constitutive model for cyclic actuation of high-temperature shape memory alloys," Mech. Mater., vol. 68, pp. 120-136, 2014.
[17] G. K. Tshikwand, L. Seigner, F. Wendler, and M. Kohl, "Coupled finite element simulation of shape memory bending microactuator," Shape Mem. Superelasticity, pp. 1-21, 2022.
[18] S. Dutta and F. Ghorbel, "Differential hysteresis modeling of a shape memory alloy wire actuator," vol. 10, no. 2, pp. 189-197, 2005.
[19] L. F. Toledo, Z. G. Joey, J. M. Oxoby, Y. Chen, and N. O. Pérez-Arancibia, "System identification of a NiTi-based SMA actuator using a modified Preisach model and adaptive control," in Proc. Am. Control Conf., IEEE, 2017, pp. 183-190.
[20] H. Basaeri, M. Zakerzadeh, A. Yousefikoma, N. Faridi Rad, and M. Mahdavian, "Hysteresis modeling, identification and fuzzy PID control of SMA wire actuators using generalized Prandtl-Ishlinskii model with experimental validation," Journal of Computational Applied Mechanics, vol. 50, no. 2, pp. 263-274, 2019.
[21] H. Yoong, C. Su, and K. Yeo, "Stress-dependent generalized Prandtl-Ishlinskii hysteresis model of a NiTi wire with superelastic behavior," J Intell Mater Syst Struct, vol. 32, no. 15, pp. 1713-1724, 2021.
[22] Z. G. Joey, L. Chang, and N. O. Pérez-Arancibia, "Preisach-model-based position control of a shape-memory alloy linear actuator in the presence of time-varying stress," Mechatronics, vol. 73, p. 102452, 2021.
[23] G. Rizzello, M. A. Mandolino, M. Schmidt, D. Naso, and S. Seelecke, "An accurate dynamic model for polycrystalline shape memory alloy wire actuators and sensors," Smart Mater Struct, vol. 28, no. 2, p. 025020, Jan. 2019.
[24] W. Ballew and S. Seelecke, "Mesoscopic free energy as a framework for modeling shape memory alloys," J Intell Mater Syst Struct, vol. 30, pp. 1969-2012, May 2019.
[25] E. Ramirez-Laboreo, M. G. L. Roes, and C. Sagues, "Hybrid dynamical model for reluctance actuators including saturation, hysteresis, and eddy currents," IEEE/ASME Trans Mechatron, vol. 24, no. 3, pp. 1396-1406, 2019.
[26] T. A. Theunisse, J. Chai, R. G. Sanfelice, and W. M. H. Heemels, "Robust global stabilization of the DC-DC boost converter via hybrid control," IEEE Trans Circuits Syst I Regul Pap, vol. 62, no. 4, pp. 1052-1061, 2015.
[27] R. Goebel, R. G. Sanfelice, and A. R. Teel, Hybrid Dynamical Systems: Modeling, Stability, and Robustness. New Jersey: Princeton University Press, 2012.
[28] R. G. Sanfelice, Hybrid Feedback Control. Princeton University Press, 2020.
[29] P. Bernard and R. G. Sanfelice, "On notions of detectability and observers for hybrid systems," in Proc. IEEE Conf. Decis., Dec. 2020.
[30] M. A. Mandolino, F. Ferrante, and G. Rizzello, "A hybrid dynamical modeling framework for shape memory alloy wire actuated structures," IEEE Robot. Autom. Lett., vol. 6, no. 2.
Rizzello, "A hybrid dynamical modeling framework for shape memory alloy wire actuated structures," IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 3886-3893, mar 2021. A coupled thermomechanical model for shape memory alloys-From single crystal to polycrystal. O Heintze, S Seelecke, Mater. Sci. Eng. O. Heintze and S. Seelecke, "A coupled thermomechanical model for shape memory alloys-From single crystal to polycrystal," Mater. Sci. Eng., vol. 481-482, pp. 389-394, 2008. Passivity analysis and porthamiltonian formulation of the Müller-Achenbach-Seelecke model for shape memory alloys: the isothermal case. G Rizzello, D Naso, S Seelecke, IFAC-PapersOnLineG. Rizzello, D. Naso, and S. Seelecke, "Passivity analysis and port- hamiltonian formulation of the Müller-Achenbach-Seelecke model for shape memory alloys: the isothermal case," IFAC-PapersOnLine, pp. 713-718, 2018. Design equations for binary shape memory actuators under arbitrary external forces. A Spaggiari, I Spinella, E Dragoni, J Intell Mater Syst Struct. 246A. Spaggiari, I. Spinella, and E. Dragoni, "Design equations for binary shape memory actuators under arbitrary external forces," J Intell Mater Syst Struct, vol. 24, no. 6, pp. 682-694, 2013. A toolbox for simulation of hybrid systems in Matlab/Simulink: Hybrid Equations (HyEQ) Toolbox. R G Sanfelice, D A Copp, P Nanez, Proceedings of Hybrid Systems: Computation and Control Conference. Hybrid Systems: Computation and Control ConferenceR. G. Sanfelice, D. A. Copp, and P. Nanez, "A toolbox for simula- tion of hybrid systems in Matlab/Simulink: Hybrid Equations (HyEQ) Toolbox," in Proceedings of Hybrid Systems: Computation and Control Conference, 2013, p. 101-106. A one-dimensional constitutive model for NiTi shape memory alloys considering inelastic strains caused by the r-phase transformation. L Wang, P Feng, X Xing, Y Wu, Z Liu, J. Alloys Compd. 868159192L. Wang, P. Feng, X. Xing, Y. Wu, and Z. 
Liu, "A one-dimensional constitutive model for NiTi shape memory alloys considering inelastic strains caused by the r-phase transformation," J. Alloys Compd., vol. 868, p. 159192, 2021. 2018) degrees in Automation and Control Engineering from the Polytechnic of Bari, Italy. He is currently a Ph.D. candidate at Saarland University under the supervision of Jun. Michele A , Prof. Dr. Gianluca Rizzello and Prof. Dr.-Ing. Stefan Seelecke. His research focuses on hybrid nonlinear modeling, control, realization, and testing of structures based on smart materials for unconventional actuators and soft robots. Mandolino was born in Gravina in Puglia, Italy, in 1994. He received B.Sc. (2015) and M.Sc.Michele A. Mandolino was born in Gravina in Puglia, Italy, in 1994. He received B.Sc. (2015) and M.Sc. (2018) degrees in Automation and Control Engineering from the Polytechnic of Bari, Italy. He is currently a Ph.D. candidate at Saarland University under the supervision of Jun.-Prof. Dr. Gianluca Rizzello and Prof. Dr.-Ing. Stefan Seelecke. His research focuses on hybrid nonlinear modeling, con- trol, realization, and testing of structures based on smart materials for unconventional actuators and soft robots. Dominik Scholtes was born 1991 in Lebach, Germany. He received is B.Eng. in Mechanical Engineering from the University of Applied Sciences in Saarbrücken in 2014 and his M.Sc. in Mechanical Engineering from Saarland University in 2017. He is currently a doctoral researcher at Saarland University under the supervision of Prof. Dr.-Ing. Dominik Scholtes was born 1991 in Lebach, Ger- many. He received is B.Eng. in Mechanical Engi- neering from the University of Applied Sciences in Saarbrücken in 2014 and his M.Sc. in Mechan- ical Engineering from Saarland University in 2017. He is currently a doctoral researcher at Saarland University under the supervision of Prof. Dr.-Ing. 
His research interests are mechanical characterization of shape memory alloy (SMA) actuator wires, joining technology of SMA micro wires as well as the development and design of actuator systems based on SMA. Stefan Seelecke, Stefan Seelecke. His research interests are mechan- ical characterization of shape memory alloy (SMA) actuator wires, joining technology of SMA micro wires as well as the development and design of actuator systems based on SMA. Laurea Magistrale) degree (cum laude) in control engineering from the Università degli Studi di Roma Tor Vergata, Italy, in 2012, and the Ph.D. degree in control theory from the Institut supérieur de l'aéronautique et de l'espace (SUPAERO) Toulouse, France. Francesco Ferrante Francesco Ferrante, Senior Member, IEEE) received the B.Sc. (Laurea) degree in control engineering from the Sapienza -Università di Roma, Italy, in 2010, the M.Sc. Clemson, SC, USADepartment of Electrical and Computer Engineering, Clemson University ; Department of Engineering of the University of Perugiahe was an assistant professor at University of Grenoble Alpes, France. Italy where he is currently a tenure track assistant professor of systems and control. He currently serves as an Associate Editor for the IEEE Control Systems Letters, the European Journal of Control, and the IMA Journal of Mathematical Control and InformationFrancesco Ferrante Francesco Ferrante (Senior Member, IEEE) received the B.Sc. (Laurea) degree in control engineering from the Sapienza -Uni- versità di Roma, Italy, in 2010, the M.Sc. (Laurea Magistrale) degree (cum laude) in control engineer- ing from the Università degli Studi di Roma Tor Ver- gata, Italy, in 2012, and the Ph.D. degree in control theory from the Institut supérieur de l'aéronautique et de l'espace (SUPAERO) Toulouse, France, in 2015. From November 2015 to August 2016, he was a Post-Doctoral Fellow at the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, USA. 
From August 2015 to September 2016, he held a position as a Post- Doctoral Scientist at the Hybrid Systems Laboratory (HSL), University of California at Santa Cruz. From September 2017 to September 2021 he was an assistant professor at University of Grenoble Alpes, France. In September 2021 he joined the Department of Engineering of the University of Perugia, Italy where he is currently a tenure track assistant professor of systems and control. He currently serves as an Associate Editor for the IEEE Control Systems Letters, the European Journal of Control, and the IMA Journal of Mathematical Control and Information. in the role of a postdoc researcher and Group Leader in Smart Material Modeling and Control (2016-2019), and subsequently as Assistant Professor in Adaptive Polymer Systems (2020 -present). His research interests involve modeling, control, and self-sensing of innovative mechatronic and robotic systems based on unconventional drive technologies. Gianluca Rizzello (M'16) was born in. Taranto, Italy; Bari, Italy; Bari, and Milano, Italy; Saarbrücken, Germany, firsthe joined the Saarland UniversityHe received his Ph.D. in Information and Communication Technologies from Scuola Interpolitecnica di Dottorato, a joint program between Polytechnic Universities of Torino. such as smart materialsGianluca Rizzello (M'16) was born in Taranto, Italy, in 1987. He received the master's (Hons.) degree in control engineering from the Polytechnic University of Bari, Bari, Italy, in 2012. He received his Ph.D. in Information and Communication Tech- nologies from Scuola Interpolitecnica di Dottorato, a joint program between Polytechnic Universities of Torino, Bari, and Milano, Italy, in 2016. After his doctoral studies, he joined the Saarland University, Saarbrücken, Germany, first in the role of a postdoc researcher and Group Leader in Smart Material Modeling and Control (2016-2019), and subsequently as Assistant Professor in Adaptive Polymer Systems (2020 -present). 
His research interests involve modeling, control, and self-sensing of innovative mechatronic and robotic systems based on unconventional drive technologies, such as smart materials.
Possible molecular dibaryons with csssqq quarks and their baryon-antibaryon partners

Shu-Yi Kong (School of Physics and Technology, Nanjing Normal University, Nanjing 210097, China)
Jun-Tao Zhu (Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, China)
Jun He (School of Physics and Technology, Nanjing Normal University, Nanjing 210097, China; Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, China)

Abstract: In this work, we systematically investigate the charmed-strange dibaryon systems with csssqq quarks and their baryon-antibaryon partners from the interactions

DOI: 10.1140/epjc/s10052-023-11625-5
arXiv: 2304.02920
8 Apr 2023

Corresponding author: [email protected]

Introduction

As an important type of exotic hadron, dibaryons, with baryon number B = 2, attract much attention from the hadron physics community. In fact, one of the earliest types of exotic hadron proposed in the literature is the dibaryon predicted by Dyson and Xuong in 1964 on the basis of SU(6) symmetry, almost at the same time as the proposal of the quark model [1]. The WASA-at-COSY collaboration reported a new resonance, the d*(2380), with quantum numbers I(J^P) = 0(3^+), a mass of about 2370 MeV, and a width of about 70 MeV in the process pp → dπ⁰π⁰ [2]. Soon after the observation of the d*(2380), it was related to the predicted dibaryon [3,4], although other interpretations remain, such as a triangle singularity in the last step of the reaction in a sequential single-pion production process [5]. More experimental and theoretical work is still required to clarify its origin. These early proposed dibaryons are exotic hadrons in the light-flavor sector.
In the past decades, many candidates for exotic states in the charmed sector, such as hidden-charm tetraquarks and pentaquarks, have been observed experimentally, for example, the X(3872) and Z_c(3900) [6-10], and a series of hidden-charm pentaquarks P_c [11-14]. These states were observed near the thresholds of two charmed hadrons. Hence, it is natural to interpret them as molecular states produced from the interactions of a pair of charmed and anticharmed hadrons. Motivated by these observations, theorists expect that dibaryon molecules composed of two heavy baryons may also exist. Owing to the large masses of the heavy baryons, the kinetic energy of a dibaryon system is reduced, which makes it easier to form a bound state. Possible hidden-charm and double-charm dibaryons have been investigated with different approaches [15-23]. These results suggest that attraction may exist between a charmed baryon and an anticharmed or charmed baryon through light-meson exchanges, which favors the existence of hidden-charm dibaryon molecular states and their double-charm partners. In addition to the hidden-charm and double-charm states, some charmed-strange states have also been observed in recent years and taken as candidates for molecular states of a charmed meson and a strange meson in the literature. As early as 2003, the BaBar collaboration reported a narrow peak, the D*_s0(2317), near the DK threshold [24], later confirmed by CLEO and Belle [25,26]. The CLEO collaboration also observed another narrow peak, the D_s1(2460), near the D*K threshold [26]. These states cannot be well accommodated in the conventional quark model as states of a charmed and an antistrange quark. Since these charmed-strange states lie very close to the threshold of a charmed meson and a strange meson, some authors interpreted them as molecules of the corresponding charmed and strange mesons [27-34].
Recently, the LHCb collaboration reported the X₀(2900) and X₁(2900) near the D̄*K* threshold [35,36]. Such states should be composed of four different quarks, and were soon explained as D̄*K* molecular states [37-43]. By adding an additional light quark to the above charmed-strange molecular states, the existence of charmed-strange pentaquark molecular states was also predicted in Refs. [44-46]. Following this path, if we continue to add light quarks and convert all antiquarks to quarks, we reach charmed-strange dibaryon systems. In Ref. [47], we systematically investigated the charmed-strange dibaryons with csqqqq quarks and their baryon-antibaryon partners from the interactions of a charmed baryon and a strange baryon, Λ_cΛ, Λ_cΣ^(*), Σ_c^(*)Λ, and Σ_c^(*)Σ^(*), and the corresponding interactions of a charmed baryon and an antistrange baryon, Λ_cΛ̄, Λ_cΣ̄^(*), Σ_c^(*)Λ̄, and Σ_c^(*)Σ̄^(*). The calculation suggests that attraction widely exists in the charmed-strange dibaryon systems, while few bound states are produced from the charmed-antistrange interactions. If one u/d quark in each constituent baryon is simultaneously replaced by a strange quark, we reach the charmed-strange dibaryon systems Ξ_c^(′,*)Ξ^(*), which are scarcely studied in the literature. In this work, we study these systems together with the systems Ω_c^(*)Λ, Ω_c^(*)Σ^(*), Λ_cΩ, and Σ_c^(*)Ω, which share the same quark content, csssqq, and their baryon-antibaryon partners Ξ_c^(′,*)Ξ̄^(*), Ω_c^(*)Λ̄, Ω_c^(*)Σ̄^(*), Λ_cΩ̄, and Σ_c^(*)Ω̄. The work is organized as follows. After the introduction, the potential kernels of the systems considered are presented; they are obtained with the help of effective Lagrangians with SU(3), heavy-quark, and chiral symmetries. The quasipotential Bethe-Salpeter equation (qBSE) approach is also introduced briefly.
In Section 3, the bound states from all interactions are searched for in single-channel calculations. In Section 4, the bound states from the full coupled-channel calculation are presented, and the poles from two-channel calculations are also provided to estimate the strengths of the couplings between a molecular state and the corresponding channels. In Section 5, a discussion and summary are given.

Theoretical frame

In this work, we consider the possible molecular dibaryons from the interactions Ξ_c^(′,*)Ξ^(*), Ω_c^(*)Λ, Ω_c^(*)Σ^(*), Λ_cΩ, and Σ_c^(*)Ω and their baryon-antibaryon partners Ξ_c^(′,*)Ξ̄^(*), Ω_c^(*)Λ̄, Ω_c^(*)Σ̄^(*), Λ_cΩ̄, and Σ_c^(*)Ω̄. The coupling between different channels is included to make a coupled-channel calculation, and the scattering amplitude is obtained by solving the qBSE. To achieve this aim, the potential is constructed from light-meson exchanges. The Lagrangians required for the vertices are given below.

Relevant Lagrangians

For the couplings of strange baryons with light mesons, we consider the exchange of pseudoscalar mesons P (π, η, K), vector mesons V (ρ, ω, φ, K*), and the σ meson. For the former mesons, the vertices can be described by effective Lagrangians with SU(3) and chiral symmetries [48,49]. The explicit effective Lagrangians read

\mathcal{L}_{BBP} = -\frac{g_{BBP}}{m_P}\bar{B}\gamma_5\gamma^{\mu}\partial_{\mu}P\,B, \quad (1)

\mathcal{L}_{BBV} = -\bar{B}\left(g_{BBV}\gamma^{\mu} - \frac{f_{BBV}}{2m_B}\sigma^{\mu\nu}\partial_{\nu}\right)V_{\mu}B, \quad (2)

\mathcal{L}_{B^*B^*P} = \frac{g_{B^*B^*P}}{m_P}\bar{B}^*_{\mu}\gamma_5\gamma_{\nu}B^{*\mu}\partial^{\nu}P, \quad (3)

\mathcal{L}_{B^*B^*V} = -\bar{B}^*_{\tau}\left(g_{B^*B^*V}\gamma^{\mu} - \frac{f_{B^*B^*V}}{2m_{B^*}}\sigma^{\mu\nu}\partial_{\nu}\right)V_{\mu}B^{*\tau}, \quad (4)

\mathcal{L}_{BB^*P} = \frac{g_{BB^*P}}{m_P}\bar{B}^*_{\mu}\partial^{\mu}P\,B + \mathrm{h.c.}, \quad (5)

\mathcal{L}_{BB^*V} = -i\frac{g_{BB^*V}}{m_V}\bar{B}^*_{\mu}\gamma_5\gamma_{\nu}V^{\mu\nu}B + \mathrm{h.c.}, \quad (6)

where m_{P,V} is the mass of the pseudoscalar or vector meson, B^(*) is the field of the strange baryon, and V^{μν} = ∂^μV^ν − ∂^νV^μ. The coupling constants can be determined by the SU(3) symmetry [48,50-52] from the coupling constants for the nucleon and ∆.
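The SU(3) relations of this kind are plain arithmetic on the nucleon and ∆ couplings. As a minimal sketch (the function name and the relation keys are illustrative, not from the paper; the input constants and the three relations are taken from Table 1 of the text), a few entries can be reproduced as:

```python
import math

# Input couplings quoted in the text (Table 1 caption).
G_BBP = 0.989    # g_NNpi
G_BBsP = 9.48    # sqrt(20) * g_NDeltapi
ALPHA_P = 0.4

def coupling(relation: str) -> float:
    """Evaluate a few of the SU(3) relations quoted in Table 1 (illustrative)."""
    table = {
        # g_XiXipi = (2*alpha_P - 1) * g_BBP  -> about -0.20
        "g_XiXipi": (2 * ALPHA_P - 1) * G_BBP,
        # g_XiXi*pi = g_BB*P / (2*sqrt(30))   -> about 0.86
        "g_XiXis_pi": G_BBsP / (2 * math.sqrt(30)),
        # g_Xi*LamK = g_BB*P / (2*sqrt(10))   -> about 1.50
        "g_XisLamK": G_BBsP / (2 * math.sqrt(10)),
    }
    return table[relation]
```

The remaining entries of the table follow the same pattern, each being a numerical prefactor times one of the nucleon/∆ couplings.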
The SU(3) relations and the explicit values of the coupling constants are listed in Table 1.

For the couplings of strange baryons with the scalar meson σ, the Lagrangians read [53]

\mathcal{L}_{BB\sigma} = -g_{BB\sigma}\bar{B}\sigma B, \qquad \mathcal{L}_{B^*B^*\sigma} = g_{B^*B^*\sigma}\bar{B}^*_{\mu}\sigma B^{*\mu}. \quad (7)

Different choices of the σ-meson mass between 400 and 550 MeV affect the result only slightly, and the effect can be smeared out by a small variation of the cutoff in the calculation. In this work, we adopt a σ mass of 500 MeV. We choose the coupling constants g_{BBσ} and g_{B*B*σ} to be equal, g_{BBσ} = g_{B*B*σ} = 6.59 [53].

Table 1: The coupling constants in the effective Lagrangians, with g_{BBP} = g_{NNπ} = 0.989, g_{BBV} = g_{NNρ} = 3.25, g_{B*B*P} = √60 g_{∆∆π} = 13.78, g_{B*B*V} = √60 g_{∆∆ρ} = 59.41, g_{BB*P} = √20 g_{N∆π} = 9.48, g_{BB*V} = √20 g_{N∆ρ} = 71.69, α_P = 0.4, α_V = 1.15, f_{NNρ} = g_{NNρ}κ_ρ and f_{∆∆ρ} = g_{∆∆ρ}κ_ρ with κ_ρ = 6.1, and f_{NNω} = 0 [48,51,52].
g_{ΞΞπ} = (2α_P − 1) g_{BBP} = −0.20
g_{Ξ*Σ*K*} = −g_{B*B*V}/√60 = −7.67;  f_{Ξ*Σ*K*} = −f_{∆∆ρ} = −46.78
g_{Ξ*ΩK*} = g_{B*B*V}/(2√10) = 9.39;  f_{Ξ*ΩK*} = (√6/2) f_{∆∆ρ} = 57.29
g_{ΞΞ*π} = g_{BB*P}/(2√30) = 0.86;  g_{ΣΣ*π} = g_{BB*P}/(2√30) = 0.86
g_{ΞΞ*η} = −g_{BB*P}/(2√10) = −1.50;  g_{ΣΣ*η} = −g_{BB*P}/(2√10) = −1.49
g_{ΞΞ*ρ} = g_{ΣΣ*ρ} = g_{BB*V}/(2√30) = 6.54
g_{ΞΞ*ω} = g_{ΣΣ*ω} = −g_{BB*V}/(2√30) = −6.54
g_{ΞΞ*φ} = g_{ΣΣ*φ} = −g_{BB*V}/(2√15) = −9.25
g_{Ξ*ΛK} = g_{BB*P}/(2√10) = 1.50;  g_{Ξ*ΛK*} = g_{BB*V}/(2√10) = 11.34
g_{ΞΩK} = g_{BB*P}/(2√5) = 2.12;  g_{ΞΩK*} = g_{BB*V}/(2√5) = 16.03
g_{Ξ*ΣK} = g_{ΞΣ*K} = −g_{BB*P}/(2√30) = −0.86
g_{Ξ*ΣK*} = g_{ΞΣ*K*} = −g_{BB*V}/(2√30) = −6.54

For the couplings of charmed baryons with light mesons, the Lagrangians can be constructed under heavy-quark and chiral symmetries [54-57]. Their explicit forms are

\mathcal{L}_{BBP} = -\frac{3g_1}{4f_\pi\sqrt{m_{\bar{B}}m_B}}\epsilon^{\mu\nu\lambda\kappa}\partial_{\nu}P\sum_{i=0,1}\bar{B}_{i\mu}\overleftrightarrow{\partial}_{\kappa}B_{j\lambda},

\mathcal{L}_{BBV} = -i\frac{\beta_S g_V}{2\sqrt{2m_{\bar{B}}m_B}}V^{\nu}\sum_{i=0,1}\bar{B}^{\mu}_{i}\overleftrightarrow{\partial}_{\nu}B_{j\mu} - i\frac{\lambda_S g_V}{\sqrt{2}}(\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu})\sum_{i=0,1}\bar{B}^{\mu}_{i}B^{\nu}_{j},

\mathcal{L}_{BB\sigma} = \ell_S\sigma\sum_{i=0,1}\bar{B}^{\mu}_{i}B_{j\mu},

\mathcal{L}_{B_{\bar{3}}B_{\bar{3}}V} = -i\frac{g_V\beta_B}{2\sqrt{2m_{\bar{B}_{\bar{3}}}m_{B_{\bar{3}}}}}V^{\mu}\bar{B}_{\bar{3}}\overleftrightarrow{\partial}_{\mu}B_{\bar{3}},

\mathcal{L}_{B_{\bar{3}}B_{\bar{3}}\sigma} = \ell_B\sigma\bar{B}_{\bar{3}}B_{\bar{3}},

\mathcal{L}_{BB_{\bar{3}}P} = -i\frac{g_4}{f_\pi}\sum_{i}\bar{B}^{\mu}_{i}\partial_{\mu}P\,B_{\bar{3}} + \mathrm{H.c.},

\mathcal{L}_{BB_{\bar{3}}V} = \frac{g_V\lambda_I}{\sqrt{2m_{\bar{B}}m_{B_{\bar{3}}}}}\epsilon^{\mu\nu\lambda\kappa}\partial_{\lambda}V_{\kappa}\sum_{i}\bar{B}_{i\nu}\overleftrightarrow{\partial}_{\mu}B_{\bar{3}} + \mathrm{H.c.}, \quad (9)

where m_{\bar{B},B,\bar{B}_{\bar{3}},B_{\bar{3}}} are the masses of the charmed baryons. The superfield S^{ab}_{\mu} is composed of the Dirac spinor operators,

S^{ab}_{\mu} = -\frac{1}{\sqrt{3}}(\gamma_{\mu}+v_{\mu})\gamma^5 B^{ab} + B^{*ab}_{\mu} \equiv B^{ab}_{0\mu} + B^{ab}_{1\mu},
\bar{S}^{ab}_{\mu} = \frac{1}{\sqrt{3}}\bar{B}^{ab}\gamma^5(\gamma_{\mu}+v_{\mu}) + \bar{B}^{*ab}_{\mu} \equiv \bar{B}^{ab}_{0\mu} + \bar{B}^{ab}_{1\mu}, \quad (10)

and the charmed-baryon matrices are defined as

B_{\bar{3}} = \begin{pmatrix} 0 & \Lambda^+_c & \Xi^+_c \\ -\Lambda^+_c & 0 & \Xi^0_c \\ -\Xi^+_c & -\Xi^0_c & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} \Sigma^{++}_c & \frac{1}{\sqrt{2}}\Sigma^{+}_c & \frac{1}{\sqrt{2}}\Xi'^{+}_c \\ \frac{1}{\sqrt{2}}\Sigma^{+}_c & \Sigma^{0}_c & \frac{1}{\sqrt{2}}\Xi'^{0}_c \\ \frac{1}{\sqrt{2}}\Xi'^{+}_c & \frac{1}{\sqrt{2}}\Xi'^{0}_c & \Omega^{0}_c \end{pmatrix}, \quad
B^* = \begin{pmatrix} \Sigma^{*++}_c & \frac{1}{\sqrt{2}}\Sigma^{*+}_c & \frac{1}{\sqrt{2}}\Xi^{*+}_c \\ \frac{1}{\sqrt{2}}\Sigma^{*+}_c & \Sigma^{*0}_c & \frac{1}{\sqrt{2}}\Xi^{*0}_c \\ \frac{1}{\sqrt{2}}\Xi^{*+}_c & \frac{1}{\sqrt{2}}\Xi^{*0}_c & \Omega^{*0}_c \end{pmatrix}. \quad (11)

The P and V are the pseudoscalar and vector meson matrices,

P = \begin{pmatrix} \frac{\sqrt{3}\pi^0+\eta}{\sqrt{6}} & \pi^+ & K^+ \\ \pi^- & \frac{-\sqrt{3}\pi^0+\eta}{\sqrt{6}} & K^0 \\ K^- & \bar{K}^0 & -\frac{2\eta}{\sqrt{6}} \end{pmatrix}, \quad
V = \begin{pmatrix} \frac{\rho^0+\omega}{\sqrt{2}} & \rho^+ & K^{*+} \\ \rho^- & \frac{-\rho^0+\omega}{\sqrt{2}} & K^{*0} \\ K^{*-} & \bar{K}^{*0} & \phi \end{pmatrix}.

The parameters in the above Lagrangians are listed in Table 2, taken from the literature [58-61].

Table 2: The parameters and coupling constants; f_π is in GeV, λ_S and λ_I are in GeV⁻¹, and the others are dimensionless.
f_π = 0.132, g_V = 5.9, β_S = −1.74, ℓ_S = 6.2, g_1 = −0.94, λ_S = −3.31;
β_B = −β_S/2, ℓ_B = −ℓ_S/2, g_4 = (g_1/2)√(2/3), λ_I = −λ_S/√8.

Potential kernel of interactions

With the above Lagrangians for the vertices, the potential kernel can be constructed in the one-boson-exchange model with the help of the standard Feynman rules, as in Refs. [62,63]. The propagators of the exchanged light mesons are defined as

P_{P,\sigma}(q^2) = \frac{i}{q^2-m^2_{P,\sigma}}f_i(q^2), \qquad
P^{\mu\nu}_{V}(q^2) = i\,\frac{-g^{\mu\nu}+q^{\mu}q^{\nu}/m^2_V}{q^2-m^2_V}f_i(q^2), \quad (12)

where the form factor f_i(q^2) = e^{-(m^2_e-q^2)^2/\Lambda^4_e} is adopted to reflect the off-shell effect of the exchanged meson, with m_e and q being the mass and momentum of the exchanged meson, respectively.
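The exponential form factor attached to each exchange propagator is simple to evaluate; the following sketch (function names ours, scalar-exchange case only, with q2 the squared four-momentum transfer in GeV²) illustrates Eq. (12):

```python
import math

def form_factor(q2: float, m_e: float, lam_e: float) -> float:
    """Exponential form factor exp[-(m_e^2 - q^2)^2 / Lambda_e^4] of Eq. (12)."""
    return math.exp(-((m_e**2 - q2) ** 2) / lam_e**4)

def scalar_propagator(q2: float, m_e: float, lam_e: float) -> complex:
    """i / (q^2 - m_e^2) times the form factor, for P or sigma exchange."""
    return 1j / (q2 - m_e**2) * form_factor(q2, m_e, lam_e)
```

At q² = m_e² the form factor reduces to 1, and far off shell it suppresses the exchange, which is its intended role here.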
Because of the large number of channels considered, we do not give the explicit form of the potential here. Instead, the vertices Γ obtained from the Lagrangians and the above propagators P are input into the code directly. The potential of the dibaryon systems is constructed with the help of the standard Feynman rules as [62]

\mathcal{V}_{P,\sigma} = I_{P,\sigma}\,\Gamma_1\Gamma_2\,P_{P,\sigma}(q^2), \qquad
\mathcal{V}_{V} = I_{V}\,\Gamma_{1\mu}\Gamma_{2\nu}\,P^{\mu\nu}_{V}(q^2), \quad (13)

where I_{P,V,σ} is the flavor factor for the exchange of the given meson; the factors are listed in Table 3. The interactions of the baryon-antibaryon partners are obtained from the charmed-strange interactions by the well-known G-parity rule V = Σ_i ζ_i V_i [64,65]. The G parities of the exchanged mesons i are carried by the factors ζ_i: since the π, ω, and φ mesons carry odd G parity, ζ_π, ζ_ω, and ζ_φ equal −1, and the others equal 1.

Table 3: The flavor factors I_e for the charmed-strange interactions. The values for the charmed-antistrange interactions can be obtained from these by the G-parity rule. The I_σ is 0 for couplings between different channels.

The qBSE approach

The Bethe-Salpeter equation is a four-dimensional relativistic integral equation that can be used to treat two-body scattering. To reduce it to a three-dimensional integral equation, we adopt the covariant spectator approximation, which keeps the unitarity and covariance of the equation [66]. In this treatment, one of the constituent particles, usually the heavier one, is put on shell, which leads to a reduced propagator for the two constituent particles in the center-of-mass frame [63,67],

G_0 = \frac{\delta^+(p''^2_h - m^2_h)}{p''^2_l - m^2_l}
    = \frac{\delta^+(p''^0_h - E_h(p''))}{2E_h(p'')\left[(W - E_h(p''))^2 - E^2_l(p'')\right]}. \quad (14)

As required by the spectator approximation adopted in the current work, the heavier particle (h, here the charmed baryon) satisfies p''^0_h = E_h(p'') = (m²_h + p''²)^{1/2}, and p''^0_l for the lighter particle (l) is then W − E_h(p''). Here and hereafter, the value of a momentum in the center-of-mass frame is defined as p = |p⃗|. The three-dimensional Bethe-Salpeter equation can then be reduced to a one-dimensional integral equation with fixed spin parity J^P by partial-wave decomposition [63],

iM^{J^P}_{\lambda'\lambda}(p',p) = iV^{J^P}_{\lambda'\lambda}(p',p)
 + \sum_{\lambda''}\int\frac{p''^2\,dp''}{(2\pi)^3}\,iV^{J^P}_{\lambda'\lambda''}(p',p'')\,G_0(p'')\,iM^{J^P}_{\lambda''\lambda}(p'',p), \quad (15)

where the sum extends only over nonnegative helicities λ''.
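The G-parity sign flip used above to relate the baryon-baryon and baryon-antibaryon potentials amounts to multiplying each single-meson-exchange contribution by ζ_i. A minimal sketch (function and dictionary names ours; K and K* are omitted since they are not G-parity eigenstates):

```python
# zeta_i = -1 for the odd-G-parity exchanges (pi, omega, phi), +1 otherwise.
G_PARITY_SIGN = {"pi": -1, "eta": +1, "rho": +1, "omega": -1, "phi": -1, "sigma": +1}

def flip_to_antibaryon(potentials: dict) -> dict:
    """Apply V = sum_i zeta_i V_i to a per-meson decomposition of the potential."""
    return {meson: G_PARITY_SIGN[meson] * v for meson, v in potentials.items()}
```

For example, a pion-exchange contribution changes sign while a rho-exchange one does not, which is why the baryon-antibaryon systems here bind differently from their dibaryon partners.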
The partial-wave potential in the one-dimensional equation is defined from the potential of the interaction obtained above as

V^{J^P}_{\lambda'\lambda}(p',p) = 2\pi\int d\cos\theta\,\left[d^{J}_{\lambda\lambda'}(\theta)V_{\lambda'\lambda}(\vec{p}\,',\vec{p}) + \eta\,d^{J}_{-\lambda\lambda'}(\theta)V_{\lambda'-\lambda}(\vec{p}\,',\vec{p})\right], \quad (16)

where η = PP₁P₂(−1)^{J−J₁−J₂}, with P and J being the parity and spin of the system. The initial and final relative momenta are chosen as p⃗ = (0, 0, p) and p⃗′ = (p′ sinθ, 0, p′ cosθ), and d^J_{λλ′}(θ) is the Wigner d-matrix. A regularization is usually introduced to avoid divergence when treating an integral equation. In the qBSE approach, we adopt an exponential regularization by introducing a form factor into the propagator, f(q²) = e^{−(k_l² − m_l²)²/Λ_r⁴}, where k_l and m_l are the momentum and mass of the lighter of the two constituent baryons. In the current work, the relation Λ_r = m + α·0.22 GeV, with m the mass of the exchanged meson, is also introduced into the regularization form factor, as in the form factors for the exchanged mesons. The cutoffs Λ_e and Λ_r play analogous roles in the calculation of the binding energy; for simplicity, we set Λ_e = Λ_r in the calculations. The partial-wave qBSE is a one-dimensional integral equation, which can be solved by discretizing the momenta with Gauss quadrature. This leads to a matrix equation of the form M = V + VGM [63]. A molecular state corresponds to a pole of the amplitude, which can be obtained by varying z to satisfy |1 − V(z)G(z)| = 0, with z = E_R − iΓ/2 being the exact position of the bound state.

Single-channel results

With the above preparation, explicit numerical calculations can be performed for the systems mentioned above. In the current model, we have only one free parameter, α. In the following, we vary the free parameter in a range of 0-5 to find the S-wave bound states with binding energies smaller than 30 MeV.
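The numerical search described above, discretizing the qBSE into M = V + VGM and locating poles where |1 − VG| = 0, can be sketched in a few lines. Here V and G are small placeholder arrays rather than physical qBSE kernels, and the function names are ours:

```python
import numpy as np

def solve_amplitude(V: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Solve M = V + V G M, i.e. M = (1 - V G)^{-1} V, for a discretized kernel.

    V: (n, n) potential matrix on the quadrature grid;
    G: length-n vector of propagator values times quadrature weights.
    """
    kernel = np.eye(V.shape[0]) - V @ np.diag(G)
    return np.linalg.solve(kernel, V)

def pole_condition(V: np.ndarray, G: np.ndarray) -> float:
    """det(1 - V G); a bound state sits where this determinant vanishes."""
    return np.linalg.det(np.eye(V.shape[0]) - V @ np.diag(G))
```

In practice one scans the energy z (entering through G and V) and searches for a zero of the determinant, which reproduces the pole search stated in the text.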
In this work, we consider all possible channels with csssqq quarks. The Λ_cΩ and Σ_c^(*)Ω channels and their baryon-antibaryon partners cannot be treated in single-channel calculations because no light-meson exchange connects them in the one-boson-exchange model adopted here; these channels will, however, be included in the later coupled-channel calculations. Based on the quark configurations in the different hadron clusters, the single-channel interactions can be divided into two categories: Ξ_c^(′,*)Ξ^(*), and Ω_c^(*)Λ or Ω_c^(*)Σ^(*), together with their baryon-antibaryon partners.

The results for the Ξ_cΞ^(*) and Ξ_cΞ̄^(*) interactions are shown in Fig. 1 and suggest that fourteen interactions produce bound states in the considered range of the parameter α. All eight bound states from the Ξ_cΞ^(*) interactions appear at α values less than 1. The binding energies of the isovector Ξ_cΞ states with (0,1)^+ and of the isoscalar and isovector Ξ_cΞ* states with (1,2)^+ increase rapidly to 30 MeV at α values of about 1.5, which indicates strong attraction. However, the binding energies of the isoscalar bound states from the Ξ_cΞ interaction with (0,1)^+ increase slowly, reaching 20 MeV at α values of about 5. The variation tendencies of the binding energies of the Ξ_cΞ^(*) states with different spin parities are analogous. Almost all bound states from the baryon-antibaryon interactions appear at α values larger than 3, and the isovector Ξ_cΞ̄* interaction with (1,2)^− produces no bound state. This suggests that the possibility of the existence of these baryon-antibaryon bound states is relatively low.

In Fig. 2, the single-channel results for the Ξ′_cΞ^(*) and Ξ′_cΞ̄^(*) interactions are presented. In these systems, the charmed baryon belongs to the multiplet B₆. The results suggest that twelve bound states can be produced from these interactions within the considered range of the parameter α. All eight bound states from the baryon-baryon interactions appear at α values less than 1.
Among these bound states, the two isoscalar bound states from the Ξ ′ c Ξ interaction with (0, 1) + are well separated from each other and increase relatively slowly, reaching 20 MeV at α values of about 3.5 and 5.0, respectively. The other six bound states increase rapidly to 30 MeV at α values of about 1.5, and the binding energies of the states with different spins are almost the same. However, only four bound states can be produced from the baryon-antibaryon interactions, which include the isoscalar and isovector Ξ ′ cΞ states with 1 − . Again, one can find that the states with larger spin are more easily produced from the isoscalar interactions, while the states with smaller spin are more easily produced from the isovector interactions. Still, these baryon-antibaryon states are produced at α values around or above 3, which makes their existence less probable.

In Fig. 3, we present the results for the Ξ * c Ξ ( * ) and Ξ * cΞ ( * ) systems, in which the charmed baryon belongs to the multiplet B * 6 . The results suggest that bound states can be produced from eighteen interactions. For the baryon-baryon systems, bound states can be produced from all channels and appear at α values below 1.5. The curves of the two isoscalar Ξ * c Ξ states with (1, 2) + are clearly separated, and their binding energies reach 5 MeV relatively slowly, at α values of about 4.5 and 2, respectively. Besides these two states, the other ten states increase with the parameter α relatively rapidly, reaching 30 MeV at α values of about 2.5. Meanwhile, the interactions with smaller spins have stronger attraction, which is reflected in binding energies that increase faster with the variation of the parameter. For the baryon-antibaryon partners, two isoscalar states, from the Ξ * cΞ interaction with 2 − and the Ξ * cΞ * interaction with 3 − , as well as four isovector states, from the Ξ * cΞ interaction with (1, 2) − and the Ξ * cΞ * interaction with (0, 1) − , can be produced at α values above 2.5.
In Fig. 4, we first give the results for the interactions Ω c Λ, Ω c Σ ( * ) , Ω cΛ and Ω cΣ ( * ) , in which the charmed baryons belong to the multiplet B 6 . Only seven states are produced from these interactions. For the Ω c Λ interaction and its baryon-antibaryon partner Ω cΛ with isospin I = 0, only the states with spin J = 1 can be produced, at α values of about 4.0 and 3.0, respectively. There is no bound state produced from the isovector interaction Ω c Σ with (0, 1) + in the considered range of the parameter α. Two bound states from the Ω cΣ interaction with (0, 1) − appear at α values of about 3.0 and 3.6, respectively. Two bound states from the isovector Ω c Σ * interaction with (1, 2) + appear at α values of about 3.0 and 1.5, respectively, while only an isovector Ω cΣ * state with 1 − can be produced, at an α value of about 4.8. The states from the baryon-antibaryon interactions are still less likely to exist due to the large values of the parameter α required to produce the bound states.

In Fig. 5, the results for the interactions Ω * c Λ, Ω * c Σ ( * ) , Ω * cΛ , and Ω * cΣ ( * ) are presented. Here, the charmed baryons are in the B * 6 multiplet. The single-channel calculation suggests that nine bound states can be produced from the sixteen interactions considered. The isoscalar Ω * c Λ state and its baryon-antibaryon partner Ω * cΛ state with spin J = 2 appear at α values of about 3.5. As with the Ω c Σ interaction, the isovector Ω * c Σ systems with (1, 2) + are unbound. The Ω * cΣ state with 1 − is produced at α values larger than 3.0. The isovector interactions Ω * c Σ * and Ω * cΣ * are found to be attractive, and four states with spin parities (0, 1, 2, 3) + and two states with (0, 1) − are produced, respectively. The Ω * c Σ * state with 0 + appears at an α value of about 3.5, while the (1, 2, 3) + states all appear at α values of about 2.0. The two Ω * cΣ * states with (0, 1) − are produced at α values of about 4.6.
Coupled-channel results

In the previous single-channel calculations, many bound states are produced from the considered interactions within the allowed range of the parameter α. To estimate the strength of the coupling between a molecular state and the corresponding decay channels, we now consider the coupled-channel effects. In the coupled-channel calculations, the channels with the same quark components and the same quantum numbers can couple to each other, which makes the pole of a bound state deviate from the real axis into the complex energy plane and acquire an imaginary part. The imaginary part corresponds to the width of the state as Γ = 2 Im z. Here, we present the coupled-channel position of a bound state as M_th − z instead of the original position z of the pole, with M_th being the nearest threshold. In the above single-channel calculations, much larger α values are required to produce the bound states from the baryon-antibaryon interactions, which suggests that the possibility of the existence of these states is very low. Hence, in the following coupled-channel calculations, we only consider the baryon-baryon interactions.

In Table 4, we present the coupled-channel results for the isoscalar baryon-baryon interactions, which involve all possible couplings between the channels Ξ ( ′ , * ) c Ξ ( * ) , Λ c Ω and Ω ( * ) c Λ. The poles of the full coupled-channel interaction below the corresponding thresholds are given for different α in the second and third columns. Glancing over the coupled-channel results for the channels Ξ ( ′ , * ) c Ξ * , Ξ ′ c Ξ and Ω ( * ) c Λ in Table 4, we can find that the real parts of most poles from the coupled-channel calculation are similar to those from the single-channel calculations, and that small widths are acquired from the couplings with the channels considered.
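The block structure of such a coupled-channel calculation can be sketched with a toy two-channel extension of a separable model (again, not the actual qBSE kernel of this work; the potentials, thresholds, and couplings below are arbitrary illustrative choices). The channels are stacked into one matrix equation M = V + VGM, each channel carrying its own propagator relative to its threshold. Both channels are kept closed (z below both thresholds), so the pole stays on the real axis; with an open channel the pole moves into the complex plane and Γ = 2 Im z, which requires an analytic continuation not attempted here. The off-diagonal coupling lam12 feeds attraction from the higher channel into the lower one and shifts the pole.

```python
import numpy as np

def momentum_grid(n=48, scale=2.0):
    """Gauss-Legendre nodes mapped from (-1, 1) to (0, inf), Jacobian in weights."""
    x, w = np.polynomial.legendre.leggauss(n)
    q = scale * (1 + x) / (1 - x)
    return q, w * 2 * scale / (1 - x) ** 2

def coupled_det(z, lam11=2.0, lam12=0.8, delta2=0.3, beta=1.0, mu=1.0, n=48):
    """det(1 - V G) for two channels stacked into one matrix equation.
    Channel 1 has its threshold at 0, channel 2 at delta2; z < 0 keeps both closed."""
    q, w = momentum_grid(n)
    g = 1.0 / (q**2 + beta**2)
    v = np.outer(g, g)
    zero = np.zeros((n, n))
    V = np.block([[-lam11 * v, -lam12 * v],   # diagonal + off-diagonal couplings
                  [-lam12 * v, zero]])        # no direct interaction in channel 2
    G1 = (2.0 / np.pi) * w * q**2 / (z - q**2 / (2 * mu))
    G2 = (2.0 / np.pi) * w * q**2 / (z - delta2 - q**2 / (2 * mu))
    G = np.concatenate([G1, G2])              # channel-wise propagator weights
    return np.linalg.det(np.eye(2 * n) - V * G)

def find_binding(f, lo=1e-6, hi=5.0):
    """Bisect f(-E_B) = 0 on the real axis below the lower threshold."""
    assert f(-lo) * f(-hi) < 0, "no bound state in this bracket"
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(-lo) * f(-mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)
```

Switching the coupling on (lam12 > 0) deepens the binding of the state below the lower threshold, a real-axis analogue of the coupled-channel mass shifts listed in Tables 4 and 5.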
However, the full coupled-channel interaction has a great impact on the Ξ * c Ξ channel, as suggested by the variations in the mass and width. Compared with the single-channel calculations, the masses change significantly, and the widths are much larger. Two-channel calculations are also performed, and the results are presented in the fourth to eleventh columns. For the states near the Ξ ( ′ , * ) c Ξ ( * ) thresholds with (0, 1, 2, 3) + , relatively obvious two-channel couplings can be found in the Ξ ′ c Ξ channel. For the states near the Ξ ′ c Ξ * threshold with (1, 2) + , the main two-channel couplings can be found in the Ξ ′ c Ξ channel. For the two states near the Ξ c Ξ * threshold with (1, 2) + , the widths from the two-channel couplings are both less than 1.0 MeV. For the states near the Ξ * c Ξ threshold with (1, 2) + , the main decay channel is Ω * c Λ, which leads to widths of about a dozen MeV and a large increase of the binding energy. Similarly, the states near the Ξ ′ c Ξ threshold with (0, 1) + have considerably large couplings with the Ω c Λ channel, which leads to an obvious increase of the mass. For the state near the Ω * c Λ threshold with 1 + , the Ξ c Ξ channel is the dominant channel producing its total width. Since the Ω c Λ channel has the second lowest threshold, it can only couple to the Ξ c Ξ channel, so its only two-channel coupling width comes from the Ξ c Ξ channel.

The coupled-channel results for the isovector baryon-baryon interactions are presented in Table 5. For the isovector states near the Ξ * c Ξ * threshold with (0, 1, 2) + , large couplings can be found in the Ω * c Σ * channel, and their binding energies also decrease a little compared with the single-channel results after including the two-channel couplings. Among the states near the Ω * c Σ * threshold with (0, 1, 2, 3) + , there exist some differences between the different two-channel couplings.
After including the two-channel couplings between the channel Ω * c Σ * and the channels Ξ ′ c Ξ * , Ξ c Ξ * or Ξ * c Ξ, the binding energy of the state with 0 + becomes obviously larger than the single-channel value, together with considerable widths. The coupling to the Ω * c Σ channel leads to a decrease of the binding energy. The other two-channel couplings affect the single-channel masses little and lead to small widths. For the state with 1 + , large couplings with large widths can be found in the channels Ξ ′ c Ξ * and Ω * c Σ. However, when it couples to the channels Ω c Σ * , Ξ * c Ξ or Ξ c Ξ, the bound state appears at a large α value of about 2.4. The state with 2 + strongly couples to the channel Ω * c Σ, and the couplings with the channels Ξ ′ c Ξ * , Ξ c Ξ * , Ξ ′ c Ξ or Ξ c Ξ result in decreases of the binding energy. When the (2, 3) + states couple to the channel Ω * c Σ at parameter values of 2.9 and 2.8, respectively, the binding energies move beyond the range of our coupled-channel calculation, namely binding energies less than 50 MeV, which is indicated by the two "−−" in the table. For the isovector states near the Ξ ′ c Ξ * threshold with (1, 2) + , most couplings have no significant effect compared with the single-channel results, as suggested by the almost unchanged masses and very small widths. However, the coupling to the Ω c Σ * channel decreases the binding energy and brings considerable widths. Hence, the two-channel results with the channel Ω c Σ * affect the overall coupled-channel results considerably and give rise to the noticeable reduction in the binding energies. For the states near the Ω c Σ * threshold with (1, 2) + , the channels Ξ c Ξ * and Ω c Σ are dominant. In addition, after coupling to the Ξ ′ c Ξ channel, the states with (1, 2) + are not attractive enough to be produced within the range of parameter values considered.
No obvious strong coupled-channel effects can be found for the remaining states near the Ξ c Ξ * , Ξ * c Ξ and Ξ ′ c Ξ thresholds, and the widths from the two-channel couplings are all less than 1 MeV.

Summary and discussion

In this work, we systematically study the charmed-strange baryon systems composed of csssqq quarks and their baryon-antibaryon partners in a qBSE approach. The potential kernels are constructed with the help of effective Lagrangians with SU(3), chiral and heavy quark symmetries. The S-wave bound states are searched for as poles of the scattering amplitudes. All S-wave charmed-strange dibaryon interactions Ξ ( ′ , * ) c Ξ ( * ) , Ω ( * ) c Λ, Ω ( * ) c Σ ( * ) , Λ c Ω and Σ ( * ) c Ω and their baryon-antibaryon partners Ξ ( ′ , * ) cΞ ( * ) , Ω ( * ) cΛ , Ω ( * ) cΣ ( * ) , Λ cΩ and Σ ( * ) cΩ are considered, which leads to 84 channels with different spin parities. The single-channel calculations suggest that 36 and 24 bound states can be produced from the baryon-baryon and baryon-antibaryon interactions, respectively. Most bound states from the baryon-antibaryon interactions are produced at much larger values of the parameter α, which suggests that these bound states are less likely to be found in future experiments than the corresponding dibaryon states. Such results are consistent with our previous finding [47] that fewer states can be produced from the charmed-antistrange interactions than from the charmed-strange interactions.

Table 4 The masses and widths of the isoscalar baryon-baryon molecular states at different values of α. "CC" means the full coupled-channel calculation. The values of the complex positions are the mass of the corresponding threshold minus the position of the pole, M_th − z, in units of MeV. The double dash "−−" means the coupling does not exist. An imaginary part shown as "0.0" means a value too small at the current precision.

Table 5 The masses and widths of the isovector charmed-strange molecular states at different values of α. Other notations are the same as in Table 4.

Furthermore, the coupled-channel effects on the bound states produced in the single-channel calculations are studied. Since the states from the baryon-antibaryon interactions are less likely to exist, we do not consider these interactions in the coupled-channel calculations. For the isoscalar interactions, the coupled-channel calculations hardly change the conclusions from the single-channel calculations, which means that the coupled-channel effects are not very significant. However, for the isovector interactions, the coupled-channel effects are obvious and usually cause great variations of the binding energy together with considerable widths. Compared with our previous coupled-channel calculations in Refs. [68,69], the coupled-channel effects here have an obviously large influence on both the real and imaginary parts of the poles.
It may be related to the constituent hadrons considered in the current work. The systems studied here are composed of a light hadron and a charmed hadron, and compared with the double-charm or double-bottom systems, systems containing light hadrons are usually less stable.

Generally speaking, the charmed-strange dibaryon systems with csssqq quarks are usually attractive enough to produce bound states, while their baryon-antibaryon partners are less attractive or hardly attractive at all. Both theoretical and experimental studies are suggested to provide more valuable information.

3.1 Molecular states from interactions Ξ ( ′ , * ) c Ξ ( ′ , * ) and Ξ ( ′ , * ) cΞ ( ′ , * )

First, we consider the interactions Ξ ( ′ , * ) c Ξ ( * ) and Ξ ( ′ , * ) cΞ ( * ) with quark configurations [csq][ssq] and [csq][s̄s̄q̄], respectively. The single-channel results for the interactions Ξ c Ξ ( * ) and Ξ cΞ ( * ) , in which the charmed baryon belongs to the multiplet B3, are illustrated in Fig. 1.

Fig. 1 Binding energies of the bound states from the interactions Ξ c Ξ ( * ) (left) and Ξ cΞ ( * ) (right), with thresholds of 3787 (4002) MeV, with the variation of α in the single-channel calculation.

Fig. 2 Binding energies of the bound states from the interactions Ξ ′ c Ξ ( * ) (left) and Ξ ′ cΞ ( * ) (right), with thresholds of 3896 (4111) MeV, with the variation of α in the single-channel calculation.
Fig. 3 Binding energies of the bound states from the interactions Ξ * c Ξ ( * ) (left) and Ξ * cΞ ( * ) (right), with thresholds of 3963 (4178) MeV, with the variation of α in the single-channel calculation.

3.2 Molecular states from interactions Ω ( * ) c Λ/Ω ( * ) c Σ ( * ) and Ω ( * ) cΛ /Ω ( * ) cΣ ( * )

For the systems composed of [css][sqq] and [css][s̄q̄q̄], there exist the interactions Ω ( * ) c Λ and Ω ( * ) c Σ ( * ) and their baryon-antibaryon partners, the interactions Ω ( * ) cΛ and Ω ( * ) cΣ ( * ) .

Fig. 4 Binding energies of the bound states from the interactions Ω c Λ/Ω c Σ ( * ) (left) and Ω cΛ /Ω cΣ ( * ) (right), with thresholds of 3810/3888 (4079) MeV, with the variation of α in the single-channel calculation.

Fig. 5 Binding energies of the bound states from the interactions Ω * c Λ/Ω * c Σ ( * ) (left) and Ω * cΛ /Ω * cΣ ( * ) (right), with thresholds of 3882/3959 (4150) MeV, with the variation of α in the single-channel calculation.

References

1. F. Dyson and N. H. Xuong, "Y=2 States in Su(6) Theory," Phys. Rev. Lett. 13, no.26, 815-817 (1964)
2. P. Adlarson et al. [WASA-at-COSY], "ABC Effect in Basic Double-Pionic Fusion - Observation of a new resonance?," Phys. Rev. Lett. 106, 242302 (2011)
3. A. Gal and H. Garcilazo, "Three-Body Calculation of the Delta-Delta Dibaryon Candidate D(03) at 2.37 GeV," Phys. Rev. Lett. 111, 172301 (2013)
4. F. Huang, Z. Y. Zhang, P. N. Shen and W. L. Wang, "Is d* a candidate for a hexaquark-dominated exotic state?," Chin. Phys. C 39, no.7, 071001 (2015)
5. N. Ikeno, R. Molina and E. Oset, "Triangle singularity mechanism for the pp → π+d fusion reaction," Phys. Rev. C 104, no.1, 014614 (2021)
6. S. K. Choi et al. [Belle], "Observation of a narrow charmonium-like state in exclusive B± → K±π+π−J/ψ decays," Phys. Rev. Lett. 91, 262001 (2003)
7. N. A. Tornqvist, "Isospin breaking of the narrow charmonium state of Belle at 3872-MeV as a deuson," Phys. Lett. B 590, 209-215 (2004)
8. M. Ablikim et al. [BESIII], "Observation of a Charged Charmoniumlike Structure in e+e− → π+π−J/ψ at √s = 4.26 GeV," Phys. Rev. Lett. 110, 252001 (2013)
9. Z. Q. Liu et al. [Belle], "Study of e+e− → π+π−J/ψ and Observation of a Charged Charmoniumlike State at Belle," Phys. Rev. Lett. 110, 252002 (2013) [erratum: Phys. Rev. Lett. 111, 019901 (2013)]
10. T. Xiao, S. Dobbs, A. Tomaradze and K. K. Seth, "Observation of the Charged Hadron Z±c(3900) and Evidence for the Neutral Z0c(3900) in e+e− → ππJ/ψ at √s = 4170 MeV," Phys. Lett. B 727, 366-370 (2013)
11. R. Aaij et al. [LHCb], "Observation of J/ψp Resonances Consistent with Pentaquark States in Λ0b → J/ψK−p Decays," Phys. Rev. Lett. 115, 072001 (2015)
12. R. Aaij et al. [LHCb], "Observation of a narrow pentaquark state, Pc(4312)+, and of two-peak structure of the Pc(4450)+," Phys. Rev. Lett. 122, no.22, 222001 (2019)
13. R. Aaij et al. [LHCb], "Evidence of a J/ψΛ structure and observation of excited Ξ− states in the Ξ−b → J/ψΛK− decay," Sci. Bull. 66, 1278-1287 (2021)
14. [LHCb], "Observation of a J/ψΛ resonance consistent with a strange pentaquark candidate in B− → J/ψΛp̄ decays"
15. T. F. Carames and A. Valcarce, "Heavy flavor dibaryons," Phys. Rev. D 92, no.3, 034015 (2015)
16. Q. F. Lü, D. Y. Chen and Y. B. Dong, "Fully-heavy hexaquarks in a constituent quark model," [arXiv:2208.03041 [hep-ph]]
17. J. Vijande, A. Valcarce, J. M. Richard and P. Sorba, "Search for doubly-heavy dibaryons in a quark model," Phys. Rev. D 94, no.3, 034038 (2016)
18. L. Meng, N. Li and S. L. Zhu, "Deuteron-like states composed of two doubly charmed baryons," Phys. Rev. D 95, no.11, 114019 (2017)
19. N. Li and S. L. Zhu, "Hadronic Molecular States Composed of Heavy Flavor Baryons," Phys. Rev. D 86, 014020 (2012)
20. X. K. Dong, F. K. Guo and B. S. Zou, "A survey of heavy-antiheavy hadronic molecules," Progr. Phys. 41, 65-93 (2021)
21. K. Chen, R. Chen, L. Meng, B. Wang and S. L. Zhu, "Systematics of the heavy flavor hadronic molecules," Eur. Phys. J. C 82, no.7, 581 (2022)
22. Z. Liu, H. T. An, Z. W. Liu and X. Liu, "Where are the hidden-charm hexaquarks?," Phys. Rev. D 105, no.3, 034006 (2022)
23. D. Song, L. Q. Song, S. Y. Kong and J. He, "Possible molecular states from interactions of charmed baryons," Phys. Rev. D 106, no.7, 074030 (2022)
24. B. Aubert et al. [BaBar], "Observation of a narrow meson decaying to D+s π0 at a mass of 2.32 GeV/c2," Phys. Rev. Lett. 90, 242001 (2003)
25. P. Krokovny et al. [Belle], "Observation of the DsJ(2317) and DsJ(2457) in B decays," Phys. Rev. Lett. 91, 262002 (2003)
26. D. Besson et al. [CLEO], "Observation of a narrow resonance of mass 2.46 GeV/c2 decaying to D*+s π0 and confirmation of the DsJ(2317) state," Phys. Rev. D 68, 032002 (2003) [erratum: Phys. Rev. D 75, 119908 (2007)]
27. T. Barnes, F. E. Close and H. J. Lipkin, "Implications of a DK molecule at 2.32 GeV," Phys. Rev. D 68, 054006 (2003)
28. E. Oset, F. Navarra, M. Nielsen and T. Sekihara, "Semileptonic Bs and B decays testing the molecular nature of D*s0(2317) and D*0(2400)," AIP Conf. Proc. 1735, no.1, 050017 (2016)
29. E. E. Kolomeitsev and M. F. M. Lutz, "On Heavy light meson resonances and chiral symmetry," Phys. Lett. B 582, 39-48 (2004)
30. F. K. Guo, P. N. Shen, H. C. Chiang, R. G. Ping and B. S. Zou, "Dynamically generated 0+ heavy mesons in a heavy chiral unitary approach," Phys. Lett. B 641, 278-285 (2006)
31. F. K. Guo, P. N. Shen and H. C. Chiang, "Dynamically generated 1+ heavy mesons," Phys. Lett. B 647, 133-139 (2007)
32. J. L. Rosner, "Effects of S-wave thresholds," Phys. Rev. D 74, 076006 (2006)
33. Y. J. Zhang, H. C. Chiang, P. N. Shen and B. S. Zou, "Possible S-wave bound-states of two pseudoscalar mesons," Phys. Rev. D 74, 014013 (2006)
34. M. Z. Liu, J. J. Xie and L. S. Geng, "X0(2866) as a D*K* molecular state," Phys. Rev. D 102, no.9, 091502 (2020)
35. R. Aaij et al. [LHCb], "A model-independent study of resonant structure in B+ → D+D−K+ decays," Phys. Rev. Lett. 125, 242001 (2020)
36. R. Aaij et al. [LHCb], "Amplitude analysis of the B+ → D+D−K+ decay," Phys. Rev. D 102, 112003 (2020)
37. S. S. Agaev, K. Azizi and H. Sundu, "New scalar resonance X0(2900) as a molecule: mass and width," J. Phys. G 48, no.8, 085012 (2021)
38. Y. Huang, J. X. Lu, J. J. Xie and L. S. Geng, "Strong decays of D*K* molecules and the newly observed X0,1 states," Eur. Phys. J. C 80, no.10, 973 (2020)
39. H. Mutuk, "Monte-Carlo based QCD sum rules analysis of X0(2900) and X1(2900)," J. Phys. G 48, no.5, 055007 (2021)
40. R. Molina and E. Oset, "Molecular picture for the X0(2866) as a D*K* JP = 0+ state and related 1+, 2+ states," Phys. Lett. B 811, 135870 (2020)
41. C. J. Xiao, D. Y. Chen, Y. B. Dong and G. W. Meng, "Study of the decays of S-wave D*K* hadronic molecules: The scalar X0(2900) and its spin partners XJ (J=1,2)," Phys. Rev. D 103, no.3, 034004 (2021)
42. J. He and D. Y. Chen, "Molecular picture for X0(2900) and X1(2900)," Chin. Phys. C 45, no.6, 063102 (2021)
43. S. Y. Kong, J. T. Zhu, D. Song and J. He, "Heavy-strange meson molecules and possible candidates D*s0(2317), Ds1(2460), and X0(2900)," Phys. Rev. D 104, no.9, 094012 (2021)
44. X. Liu, Y. Tan, X. Chen, D. Chen, H. Huang and J. Ping, "Possible charmed-strange molecular pentaquarks in quark delocalization color screening model"
45. H. T. An, Z. W. Liu, F. S. Yu and X. Liu, "Discovery of Ta_cs0(2900)(0,++) implies new charmed-strange pentaquark system," Phys. Rev. D 106, no.11, L111501 (2022)
46. R. Chen and Q. Huang, "From the isovector molecular explanation of the newly Ta0(++)_cs(2900) to possible charmed-strange molecular pentaquarks"
47. S. Y. Kong, J. T. Zhu and J. He, "Possible charmed-strange molecular dibaryons," Eur. Phys. J. C 82, no.9, 834 (2022)
48. D. Ronchen, M. Doring, F. Huang, H. Haberzettl, J. Haidenbauer, C. Hanhart, S. Krewald, U. G. Meissner and K. Nakayama, "Coupled-channel dynamics in the reactions πN → πN, ηN, KΛ, KΣ," Eur. Phys. J. A 49, 44 (2013)
49. H. Kamano, B. Julia-Diaz, T. S. H. Lee, A. Matsuyama and T. Sato, "Dynamical coupled-channels study of πN → ππN reactions," Phys. Rev. C 79, 025206 (2009)
50. J. J. de Swart, "The Octet model and its Clebsch-Gordan coefficients," Rev. Mod. Phys. 35, 916-939 (1963)
51. Z. T. Lu, H. Y. Jiang and J. He, "Possible molecular states from the N∆ interaction," Phys. Rev. C 102, no.4, 045202 (2020)
52. J. T. Zhu, S. Y. Kong, L. Q. Song and J. He, "Systematical study of Ωc-like molecular states from interactions Ξ(′,*)_c K̄(*) and Ξ(*)D(*)," Phys. Rev. D 105, no.9, 094036 (2022)
53. L. Zhao, N. Li, S. L. Zhu and B. S. Zou, "Meson-exchange model for the ΛΛ interaction," Phys. Rev. D 87, no.5, 054034 (2013)
54. H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin, T. M. Yan and H. L. Yu, "Chiral Lagrangians for radiative decays of heavy hadrons," Phys. Rev. D 47, 1030-1042 (1993)
55. T. M. Yan, H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin and H. L. Yu, "Heavy quark symmetry and chiral dynamics," Phys. Rev. D 46, 1148-1164 (1992)
56. M. B. Wise, "Chiral perturbation theory for hadrons containing a heavy quark," Phys. Rev. D 45, no.7, R2188 (1992)
57. R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio and G. Nardulli, "Phenomenology of heavy meson chiral Lagrangians," Phys. Rept. 281, 145-238 (1997)
58. R. Chen, Z. F. Sun, X. Liu and S. L. Zhu, "Strong LHCb evidence supporting the existence of the hidden-charm molecular pentaquarks," Phys. Rev. D 100, no.1, 011502 (2019)
59. Y. R. Liu and M. Oka, "ΛcN bound states revisited," Phys. Rev. D 85, 014015 (2012)
60. C. Isola, M. Ladisa, G. Nardulli and P. Santorelli, "Charming penguins in B → K*π, K(ρ, ω, φ) decays," Phys. Rev. D 68, 114001 (2003)
61. A. F. Falk and M. E. Luke, "Strong decays of excited heavy mesons in chiral perturbation theory," Phys. Lett. B 292, 119-127 (1992)
62. J. He, "Study of Pc(4457), Pc(4440), and Pc(4312) in a quasipotential Bethe-Salpeter equation approach," Eur. Phys. J. C 79, no.5, 393 (2019)
63. J. He, "The Zc(3900) as a resonance from the DD̄* interaction," Phys. Rev. D 92, no.3, 034004 (2015)
64. R. J. N. Phillips, "Antinuclear Forces," Rev. Mod. Phys. 39, 681-688 (1967)
65. E. Klempt, F. Bradamante, A. Martin and J. M. Richard, "Antinucleon nucleon interaction at low energy: Scattering and protonium," Phys. Rept. 368, 119-316 (2002)
66. F. Gross, J. W. Van Orden and K. Holinde, "Relativistic one boson exchange model for the nucleon-nucleon interaction," Phys. Rev. C 45, 2094-2132 (1992)
67. J. He and X. Liu, "The open-charm radiative and pionic decays of molecular charmonium Y(4274)," Eur. Phys. J. C 72, 1986 (2012) [arXiv:1102.1127 [hep-ph]]
68. J. T. Zhu, S. Y. Kong, Y. Liu and J. He, "Hidden-bottom molecular states from Σ(*)_b B̄(*) − Λ_b B̄(*) interaction," Eur. Phys. J. C 80, no.11, 1016 (2020) [arXiv:2007.07596 [hep-ph]]
69. J. He and D. Y. Chen, "Molecular states from Σ(*)_c D̄(*) − Λ_c D̄(*) interaction," Eur. Phys. J. C 79, no.11, 887 (2019) [arXiv:1909.05681 [hep-ph]]
[]
[ "As similarly seen in IEEE Black Sea Conference on Communication and Networking", "As similarly seen in IEEE Black Sea Conference on Communication and Networking" ]
[ "Farooq Shaikh \nUniv. of South Florida\n\n", "Elias Bou-Harb \nUniv. of Texas San Antonio\n\n", "Aldin Vehabovic \nUniv. of South Florida\n\n", "Jorge Crichigno \nUniv. of South Carolina\n\n", "Aysegül Yayimli \nValparaiso University\n\n", "Nasir Ghani \nUniv. of South Florida\n\n" ]
[ "Univ. of South Florida\n", "Univ. of Texas San Antonio\n", "Univ. of South Florida\n", "Univ. of South Carolina\n", "Valparaiso University\n", "Univ. of South Florida\n" ]
[]
The Internet of Things (IoT) paradigm provides persistent sensing and data collection capabilities and is becoming increasingly prevalent across many market sectors. However, most IoT devices emphasize usability and function over security, making them very vulnerable to malicious exploits. This concern is evidenced by the increased use of compromised IoT devices in large scale bot networks (botnets) to launch distributed denial of service (DDoS) attacks against high value targets. Unsecured IoT systems can also provide entry points to private networks, allowing adversaries relatively easy access to valuable resources and services. Indeed, these evolving IoT threat vectors (ranging from brute force attacks to remote code execution exploits) are posing key challenges. Moreover, many traditional security mechanisms are not amenable for deployment on smaller resource-constrained IoT platforms. As a result, researchers have been developing a range of methods for IoT security, with many strategies using advanced machine learning (ML) techniques. Along these lines, this paper presents a novel generative adversarial network (GAN) solution to detect threats from malicious IoT devices both inside and outside a network. This model is trained using both benign IoT traffic and global darknet data and further evaluated in a testbed with real IoT devices and malware threats.
DOI: 10.1109/blackseacom54372.2022.9858239
PDF: https://export.arxiv.org/pdf/2305.15191v1.pdf
arXiv: 2305.15191
As similarly seen in IEEE Black Sea Conference on Communication and Networking, Sofia, Bulgaria, June 2022

Index Terms-Machine learning, deep learning, IoT security, malware, generative adversarial networks (GAN)
I. INTRODUCTION

Continued advances in sensing, computing, and networking technologies have resulted in the novel Internet of Things (IoT) paradigm. This approach uses a multitude of smaller embedded devices (augmented with sensing capabilities) to collect and relay information about their surroundings and environment. In many cases, these devices also interact with each other. This sensing information is then transferred to large datacenter sites (in the cloud) for further processing and data analytics purposes, i.e., to improve situational awareness and decision making processes. Indeed, IoT paradigms are seeing widespread traction across many diverse market sectors with many billions of devices already deployed, e.g., in domains such as transport, utility, building/infrastructure, manufacturing, healthcare, home automation, etc.

Although IoT-based solutions offer tremendous benefits in terms of productivity and efficiency, they also introduce a plethora of security challenges. Namely, most IoT system manufacturers have emphasized cost reduction and rapid time-to-market over security support. As a result, many designs use very basic Linux or Unix operating systems (OS) and have very limited (computational, storage) resource capabilities. Hence IoT devices are highly vulnerable to exploitation by malicious actors. Moreover, it is generally difficult (infeasible) to constantly patch these devices given their sheer scale of deployment, complicated access, and the lack of vendor support (update mechanisms).

In light of the above, IoT infrastructures clearly represent a very large and growing cyberthreat surface. As a result, unsolicited IoT-related activity on the Internet continues to grow at a steady rate (with telnet being the most commonly exploited service).
More importantly, hackers have also developed advanced IoT-specific malware to essentially recruit and coordinate much larger groups of devices and launch large scale distributed denial of service (DDoS) attacks [1]. By far the most notable example here is the Mirai botnet, which attacked Dyn DNS servers and caused massive Internet outages in 2016. The financial and service sectors were also impacted by this IoT botnet. Overall, Mirai remains a major cyberthreat today, and hackers continue to evolve new variants with increasingly complex code structures and attack strategies [2]. In addition, further methods are also being developed to target boot loader and firmware on IoT platforms. For example, the UbootKit malware can manipulate the boot loader to grant root privileges to an attacker [3]. Indeed, many of these malware types are becoming increasingly difficult to detect using traditional intrusion detection mechanisms.

Despite these vulnerabilities, IoT technologies and solutions continue to gain traction across a wide range of private, commercial, and governmental domains. Therefore it is critical to proactively identify and mitigate IoT-based threats before they materialize. In response, researchers have proposed a range of machine learning (ML) schemes to detect anomalous behaviors in IoT domains [4], [5], [6], [7], [8], [9]. However for the most part, many of these studies do not consider prominent IoT malware families and/or utilize realistic attack datasets. Hence there is a pressing need to leverage real-world empirical network data to counter IoT threats.

In light of the above, this paper presents a novel anomaly detection solution to identify threats from malicious IoT devices from both inside and outside the network. Namely, the scheme utilizes a generative adversarial network (GAN) approach to model traffic distributions (for both benign and anomalous data) to identify malicious behaviors.
This particular neural network (NN) based scheme is chosen as it is very effective in modeling latent representations of data and reconstructing samples from their underlying distribution. An experimental real-world testbed is also developed to evaluate the proposed solution in realistic settings using the Mirai and Bashlite malware families. In particular, GAN models are trained using benign IoT data traffic collected from the testbed as well as global darknet data extracted from a large external network telescope (CAIDA repository). The latter traffic has been shown to be an effective indicator of Internet-scale malicious IoT device activity [10].

Overall, this effort presents some key contributions. Foremost, it details one of the first known solutions which applies the advantages of GANs to the IoT security domain. This study also utilizes real-world darknet data (passive measurements) to further validate the efficacy of the model. Finally, the work builds a live operational testbed (using the most popular classes of IoT devices targeted and recruited by large-scale botnets) and evaluates the proposed solution using several IoT malware families.

This paper is organized as follows. First, Section II reviews a number of related works in the area of anomaly detection and ML techniques for both IoT and non-IoT networks. Next, Section III details the experimental testbed setup along with the proposed attack methodology. Section IV then discusses the GAN model for anomaly detection of IoT devices. Finally, Section V details the key research findings from the testbed performance evaluation study followed by concluding remarks and discussions on future research in Section VI.

II. RELATED WORK

Anomaly detection algorithms have been widely used across a range of application domains, e.g., such as fraud detection, medical imaging, network intrusion detection, etc.
The overall objective here is to detect any outliers in the data that deviate from expected or normal behaviors. Accordingly, the authors in [11] present a comprehensive survey of anomaly detection applications and highlight the generic nature of many of these algorithms.

Now many anomaly detection schemes have been used within the computer security domain. For example, [12] uses a sequential matching algorithm to perform similarity measurements and identify deviations from past user behaviors. However, this method is ineffective against scanning and exploitation techniques directed at IoT devices. Meanwhile, [13] details a statistical anomaly detection scheme that uses a multi-level hierarchical K-map model for network anomaly detection. However, this work does not consider IoT devices and the unique vulnerabilities of their ecosystem. Furthermore, the authors in [14] detail an intrusion detection system for web-based attacks that analyzes server log files and produces anomaly scores for web requests. However this scheme is only designed for web applications and cannot detect network- or link-layer attacks.

Various signature-based defense mechanisms have also been proposed, i.e., by comparing network or application behaviors against a stored database of known malicious signatures [11]. However, these schemes require detailed a-priori knowledge of malicious attacks, as well as constant updates to the threat signature database. As such, these requirements can be difficult to meet if attackers constantly adapt their behaviors to avoid detection.

Now within the context of this study, [4] presents an extensive survey on the latest developments in intrusion detection systems for IoT systems/devices. For example, the authors in [5] propose an IoT network intrusion detection system called SVELTE. Specifically, this system is intended for 6LoWPAN IoT networks running the Routing Protocol for Low-Power and Lossy Networks (RPL).
Sample evaluation is also done using the Contiki OS [15], using bit pattern matching of IoT payloads to generate normal profiles and flag anomalous deviations. However, this solution is only suitable for small sensor- or control-based devices whose packets do not exhibit much variation in terms of their transmitted data.

Meanwhile, machine learning (ML) and deep learning (DL) methods have also seen much traction in recent years owing to advances in computational hardware systems and optimized software packages. Specifically, open-source libraries such as TensorFlow and Keras provide extensive capabilities to rapidly design/test complex ML models. As a result, researchers have developed a range of intelligent ML-based security algorithms to address the large data volumes generated by IoT devices. For example, [16] presents a network traffic classification model which is trained and evaluated on darknet data using several common supervised ML algorithms, including random forest, gradient boosting, and Ada Boost. Meanwhile, the authors in [17] develop association rules to detect anomalies, although this method still requires human expertise to draw meaningful conclusions.

Others have also used DL methodologies to detect unsolicited IoT behaviors. For example, the scheme in [6] trains and tests a DL model using the well-known NSL-KDD dataset. However, this dataset is not specific to IoT devices and hence is not necessarily reflective of their specialized attack traffic, as noted in [18]. Meanwhile, [7] also uses simulated data to train a NN to detect malicious IoT devices. However, the focus here is on detecting DoS/DDoS attacks against IoT sensor nodes themselves, and the lack of realistic attack traffic is also problematic. Indeed, the latter issue is a major concern when trying to develop real-world operational intrusion and anomaly detection systems.
Additionally, the study in [8] uses a computational intelligence algorithm to generate behavioral profiles and flag deviations. However this effort only considers wireless systems. Meanwhile, the authors in [19] present a scheme to train autoencoders for every IoT device connected to a network. Hence when a device is infected, its previously-trained autoencoder can be used to identify the anomaly. Clearly, this approach has high overheads as separate autoencoders must be trained for every IoT device. Moreover, this solution does not account for external threats since the autoencoder can only identify anomalies for the device for which it was trained. Other efforts in [9] have also looked at using variational autoencoders for anomaly detection. Nevertheless, this work does not consider prominent malware families such as Mirai, Reaper, etc.

In light of the above, the proposed ML-based solution leverages GANs for anomaly detection in IoT settings. Namely, this particular type of NN is well-suited for scenarios with sparse training datasets. GANs can also excel at generating latent representations of high-dimensional complex datasets. Hence this algorithm is used here to model both normal traffic behaviors for clean IoT devices as well as malicious traffic behaviors for compromised/infected IoT devices (from darknet data). The GAN models are then used to detect anomalous activities arising from infected devices or network scanning activities. Overall, the key objective here is to stop attacks in the earlier scanning stage, thereby minimizing potential damage. Further details are now presented.

III. EXPERIMENTAL TESTBED

Before presenting the IoT anomaly detection solution, the experimental IoT testbed is detailed first (Figure 1). The overall objective here is to build a realistic evaluation environment comprised of physical IoT devices and then use it to study actual IoT-based malware attacks.
Accordingly, two major IoT malware types are considered, i.e., Mirai and Bashlite. These DDoS families are capable of generating massive attack traffic volumes and have caused serious Internet-scale outages [2]. Further details on the testbed systems and attack methodologies are now presented.

A. Testbed Systems

Overall, two main types of IoT systems are chosen to best represent the categories of devices commonly targeted by malicious actors, i.e., Wifi-based access point (AP) routers and webcam digital video recorder (DVR) systems. Foremost, Wifi-based routers are very common in many private and enterprise settings and represent lucrative attack targets. Additionally, DVR devices are also ubiquitous and have been regularly exploited by large botnets to launch massive Internet-scale DDoS attacks [10]. Many of these devices also offer a high level of security against physical threats, i.e., since they are usually located in secured indoor areas.

In light of the above, three commercial IoT devices are incorporated into the testbed. Foremost, the Netgear DGN 2200 Wifi router is chosen, as this specific model is known to be highly susceptible to remote code execution exploits. For example, one of these vulnerabilities involves a simple HTTP POST command (with malicious code) to gain administrative access to the device. Meanwhile, two types of webcams are also chosen, including the Yi 1080p Home Camera and a Samsung Smart IP camera. Again, such camera devices are very prevalent in many private and commercial settings and also use a basic Linux-based OS.

Since both the Mirai and Bashlite botnets use brute forcing of default manufacturer credentials, the above platforms are well-suited for recreating such scenarios. Indeed, default usernames/passwords are readily available on the Internet for many existing IoT systems. This provides a relatively easy attack vector for hackers since most users do not change these default manufacturer settings.
For example, the Mirai source code contains a list of 50-60 default usernames/passwords to try to gain access to vulnerable devices. Once compromised, architecture-specific malware binaries are downloaded onto the IoT platforms to launch future DDoS or scanning attacks.

Now many large IoT botnets are also managed by external command and control (C&C) servers, i.e., which initiate various commands and attack sequences on the infected devices. To effectively model these frameworks in the testbed, the VMware ESXi hypervisor tool is used to set up two Ubuntu virtual machines (VM). The C&C server code bases for the two different IoT malware families (Mirai and Bashlite) are then run on these VM instances, limiting the need for separate physical machines. Overall, the VMware ESXi virtualization tool provides a simplified user interface for deploying VMs and can support most OS types. This tool also provides a seamless separation between the OS and underlying hardware, along with centralized management and reliable backup/restore capabilities. Note that the testbed also creates different subnets for the IoT devices and C&C control servers in order to better emulate real-world networking setups.

Many malware designs also use separate distribution servers to disseminate their malicious binaries. Accordingly, another VM is set up to run the Apache webserver for such purposes. Namely, the C&C servers (running on the VMware ESXi hypervisor) instruct the compromised IoT devices to contact this web server to download malware binaries specific to their architecture (using the WGET or FTP commands).

Finally, the Metasploit tool (https://metasploit.help.rapid7.com/docs) is also used to launch actual scanning and reconnaissance attacks against the testbed IoT devices. Specifically, a Kali Linux VM is chosen for this purpose (scanner VM, Figure 1) since this Debian-based OS includes the Metasploit module with many different payload, auxiliary, and post exploitation capabilities. Note that this tool has also been used by penetration testers to identify network or system weaknesses, see [20]. Finally, it is important to incorporate network firewall capabilities into the testbed as these systems are widely deployed in real-world settings.
Hence the pfsense virtual router and firewall solution [21] is chosen to manage the testbed networking setup (instead of actual physical devices). This package offers increased flexibility and scalability, as well as centralized network management support. The pfsense packet management system also interfaces with other tools such as Snort (open-source intrusion detection system) and OpenVPN. As such, this software provides a global view of all network communications and can be used to implement access control and other security policies via its simple web interface. For example, the virtualized pfsense router can be used in conjunction with Snort to block any flagged traffic, thereby emulating firewall actions.

B. Attack Methodology

Malware developers are continually evolving new attack strategies to counteract defense mechanisms put in place after the emergence of Mirai and other IoT botnets. Namely, advanced techniques are being used to infect devices [22], unlike earlier brute forcing mechanisms. Accordingly, a list of potential cyberattacks against the testbed IoT devices is shown in Table I.

Now in order to recreate such scenarios, the Kali Linux VM is used to generate some attacks via its Nessus module, e.g., such as checking for default credentials, web application scanning, etc. Since many IoT devices also have web management interfaces, Nessus can also be used to run web application scanning and help attackers identify potential weaknesses in web interfaces. Hence this tool is used to launch about two thirds of the cyberattacks in Table I, including command injection and remote code execution. Furthermore, the Nmap tool is also used for network scanning purposes, as it provides important information on the device OS, architecture, list of open ports, etc. Specifically, this tool can emulate the reconnaissance phase of a malware attack and provide hackers with critical information to improve their attack targeting.
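Both the botnets studied here and auditing tools such as Nessus exploit the same weakness: unchanged factory logins. The pure-Python sketch below illustrates this recruitment step in the abstract; the credential list and simulated device are hypothetical placeholders, not Mirai's actual dictionary or any real login routine.

```python
# Hypothetical subset of factory-default credential pairs (illustrative only;
# Mirai's real dictionary holds roughly 50-60 such entries).
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("root", "12345"),
]

def find_default_login(check_login):
    """Try each default pair against a device's login routine.

    `check_login` stands in for an actual telnet/HTTP login attempt and
    returns True when the pair is accepted."""
    for user, password in DEFAULT_CREDS:
        if check_login(user, password):
            return (user, password)
    return None

# Simulated device whose owner never changed the factory credentials.
device = lambda user, password: (user, password) == ("root", "12345")

hit = find_default_login(device)
```

A device survives this pass only when `find_default_login` returns `None`, which is why simply changing default credentials defeats the bulk of such recruitment scans.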
Now as noted earlier, some versions of the Netgear DGN 2200 router are known to be vulnerable to multiple exploits (and a brief description of some of these vulnerabilities is given in Table II). However, detailed testing of the particular system purchased indicated that the latest firmware was no longer susceptible to these exploits. Hence in order to overcome this issue, the firmware was manually downgraded to a version that was vulnerable to the aforementioned attacks.

Overall, the IoT testbed design is quite flexible and can be used to model a range of IoT cyberattacks, as summarized in Table II. For example, consider a man-in-the-middle (MITM) attack on the (wireless) Yi 1080p Home Camera. Here an address resolution protocol (ARP) cache poisoning scenario can be considered where an adversary poses as a legitimate Wifi access point (AP) by advertising a valid gateway media access control (MAC) address. Once the camera connects to this "fake" AP, the attacker can then decipher all communications from this device. Hence in order to recreate this MITM attack scenario, a wireless adapter can be used to sniff all wireless network traffic and perform packet injection. The Wifite module (attack tool) in the Kali Linux VM can then be used to emulate a malicious entity posing as a legitimate AP.

IV. GENERATIVE ADVERSARIAL NETWORK

As noted earlier, the proposed IoT anomaly detection solution leverages the GAN algorithm to model underlying benign and/or anomalous data traffic patterns. Consider some details on this NN-based scheme first. The GAN concept was originally proposed in [23] and uses an adversarial NN-based framework to estimate a generative model. In particular, this setup consists of two NN entities, a generator, G, and a discriminator, D. Namely, the former captures the distribution of the data and generates "fake" (synthetic) samples.
Meanwhile, the latter tries to estimate whether a given sample comes from the actual data, x, or from the latent distribution, z. Using these two networks, the GAN approach basically implements a zero-sum game, where the objective of the generator is to produce samples to "fool" the discriminator into thinking they came from real data (rather than the model distribution). Overall, studies have shown that this approach is very well-suited for anomaly detection, see [24], [25]. For example, the authors in [24] use a GAN to detect anomalies in medical image data and accurately predict the early onset of certain diseases. Similarly, [25] uses a GAN to identify anomalies in non-image data, and tests with the KDD99 and the MNIST datasets also show very promising results.

Since GANs can excel at modeling complex distributions, the proposed anomaly detection framework herein leverages them to generate normal (benign) and malicious traffic profiles for IoT devices. Specifically, the working hypothesis here is that if a GAN can successfully capture the distribution of a given type of data stream, then it should also be able to flag any deviating/outlier behaviors. Hence akin to [24], the following optimization problem is considered:

V(D, E, G) = E_{x~p_X} E_{z~p_E(.|x)} [log D(x, z)] + E_{z~p_Z} E_{x~p_G(.|z)} [log(1 - D(x, z))]    (1)

where E_{x~p_X} denotes expectation over the distribution of the real dataset (pertaining to benign or malicious IoT samples), E_{z~p_E(.|x)} denotes expectation over the latent space captured by the encoder, D(x, z) is the discriminator function, p_E is the latent distribution, and p_X is the actual data distribution. Meanwhile, E is the encoder that maps the data to the latent space and learns simultaneously along with the generator and discriminator. Hence for anomaly detection using benign training data from the IoT testbed, all malicious samples deviating from the normal traffic distribution will be flagged as anomalous.
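To make the zero-sum objective in Eq. (1) concrete, the sketch below evaluates a simplified (non-conditional) version of the value function for fixed discriminator outputs. The probabilities are made-up illustrations: a confident discriminator drives V up, while a generator that fools D (outputs near 0.5 on both real and fake samples) drives V back down.

```python
import math

def value_fn(d_real, d_fake):
    """Simplified GAN value: mean(log D) on real pairs + mean(log(1 - D)) on fakes.

    d_real: discriminator outputs on real (data, latent) pairs,
    d_fake: discriminator outputs on generated pairs."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# A sharp discriminator: near 1 on real data, near 0 on fakes.
v_sharp = value_fn([0.95, 0.90, 0.98], [0.05, 0.10, 0.02])
# A fooled discriminator: outputs ~0.5 everywhere, as at the game's equilibrium.
v_fooled = value_fn([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
```

Training alternates gradient steps: the discriminator's steps push V up (the `v_sharp` case), while the generator's steps push it back toward the equilibrium value 2*log(0.5), roughly -1.39 (the `v_fooled` case).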
Conversely, for anomaly detection using malicious training data from the darknet, all normal samples deviating from the malicious traffic distribution will be flagged as anomalous. Note that the similarity between a test sample and the generated latent representation will depend upon how similar it is to the data being used to train the GAN.

Finally, akin to the work in [24], the GAN algorithm also uses a convex multi-modal loss function comprised of weighted generator and discriminator losses. Namely, the generator loss, L_G(x), measures the similarity between a new sample and the data generated from the latent space, Z. Meanwhile, the discriminator loss, L_D(x), measures the fidelity of the generated synthetic samples and is also used to drive the generator to produce samples belonging to the real data distribution. Hence the weighted loss function is given by:

L(x) = α L_G(x) + (1 − α) L_D(x)    (2)

where 0 ≤ α ≤ 1. Note that the discriminator loss, L_D(x), is based on a feature matching methodology that measures the similarity between features of a given sample and that of the generated synthetic data.

V. EMPIRICAL EVALUATION

The GAN-based model is now evaluated for anomaly detection in the IoT testbed. First, consider dataset generation and feature selection for ML training. Here the pfsense toolkit is used to capture raw packet data in the testbed during "normal" operation. This benign traffic is then used to train a GAN. In particular, feature selection is done for bi-directional flows over four windows spanning the most recent 50, 100, 500, and 2,000 packets. The mean and standard deviation values for several parameters are then computed for each of these windows, i.e., including the number of packets, packet lengths, packet inter-arrival times, etc. Hence 13 features are extracted for each of the 4 windows, yielding a total of 52 features indexed by the source IP addresses of the IoT devices (see Table III).
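The windowed statistics described above can be sketched as follows. This toy version computes only the mean and standard deviation of packet length and inter-arrival time per window (a subset of the paper's 13 per-window features), so it yields 4 windows x 4 statistics = 16 features rather than the full 52; the (timestamp, length) packet representation is likewise a simplification.

```python
from statistics import mean, pstdev

WINDOWS = (50, 100, 500, 2000)  # most recent N packets per window

def flow_features(packets):
    """packets: list of (timestamp_sec, length_bytes) tuples for one
    bi-directional flow, oldest first. Returns a flat feature vector."""
    feats = []
    for n in WINDOWS:
        recent = packets[-n:]                        # last n packets (or fewer)
        lengths = [p[1] for p in recent]
        gaps = [b[0] - a[0] for a, b in zip(recent, recent[1:])] or [0.0]
        feats += [mean(lengths), pstdev(lengths), mean(gaps), pstdev(gaps)]
    return feats

# Toy flow: 10 packets, 0.1 s apart, alternating 1500- and 60-byte packets.
flow = [(0.1 * i, 60 if i % 2 else 1500) for i in range(10)]
vec = flow_features(flow)
```

In the full pipeline such vectors would be computed per flow and indexed by the source IP of the IoT device before being fed to the GAN.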
Once the GAN model has been trained using the benign data features, further data collection is also done for attack traffic. Namely, several types of cyberattacks (detailed in Section III-B) are launched against the IoT devices, and the above data collection/processing steps are repeated to extract the relevant features for attack traffic.

As noted earlier, this study also leverages empirical darknet data to further train and validate the GAN approach (i.e., in addition to the GAN trained using data from the IoT testbed). Specifically, the CAIDA repository (www.caida.org) is used to extract passive darknet measurements from the network telescope at the University of California San Diego (UCSD). Studies have shown that this data can provide key insights and evidence of Internet-scale unsolicited IoT device behaviors [10]. Hence the working assumption here is that a GAN trained using this data will be able to identify anomalous IoT traffic as belonging to the same distribution as malicious IoT data from the darknet. By extension, any normal traffic (generated by clean IoT devices) will appear as "anomalous" to this GAN. Accordingly, the benign and malicious IoT traffic datasets are used to test the efficiency of the darknet GAN model as well. In particular, a total of 50,000 benign instances and 3,000 malicious instances of data are generated for each of the three IoT testbed devices.

All ML evaluation is done using the TensorFlow model library. Namely, the GAN model is first trained using a mix of testbed and CAIDA data (as detailed above). Here, the generator network is formed with 3 dense layers of size 64, 128, and 56 nodes and a rectified linear unit (RELU) non-linearity. Similarly, the encoder network has 3 dense layers comprising 128, 64, and 32 nodes, respectively. Finally, the discriminator network consists of a single dense layer with 128 nodes.
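For illustration, the encoder stack described above (dense layers of 128, 64, and 32 nodes with a ReLU non-linearity) amounts to three matrix-vector products. The pure-Python sketch below uses random placeholder weights just to show the shapes involved; in the actual model the weights are learned jointly with G and D under TensorFlow.

```python
import random

random.seed(0)  # deterministic placeholder weights

def dense_layer(n_in, n_out):
    """Random weight matrix (n_out rows of n_in) and zero bias for a dense layer."""
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    """Apply each dense layer followed by a ReLU non-linearity."""
    for weights, bias in layers:
        x = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, bias)]
    return x

# Encoder: 52 input features -> 128 -> 64 -> 32 latent dimensions.
encoder = [dense_layer(52, 128), dense_layer(128, 64), dense_layer(64, 32)]
latent = forward([0.5] * 52, encoder)
```

The 52-dimensional flow feature vector is thus compressed to a 32-dimensional latent code, which is what the discriminator inspects alongside the raw sample.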
Akin to [25], two different discriminator models are also used here, i.e., one which attempts to correctly identify real samples and another which attempts to correctly identify latent samples. Furthermore, the batch size is set to 50 samples, and training is done over 100 epochs with a learning rate of 0.1 (10%). Furthermore, a total of 3,000 instances are used for the test dataset (both benign and malicious samples).

Results for the overall precision and recall performance of the GAN models are presented in Figure 2. These findings show a precision rate of 100% and a recall rate of 93% when the GAN is trained using benign IoT samples from the testbed. In other words, the former value implies that no benign IoT samples are misidentified as malicious. However, approximately 7% of the malicious samples are still incorrectly classified as benign. Meanwhile, when the GAN model is trained using the CAIDA darknet data, it yields slightly lower performance with a precision rate of 97.13% and a recall of 92%. Nevertheless, this performance is still very impressive considering the fact that the darknet data is largely unrelated to that of the IoT devices operating in the testbed.

For comparison purposes, two other classical (non-NN) supervised ML algorithms are also tested, i.e., random forest and gradient boosting. Namely, results for the random forest classifier (Figure 2) indicate a precision rate of 97.29% and a recall rate of 97.3%. Meanwhile, the gradient boosting classifier gives slightly better performance, with a precision rate of 100% and a recall rate of 98.90%. Although these algorithms closely match (even slightly exceed) the performances of the GAN-based schemes, a major drawback here is that they require both benign and malicious data samples for training purposes. By contrast, the GAN models are much more efficient and can be trained using only benign (testbed) or malicious (darknet) samples.
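The precision and recall rates above follow from simple confusion-matrix arithmetic. As a back-of-the-envelope check, the counts below are inferred from the stated 100% precision and 93% recall (the paper does not report per-class counts directly, so these are illustrative):

```python
def precision_recall(tp, fp, fn):
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: of 3,000 malicious test samples, 7% are missed
# (false negatives) and no benign samples are flagged (zero false positives).
malicious = 3000
fn = int(0.07 * malicious)   # 210 missed detections
tp = malicious - fn          # 2,790 correctly flagged
prec, rec = precision_recall(tp, fp=0, fn=fn)
```

Zero false positives is what pins precision at exactly 100%, while the 210 missed malicious samples account for the 93% recall.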
Therefore, in practical IoT realms, the GAN-based approach will be much more feasible than these supervised learning methods, i.e., as it only requires data packets from devices directly connected to the network. Finally, the inference times of the various ML schemes are also measured and shown in Figure 3. These findings confirm that the GAN-based algorithms yield the fastest (lowest) times, averaging about 6.05 ms. Conversely, the random forest and gradient boosting algorithms are notably slower, with average inference times of 12.46 and 14.25 ms, respectively. Again, these findings confirm more efficient run-time operation in practical real-world settings with the GAN-based solution.

VI. CONCLUSIONS AND FUTURE DIRECTIONS

Internet of Things (IoT) paradigms are seeing wide-scale traction across many market sectors. However, the prevalence of billions of IoT devices with limited security provisions presents a very large attack surface for malicious attackers to exploit. As a result, IoT security will remain a major concern for the foreseeable future. Along these lines, this effort presents a novel generative adversarial network (GAN) solution to identify threats to IoT devices from both inside and outside a network. Specifically, this anomaly detection scheme leverages the excellent data mapping capabilities of this algorithm to generate traffic profiles for IoT devices and identify outlier behaviors. In particular, the GAN-based models are trained using both benign IoT traffic data and darknet data from a global network telescope repository. A detailed testbed is also built to implement and validate the proposed scheme using real-world IoT devices and well-known IoT malware threats (Mirai and BashLite). Overall findings show very promising results with the GAN-based solutions, i.e., in terms of overall precision and recall rates and inference times.
Although several classical supervised learning algorithms give marginally better results in some cases, they are much more compute- and data-intensive, i.e., requiring both benign and malicious samples for practical training purposes. Overall, the contributions of this study can be extended along several key directions. Foremost, a wider range of physical IoT systems can be incorporated into the testbed. Although the two types of devices used in this study (Wifi routers and webcams) represent a significant portion of the systems targeted by IoT malware, more specialized and advanced devices can also be considered, e.g., industrial supervisory control and data acquisition (SCADA) systems. Additionally, improved feature selection can also be done to further improve GAN accuracy and better capture the intricate interactions of advanced exploits. Finally, threat mitigation schemes can also be developed and tested using pfsense and other open-source software networking tools.

Fig. 1: Overview of IoT testbed setup. Malware families (Mirai and BashLite) are run on these VM instances, limiting the need for separate physical machines. Overall, the VMware ESXi virtualization tool provides a simplified user interface for deploying VMs and can support most OS types.

Fig. 2: Precision and recall rates.

Fig. 3: Inference times for test data (ms).

TABLE I: Cyberattacks against IoT testbed devices

Attack                  | Netgear DGN2200 | Yi 1080p Camera | Samsung IP Camera
Nmap scanning           | Yes             | Yes             | Yes
Command injection       | Yes             | No              | No
Nessus scanning         | Yes             | No              | Yes
Remote code execution   | Yes             | No              | No
Mirai                   | Yes             | Yes             | Yes
Bashlite                | Yes             | Yes             | Yes
MITM                    | No              | Yes             | No

TABLE II: Exploits against Netgear DGN2200 Wifi router

Vulnerability               | Description
Command injection           | Using special HTTP POST request & login credentials
Remote command execution    | Access to default user account
Cross-site request forgery  | Unauthenticated remote code execution

TABLE III: Feature extraction from raw packet data

Feature                     | Description
Number of packets           | Mean & std (total number of packets in both directions)
Packet length               | Mean & std (length of the packets in both directions)
Number of unique ports      | Ports used for communication in both directions
Packet inter-arrival time   | Mean & std (time between arrival of packets in both directions)
TCP                         | 1 if using TCP, 0 if using UDP
PSH flag                    | Number of packets with PSH flag set
URG flag                    | Number of packets with URG flag set
Idle time                   | Duration for which the connection was idle
Active                      | Duration for which the connection was active

https://www.exploit-db.com/exploits/31617/

REFERENCES

[1] K. Friday, E. Kfoury, E. Bou-Harb, and J. Crichigno, "Towards a unified in-network DDoS detection and mitigation strategy," in IEEE International Conference on Network Softwarization (NetSoft 2020), Ghent, Belgium, June 2020.
[2] K. Constantinos, K. Georgios, and S. Angelos, "DDoS in the IoT: Mirai and other botnets," Computer, vol. 50, pp. 80-84, 2017.
[3] Y. Jingyu, G. Chen, L. Zaho, L. Chendong, G. Jiahua, L. Guize, and M. Jinsong, "Ubootkit: A worm attack for the bootloader of IoT devices," in BlackHat Asia 2018, Singapore, March 2018.
[4] B. Zarpelao, R. Miani, C. Kawakani, and S. Alvarenga, "A survey of intrusion detection in the Internet of Things," Journal of Network and Computer Applications, vol. 84, pp. 25-37, April 2017.
[5] S. Raza, L. Wallgren, and T. Voigt, "SVELTE: Real-time intrusion detection in the Internet of Things," Ad Hoc Networks, vol. 11, pp. 2661-2674, November 2013.
[6] A. Diro and N. Chilamkurti, "Distributed attack detection scheme using deep learning approach for Internet of Things," Future Generation Computer Systems, vol. 82, pp. 761-768, May 2018.
[7] E. Hodo, X. Bellekens, A. Hamilton, P. Dubouilh, E. Iorkyase, C. Tachtatzis, and R. Atkinson, "Threat analysis of IoT networks using artificial neural network intrusion detection system," in International Symposium on Networks, Computers and Communications (ISNCC 2016), Hammamet, Tunisia, May 2016.
[8] A. Gupta, O. Pandey, M. Shukla, A. Dadhich, S. Mathur, and A. Ingle, "Computational intelligence based intrusion detection systems for wireless communication and pervasive computing networks," in 2013 International Conference on Computational Intelligence and Computing Research, Madurai, India, December 2013.
[9] G. Tucker B., "Novel detection and analysis of deep variational autoencoders," Ph.D. dissertation, Rochester Institute of Technology, 2018.
[10] F. Shaikh, E. Bou-Harb, N. Neshenko, A. Wright, and N. Ghani, "Internet of malicious things: Correlating active and passive measurements for inferring and characterizing Internet-scale unsolicited IoT devices," IEEE Communications Magazine, vol. 56, pp. 170-177, September 2018.
[11] V. Chandola, A. Banerjee, and V. Kumar, "Anomaly detection: A survey," ACM Computing Surveys, vol. 41, pp. 1-58, July 2009.
[12] T. Lane and C. Brodley, "Sequence matching and learning in anomaly detection for computer security," AAAI Technical Report WS-97-07, 1997.
[13] S. Sarasamma, Q. Zhu, and J. Huff, "Hierarchical Kohonen net for anomaly detection in network security," IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol. 35, pp. 302-312, April 2005.
[14] C. Kruegel and G. Vigna, "Anomaly detection of web-based attacks," in 10th ACM Conference on Computer and Communications Security, Washington D.C., USA, October 2003.
[15] D. Summerville, K. Zach, and Y. Chen, "Ultra-lightweight deep packet anomaly detection for Internet of Things devices," in IEEE International Performance Computing and Communications Conference (IPCCC 2015), Nanjing, China, December 2015.
[16] F. Shaikh, E. Bou-Harb, J. Crichigno, and N. Ghani, "A machine learning model for classifying unsolicited IoT devices by observing network telescopes," in 14th International Wireless Communications & Mobile Computing Conference (IWCMC 2018), Limassol, Cyprus, June 2018.
[17] D. Apiletti, E. Baralis, T. Cerquitelli, and V. D'Elia, "Characterizing network traffic by means of the NetMine framework," Computer Networks, vol. 53, pp. 774-789, April 2009.
[18] L. Dhanabal and S. Shantharajah, "A study on NSL-KDD dataset for intrusion detection system based on classification algorithms," International Journal of Advanced Research in Computer and Communication Engineering, vol. 6, pp. 446-452, June 2015.
[19] Y. Meidan, M. Bohadana, Y. Mathov, Y. Mirsky, A. Shabtai, D. Breitenbacher, and Y. Elovici, "N-BaIoT: Network-based detection of IoT botnet attacks using deep autoencoders," IEEE Pervasive Computing, pp. 12-22, July-September 2018.
[20] G. Singh and J. Singh, "Evaluation of penetration testing tools of Kali Linux," International Journal of Innovations and Advancement in Computer Science, vol. 5, pp. 28-32, September 2016.
[21] D. Kumar and M. Gupta, "Implementation of firewall & intrusion detection system using pfSense to enhance network security," International Journal of Electrical Electronics & Computer Science Engineering, pp. 131-137, 2018.
[22] N. Dragoni, A. Giaretta, and M. Mazzara, "The internet of hackable things," arXiv:1707.08380, 2018.
[23] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems (NIPS 2014), Montreal, Canada, December 2014.
[24] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery," in Information Processing in Medical Imaging (IPMI 2017), North Carolina, USA, June 2017.
[25] H. Zenati, C. Foo, B. Lecouat, G. Manek, and V. Chandrasekhar, "Efficient GAN-based anomaly detection," arXiv:1802.06222, 2018.
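The per-flow statistics listed in Table III can be sketched as follows. This is an illustration only: the packet-record layout, field names, and example values are assumptions, not the paper's actual tooling, and only one traffic direction is shown.

```python
import statistics

def flow_features(packets):
    """Compute Table III-style statistics for one flow.

    `packets` is a list of dicts with assumed keys:
    ts (arrival time, s), length (bytes), dport (destination port),
    proto ('TCP'/'UDP'), flags (set of TCP flag names).
    """
    ts = sorted(p["ts"] for p in packets)
    gaps = [b - a for a, b in zip(ts, ts[1:])] or [0.0]
    lengths = [p["length"] for p in packets]
    return {
        "n_packets": len(packets),
        "len_mean": statistics.mean(lengths),
        "len_std": statistics.pstdev(lengths),
        "n_unique_ports": len({p["dport"] for p in packets}),
        "iat_mean": statistics.mean(gaps),       # packet inter-arrival time
        "tcp": int(all(p["proto"] == "TCP" for p in packets)),
        "psh": sum("PSH" in p["flags"] for p in packets),
        "urg": sum("URG" in p["flags"] for p in packets),
    }

pkts = [
    {"ts": 0.00, "length": 60,   "dport": 80,  "proto": "TCP", "flags": {"SYN"}},
    {"ts": 0.05, "length": 1500, "dport": 80,  "proto": "TCP", "flags": {"PSH"}},
    {"ts": 0.09, "length": 60,   "dport": 443, "proto": "TCP", "flags": {"PSH"}},
]
f = flow_features(pkts)
print(f["n_packets"], f["n_unique_ports"], f["psh"], f["tcp"])   # 3 2 2 1
```

A vector of such statistics per flow is the kind of fixed-length input that the dense GAN layers described above can consume directly.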
Title: Gibbs-Bogoliubov inequality on the Nishimori line
Authors: Manaka Okuyama (Graduate School of Information Sciences, Tohoku University, Sendai 980-8579, Japan); Masayuki Ohzeki (Graduate School of Information Sciences, Tohoku University, Sendai 980-8579, Japan; International Research Frontier Initiative, Tokyo Institute of Technology, Tokyo 105-0023, Japan; Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan; Sigma-i Co., Ltd., Tokyo 108-0075, Japan)
Abstract: The Gibbs-Bogoliubov inequality states that the free energy of a system is always lower than that calculated by a trial function. In this study, we show that a counterpart of the Gibbs-Bogoliubov inequality holds on the Nishimori line for Ising spin-glass models with Gaussian randomness. Our inequality states that the quenched free energy of a system is always lower than that calculated using a quenched trial function. The key component of the proof is the convexity of the pressure function E log Z with respect to the parameters along the Nishimori line, which differs from the conventional convexity with respect to the inverse temperature. When our inequality was applied to mean-field models, such as the Sherrington-Kirkpatrick model and p-spin model, the bound coincided with the replica-symmetric solution, indicating that the equality holds.
arXiv: 2208.12311 · PDF: https://export.arxiv.org/pdf/2208.12311v2.pdf · CorpusID: 251881379
Gibbs-Bogoliubov inequality on the Nishimori line
Manaka Okuyama and Masayuki Ohzeki
(Dated: June 8, 2023)

I. INTRODUCTION

The variational method is a powerful approximation technique and is used in various research areas of physics. The Gibbs-Bogoliubov (GB) inequality [1-5], or the Gibbs-Bogoliubov-Feynman inequality [6], is one of the most famous variational inequalities in statistical physics; it shows that the quantity calculated by the variational method is always greater than or equal to the free energy of a system. The essence of the GB inequality lies in the convexity of the pressure function log Z with respect to the inverse temperature, which immediately leads to the GB inequality [3].
Although the GB inequality works well for ferromagnetic models, it does not perform satisfactorily for spin-glass models owing to randomness. Thus, it leads to a natural question: Is there a variational inequality valid for spin-glass models? In this study, we focused only on the Nishimori line and partially answered this question. On the Nishimori line, various exact results are obtained: exact solutions of the internal energy [7], absence of replica symmetry breaking [8], and a counterpart of the Griffiths second inequalities [9,10]. We prove that the counterpart of the GB inequality holds for the general Ising spin-glass model with Gaussian randomness on the Nishimori line. The key element of the proof is the convexity peculiar to the Nishimori line. When we control the parameters along the Nishimori line, that is, when we change the temperature and randomness simultaneously, it is possible to show that the pressure function E log Z is a convex function, which is different from the conventional convexity with respect to the inverse temperature. Consequently, we obtain the GB inequality on the Nishimori line from convexity.

As the "naive counterpart" of the GB inequality in spin glass, one might think of taking the quenched average after applying the conventional GB inequality. However, we emphasize that this "naive counterpart" of the GB inequality is weak compared with the GB inequality on the Nishimori line obtained in the present study.

The organization of this paper is as follows. In Sec. II, we define the model and prove the GB inequality on the Nishimori line. In Sec. III, we apply the attained inequality to several models. Finally, our discussion is presented in Sec. IV.

II. DEFINITIONS AND RESULTS

We consider a generic form of the Ising spin-glass model on the Nishimori line,
$$S_A = \prod_{i \in A} S_i, \tag{1}$$
$$Z = \mathrm{Tr}\, e^{\sum_{A \subset \Omega} \beta_A J_A S_A}, \tag{2}$$
where $S_i = \pm 1$, $\Omega$ is the set of sites, the sum over $A$ runs over all the subsets of $\Omega$ in which interactions exist, the lattice structure adopts any form, and $\beta_A$ is the local inverse temperature of subset $A$. The randomness of the interactions $J_A$ follows a Gaussian distribution with mean $J_{A0}$ and variance $\sigma_A^2$. The thermal average $\langle \cdots \rangle$ and quenched average $E[\cdots]$ are defined by
$$\langle f \rangle = \frac{\mathrm{Tr}\, f\, e^{\sum_{A \subset \Omega} \beta_A J_A S_A}}{Z}, \tag{3}$$
$$E[f] = \left[ \prod_{A \subset \Omega} \int \frac{dJ_A}{(2\pi\sigma_A^2)^{1/2}}\, e^{-\frac{(J_A - J_{A0})^2}{2\sigma_A^2}} \right] f. \tag{4}$$
We shall be interested in the pressure function
$$E \log Z = \left[ \prod_{A \subset \Omega} \int \frac{dJ_A}{(2\pi\sigma_A^2)^{1/2}}\, e^{-\frac{(J_A - J_{A0})^2}{2\sigma_A^2}} \right] \log \mathrm{Tr}\, e^{\sum_{A \subset \Omega} \beta_A J_A S_A}. \tag{5}$$
The condition of the Nishimori line is given by
$$\beta_A = \frac{J_{A0}}{\sigma_A^2} \tag{6}$$
for all $A$. It is convenient to introduce the parameters $x_A \ge 0$ [9,11] as
$$\beta_A = \frac{\sqrt{x_A}}{\sigma_A}, \tag{7}$$
$$J_{A0} = \sigma_A \sqrt{x_A}; \tag{8}$$
then the pressure function depends only on $\{x_A\}$:
$$E \log Z = \left[ \prod_{A \subset \Omega} \int \frac{dJ_A}{(2\pi)^{1/2}}\, e^{-\frac{J_A^2}{2}} \right] \log \mathrm{Tr}\, e^{\sum_{A \subset \Omega} (\sqrt{x_A}\, J_A + x_A) S_A}. \tag{9}$$
Our first result is on the convexity of the pressure function, which is obtained from the calculations of previous studies [9,11]; however, to the best of our knowledge, this has not been pointed out until now.

Theorem 1 (convexity on the Nishimori line). The pressure function $E \log Z$ is convex in $\{x_A\}$.

Proof.
For any $x_B$ and $x_C$, the derivatives of the pressure function are as follows [9,11]:
$$\frac{\partial E \log Z}{\partial x_B} = \frac{1}{2} E\left[1 + \langle S_B \rangle\right], \tag{10}$$
$$\frac{\partial^2 E \log Z}{\partial x_B \partial x_C} = \frac{1}{2} E\left[\left(\langle S_B S_C \rangle - \langle S_B \rangle \langle S_C \rangle\right)^2\right]. \tag{11}$$
The following identity is useful:
$$\left(\langle S_B S_C \rangle - \langle S_B \rangle \langle S_C \rangle\right)^2 = \left\langle \left(S_B^1 - \langle S_B \rangle\right)\left(S_B^2 - \langle S_B \rangle\right)\left(S_C^1 - \langle S_C \rangle\right)\left(S_C^2 - \langle S_C \rangle\right) \right\rangle_{1,2} = \langle a_B a_C \rangle_{1,2}, \tag{12}$$
where $a_B = \left(S_B^1 - \langle S_B \rangle\right)\left(S_B^2 - \langle S_B \rangle\right)$, $\langle \cdots \rangle_{1,2}$ is the thermal average with respect to the two replicas, and $S_B^1$ and $S_B^2$ are the spins in the first and second replicas, respectively. Then, we rewrite Eq. (11) as
$$\frac{\partial^2 E \log Z}{\partial x_B \partial x_C} = \frac{1}{2} E\left[\langle a_B a_C \rangle_{1,2}\right]. \tag{13}$$
Equation (13) means that the Hessian matrix of $E \log Z$ with respect to $\{x_A\}$ is positive semidefinite, that is, $E \log Z$ is convex in $\{x_A\}$.

The Gibbs-Bogoliubov inequality is a consequence of the convexity of the pressure function with respect to the inverse temperature [3]. Using the convexity on the Nishimori line, we arrive at the Gibbs-Bogoliubov inequality on the Nishimori line.

Theorem 2 (Gibbs-Bogoliubov inequality on the Nishimori line). For any two Ising spin-glass models on the Nishimori line,
$$E \log Z_1 = E \log \mathrm{Tr}\, e^{\sum_{A \subset \Omega_1} \beta_A J_A S_A}, \tag{14}$$
$$E \log Z_0 = E \log \mathrm{Tr}\, e^{\sum_{B \subset \Omega_0} \beta_B J_B S_B}, \tag{15}$$
the following inequality holds:
$$E \log Z_1 \ge E \log Z_0 + E\left[\frac{1}{2} \sum_{A \subset \Omega_1} x_A \left(1 + \langle S_A \rangle_0\right) - \frac{1}{2} \sum_{B \subset \Omega_0} x_B \left(1 + \langle S_B \rangle_0\right)\right], \tag{16}$$
where $\Omega_0$ and $\Omega_1$ are any sets of sites and $\langle \cdots \rangle_0$ is the thermal average with respect to $Z_0$.

Proof. We define the interpolating pressure function on the Nishimori line as
$$E \log Z(t) = \left[ \prod_{A \subset \Omega_1} \int \frac{dJ_A}{(2\pi)^{1/2}}\, e^{-\frac{J_A^2}{2}} \right] \left[ \prod_{B \subset \Omega_0} \int \frac{dJ_B}{(2\pi)^{1/2}}\, e^{-\frac{J_B^2}{2}} \right] \log \mathrm{Tr}\, e^{\sum_{A \subset \Omega_1} \left(\sqrt{t x_A}\, J_A + t x_A\right) S_A + \sum_{B \subset \Omega_0} \left(\sqrt{(1-t) x_B}\, J_B + (1-t) x_B\right) S_B}, \tag{17}$$
where $E \log Z(1) = E \log Z_1$ and $E \log Z(0) = E \log Z_0$.
From Theorem 1, we immediately obtain
$$\frac{d^2 E \log Z(t)}{dt^2} \ge 0 \tag{18}$$
and
$$E \log Z(1) - E \log Z(0) - \left.\frac{dE \log Z(t)}{dt}\right|_{t=0} \ge 0. \tag{19}$$
Using integration by parts, we find
$$\left.\frac{dE \log Z(t)}{dt}\right|_{t=0} = \frac{1}{2} \sum_{A \subset \Omega_1} x_A - \frac{1}{2} \sum_{B \subset \Omega_0} x_B + E\left[\frac{1}{2} \sum_{A \subset \Omega_1} x_A \langle S_A \rangle_0 - \frac{1}{2} \sum_{B \subset \Omega_0} x_B \langle S_B \rangle_0\right], \tag{20}$$
where we use the identity $E\left[\langle S_C \rangle_0^2\right] = E\left[\langle S_C \rangle_0\right]$ for any spin product $S_C = \prod_{i \in C} S_i$ on the Nishimori line. Combining Eqs. (19) and (20), we prove Eq. (16).

Remark 3. Note that the left-hand side of Eq. (19) is called the Bregman divergence in information geometry [12]. In general, for any convex function $f(t)$, the Bregman divergence is defined as $f(1) - f(0) - f'(0) \ge 0$. If we consider conventional convexity with respect to the inverse temperature, the Bregman divergence is reduced to the Kullback-Leibler divergence, and we can reproduce the conventional GB inequality.

III. EXAMPLES

In this section, we apply the GB inequality on the Nishimori line to spin-glass models and derive a mean-field approximation on the Nishimori line. In particular, it is noteworthy that for the Sherrington-Kirkpatrick (SK) model the equality holds in the thermodynamic limit, rather than a strict inequality.

A. Mean-field approximation

First, we consider the Ising spin-glass models on the Nishimori line with coordination number $z$,
$$E \log Z_1 = E \log \mathrm{Tr}\, e^{\sum_{\langle i,j \rangle} \beta_{i,j} J_{i,j} S_i S_j}, \tag{21}$$
$$x_{i,j} = \beta^2. \tag{22}$$
Setting a quenched trial function as a random field,
$$E \log Z_0 = E \log \mathrm{Tr}\, e^{\sum_{i=1}^N \beta_i J_i S_i}, \tag{23}$$
$$x_i = \beta^2 z q, \tag{24}$$
we apply the GB inequality on the Nishimori line to Eq. (21), which yields
$$\frac{1}{N} E \log Z_1 \ge \frac{1}{N} E \log Z_0 + \frac{1}{2N} \sum_{\langle i,j \rangle} x_{i,j} - \frac{1}{2N} \sum_{i=1}^N x_i + \frac{1}{N} E\left[\frac{1}{2} \sum_{\langle i,j \rangle} x_{i,j} \langle S_i S_j \rangle_0 - \frac{1}{2} \sum_{i=1}^N x_i \langle S_i \rangle_0\right]$$
$$= \int Dy\, \log 2\cosh\left(\beta\sqrt{zq}\, y + \beta^2 z q\right) + \beta^2 z \left[\frac{1}{4} - \frac{1}{2} q + \frac{1}{4}\left(\int Dy\, \tanh\left(\beta\sqrt{zq}\, y + \beta^2 z q\right)\right)^2 - \frac{1}{2} q \int Dy\, \tanh\left(\beta\sqrt{zq}\, y + \beta^2 z q\right)\right], \tag{25}$$
where $\int Dy = \int_{-\infty}^{\infty} dy\, e^{-y^2/2}/\sqrt{2\pi}$ and the number of bonds $\langle i,j \rangle$ is $zN/2$.
By maximizing the right-hand side with respect to $q$, we obtain
$$\frac{1}{N} E \log Z_1 \ge \int Dy\, \log 2\cosh\left(\beta\sqrt{zq}\, y + \beta^2 z q\right) + \frac{z\beta^2}{4}(1 - q)^2 - \frac{z\beta^2 q^2}{2}, \tag{26}$$
$$q = \int Dy\, \tanh\left(\beta\sqrt{zq}\, y + \beta^2 z q\right). \tag{27}$$
This means that the GB inequality on the Nishimori line produces a mean-field approximation on the Nishimori line when a random field is chosen as the quenched trial function.

B. Mean-field models

Next, we consider the SK model on the Nishimori line,
$$E \log Z_{\mathrm{SK}} = E \log \mathrm{Tr}\, e^{\sum_{i,j=1}^N \beta_{i,j} J_{i,j} S_i S_j}, \tag{28}$$
$$x_{i,j} = \frac{\beta^2}{2N}. \tag{29}$$
Its exact solution in the thermodynamic limit reads
$$\lim_{N \to \infty} \frac{1}{N} E \log Z_{\mathrm{SK}} = \int Dy\, \log 2\cosh\left(\beta\sqrt{q}\, y + \beta^2 q\right) + \frac{\beta^2}{4}(1 - q)^2 - \frac{\beta^2 q^2}{2}, \tag{30}$$
with
$$q = \int Dy\, \tanh\left(\beta\sqrt{q}\, y + \beta^2 q\right). \tag{31}$$
With a quenched trial function given by a random field,
$$E \log Z_{\mathrm{RF}} = E \log \mathrm{Tr}\, e^{\sum_{i=1}^N \beta_i J_i S_i}, \tag{32}$$
$$x_i^{\mathrm{RF}} = \beta^2 q, \tag{33}$$
we apply the GB inequality on the Nishimori line to the SK model (28). Then Eq. (16) reduces to
$$\frac{1}{N} E \log Z_{\mathrm{SK}} \ge \int Dy\, \log 2\cosh\left(\beta\sqrt{q}\, y + \beta^2 q\right) + \frac{\beta^2}{4} - \frac{\beta^2 q}{2} + \frac{\beta^2}{4}\left(\int Dy\, \tanh\left(\beta\sqrt{q}\, y + \beta^2 q\right)\right)^2 - \frac{\beta^2 q}{2} \int Dy\, \tanh\left(\beta\sqrt{q}\, y + \beta^2 q\right). \tag{34}$$
By maximizing the right-hand side with respect to $q$, we arrive at
$$\frac{1}{N} E \log Z_{\mathrm{SK}} \ge \int Dy\, \log 2\cosh\left(\beta\sqrt{q}\, y + \beta^2 q\right) + \frac{\beta^2}{4}(1 - q)^2 - \frac{\beta^2 q^2}{2}, \tag{35}$$
with
$$q = \int Dy\, \tanh\left(\beta\sqrt{q}\, y + \beta^2 q\right), \tag{36}$$
which coincides with the exact solution of Eqs. (30) and (31). Therefore, the equality of the GB inequality on the Nishimori line holds for the SK model in the thermodynamic limit. Similar results also hold for the p-spin glass model with a random field on the Nishimori line,
$$E[\log Z_p] = E\left[\log \mathrm{Tr}\, \exp\left(\sum_{i_1 < \cdots < i_p} \beta_{i_1,\cdots,i_p} J_{i_1,\cdots,i_p} \sigma_{i_1} \cdots \sigma_{i_p} + \sum_i \beta_i h_i \sigma_i\right)\right], \tag{37}$$
$$x_{i_1,\cdots,i_p} = \frac{\beta^2 p!}{2N^{p-1}}, \tag{38}$$
$$x_i = \beta^2 h, \tag{39}$$
where $h \ge 0$ and $p$ is any positive integer.
By choosing a quenched trial function as a random field with the parameter $x_i^{\mathrm{RF}} = p\beta^2 q^{p-1}/2 + \beta^2 h$ and maximizing the GB inequality on the Nishimori line with respect to $q$, we arrive at
$$\frac{1}{N} E[\log Z_p] \ge \int Dz\, \log 2\cosh\left(z\sqrt{\frac{p}{2}\beta^2 q^{p-1} + \beta^2 h} + \frac{p}{2}\beta^2 q^{p-1} + \beta^2 h\right) + \frac{\beta^2}{4}\left(1 - p q^{p-1}\right) + \frac{\beta^2}{4}(1 - p) q^p, \tag{40}$$
with
$$q = \int Dz\, \tanh\left(z\sqrt{\frac{p}{2}\beta^2 q^{p-1} + \beta^2 h} + \frac{p}{2}\beta^2 q^{p-1} + \beta^2 h\right). \tag{41}$$
We note that the right-hand side of Eq. (40) coincides with the exact solution in the thermodynamic limit [13,14,16]. Finally, while the inequalities (34) and (40) for mean-field models have already been obtained in Refs. [14,16], our derivation based on the GB inequality on the Nishimori line is much simpler. Additionally, in the case of odd $p$, our inequality (40) is true for all $h \ge 0$, whereas previous studies [14,16] were limited to Lebesgue-almost-every $h \ge 0$. Thus, our derivation is slightly better than previous studies.

IV. DISCUSSION

We showed that the counterpart of the GB inequality holds on the Nishimori line for general spin-glass models with Gaussian randomness. Convexity on the Nishimori line plays an essential role in the derivation. When a random field is chosen as the quenched trial function, the GB inequality on the Nishimori line recovers the mean-field approximation on the Nishimori line. Moreover, the equality of the GB inequality on the Nishimori line holds for the Sherrington-Kirkpatrick model in the thermodynamic limit, which corresponds to the fact that the equality of the conventional GB inequality holds for the Curie-Weiss model in the thermodynamic limit when the trial function is taken to be a magnetic field. These results show that the GB inequality on the Nishimori line perfectly corresponds to the conventional GB inequality.
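The inequality (16) of Theorem 2 can be sanity-checked numerically in the smallest nontrivial case: a single pair interaction ($\Omega_1$) bounded from below by two independent random fields ($\Omega_0$). The sketch below uses Gauss-Hermite quadrature for the Gaussian averages; the particular values $x = 1$, $x_1 = x_2 = 0.5$ and the quadrature order are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

# Gauss-Hermite nodes/weights: ∫ e^{-t^2} f(t) dt ≈ Σ w_k f(t_k).
t, w = np.polynomial.hermite.hermgauss(60)
J = np.sqrt(2.0) * t          # nodes for a standard normal variable
W = w / np.sqrt(np.pi)        # weights normalized to sum to 1

def E1(g):                    # E[g(J)] for one standard Gaussian
    return float(np.sum(W * g(J)))

def E2(g):                    # E[g(J1, J2)] for two independent Gaussians
    J1, J2 = np.meshgrid(J, J)
    return float(np.sum(np.outer(W, W) * g(J1, J2)))

x = 1.0                       # pair-coupling parameter on the Nishimori line
x1 = x2 = 0.5                 # trial random-field parameters

# Left-hand side: Tr_{S1,S2} e^{theta S1 S2} = 4 cosh(theta).
lhs = E1(lambda J: np.log(4.0 * np.cosh(np.sqrt(x) * J + x)))

# Trial system: Z0 = 4 cosh(theta1) cosh(theta2), with local fields theta_i.
th = lambda J, xi: np.sqrt(xi) * J + xi
ElogZ0 = E2(lambda J1, J2:
            np.log(4.0 * np.cosh(th(J1, x1)) * np.cosh(th(J2, x2))))

# Correction term of Eq. (16): <S1 S2>_0 = tanh(theta1) tanh(theta2).
corr = E2(lambda J1, J2:
          0.5 * x * (1 + np.tanh(th(J1, x1)) * np.tanh(th(J2, x2)))
          - 0.5 * x1 * (1 + np.tanh(th(J1, x1)))
          - 0.5 * x2 * (1 + np.tanh(th(J2, x2))))
rhs = ElogZ0 + corr
print(lhs >= rhs)
```

For these parameter values the bound holds with a visible margin, as Theorem 2 guarantees; varying $x$, $x_1$, and $x_2$ over other nonnegative values leaves the inequality intact.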
When the GB inequality on the Nishimori line is applied to the p-spin model with odd $p$, the obtained inequality (40) is valid for any $h \ge 0$, whereas the corresponding inequalities in previous studies [14,16] are limited to Lebesgue-almost-every $h \ge 0$. Therefore, our inequality is slightly stronger than those of previous studies. It is expected that the GB inequality on the Nishimori line can also be applied to more complicated mean-field models [11,15-17].

While we only considered Gaussian randomness, the Nishimori line exists for other types of randomness, such as the $\pm J$ Ising model. It is interesting to determine whether the GB inequality on the Nishimori line holds for other randomness.

It is important to see how the pressure function behaves in the case of $J_{A0} = 0$, which is most interesting to us. Recent studies [18,19] have proven that when the inverse temperature scale is changed to $\sqrt{\beta}$, $E \log \mathrm{Tr}\, e^{\sqrt{\beta} \sum_{A \subset \Omega} J_A S_A}$ is concave with respect to $\beta$ in several mean-field spin-glass models, using exact solutions of the thermodynamic limit. There is no guarantee that this property holds for general spin-glass models, and it would be interesting to investigate its behavior in finite-dimensional models.

This work was financially supported by JSPS KAKENHI Grant Nos. 19H01095, 20H02168, and 21K13848.

[1] I. A. Kvasnikov, Dokl. Akad. Nauk SSSR 110, 755 (1956).
[2] N. N. Bogoliubov, Dokl. Akad. Nauk USSR 119, 244 (1958).
[3] R. B. Griffiths, J. Math. Phys. 5, 1215 (1964).
[4] A. Isihara, J. Phys. A: General Phys. 1, 539 (1968).
[5] A. L. Kuzemsky, Int. J. Mod. Phys. B 29, 1530010 (2015).
[6] R. P. Feynman, Phys. Rev. 97, 660 (1955).
[7] H. Nishimori, Prog. Theor. Phys. 66, 1169 (1981).
[8] H. Nishimori and D. Sherrington, in AIP Conference Proceedings, vol. 553, p. 67 (2001).
[9] S. Morita, H. Nishimori, and P. Contucci, J. Phys. A 37, L203 (2004).
[10] H. Kitatani, J. Phys. Soc. Jpn. 78, 044714 (2009).
[11] D. Alberici, F. Camilli, P. Contucci, and E. Mingione, Comm. Math. Phys. 387, 1191 (2021).
[12] S.-i. Amari, Information Geometry and Its Applications (Springer, 2016).
[13] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction (Oxford University Press, Oxford, 2001).
[14] S. B. Korada and N. Macris, J. Stat. Phys. 136, 205 (2009).
[15] J. Barbier and N. Macris, J. Phys. A 52, 294002 (2019).
[16] J. Barbier and N. Macris, Probab. Theory Relat. Fields 174, 1133 (2019).
[17] D. Alberici, F. Camilli, P. Contucci, and E. Mingione, J. Stat. Phys. 182, 2 (2021).
[18] A. Auffinger and W.-K. Chen, Comm. Math. Phys. 348, 751 (2016).
[19] A. Auffinger and W.-K. Chen, Electron. J. Probab. 22, 1 (2017).
Title: On the rank of Hankel matrices over finite fields
Authors: Omesh Dhar Dwivedi and Darij Grinberg
Abstract: Given three nonnegative integers p, q, r and a finite field F, how many Hankel matrices $\left(x_{i+j}\right)_{0 \le i \le p,\, 0 \le j \le q}$ over F have rank $\le r$? This question is classical, and the answer ($q^{2r}$ when $r \le \min\{p, q\}$) has been obtained independently by various authors using different tools ([Daykin60, Theorem 1 for m = n], [Elkies02, (26)], [GaGhRa11, Theorem 5.
DOI: 10.1016/j.laa.2022.02.014 · arXiv: 2109.05415 · PDF: https://export.arxiv.org/pdf/2109.05415v1.pdf · CorpusID: 237491018
Results

We let N denote the set {0, 1, 2, . . .}. Fix a field F. For any n ∈ N, any (n+1)-tuple x = (x_0, x_1, . . . , x_n) ∈ F^{n+1}, and any two integers p, q ∈ {−1, 0, 1, . . .} satisfying p + q ≤ n, we define a (p+1) × (q+1)-matrix H_{p,q}(x) by

  H_{p,q}(x) := (x_{i+j})_{0 ≤ i ≤ p, 0 ≤ j ≤ q} =
    [ x_0   x_1      · · ·   x_q     ]
    [ x_1   x_2      · · ·   x_{q+1} ]
    [ . . .                          ]
    [ x_p   x_{p+1}  · · ·   x_{p+q} ]   ∈ F^{(p+1)×(q+1)}.

Such a matrix H_{p,q}(x) is called a Hankel matrix. The study of Hankel matrices has a long history in linear algebra (see, e.g., [Iohvid82]) and relates to linearly recurrent sequences ([Elkies02], [...]). Numerous results have been obtained about their ranks in particular ([Iohvid82, §11]). When the field F is finite, a strikingly simple formula can be given for the number of Hankel matrices of a given rank (more precisely, of rank ≤ a given number):

Theorem 1.1. Assume that F is finite. Let q = |F|. Let r, m, n ∈ N satisfy r ≤ m and r ≤ n. The number of (m+n+1)-tuples x ∈ F^{m+n+1} satisfying rank (H_{m,n}(x)) ≤ r is q^{2r}.

Example 1.2. For a simple example, let r = 1 and m = 2 and n = 3. Thus, for every x = (x_0, x_1, x_2, x_3, x_4, x_5) ∈ F^6, we have

  H_{m,n}(x) = H_{2,3}(x) =
    [ x_0  x_1  x_2  x_3 ]
    [ x_1  x_2  x_3  x_4 ]
    [ x_2  x_3  x_4  x_5 ].

Theorem 1.1 yields that the number of 6-tuples x ∈ F^6 satisfying rank (H_{2,3}(x)) ≤ 1 is q^{2·1} = q^2.
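This count can be confirmed by brute force over small prime fields. The following Python sketch (not part of the paper; the helper names rank_mod_p and hankel are our own) enumerates all 6-tuples over GF(2) and GF(3) and counts those whose Hankel matrix H_{2,3}(x) has rank ≤ 1:

```python
# Brute-force sanity check of Theorem 1.1 in the setting of Example 1.2
# (r = 1, m = 2, n = 3), over the prime fields GF(2) and GF(3).
from itertools import product

def rank_mod_p(mat, p):
    """Rank of a matrix (list of rows of ints) over the prime field GF(p)."""
    mat = [list(row) for row in mat]
    rank = 0
    for col in range(len(mat[0])):
        # find a pivot row with a nonzero entry in this column
        piv = next((i for i in range(rank, len(mat)) if mat[i][col] % p), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][col], p - 2, p)        # inverse mod the prime p
        mat[rank] = [inv * e % p for e in mat[rank]]
        for i in range(len(mat)):
            if i != rank and mat[i][col] % p:
                f = mat[i][col]
                mat[i] = [(e - f * g) % p for e, g in zip(mat[i], mat[rank])]
        rank += 1
    return rank

def hankel(x, m, n):
    """The (m+1) x (n+1) Hankel matrix H_{m,n}(x) with entries x[i+j]."""
    return [[x[i + j] for j in range(n + 1)] for i in range(m + 1)]

m, n, r = 2, 3, 1
for p in (2, 3):
    count = sum(1 for x in product(range(p), repeat=m + n + 1)
                if rank_mod_p(hankel(x, m, n), p) <= r)
    print(p, count)  # Theorem 1.1 predicts p**(2*r) tuples
```

Over GF(2) this yields 4 = 2^2 tuples and over GF(3) it yields 9 = 3^2 tuples, matching q^{2r}.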
These 6-tuples can indeed be described explicitly:

• Any 6-tuple of the form (u, uv, uv^2, uv^3, uv^4, uv^5) with u ∈ F \ {0} and v ∈ F is such a 6-tuple x. This gives a total of |F \ {0}| · |F| = (q − 1) q many such 6-tuples.

• Any 6-tuple of the form (0, 0, 0, 0, 0, w) with w ∈ F is such a 6-tuple x. This gives a total of |F| = q many such 6-tuples.

For higher values of r, it is harder to describe all the q^{2r} pertinent tuples.

To our knowledge, Theorem 1.1 has not appeared in this exact form in the literature; however, it is easily seen to be equivalent to the following variant, which has appeared in [Daykin60, Theorem 1]:

Corollary 1.3. Assume that F is finite. Let q = |F|. Let r, m, n ∈ N satisfy m ≤ n. The number of (m+n+1)-tuples x ∈ F^{m+n+1} satisfying rank (H_{m,n}(x)) = r is

  1,                           if r = 0;
  q^{2r−2} (q^2 − 1),          if 0 < r ≤ m;
  q^{2r−2} (q^{n−m+1} − 1),    if r = m + 1;
  0,                           if r > m + 1.

The particular case of Corollary 1.3 for m = n also appears in [GaGhRa11, Theorem 5.1] and [Elkies02, (26)]. The particular case when r = m = n appears in [KalLob96, Corollary 3] as well.

Another setting in which Hankel matrices appear is the theory of symmetric functions, specifically Schur functions (see, e.g., [Stanle01, Chapter 7]). While we will not use this setting to prove our main results, it has provided the main inspiration for this note, so we shall briefly recall it now. The Jacobi–Trudi formula [Stanle01, Theorem 7.16.1] expresses a Schur function s_λ as the determinant of a matrix, which is a Hankel matrix when the partition λ is rectangle-shaped. The recent result [ACGKLP18, Corollary 6.4] by Anzis, Chen, Gao, Kim, Li and Patrias can thus be framed as a formula for the probability of a certain (n+1) × (n+1) Hankel matrix over a finite field to have determinant 0 (that is, rank ≤ n).
This would be a particular case of Theorem 1.1 if not for the fact that the entries of the relevant Hankel matrix are not chosen uniformly at random; instead, the first few of them are fixed, while the rest are chosen uniformly at random. This suggests a generalization of Theorem 1.1 in which the first few entries of the (m+n+1)-tuples x ∈ F^{m+n+1} are fixed. The existence of such a generalization was suggested to us by Peter Scholze. This generalization indeed exists, and will be the main result of this note. In stating it, we will use the following notation:

Definition 1.4. Let n ∈ N. Let x = (x_0, x_1, . . . , x_n) be any (n+1)-tuple of any kinds of objects. Let i ∈ {0, 1, . . . , n+1}. Then, x_{[0,i)} denotes the i-tuple (x_0, x_1, . . . , x_{i−1}).

For instance, (a, b, c, d, e)_{[0,3)} = (a, b, c).

We can now state our generalization of Theorem 1.1:

Theorem 1.5. Assume that F is finite. Let q = |F|. Let k, r, m, n ∈ N satisfy k ≤ r ≤ m and r ≤ n. Fix any k-tuple a = (a_0, a_1, . . . , a_{k−1}) ∈ F^k. The number of (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and rank (H_{m,n}(x)) ≤ r is q^{2r−k}.

Example 1.6. For an example, let k = 2, r = 2, m = 3 and n = 3. Let a = (a_0, a_1) ∈ F^2. Then, Theorem 1.5 yields that the number of 7-tuples x ∈ F^7 satisfying x_{[0,2)} = a and rank (H_{3,3}(x)) ≤ 2 is q^{2·2−2} = q^2. Note that a 7-tuple x ∈ F^7 satisfying x_{[0,2)} = a is nothing but a 7-tuple x ∈ F^7 that begins with the entries a_0 and a_1; thus, we could just as well be counting the 5-tuples (x_2, x_3, x_4, x_5, x_6) ∈ F^5 satisfying rank (H_{3,3}(a_0, a_1, x_2, x_3, x_4, x_5, x_6)) ≤ 2.

Clearly, Theorem 1.1 is the particular case of Theorem 1.5 for k = 0, since the 0-tuple a = () ∈ F^0 automatically satisfies x_{[0,0)} = a for every x ∈ F^{m+n+1}. By specializing Theorem 1.5 to the case r = m = n (and recalling that a square matrix has determinant 0 if and only if it has less-than-full rank), we can easily obtain the following:

Corollary 1.7.
Assume that F is finite. Let q = |F|. Let k, n ∈ N satisfy k ≤ n. Fix any k-tuple a = (a_0, a_1, . . . , a_{k−1}) ∈ F^k. The number of (2n+1)-tuples x ∈ F^{2n+1} satisfying x_{[0,k)} = a and det (H_{n,n}(x)) = 0 is q^{2n−k}.

We shall prove Theorem 1.5 in Section 4; we will then derive Theorem 1.1, Corollary 1.3 and Corollary 1.7 from it. Finally, in Section 5, we will explain how Corollary 1.7 generalizes [ACGKLP18, Corollary 6.4].

Remark 1.8. Theorem 1.5 also holds if we replace the assumptions "k ≤ r ≤ m and r ≤ n" by "k ≤ r ≤ m ≤ n + 1". In fact, the only case covered by the latter assumptions but not by the former is the case when k ≤ r = m = n + 1; however, Theorem 1.5 is easy to prove directly in this case. (To wit, if k ≤ r = m = n + 1, then every (m+n+1)-tuple x ∈ F^{m+n+1} satisfies rank (H_{m,n}(x)) ≤ r, since the matrix H_{m,n}(x) has n + 1 columns and therefore has rank ≤ n + 1 = r. Hence, the number of (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and rank (H_{m,n}(x)) ≤ r equals the number of all (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a in this case. But this number is easily seen to be q^{m+n+1−k} = q^{2r−k} (since m + n + 1 = r + r = 2r, because m = r and n + 1 = r). Thus, Theorem 1.5 is proved in the case when k ≤ r = m = n + 1.)

As a consequence, Theorem 1.1 also holds if we replace the assumptions "r ≤ m and r ≤ n" by "r ≤ m ≤ n + 1". Hence, Corollary 1.3 still holds if we replace the assumption "m ≤ n" by "m ≤ n + 1". However, we gain nothing significantly new in this way, since the newly covered cases can also be easily obtained from the old ones.

Rank lemmas

Before we come to the proof of Theorem 1.5, we are going to build a toolbox of general lemmas about ranks of the Hankel matrices H_{p,q}(x). We note that none of these lemmas requires F to be finite; they can equally well be applied to fields like R and C.

Lemma 2.1. Let n ∈ N. Let p, q ∈ N be such that p + q ≤ n + 1. If x ∈ F^{n+1} satisfies rank (H_{p,q−1}(x)) ≤ p, then rank (H_{p,q−1}(x)) ≤ rank (H_{p−1,q}(x)).

Proof of Lemma 2.1.
We proceed by induction on p (without fixing x):

Induction base: Proving Lemma 2.1 in the case when p = 0 is easy: In this case, the assumption rank (H_{p,q−1}(x)) ≤ p rewrites as rank (H_{p,q−1}(x)) ≤ 0, which immediately yields the claim.

Induction step: Let p be a positive integer. Assume (as the induction hypothesis) that Lemma 2.1 holds for p − 1 instead of p. Our goal is now to prove Lemma 2.1 for p.

Let q ∈ N be such that p + q ≤ n + 1. Let x ∈ F^{n+1} satisfy rank (H_{p,q−1}(x)) ≤ p. We must thus prove that rank (H_{p,q−1}(x)) ≤ rank (H_{p−1,q}(x)).

Write the (n+1)-tuple x ∈ F^{n+1} as x = (x_0, x_1, . . . , x_n). Then,

  H_{p,q−1}(x) = (x_{i+j})_{0 ≤ i ≤ p, 0 ≤ j ≤ q−1}   and
  H_{p−1,q}(x) = (x_{i+j})_{0 ≤ i ≤ p−1, 0 ≤ j ≤ q}   and
  H_{p−1,q−1}(x) = (x_{i+j})_{0 ≤ i ≤ p−1, 0 ≤ j ≤ q−1}.

Hence, the matrix H_{p,q−1}(x) is H_{p−1,q−1}(x) with one extra row inserted at the bottom, whereas the matrix H_{p−1,q}(x) is H_{p−1,q−1}(x) with one extra column inserted at the right end.

For any matrix A that has at least one row, we let Ā denote the matrix A with its first row removed. The following properties of Ā are well-known:

• If the first row of A is a linear combination of the remaining rows, then
    rank Ā = rank A.    (1)

• If the first row of A is not a linear combination of the remaining rows, then
    rank A = rank Ā + 1.    (2)

It is furthermore well-known that if A is any matrix, and if B is any submatrix of A, then rank B ≤ rank A. However, the matrix H̄_{p,q−1}(x) is a submatrix of H_{p−1,q}(x) (indeed, it can be obtained from H_{p−1,q}(x) by removing the first column). Hence,

  rank (H̄_{p,q−1}(x)) ≤ rank (H_{p−1,q}(x)).

Let x′ denote the n-tuple (x_1, x_2, . . . , x_n) ∈ F^n. It is easy to see that

  H̄_{u,v}(x) = H_{u−1,v}(x′)    (3)

for all u ∈ N and v ∈ {−1, 0, 1, . . .} satisfying u + v ≤ n.
Thus, in particular,

  H̄_{p,q−1}(x) = H_{p−1,q−1}(x′)    (4)

and

  H̄_{p−1,q}(x) = H_{p−2,q}(x′).    (5)

If the first row of the matrix H_{p,q−1}(x) is a linear combination of the remaining rows, then (1) yields rank (H_{p,q−1}(x)) = rank (H̄_{p,q−1}(x)) ≤ rank (H_{p−1,q}(x)), which is precisely what we wanted to show. Hence, for the rest of this proof, we WLOG assume that the first row of the matrix H_{p,q−1}(x) is not a linear combination of the remaining rows. Thus, (2) yields rank (H_{p,q−1}(x)) = rank (H̄_{p,q−1}(x)) + 1. In view of (4), this rewrites as

  rank (H_{p,q−1}(x)) = rank (H_{p−1,q−1}(x′)) + 1.    (6)

Hence, rank (H_{p−1,q−1}(x′)) = rank (H_{p,q−1}(x)) − 1 ≤ p − 1 (since rank (H_{p,q−1}(x)) ≤ p).

Recall that the first row of the matrix H_{p,q−1}(x) is not a linear combination of the remaining rows. This entails that the first row of the matrix H_{p−1,q−1}(x) is not a linear combination of the remaining rows (since the matrix H_{p−1,q−1}(x) is the same as H_{p,q−1}(x) without the last row). Therefore, the first row of the matrix H_{p−1,q}(x) is not a linear combination of the remaining rows (since the matrix H_{p−1,q}(x) is just H_{p−1,q−1}(x) with an extra column). Thus, (2) yields rank (H_{p−1,q}(x)) = rank (H̄_{p−1,q}(x)) + 1. In view of (5), this rewrites as

  rank (H_{p−1,q}(x)) = rank (H_{p−2,q}(x′)) + 1.    (7)

However, our induction hypothesis shows that we can apply Lemma 2.1 to n − 1, p − 1 and x′ instead of n, p and x (since rank (H_{p−1,q−1}(x′)) ≤ p − 1). We thus obtain rank (H_{p−1,q−1}(x′)) ≤ rank (H_{p−2,q}(x′)). Adding 1 to both sides of this inequality, we find

  rank (H_{p−1,q−1}(x′)) + 1 ≤ rank (H_{p−2,q}(x′)) + 1.

In view of (6) and (7), this rewrites as rank (H_{p,q−1}(x)) ≤ rank (H_{p−1,q}(x)). This completes the induction step. Thus, Lemma 2.1 is proved.

Lemma 2.2. Let n ∈ N. Let p, q ∈ N be such that p + q ≤ n + 1. If x ∈ F^{n+1} satisfies rank (H_{p−1,q}(x)) ≤ q, then rank (H_{p−1,q}(x)) ≤ rank (H_{p,q−1}(x)).

Proof of Lemma 2.2.
This is just a restatement of Lemma 2.1 (applied to q and p instead of p and q), since the matrices H_{p−1,q}(x) and H_{p,q−1}(x) are the transposes of the matrices H_{q,p−1}(x) and H_{q−1,p}(x). (Alternatively, you can prove it by the same argument as we used to prove Lemma 2.1, except that rows and columns switch roles.)

Lemma 2.3. Let n ∈ N. Let p, q ∈ N be such that p + q ≤ n + 1. If x ∈ F^{n+1} satisfies rank (H_{p,q−1}(x)) ≤ p and rank (H_{p−1,q}(x)) ≤ q, then rank (H_{p,q−1}(x)) = rank (H_{p−1,q}(x)).

Proof of Lemma 2.3. This follows by combining Lemma 2.1 with Lemma 2.2.

Our next lemma is a simple corollary of Lemma 2.3:

Lemma 2.4. Let n ∈ N. Let p, q ∈ N be such that p + q ≤ n + 1. Let r ∈ N satisfy r + 1 ≤ p and r + 1 ≤ q. Let x ∈ F^{n+1}. Then, we have the logical equivalence

  (rank (H_{p,q−1}(x)) ≤ r) ⇐⇒ (rank (H_{p−1,q}(x)) ≤ r).

Proof of Lemma 2.4. We must prove the two implications

  (rank (H_{p,q−1}(x)) ≤ r) =⇒ (rank (H_{p−1,q}(x)) ≤ r)    (8)

and

  (rank (H_{p−1,q}(x)) ≤ r) =⇒ (rank (H_{p,q−1}(x)) ≤ r).    (9)

We shall only prove (8), since (9) is entirely analogous.

So let us prove (8). We assume that rank (H_{p,q−1}(x)) ≤ r; we then must show that rank (H_{p−1,q}(x)) ≤ r.

The matrix H_{p−1,q−1}(x) is a submatrix of H_{p,q−1}(x), and thus its rank cannot surpass the rank of H_{p,q−1}(x). In other words, we have rank (H_{p−1,q−1}(x)) ≤ rank (H_{p,q−1}(x)). However, the matrix H_{p−1,q}(x) can be viewed as being the matrix H_{p−1,q−1}(x) with one extra column attached to it (at its right end). Thus, rank (H_{p−1,q}(x)) ≤ rank (H_{p−1,q−1}(x)) + 1 (since attaching one column cannot increase the rank of a matrix by more than 1). Hence,

  rank (H_{p−1,q}(x)) ≤ rank (H_{p−1,q−1}(x)) + 1 ≤ rank (H_{p,q−1}(x)) + 1 ≤ r + 1 ≤ q.

Moreover, rank (H_{p,q−1}(x)) ≤ r ≤ r + 1 ≤ p. Hence, we can apply Lemma 2.3, and conclude that rank (H_{p,q−1}(x)) = rank (H_{p−1,q}(x)). Thus, of course, rank (H_{p−1,q}(x)) ≤ r follows immediately from our assumption rank (H_{p,q−1}(x)) ≤ r. Hence, (8) is proved. As we said, the proof of (9) is analogous. Thus, the proof of Lemma 2.4 is complete.
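Lemmas 2.3 and 2.4 can be checked exhaustively for small parameters. The sketch below (our own illustration, not part of the paper; the helper names rank_mod and hankel are ours) takes n = 4, p = 2, q = 3 (so p + q = n + 1) and r = 1, and verifies both statements for all 32 tuples x ∈ GF(2)^5:

```python
# Exhaustive check of Lemmas 2.3 and 2.4 over GF(2).
from itertools import product

def rank_mod(mat, mod):
    """Rank of a matrix (list of rows) over the prime field GF(mod)."""
    mat = [list(row) for row in mat]
    rank = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(rank, len(mat)) if mat[i][col] % mod), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][col], mod - 2, mod)   # inverse mod the prime
        mat[rank] = [inv * e % mod for e in mat[rank]]
        for i in range(len(mat)):
            if i != rank and mat[i][col] % mod:
                f = mat[i][col]
                mat[i] = [(e - f * g) % mod for e, g in zip(mat[i], mat[rank])]
        rank += 1
    return rank

def hankel(x, rows, cols):
    """H_{rows,cols}(x): the (rows+1) x (cols+1) matrix with entries x[i+j]."""
    return [[x[i + j] for j in range(cols + 1)] for i in range(rows + 1)]

p, q, r, F = 2, 3, 1, 2                    # lemma parameters; F = field size
for x in product(range(F), repeat=5):      # all x in GF(2)^{n+1} with n = 4
    rk1 = rank_mod(hankel(x, p, q - 1), F)   # rank H_{p,q-1}(x)
    rk2 = rank_mod(hankel(x, p - 1, q), F)   # rank H_{p-1,q}(x)
    if rk1 <= p and rk2 <= q:                # hypotheses of Lemma 2.3
        assert rk1 == rk2
    assert (rk1 <= r) == (rk2 <= r)          # Lemma 2.4 (here r + 1 <= p, q)
print("Lemmas 2.3 and 2.4 hold for all", F ** 5, "tuples")
```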
The following lemma is a (much simpler) counterpart to Lemma 2.1 that replaces the assumption rank (H_{p,q−1}(x)) ≤ p by the reverse inequality:

Lemma 2.5. Let n ∈ N. Let p, q ∈ N be such that p + q ≤ n + 1. If x ∈ F^{n+1} satisfies rank (H_{p,q−1}(x)) > p, then rank (H_{p−1,q}(x)) = p.

Proof of Lemma 2.5. Let x ∈ F^{n+1} satisfy rank (H_{p,q−1}(x)) > p. The assumption rank (H_{p,q−1}(x)) > p shows that the p + 1 rows of the matrix H_{p,q−1}(x) are linearly independent. Hence, in particular, the p rows of the matrix H_{p−1,q−1}(x) are linearly independent (since these p rows are simply the first p rows of the matrix H_{p,q−1}(x)). Therefore, the p rows of the matrix H_{p−1,q}(x) are linearly independent as well (since the matrix H_{p−1,q}(x) is just H_{p−1,q−1}(x) with an extra column, and therefore the rows of the former contain the rows of the latter as subsequences). In other words, rank (H_{p−1,q}(x)) = p. This proves Lemma 2.5.

Our above lemmas have related the ranks of the "adjacent" Hankel matrices H_{p,q−1}(x) and H_{p−1,q}(x). By induction, we shall now extend these to further-apart Hankel matrices:

Lemma 2.6. Let u ∈ N. Let m, n, r ∈ N be such that m + n ≤ u and r ≤ m and r ≤ n. Let s = m + n − r. Let x ∈ F^{u+1} be arbitrary. Then, we have the logical equivalence

  (rank (H_{m,n}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r).

Before we prove this lemma, let us comment on its significance (even though we will use it rather directly): If one wants to determine the rank of an (m+1) × (n+1)-matrix A, it suffices to probe for each r ∈ {0, 1, . . . , min {m, n}} whether rank A ≤ r is true (since 0 ≤ rank A ≤ min {m, n} + 1). Thus, Lemma 2.6 allows us to determine the ranks of the various matrices H_{m,n}(x) for a given x ∈ F^{u+1} if we know which pairs (r, s) satisfy rank (H_{r,s}(x)) ≤ r.

Proof of Lemma 2.6. From s = m + n − r = (m − r) + n, we obtain s − (m − r) = n. Furthermore, s = m + n − r ≥ m + n − m = n (since r ≤ m), and similarly s ≥ m. Hence, m ≤ s.
We now claim that the equivalence

  (rank (H_{r+i,s−i}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r)    (10)

holds for each i ∈ {0, 1, . . . , s − r}.

[Proof of (10): We proceed by induction on i:

Induction base: Clearly, (10) holds for i = 0, since we have H_{r+i,s−i}(x) = H_{r+0,s−0}(x) = H_{r,s}(x) in this case.

Induction step: Let j ∈ {1, 2, . . . , s − r}. Assume (as the induction hypothesis) that (10) holds for i = j − 1. We must prove that (10) holds for i = j. In other words, we must prove the equivalence

  (rank (H_{r+j,s−j}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r).    (11)

However, our induction hypothesis tells us that the equivalence

  (rank (H_{r+(j−1),s−(j−1)}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r)    (12)

holds.

We have (r + j) + (s − j + 1) = r + s + 1 = r + (m + n − r) + 1 = m + n + 1 ≤ u + 1 (since m + n ≤ u). Furthermore, we have j ∈ {1, 2, . . . , s − r}, so that 1 ≤ j ≤ s − r. From j ≤ s − r, we obtain r ≤ s − j, so that r + 1 ≤ s − j + 1. This entails s − j + 1 ∈ N (since r + 1 ∈ N). Also, r + 1 ≤ r + j (since j ≥ 1). Hence, Lemma 2.4 (applied to u, r + j and s − j + 1 instead of n, p and q) yields that we have the logical equivalence

  (rank (H_{r+j,s−j+1−1}(x)) ≤ r) ⇐⇒ (rank (H_{r+j−1,s−j+1}(x)) ≤ r).

In other words, we have the equivalence

  (rank (H_{r+j,s−j}(x)) ≤ r) ⇐⇒ (rank (H_{r+(j−1),s−(j−1)}(x)) ≤ r)

(since s − j + 1 − 1 = s − j and r + j − 1 = r + (j − 1) and s − j + 1 = s − (j − 1)). Combining this equivalence with (12), we obtain precisely the equivalence (11) that we were meaning to prove. Thus, we have shown that (10) holds for i = j. This completes the induction step, so that (10) is proven.]

Now, m ∈ {r, r + 1, . . . , s} (since r ≤ m ≤ s), so that m − r ∈ {0, 1, . . . , s − r}. Hence, we can apply (10) to i = m − r. As a result, we obtain that the equivalence

  (rank (H_{r+(m−r),s−(m−r)}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r)

holds. In other words, the equivalence (rank (H_{m,n}(x)) ≤ r) ⇐⇒ (rank (H_{r,s}(x)) ≤ r) holds (since r + (m − r) = m and s − (m − r) = n). This proves Lemma 2.6.
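As an illustration (our own sketch, not part of the paper; the helper names rank_mod and hankel are ours), the equivalence of Lemma 2.6 can be verified exhaustively over GF(2) with u = 5, m = 3, n = 2: for each admissible r, we compare rank H_{3,2}(x) ≤ r with rank H_{r,s}(x) ≤ r, where s = m + n − r, over all 64 tuples x ∈ GF(2)^6:

```python
# Exhaustive check of Lemma 2.6 over GF(2) with m = 3, n = 2, u = 5.
from itertools import product

def rank_mod(mat, mod):
    """Rank of a matrix (list of rows) over the prime field GF(mod)."""
    mat = [list(row) for row in mat]
    rank = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(rank, len(mat)) if mat[i][col] % mod), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][col], mod - 2, mod)   # inverse mod the prime
        mat[rank] = [inv * e % mod for e in mat[rank]]
        for i in range(len(mat)):
            if i != rank and mat[i][col] % mod:
                f = mat[i][col]
                mat[i] = [(e - f * g) % mod for e, g in zip(mat[i], mat[rank])]
        rank += 1
    return rank

def hankel(x, rows, cols):
    """H_{rows,cols}(x): the (rows+1) x (cols+1) matrix with entries x[i+j]."""
    return [[x[i + j] for j in range(cols + 1)] for i in range(rows + 1)]

m, n, F = 3, 2, 2
for r in (1, 2):                              # admissible: r <= m and r <= n
    s = m + n - r
    for x in product(range(F), repeat=6):     # x in GF(2)^{u+1} with u = 5
        lhs = rank_mod(hankel(x, m, n), F) <= r
        rhs = rank_mod(hankel(x, r, s), F) <= r
        assert lhs == rhs                     # the equivalence of Lemma 2.6
print("Lemma 2.6 verified for r = 1, 2")
```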
Auxiliary enumerative results

Assumptions and notations

From now on, we assume that the field F is finite. We set q = |F|.

We shall use the so-called Iverson bracket notation:

Definition 3.1. If A is any logical statement, then we define an integer [A] ∈ {0, 1} by

  [A] = 1, if A is true;   [A] = 0, if A is false.

For example, [2 + 2 = 4] = 1 but [2 + 2 = 5] = 0. If A is any logical statement, then [A] is known as the truth value of A.

The following fact ("counting by roll-call") makes truth values useful to us:

Proposition 3.2. Let S be a finite set, and let A(s) be a logical statement for each s ∈ S. Then, the number of elements s ∈ S satisfying A(s) equals ∑_{s ∈ S} [A(s)].

Sums over v for fixed x

The following proposition is a restatement of [Elkies02, Proposition 2] (but we shall prove it nevertheless to keep this note self-contained):

Proposition 3.3. Let m, n ∈ N satisfy m ≤ n + 1. Let x ∈ F^{m+n+1} be an (m+n+1)-tuple. Then,

  (q − 1) · [rank (H_{m,n}(x)) ≤ m]
    = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] − q ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0].

Before we prove this proposition, a few words about its significance are worth saying. Assume that, as a first step towards proving Theorem 1.5, we want to count the (m+n+1)-tuples x ∈ F^{m+n+1} satisfying rank (H_{m,n}(x)) ≤ m. (This is just an interim goal; we will later generalize this inequality to rank (H_{m,n}(x)) ≤ r and impose the additional condition x_{[0,k)} = a.) In view of Proposition 3.2, this boils down to computing ∑_{x ∈ F^{m+n+1}} [rank (H_{m,n}(x)) ≤ m]. Using Proposition 3.3, we can rewrite the addends [rank (H_{m,n}(x)) ≤ m] in this sum in terms of other truth values, which are more "local" (one can think of "rank (H_{m,n}(x)) ≤ m" as a "global" statement about the matrix H_{m,n}(x), whereas the statements "v H_{m,n}(x) = 0" and "v H_{m−1,n+1}(x) = 0" are local in the sense that they only "sample" the matrix at a single vector each) and thus (as we will soon see) are easier to sum.

Proof of Proposition 3.3. We are in one of the following two cases:

Case 1: We have rank (H_{m,n}(x)) > m.
Case 2: We have rank (H_{m,n}(x)) ≤ m.

Let us first consider Case 1. In this case, we have rank (H_{m,n}(x)) > m.
Thus, rank (H_{m,n}(x)) = m + 1 (since H_{m,n}(x) is an (m+1) × (n+1)-matrix). Therefore, the rows of the matrix H_{m,n}(x) are linearly independent. Hence, there exists no nonzero v ∈ F^{1×(m+1)} satisfying v H_{m,n}(x) = 0. Therefore,

  ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] = 0.    (13)

Moreover, Lemma 2.5 (applied to m + n, m and n + 1 instead of n, p and q) yields that rank (H_{m−1,n+1}(x)) = m (since rank (H_{m,n}(x)) > m). In other words, the m × (n+2)-matrix H_{m−1,n+1}(x) has rank m. Hence, there exists no nonzero v ∈ F^{1×m} satisfying v H_{m−1,n+1}(x) = 0. Therefore,

  ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0] = 0.    (14)

Finally, [rank (H_{m,n}(x)) ≤ m] = 0 (since rank (H_{m,n}(x)) > m). In view of this equality, as well as (13) and (14), the equality that we are trying to prove rewrites as (q − 1) · 0 = 0 − q · 0, which is clearly true. Thus, Proposition 3.3 is proved in Case 1.

Let us now consider Case 2. In this case, we have rank (H_{m,n}(x)) ≤ m. Also, the matrix H_{m−1,n+1}(x) has m rows; thus, rank (H_{m−1,n+1}(x)) ≤ m ≤ n + 1. Therefore, Lemma 2.3 (applied to m + n, m and n + 1 instead of n, p and q) yields

  rank (H_{m,n}(x)) = rank (H_{m−1,n+1}(x)).    (15)

The sum ∑_{v ∈ F^{1×(m+1)}} [v H_{m,n}(x) = 0] is the size of the left kernel of the matrix H_{m,n}(x). But the dimension of this left kernel is (m+1) − rank (H_{m,n}(x)) (by the rank-nullity theorem); hence, the size of this left kernel is q^{(m+1)−rank(H_{m,n}(x))}. Thus,

  ∑_{v ∈ F^{1×(m+1)}} [v H_{m,n}(x) = 0] = q^{(m+1)−rank(H_{m,n}(x))}.    (16)

The same reasoning shows that

  ∑_{v ∈ F^{1×m}} [v H_{m−1,n+1}(x) = 0] = q^{m−rank(H_{m−1,n+1}(x))}.    (17)

Now, (15) yields

  q^{(m+1)−rank(H_{m,n}(x))} = q^{(m+1)−rank(H_{m−1,n+1}(x))} = q · q^{m−rank(H_{m−1,n+1}(x))}.
In view of

  q^{(m+1)−rank(H_{m,n}(x))} = ∑_{v ∈ F^{1×(m+1)}} [v H_{m,n}(x) = 0]    (by (16))
    = 1 + ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0]    (since 0 H_{m,n}(x) = 0)

and

  q^{m−rank(H_{m−1,n+1}(x))} = ∑_{v ∈ F^{1×m}} [v H_{m−1,n+1}(x) = 0]    (by (17))
    = 1 + ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0]    (since 0 H_{m−1,n+1}(x) = 0),

we can rewrite this as

  1 + ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] = q · (1 + ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0]).

In other words,

  ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] − q ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0] = q − 1.

Comparing this with (q − 1) · [rank (H_{m,n}(x)) ≤ m] = q − 1 (since [rank (H_{m,n}(x)) ≤ m] = 1, because rank (H_{m,n}(x)) ≤ m), we obtain precisely the claim of Proposition 3.3. Thus, Proposition 3.3 is proved in Case 2.

We have now proved Proposition 3.3 in both Cases 1 and 2. Thus, Proposition 3.3 always holds.

(Here, the left kernel of an s × t-matrix A ∈ F^{s×t} is defined to be the set of all row vectors v ∈ F^{1×s} satisfying vA = 0; this is a vector subspace of F^{1×s}. The rank-nullity theorem, in the form we are using it here, says that the dimension of the left kernel of a matrix A ∈ F^{s×t} equals s − rank A.)

Sums over x for fixed v

We need another definition. Namely, if m ∈ N, and if v = (v_0, v_1, . . . , v_m) ∈ F^{1×(m+1)} is a row vector of size m + 1, then last v will denote v_m (that is, the last entry of v). There is a bijection

  R : {v ∈ F^{1×(m+1)} | last v = 0} → F^{1×m},   (v_0, v_1, . . . , v_m) ↦ (v_0, v_1, . . . , v_{m−1}).

Its inverse map R^{−1} sends each row vector (v_0, v_1, . . . , v_{m−1}) ∈ F^{1×m} to the row vector (v_0, v_1, . . . , v_{m−1}, 0).

Lemma 3.4. Let k, m, n ∈ N satisfy k ≤ m. Let v ∈ F^{1×(m+1)} be a row vector such that last v ≠ 0. Fix any k-tuple a ∈ F^k. Then,

  ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0] = q^{m−k}.

Proof of Lemma 3.4. The sum ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0] is the number of all x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and v H_{m,n}(x) = 0 (because of Proposition 3.2). Thus, we must prove that this number is q^{m−k}.

Write v and a as v = (v_0, v_1, . . . , v_m) and a = (a_0, a_1, . . . , a_{k−1}), respectively. Thus, last v = v_m, so that v_m = last v ≠ 0. Now, we are looking for an x = (x_0, x_1, . . . , x_{m+n}) ∈ F^{m+n+1} satisfying x_{[0,k)} = a and v H_{m,n}(x) = 0.
The condition x_{[0,k)} = a says that the first k entries of x equal the respective entries of a; that is, x_i = a_i for each i ∈ {0, 1, . . . , k − 1}. Thus, x_0, x_1, . . . , x_{k−1} are uniquely determined. The condition v H_{m,n}(x) = 0 is equivalent to x satisfying the following system of linear equations:

  v_0 x_0 + v_1 x_1 + · · · + v_m x_m = 0;
  v_0 x_1 + v_1 x_2 + · · · + v_m x_{m+1} = 0;
  v_0 x_2 + v_1 x_3 + · · · + v_m x_{m+2} = 0;
  · · · ;
  v_0 x_n + v_1 x_{n+1} + · · · + v_m x_{m+n} = 0.    (18)

Since v_m ≠ 0, this latter system of equations can be uniquely solved for the unknowns x_m, x_{m+1}, . . . , x_{m+n} (by recursive substitution) when the m entries x_0, x_1, . . . , x_{m−1} are given. Hence, each (m+n+1)-tuple x = (x_0, x_1, . . . , x_{m+n}) ∈ F^{m+n+1} satisfying x_{[0,k)} = a and v H_{m,n}(x) = 0 can be constructed as follows:

• First, we set x_i = a_i for each i ∈ {0, 1, . . . , k − 1}. This determines the first k entries x_0, x_1, . . . , x_{k−1} of x.

• Then, we choose arbitrary values for the next m − k entries x_k, x_{k+1}, . . . , x_{m−1}.

• Finally, we uniquely determine the remaining entries x_m, x_{m+1}, . . . , x_{m+n} by solving the system (18).

Clearly, the number of ways to perform this construction is q^{m−k} (since there are |F| = q many options for each of the m − k entries x_k, x_{k+1}, . . . , x_{m−1}). Thus, the number of all x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and v H_{m,n}(x) = 0 is q^{m−k}. This proves Lemma 3.4.

Lemma 3.5. Let k, m, n ∈ N satisfy k ≤ n + 1. Let v ∈ F^{1×(m+1)} be a nonzero row vector of size m + 1 such that last v = 0. Fix any k-tuple a ∈ F^k. Then,

  ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0] = q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [R(v) H_{m−1,n+1}(x) = 0].    (19)

Proof of Lemma 3.5. The sum on the left hand side of (19) is the number of all (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and v H_{m,n}(x) = 0 (because of Proposition 3.2). Let us refer to such (m+n+1)-tuples x as weakly nice tuples.
The sum on the right hand side of (19) is the number of all (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and R(v) H_{m−1,n+1}(x) = 0 (because of Proposition 3.2). Let us refer to such (m+n+1)-tuples x as strongly nice tuples.

We thus need to prove that the number of weakly nice tuples equals q times the number of strongly nice tuples. We shall achieve this by constructing a bijection

  {weakly nice tuples} → F × {strongly nice tuples}.

Indeed, let us unravel the definitions of weakly and strongly nice tuples. Write v and a as v = (v_0, v_1, . . . , v_m) and a = (a_0, a_1, . . . , a_{k−1}), respectively. The definition of R yields R(v) = (v_0, v_1, . . . , v_{m−1}). Also, last v = v_m, so that v_m = last v = 0. Consider the largest j ∈ {0, 1, . . . , m} satisfying v_j ≠ 0. (This exists, since v is nonzero.) Thus, v_j ≠ 0 but v_{j+1} = v_{j+2} = · · · = v_m = 0. We have j ≠ m (since v_j ≠ 0 but v_m = 0). Thus, j ≤ m − 1. Furthermore, j + n + 1 ≥ 0 + n + 1 = n + 1 > n ≥ k − 1 (since k ≤ n + 1).

The weakly nice tuples are the (m+n+1)-tuples x = (x_0, x_1, . . . , x_{m+n}) ∈ F^{m+n+1} satisfying

  x_i = a_i for each i ∈ {0, 1, . . . , k − 1}    (20)

as well as

  v_0 x_0 + v_1 x_1 + · · · + v_m x_m = 0;
  v_0 x_1 + v_1 x_2 + · · · + v_m x_{m+1} = 0;
  v_0 x_2 + v_1 x_3 + · · · + v_m x_{m+2} = 0;
  · · · ;
  v_0 x_n + v_1 x_{n+1} + · · · + v_m x_{m+n} = 0    (21)

(because the condition "x_{[0,k)} = a" is equivalent to (20), whereas the condition "v H_{m,n}(x) = 0" is equivalent to (21)). In view of v_{j+1} = v_{j+2} = · · · = v_m = 0, we can rewrite this as follows: The weakly nice tuples are the (m+n+1)-tuples x = (x_0, x_1, . . . , x_{m+n}) ∈ F^{m+n+1} satisfying x_i = a_i for each i ∈ {0, 1, . . . , k − 1} as well as

  v_0 x_0 + v_1 x_1 + · · · + v_j x_j = 0;
  v_0 x_1 + v_1 x_2 + · · · + v_j x_{j+1} = 0;
  v_0 x_2 + v_1 x_3 + · · · + v_j x_{j+2} = 0;
  · · · ;
  v_0 x_n + v_1 x_{n+1} + · · · + v_j x_{j+n} = 0.    (22)

A similar argument (using R(v) = (v_0, v_1, . . . , v_{m−1})) shows that the strongly nice tuples are the (m+n+1)-tuples x = (x_0, x_1, . . . , x_{m+n}) ∈ F^{m+n+1} satisfying x_i = a_i for each i ∈ {0, 1, . . . , k − 1} as well as

  v_0 x_0 + v_1 x_1 + · · · + v_j x_j = 0;
  v_0 x_1 + v_1 x_2 + · · · + v_j x_{j+1} = 0;
  v_0 x_2 + v_1 x_3 + · · · + v_j x_{j+2} = 0;
  · · · ;
  v_0 x_n + v_1 x_{n+1} + · · · + v_j x_{j+n} = 0;
  v_0 x_{n+1} + v_1 x_{n+2} + · · · + v_j x_{j+n+1} = 0.    (23)

These characterizations of weakly and strongly nice tuples are very similar: The system (23) consists of all the equations of (22) as well as one extra equation

  v_0 x_{n+1} + v_1 x_{n+2} + · · · + v_j x_{j+n+1} = 0.    (24)

This latter equation (24) uniquely determines the entry x_{j+n+1} in terms of the other entries of x (since v_j ≠ 0), whereas x_{j+n+1} is entirely unconstrained by the system (22). Thus, the entry x_{j+n+1} is uniquely determined (in terms of the other entries of x) in a strongly nice tuple x, while being entirely unconstrained in a weakly nice tuple. (Here we are using the fact that the "x_i = a_i for each i ∈ {0, 1, . . . , k − 1}" conditions don't constrain x_{j+n+1} either, since j + n + 1 > k − 1.) Informally speaking, this shows that a weakly nice tuple has "one more degree of freedom" than a strongly nice tuple (and this degree of freedom is the entry x_{j+n+1}, which can take q possible values in a weakly nice tuple). This easily entails that the number of weakly nice tuples equals q times the number of strongly nice tuples. This proves Lemma 3.5.

Lemma 3.5 and Lemma 3.4 combined lead to the following:

Here is a rigorous way to show that the number of weakly nice tuples equals q times the number of strongly nice tuples: Consider the map

  α : F × {strongly nice tuples} → {weakly nice tuples},
  (y, (x_0, x_1, . . . , x_{m+n})) ↦ (x_0, x_1, . . . , x_{j+n}, y, x_{j+n+2}, x_{j+n+3}, . . . , x_{m+n}),

which simply replaces the entry x_{j+n+1} of the strongly nice tuple (x_0, x_1, . . . , x_{m+n}) by the element y.
Consider the map

  β : {weakly nice tuples} → F × {strongly nice tuples},
  (x_0, x_1, . . . , x_{m+n}) ↦ (x_{j+n+1}, (x_0, x_1, . . . , x_{j+n}, z, x_{j+n+2}, x_{j+n+3}, . . . , x_{m+n})),

where z is the unique element of F that would make the equation (24) valid when it is substituted for x_{j+n+1} (that is, explicitly, z is given by the formula z = −(v_0 x_{n+1} + v_1 x_{n+2} + · · · + v_{j−1} x_{j+n}) / v_j).

Our above characterizations of weakly nice and strongly nice tuples show that these two maps α and β are mutually inverse. Hence, α and β are bijections. Thus,

  |{weakly nice tuples}| = |F × {strongly nice tuples}| = q · |{strongly nice tuples}|.

In other words, the number of weakly nice tuples equals q times the number of strongly nice tuples.

Lemma 3.6. Let k, m, n ∈ N satisfy k ≤ m and k ≤ n + 1. Fix any k-tuple a ∈ F^k. Then,

  ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] − q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0] = (q − 1) q^{2m−k}.

Proof of Lemma 3.6. We first observe that

  (the number of all vectors v ∈ F^{1×(m+1)} satisfying last v ≠ 0) = (q − 1) q^m    (25)

(since a vector v ∈ F^{1×(m+1)} satisfying last v ≠ 0 can be constructed by choosing its last entry from the (q−1)-element set F \ {0} and then choosing its remaining m entries from the q-element set F).

For any row vector v ∈ F^{1×(m+1)}, we define a number

  χ_v := ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0].    (26)

Thus, if v ∈ F^{1×(m+1)} is a row vector satisfying last v ≠ 0, then

  χ_v = ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0] = q^{m−k}    (27)

(by Lemma 3.4).

Recall that R : {v ∈ F^{1×(m+1)} | last v = 0} → F^{1×m} is a bijection. This bijection sends 0 to 0, and therefore restricts to a bijection

  {v ∈ F^{1×(m+1)} | last v = 0 and v ≠ 0} → {v ∈ F^{1×m} | v ≠ 0},   v ↦ R(v).

Hence, given any x ∈ F^{m+n+1}, we can substitute R(v) for v in the sum ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0], and thus obtain

  ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0] = ∑_{v ∈ F^{1×(m+1)}; last v = 0; v ≠ 0} [R(v) H_{m−1,n+1}(x) = 0].
Thus,

  q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0]
  = q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×(m+1)}; last v = 0; v ≠ 0} [R(v) H_{m−1,n+1}(x) = 0]
  = ∑_{v ∈ F^{1×(m+1)}; last v = 0; v ≠ 0} q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [R(v) H_{m−1,n+1}(x) = 0]
  = ∑_{v ∈ F^{1×(m+1)}; last v = 0; v ≠ 0} ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0]    (by Lemma 3.5)
  = ∑_{v ∈ F^{1×(m+1)}; last v = 0; v ≠ 0} χ_v    (by (26))
  = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0; last v = 0} χ_v.    (28)

On the other hand,

  ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0]
  = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} [v H_{m,n}(x) = 0]
  = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} χ_v.    (29)

Subtracting the equality (28) from the equality (29), we obtain

  ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} [v H_{m,n}(x) = 0] − q ∑_{x ∈ F^{m+n+1}; x_{[0,k)} = a} ∑_{v ∈ F^{1×m}; v ≠ 0} [v H_{m−1,n+1}(x) = 0]
  = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} χ_v − ∑_{v ∈ F^{1×(m+1)}; v ≠ 0; last v = 0} χ_v
  = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0; last v ≠ 0} χ_v
    (since ∑_{v ∈ F^{1×(m+1)}; v ≠ 0} ρ_v − ∑_{v ∈ F^{1×(m+1)}; v ≠ 0; last v = 0} ρ_v = ∑_{v ∈ F^{1×(m+1)}; v ≠ 0; last v ≠ 0} ρ_v for any numbers ρ_v)
  = ∑_{v ∈ F^{1×(m+1)}; last v ≠ 0} χ_v
    (here, we have removed the condition "v ≠ 0" from under the summation sign, since any vector v ∈ F^{1×(m+1)} satisfying last v ≠ 0 automatically satisfies v ≠ 0)
  = ∑_{v ∈ F^{1×(m+1)}; last v ≠ 0} q^{m−k}    (by (27))
  = (the number of all vectors v ∈ F^{1×(m+1)} satisfying last v ≠ 0) · q^{m−k}
  = (q − 1) q^m · q^{m−k}    (by (25))
  = (q − 1) q^{2m−k}.

This proves Lemma 3.6.

Theorem 1.5 for r = m

Before we prove Theorem 1.5 in full generality, let us first show it in the particular case when r = m:

Lemma 3.7. Let k, m, n ∈ N satisfy k ≤ m ≤ n + 1. Fix any k-tuple a ∈ F^k. The number of (m+n+1)-tuples x ∈ F^{m+n+1} satisfying x_{[0,k)} = a and rank (H_{m,n}(x)) ≤ m is q^{2m−k}.

Proof of Lemma 3.7. Write the k-tuple a in the form a = (a_0, a_1, . . . , a_{k−1}).
We have
$$(q-1) \cdot \sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} [\operatorname{rank}(H_{m,n}(x)) \leq m]
= \sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} \underbrace{(q-1) \cdot [\operatorname{rank}(H_{m,n}(x)) \leq m]}_{\substack{= \sum_{v \in F^{1 \times (m+1)};\; v \neq 0} [v H_{m,n}(x) = 0] \,-\, q \sum_{v \in F^{1 \times m};\; v \neq 0} [v H_{m-1,n+1}(x) = 0] \\ \text{(by Proposition 3.3)}}}$$
$$= \sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} \left( \sum_{\substack{v \in F^{1 \times (m+1)};\\ v \neq 0}} [v H_{m,n}(x) = 0] - q \sum_{\substack{v \in F^{1 \times m};\\ v \neq 0}} [v H_{m-1,n+1}(x) = 0] \right)$$
$$= \sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} \;\sum_{\substack{v \in F^{1 \times (m+1)};\\ v \neq 0}} [v H_{m,n}(x) = 0] - q \sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} \;\sum_{\substack{v \in F^{1 \times m};\\ v \neq 0}} [v H_{m-1,n+1}(x) = 0]
= (q-1)\, q^{2m-k} \qquad \text{(by Lemma 3.6)}.$$
Cancelling $q - 1$ from this equality (since $q - 1 \neq 0$), we obtain
$$\sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} [\operatorname{rank}(H_{m,n}(x)) \leq m] = q^{2m-k}.$$
But the left hand side of this equality is the number of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $x_{[0,k)} = a$ and $\operatorname{rank}(H_{m,n}(x)) \leq m$ (because of Proposition 3.2). Thus, this number is $q^{2m-k}$. This proves Lemma 3.7.

Proofs of the main results

We can now prove the results from Section 1 in their full generality.

Proof of Theorem 1.5. Set $s := m + n - r$; then $r + s = m + n$ and $k \leq r \leq s + 1$, and Lemma 2.6 (applied to $u = m + n$) yields the logical equivalence
$$\left( \operatorname{rank}(H_{m,n}(x)) \leq r \right) \Longleftrightarrow \left( \operatorname{rank}(H_{r,s}(x)) \leq r \right)$$
for any $(m+n+1)$-tuple $x \in F^{m+n+1}$. Thus,
$$\left( \text{\# of all } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } x_{[0,k)} = a \text{ and } \operatorname{rank}(H_{m,n}(x)) \leq r \right)$$
$$= \left( \text{\# of all } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } x_{[0,k)} = a \text{ and } \operatorname{rank}(H_{r,s}(x)) \leq r \right)$$
$$= \left( \text{\# of all } (r+s+1)\text{-tuples } x \in F^{r+s+1} \text{ satisfying } x_{[0,k)} = a \text{ and } \operatorname{rank}(H_{r,s}(x)) \leq r \right) \qquad (\text{since } m + n = r + s)$$
$$= q^{2r-k} \qquad \text{(by Lemma 3.7, applied to } r \text{ and } s \text{ instead of } m \text{ and } n\text{)}.$$
This proves Theorem 1.5.

Proof of Theorem 1.1. Let $a$ be the $0$-tuple $() \in F^0$. Thus, Theorem 1.5 (applied to $k = 0$) yields that the number of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $x_{[0,0)} = a$ and $\operatorname{rank}(H_{m,n}(x)) \leq r$ is $q^{2r-0}$. We can remove the "$x_{[0,0)} = a$" condition from the previous sentence (since every $(m+n+1)$-tuple $x \in F^{m+n+1}$ satisfies $x_{[0,0)} = () = a$), and thus obtain the following: The number of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) \leq r$ is $q^{2r-0}$. But this is precisely the claim of Theorem 1.1 (since $2r - 0 = 2r$). Thus, Theorem 1.1 is proved.

Proof of Corollary 1.3.
We need to prove the following four claims:

Claim 1: If $r = 0$, then the # of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = r$ is $1$.

Claim 2: If $0 < r \leq m$, then the # of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = r$ is $q^{2r-2} \left( q^2 - 1 \right)$.

Claim 3: If $r = m + 1$, then the # of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = r$ is $q^{2r-2} \left( q^{n-m+1} - 1 \right)$.

Claim 4: If $r > m + 1$, then the # of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = r$ is $0$.

[Proof of Claim 1: We need to show that the # of $(m+n+1)$-tuples $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = 0$ is $1$. In other words, we need to show that there is exactly one $(m+n+1)$-tuple $x \in F^{m+n+1}$ satisfying $\operatorname{rank}(H_{m,n}(x)) = 0$. But this is rather simple: The $(m+n+1)$-tuple $(0, 0, \ldots, 0) \in F^{m+n+1}$ does satisfy $\operatorname{rank}(H_{m,n}(x)) = 0$ (since $H_{m,n}(x)$ is the zero matrix when $x$ is this $(m+n+1)$-tuple), and no other $(m+n+1)$-tuple does this (because if $x \in F^{m+n+1}$ is not $(0, 0, \ldots, 0)$, then the matrix $H_{m,n}(x)$ has at least one nonzero entry, and therefore its rank cannot be $0$). Thus, Claim 1 is proved.]

[Proof of Claim 2: Assume that $0 < r \leq m$. Thus, $r$ and $r-1$ are elements of $\mathbb{N}$ and satisfy $r \leq m \leq n$ and $r - 1 \leq r \leq m \leq n$. Hence:

• Theorem 1.1 yields that
$$\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) \leq r \right) = q^{2r}.$$

• Theorem 1.1 (applied to $r - 1$ instead of $r$) yields that
$$\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) \leq r - 1 \right) = q^{2(r-1)}.$$

However, a matrix $A$ satisfies $\operatorname{rank} A = r$ if and only if it satisfies $\operatorname{rank} A \leq r$ but not $\operatorname{rank} A \leq r - 1$. Hence,
$$\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) = r \right)$$
$$= \underbrace{\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) \leq r \right)}_{= q^{2r}} - \underbrace{\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) \leq r - 1 \right)}_{= q^{2(r-1)}}$$
$$= q^{2r} - q^{2(r-1)} = q^{2r-2} \left( q^2 - 1 \right).$$
This proves Claim 2.]

[Proof of Claim 3: Assume that $r = m + 1$. Thus, $2r = 2(m+1) = 2m + 2$, so that $2m = 2r - 2$.
The matrix $H_{m,n}(x)$ (for any given $x$) is an $(m+1) \times (n+1)$-matrix; thus, its rank is always $\leq m + 1$. Hence, it has rank $m + 1$ if and only if it does not have rank $\leq m$. Thus,
$$\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) = m + 1 \right)$$
$$= \underbrace{\left( \text{\# of all } (m+n+1)\text{-tuples } x \in F^{m+n+1} \right)}_{= q^{m+n+1} \text{ (since } |F| = q\text{)}} - \underbrace{\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) \leq m \right)}_{= q^{2m} \text{ (by Theorem 1.1, applied to } m \text{ instead of } r\text{)}}$$
$$= q^{m+n+1} - q^{2m} = q^{2m} \left( q^{n-m+1} - 1 \right) = q^{2r-2} \left( q^{n-m+1} - 1 \right) \qquad (\text{since } 2m = 2r - 2).$$
But this is precisely the claim of Claim 3 (since $r = m + 1$). Thus, Claim 3 is proven.]

[Proof of Claim 4: Assume that $r > m + 1$. The matrix $H_{m,n}(x)$ (for any given $x$) is an $(m+1) \times (n+1)$-matrix; thus, its rank is always $\leq m + 1$. Hence, its rank is never $r$ (because $r > m + 1$). Thus,
$$\left( \text{\# of } (m+n+1)\text{-tuples } x \in F^{m+n+1} \text{ satisfying } \operatorname{rank}(H_{m,n}(x)) = r \right) = 0.$$
This proves Claim 4.]

Having proved all four claims, we thus have completed the proof of Corollary 1.3.

Proof of Corollary 1.7. If $x \in F^{2n+1}$ is any $(2n+1)$-tuple, then the condition "$\det(H_{n,n}(x)) = 0$" is equivalent to "$\operatorname{rank}(H_{n,n}(x)) \leq n$" (since $H_{n,n}(x)$ is an $(n+1) \times (n+1)$-matrix, and thus its determinant vanishes if and only if its rank is $\leq n$). Hence, the number of $(2n+1)$-tuples $x \in F^{2n+1}$ satisfying $x_{[0,k)} = a$ and $\det(H_{n,n}(x)) = 0$ is precisely the number of $(2n+1)$-tuples $x \in F^{2n+1}$ satisfying $x_{[0,k)} = a$ and $\operatorname{rank}(H_{n,n}(x)) \leq n$. But Theorem 1.5 (applied to $m = n$ and $r = n$) shows that the latter number is $q^{2n-k}$. This proves Corollary 1.7.

Application to Jacobi-Trudi matrices

Let us now discuss how [ACGKLP18, Corollary 6.4] follows from Corollary 1.7. For the sake of simplicity, we shall first restate [ACGKLP18, Corollary 6.4] in a self-contained form that does not rely on the concepts of symmetric functions:

Corollary 5.1. Assume that $F$ is finite. Let $q = |F|$. Let $u, v \in \mathbb{N}$. For each $(u+v-1)$-tuple $y = (y_1, y_2, \ldots, y_{u+v-1}) \in F^{u+v-1}$, we define the matrix
$$J_{u,v}(y) := \left( y_{u-i+j} \right)_{1 \leq i \leq v,\; 1 \leq j \leq v} \in F^{v \times v},$$
where we set $y_0 := 1$ and $y_k := 0$ for all $k < 0$. Then, the number of all $(u+v-1)$-tuples $y \in F^{u+v-1}$ satisfying $\det(J_{u,v}(y)) = 0$ is $q^{u+v-2}$.

Example 5.2. (a) If $u = 1$ and $v = 3$, then each $3$-tuple $y = (y_1, y_2, y_3) \in F^3$ satisfies
$$J_{u,v}(y) = J_{1,3}(y) = \left( y_{1-i+j} \right)_{1 \leq i \leq 3,\; 1 \leq j \leq 3} = \begin{pmatrix} y_1 & y_2 & y_3 \\ y_0 & y_1 & y_2 \\ y_{-1} & y_0 & y_1 \end{pmatrix} = \begin{pmatrix} y_1 & y_2 & y_3 \\ 1 & y_1 & y_2 \\ 0 & 1 & y_1 \end{pmatrix}$$
(since $y_0 = 1$ and $y_{-1} = 0$) and thus $\det(J_{u,v}(y)) = y_3 + y_1^3 - 2 y_1 y_2$.

(b) If $u = 4$ and $v = 3$, then each $6$-tuple $y = (y_1, y_2, \ldots, y_6) \in F^6$ satisfies
$$J_{u,v}(y) = J_{4,3}(y) = \left( y_{4-i+j} \right)_{1 \leq i \leq 3,\; 1 \leq j \leq 3} = \begin{pmatrix} y_4 & y_5 & y_6 \\ y_3 & y_4 & y_5 \\ y_2 & y_3 & y_4 \end{pmatrix}$$
and thus $\det(J_{u,v}(y)) = y_6 y_3^2 - 2 y_3 y_4 y_5 + y_4^3 - y_2 y_6 y_4 + y_2 y_5^2$.

Why is Corollary 5.1 equivalent to [ACGKLP18, Corollary 6.4]? In fact, Corollary 5.1 can be restated in probabilistic terms; then it says that a uniformly random $(u+v-1)$-tuple $y \in F^{u+v-1}$ satisfies $\det(J_{u,v}(y)) = 0$ with a probability of $\dfrac{q^{u+v-2}}{q^{u+v-1}} = \dfrac{1}{q}$. However, the matrix $J_{u,v}(y)$ in Corollary 5.1 is precisely the Jacobi-Trudi matrix corresponding to the rectangle-shaped partition $(u^v)$, except that the entries of $y$ have been substituted for the complete homogeneous symmetric functions $h_1, h_2, \ldots, h_{u+v-1}$. The determinant $\det(J_{u,v}(y))$ therefore is the image of the Schur function $s_{(u^v)}$ under this substitution. Thus, Corollary 5.1 says that when a uniformly random $(u+v-1)$-tuple of elements of $F$ is substituted for $(h_1, h_2, \ldots, h_{u+v-1})$, the Schur function $s_{(u^v)}$ becomes $0$ with a probability of $\dfrac{1}{q}$. This is precisely the claim of [ACGKLP18, Corollary 6.4].

We shall now sketch (on an example) how Corollary 5.1 can be derived from our Corollary 1.7:

Proof of Corollary 5.1 (sketched).
For a sufficiently representative example, we pick the case when $u = 2$ and $v = 5$; the reader will not find any difficulty in generalizing our reasoning to the general case. Thus, we must show that the number of all $6$-tuples $y \in F^6$ satisfying $\det(J_{2,5}(y)) = 0$ is $q^5$.

Let $y = (y_1, y_2, \ldots, y_6) \in F^6$ be any $6$-tuple. Then,
$$J_{2,5}(y) = \begin{pmatrix} y_2 & y_3 & y_4 & y_5 & y_6 \\ y_1 & y_2 & y_3 & y_4 & y_5 \\ y_0 & y_1 & y_2 & y_3 & y_4 \\ y_{-1} & y_0 & y_1 & y_2 & y_3 \\ y_{-2} & y_{-1} & y_0 & y_1 & y_2 \end{pmatrix} = \begin{pmatrix} y_2 & y_3 & y_4 & y_5 & y_6 \\ y_1 & y_2 & y_3 & y_4 & y_5 \\ 1 & y_1 & y_2 & y_3 & y_4 \\ 0 & 1 & y_1 & y_2 & y_3 \\ 0 & 0 & 1 & y_1 & y_2 \end{pmatrix}$$
(since $y_0 = 1$ and $y_{-1} = 0$ and $y_{-2} = 0$). If we turn the matrix $J_{2,5}(y)$ upside down (i.e., we reverse the order of its rows), then we obtain the matrix
$$\begin{pmatrix} 0 & 0 & 1 & y_1 & y_2 \\ 0 & 1 & y_1 & y_2 & y_3 \\ 1 & y_1 & y_2 & y_3 & y_4 \\ y_1 & y_2 & y_3 & y_4 & y_5 \\ y_2 & y_3 & y_4 & y_5 & y_6 \end{pmatrix},$$
which is precisely the Hankel matrix $H_{4,4}(x)$ for the $9$-tuple
$$x = (0,\, 0,\, 1,\, y_1,\, y_2,\, y_3,\, y_4,\, y_5,\, y_6).$$
Hence, this $9$-tuple $x$ satisfies $\det(H_{4,4}(x)) = \pm \det(J_{2,5}(y))$ (since the determinant of a matrix is multiplied by $\pm 1$ when the rows of the matrix are permuted). Therefore, the condition "$\det(J_{2,5}(y)) = 0$" is equivalent to the condition "$\det(H_{4,4}(x)) = 0$" for this $9$-tuple $x$. Hence, the number of all $6$-tuples $y \in F^6$ satisfying $\det(J_{2,5}(y)) = 0$ is precisely the number of all $9$-tuples $x \in F^9$ that start with the entries $0, 0, 1$ and satisfy $\det(H_{4,4}(x)) = 0$. In other words, it is precisely the number of all $9$-tuples $x \in F^9$ satisfying $x_{[0,3)} = (0, 0, 1)$ and $\det(H_{4,4}(x)) = 0$. However, Corollary 1.7 (applied to $k = 3$ and $n = 4$ and $a = (0, 0, 1)$) shows that the latter number is $q^{2 \cdot 4 - 3} = q^5$. This is precisely what we wanted to show. Thus, Corollary 5.1 is proved.
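As a sanity check, the counting formulas above can be verified by brute force over tiny prime fields. The following Python sketch (not part of the original paper; the helper names are mine, and it assumes $p$ is prime so that modular inverses exist) enumerates all tuples over $F_p$ and confirms Corollary 1.7 (the number of $(2n+1)$-tuples $x$ with $x_{[0,k)} = a$ and $\det H_{n,n}(x) = 0$ is $q^{2n-k}$) and Corollary 5.1 (the number of $(u+v-1)$-tuples $y$ with $\det J_{u,v}(y) = 0$ is $q^{u+v-2}$):

```python
from itertools import product

def det_mod(M, p):
    """Determinant of a square matrix over F_p (p prime), via Gaussian elimination."""
    M = [row[:] for row in M]
    n = len(M)
    det = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p != 0), None)
        if piv is None:
            return 0  # a zero column below the diagonal => singular
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det  # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)  # modular inverse (Fermat, p prime)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for j in range(c, n):
                M[r][j] = (M[r][j] - f * M[c][j]) % p
    return det % p

def hankel(x, m, n):
    """H_{m,n}(x) = (x_{i+j}) for 0 <= i <= m, 0 <= j <= n."""
    return [[x[i + j] for j in range(n + 1)] for i in range(m + 1)]

def count_singular_hankel(n, p, a=()):
    """# of (2n+1)-tuples x over F_p with x_[0,k) = a and det H_{n,n}(x) = 0."""
    k = len(a)
    return sum(det_mod(hankel(a + t, n, n), p) == 0
               for t in product(range(p), repeat=2 * n + 1 - k))

def jacobi_trudi(y, u, v):
    """J_{u,v}(y) = (y_{u-i+j}) for 1 <= i, j <= v, with y_0 = 1 and y_k = 0 for k < 0."""
    ent = lambda k: 1 if k == 0 else (0 if k < 0 else y[k - 1])
    return [[ent(u - i + j) for j in range(1, v + 1)] for i in range(1, v + 1)]

def count_singular_jt(u, v, p):
    """# of (u+v-1)-tuples y over F_p with det J_{u,v}(y) = 0."""
    return sum(det_mod(jacobi_trudi(y, u, v), p) == 0
               for y in product(range(p), repeat=u + v - 1))

# Corollary 1.7 predicts q^(2n-k); Corollary 5.1 predicts q^(u+v-2).
print(count_singular_hankel(1, 3))          # 9 = 3^2
print(count_singular_hankel(1, 3, a=(0,)))  # 3 = 3^1
print(count_singular_jt(1, 3, 2))           # 4 = 2^2
print(count_singular_jt(2, 2, 3))           # 9 = 3^2
```

For instance, for $n = 1$ and $q = 3$ the Hankel condition is just $x_0 x_2 - x_1^2 \equiv 0 \pmod 3$, and the enumeration indeed finds $9 = 3^2$ solutions.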
…([LidNie97, §8.6]), coprime polynomials ([GaGhRa11]), determinants ([Muir60, Section XII.II]), orthogonal polynomials and continued fractions ([Kratte99, §2.7]), total positivity ([Khare21]), and various applications such as x-ray imaging ([Natter01, §V.5]).

Proposition 3.2. Let $S$ be a finite set. Let $A(s)$ be a logical statement for each $s \in S$. Then, $\sum_{s \in S} [A(s)]$ equals the number of elements $s \in S$ satisfying $A(s)$.

Lemma 3.4. Let $k, m, n \in \mathbb{N}$ satisfy $k \leq m$. Let $v \in F^{1 \times (m+1)}$ be a row vector of size $m + 1$ such that $\operatorname{last} v \neq 0$. Fix any $k$-tuple $a \in F^k$. Then,
$$\sum_{\substack{x \in F^{m+n+1};\\ x_{[0,k)} = a}} [v H_{m,n}(x) = 0] = q^{m-k}.$$
Proof of Lemma 3.4. Proposition 3.2 shows that $\sum_{x \in F^{m+n+1};\; x_{[0,k)} = a}$

Proof of Theorem 1.5. Let $s = m + n - r$. Then, $r + s = m + n$. Also, $r \leq s$ (since $s - r = m + n - 2r \geq m + n - m - n = 0$, because $r \leq m$ and $r \leq n$), so that $r \leq s \leq s + 1$ and thus $k \leq r \leq s + 1$. Lemma 2.6 (applied to $u = m + n$) yields the logical equivalence
$$\left( \operatorname{rank}(H_{m,n}(x)) \leq r \right) \Longleftrightarrow \left( \operatorname{rank}(H_{r,s}(x)) \leq r \right)$$

Footnotes:
2. Some of these references are studying Toeplitz matrices instead of Hankel matrices. However, this is equivalent, since a Toeplitz matrix is just a Hankel matrix turned upside down (i.e., the result of reversing the order of the rows in a Hankel matrix). Note that [GaGhRa11, Theorem 5.1] works with Toeplitz matrices instead of Hankel matrices, but this makes no real difference, since a Toeplitz matrix is just a Hankel matrix turned upside down (and this operation clearly does not change the rank of the matrix).
3. See Section 5 for concrete examples of such matrices.
4. Specifically, "first few" means "at most m".
9. The symbol "#" means "number".
10. The symbol "#" means "number".
11. We are using the terminology of [ACGKLP18] here.

References

[ACGKLP18] Ben Anzis, Shuli Chen, Yibo Gao, Jesse Kim, Zhaoqi Li, Rebecca Patrias, Jacobi-Trudi Determinants over Finite Fields, Ann. Comb. 22 (2018), pp. 447-489. doi:10.1007/s00026-018-0399-8.

[Daykin60] David E. Daykin, Distribution of Bordered Persymmetric Matrices in a Finite Field, Journal für die reine und angewandte Mathematik 203 (1960), pp. 47-54.

[Elkies02] Noam D. Elkies, On finite sequences satisfying linear recursions, New York Journal of Mathematics 8 (2002), pp. 85-97.

[GaGhRa11] Mario Garcia Armas, Sudhir R. Ghorpade, Samrith Ram, Relatively Prime Polynomials and Nonsingular Hankel Matrices over Finite Fields, Journal of Combinatorial Theory, Series A 118, No. 3 (2011), pp. 819-828. doi:10.1016/j.jcta.2010.11.005.

[Iohvid82] Iosif S. Iohvidov, Hankel and Toeplitz matrices and forms: algebraic theory, Birkhäuser, 1982.

[KalLob96] E. Kaltofen, A. Lobo, On rank properties of Toeplitz matrices over finite fields, ISSAC '96: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, 1996, pp. 241-249. doi:10.1145/236869.237081.

[Khare21] Apoorva Khare, Matrix Analysis and Preservers of (Total) Positivity, lecture notes, 27 August 2021. http://www.math.iisc.ac.in/~khare/teaching/Math341-notes.pdf

[Kratte99] Christian Krattenthaler, Advanced Determinant Calculus, Séminaire Lotharingien Combin. 42 (1999) (The Andrews Festschrift), paper B42q, 67 pp., arXiv:math/9902004v3.

[LidNie97] Rudolf Lidl, Harald Niederreiter, Finite fields, Encyclopedia of Mathematics and its Applications 20, 2nd edition, Cambridge University Press, 1997. doi:10.1017/CBO9780511525926.

[Muir60] Thomas Muir, A treatise on the theory of determinants, revised and enlarged by William H. Metzler, Dover, 1960.

[Natter01] Frank Natterer, The Mathematics of Computerized Tomography, SIAM, 2001.

[Stan01] Richard P. Stanley, Enumerative Combinatorics, volume 2, First edition, Cambridge University Press, 2001. See http://math.mit.edu/~rstan/ec/ for errata.
Who Wrote this? How Smart Replies Impact Language and Agency in the Workplace
Kilian Wenker ([email protected]), Faculty of Business, Economics, and Law, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
Revised copy (several minor corrections, revised subsections 2.2-2.4)

Highlights
• The loss of agency theory states that the use of AI leads to a loss of human agency
• Smart replies, a text-based form of artificial intelligence, have users respond to stimuli
• Those responses substantiate a transfer between human and machine agency through priming and time pressure
• Smart replies influence the content we author and the behavior we generate
• Human agency transfers out faster than previously thought, but can be enhanced at the same time

Abstract: AI-mediated communication is designed to help us do our work more quickly and efficiently. But does it come at a cost? This study uses smart replies to show how AI influences humans without any intent on the part of the developer-the very use of AI is sufficient. I propose a loss of agency theory as a viable approach for studying the impact of AI on human agency. This theory focusses on the transfer of agency that is forced by circumstances (such as time pressure), human weaknesses (such as complacency), and conceptual priming. Mixed methods involving a crowdsourced experiment test that theory. The quantitative results reveal that machine agency affects the content we author and the behavior we generate. But it is a non-zero-sum game. The transfers between human and machine agency are fluid; they complement, replace, and reinforce each other at the same time.
DOI: 10.1016/j.teler.2023.100062
arXiv:2210.06470
This version 2023-02-17, first version 2022-10-07. Attention: this paper has not yet been peer reviewed.

Keywords: Human Agency; Machine Agency; Human-AI Interaction; Priming; Time Pressure; Computer-Mediated Communication (CMC); Social Heuristics; Smart Replies; Linguistic Alignment; Sentiment

Introduction

Smart replies (SRs) are short responses suggested by artificial intelligence (AI) software that allow you to quickly reply to incoming messages. They afford mainly saving time and minimizing interruptions, which makes them particularly valuable in business contexts. SRs are designed to help us do our work more quickly and efficiently. But do they come at a cost?

Human agency in this article is a choice and decision-making capacity, including the corresponding ability to act, i.e., both power of decision and power of action (there is a lack of consensus on the definition of the term, see [1]). A recent synthesis of key research themes and trends in AI-mediated communication (AI-MC) states that users retain "substantial agency: they choose which suggested message to use or to ignore, and may also modify the message" [2] (p. 92). However, suggested texts can affect us in various ways without us being aware of it. For instance, lack of time in day-to-day business, a tendency to rely too heavily on automation, or complacency can guide us. When such circumstances prevent the user from thinking at length about the best possible response, they de facto limit the user's ability to retain agency. Since this transfer of agency is forced by circumstances, loss of agency would be a more appropriate term. Loss of agency, even if compelled by circumstances, can still be deliberate. But there may also be subliminal stimuli that blur this boundary of deliberate choice.
Priming occurs when previous SR suggestions determine responses because the SR user would have responded differently if they had not received those suggestions in the first place [3-5]. In other words, that allegedly substantial degree of agency may be much smaller than we presume. This has huge implications. AI assistance could be manipulative without any intention behind it. As written communication in the form of brief texts has become a ubiquitous means of communication, this argument becomes even more compelling. AI might not only be disruptive as a technology, but also for the meta discourse on communication theories.

Although researchers have drawn attention to SRs (e.g., [2,5-10]), empirical studies are scarce and limited in scope. Moreover, they do not directly investigate manipulative effects through priming or time constraints. Because AI is not a single, discernible thing, a brief review and precise definition of SRs is warranted. This leads to research question RQ1: What defines SRs? Next, I seek to explore the framework for biases in SRs, and ask RQ2: How do SRs impact language? Last, I focus on the construct of agency and raise RQ3: To what extent do SR users transfer agency?

Table 1 outlines the research questions and the contributions of this article.

Table 1. Research questions and contributions.
- RQ1: What defines SRs? A brief review and a precise definition of SRs.
- RQ2: How do SRs impact language? I give an account of the linguistic constraints imposed by allowlists, blocklists, filters, canned responses, and various biases in the training data as well as in response generation (e.g., shorter, simpler wording or the avoidance of emotion). Those agency constraints at the linguistic level form the basis for action stimuli.
- RQ3: To what extent do SR users transfer agency? I propose a theory of loss of agency inspired by the relevant literature on computer-mediated communication (CMC), AI-MC, and SRs. Toward creating an empirical foundation for the theory, I combine a crowdsourced experiment and interviews.
Evidence suggests that the use of SRs can lead to a loss of agency in the digital workplace (DWP) through response priming and time pressure.

Section 2 introduces related work, proposes a theory on loss of agency and forms corresponding hypotheses. Section 3 presents the experiment design, its operationalization, and its sample. Section 4 addresses the empirical findings. Section 5 concludes the work.

Background

Smart replies and the digital workplace

CMC refers to interpersonal communication transmitted over digital devices. When CMC becomes agentic through AI, e.g., through the feature of generating content, as is the case with SRs, CMC becomes AI-MC. Figure 1 depicts a characterization of SRs within CMC. The rounded boxes within the SRs frame indicate the NLP methods that SRs use.

Figure 1. Smart replies within CMC.

The term smart reply is ambiguous: On the one hand, there is Google's "Smart Reply" functionality, a widely discussed implementation of SRs. On the other hand, SRs are an academic term describing a type of AI-mediated communication (see the use of the term e.g., in [2,6,11-14]). In this article, Google's Smart Reply is capitalized to distinguish it from the general term. Table 2 provides a summary of nine sources with descriptions of the various aspects of SRs.

Table 2. Aspects of SRs in the literature (aspect, sources, comment).
- [2,5,9,15,16]: Genus proximum et differentia specifica is a recognized definitional approach. However, differentia specifica needs to be elaborated.
- Human-human text conversation with AI support: [2,5,17]; [6] mention this in their introduction; [15] characterize this as "positioned somewhere between human-human and human-smart agent communication" (p. 16). While this is true, it is redundant if one assumes that AI-MC is by definition interpersonal communication. Typically, AI-MC is implicitly defined as interpersonal communication by defining it as a subset of CMC.
- Text-based: [5,6]. Unclear terms like various messaging apps [15], examples like chat, text, and email [9] or Gmail, Outlook, Skype, Facebook Messenger, Microsoft Teams, and Uber [10] do not explicitly indicate "text-based" media.
- Suggestions: [5-7,9,15,17]. Most authors agree that SRs are presented to the user in the form of suggestions. Ying et al. [10] resort to the more general term "assist", which could also encompass other AI-MC mechanisms.
- Reply suggestions: [6,7,9,10,15,17]. SRs are always phrased in reaction to an incoming message. This is exclusively about pre-written answer texts, not autocorrection or formulation assistance.
- Short suggestions: [7,9,10,15]. SRs do not come in the form of long texts. They usually range from a single word to short sentences.
- Complete and plausible suggestions: [5,7,10]; [7] stress and explain "plausible". These are obvious but crucial features of SRs. "Plausible" is the only attribute that refers to the content of SRs.
- Save time: quick [6,10,15]; one tap [7,9]; reduce keystrokes [9]. Most authors explicitly state that SRs save users' time. There is a risk of conflation between purpose and effect. Regardless, this is a key component of the concept and therefore needs to be included in the definition.
- Underlying technology: AI according to [2,5-7,9,11-13]. The authors agree that AI is the underlying technology. Since AI is included in the definition of AI-MC, I will not explicitly mention the underlying technology to avoid redundancy.
- Mobile device(s): Mobile (phone) according to [7]; [9] mention the role of the interface (e.g., mobile, desktop, tablet); [6] list desktops, tablets, and mobile devices. Restricting SRs to mobile devices does not make sense, because the functionality and use do not depend on the device, albeit SRs are mostly seen on mobile devices. If SRs are used on a desktop, they are still SRs. Following the principle of parsimony, I will not include this item in the definition.
Based on these extant definitions and descriptions, I propose a genus-differentia definition of SRs as a form of AI-MC that offers short, complete and plausible reply suggestions in a text-based communication, allowing the user to save time. This definition emphasizes several points. First, SRs are not an algorithm or an underlying technology, but a concept. Second, SRs are an offer and their users must react to AI stimuli. SRs initiate human action. That makes them popular in burgeoning AI-MC research. Third, SRs predict plausible responses, requiring the system to have some kind of interpretive ability and foresight. Later on, we will see that SRs are a prime example of machine agency and that the latter can enhance human agency.

SRs are a general-purpose tool for a wide range of use cases. A notable exception is the use of SRs in One-Click-Chat, the in-app chat of the mobility service provider Uber. Here, the operational context is specific and the list of possible SRs is domain specific. A typical situation during pickup would be a driver receiving an incoming rider message asking: "Where are you right now?" [14] (p. 2597).

The digital workplace

While the workplace has traditionally been understood as the physical place where work was performed, a distinctive feature of the DWP is precisely that it is possible to work in a wide variety of locations at any time, as digitization enables mobile workplaces. Throughout this article, the term DWP will refer to an integrated technology platform that provides all the tools and services that enable employees to do their jobs effectively, both alone and with others, regardless of location [18]. Thus, smartphones can be part of the workplace or even represent the entire DWP.

Note that this definition is truncated. The original definition includes that the DWP "is strategically coordinated and managed through DWP designs that are agile and capable of meeting future organizational needs and technologies" [18] (p. 480).
I suppress this part because it is partly circular (DWP is something managed through DWP designs) and partly not conclusively ascertainable (future-proof might or might not exclude certain DWP technologies because they will one day be superseded by a new technology).

The branched arrow in Figure 2 containing two converging paths shows how the user deploys the AI-MC to send a SR to a receiver. The lower path shows how the user does not use the SR suggestions and sends a message directly, without using SR technology. The upper path shows the route the message takes when a SR suggestion is tapped (or clicked) and the wording is generated by the AI. Although this decision is assumed to be deliberate, controlled, and signed off, in this case the recipient is effectively talking to an AI impersonating the user.

Impact on language

We live in a biased world. "Applying machine learning to ordinary human language results in human-like semantic biases" [19] (p. 183). Yet AI-MC may be purposely coded to encourage users to modify communications that bear the mark of prejudice. SRs may avoid some biases, e.g., gender bias by blocklisting gender-specific pronouns. But new biases may be created as well, e.g., shorter and simpler sentences. For instance, Google researchers found that common responses with a high prior likelihood appeared too often and adapted the scoring system accordingly [12]. Table 3 shows that difference using sample emails. There are at least two biases in the left column. The wording is more generic, consistent with recent findings that people tend to choose shorter, simpler formulations when writing with suggestions [12,20]. And one negative answer is now offered, see e.g., the rules "no more than two suggestions should be from the same semantic group" [13] (para. 9), "greater semantic variability and intrinsic diversity" [11] (p. 1), or "semantic intent clustering" [7] (p. 959).
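Selection rules of this kind can be sketched in a few lines. The following Python fragment is purely illustrative (it is not any vendor's actual implementation; the candidate texts, scores, cluster labels, and sentiment tags are invented): it re-ranks scored reply candidates so that at most two come from the same semantic group, and forces a negative option when all picked suggestions are positive, mirroring the rules quoted above.

```python
def pick_replies(candidates, k=3, max_per_cluster=2):
    """candidates: list of (text, score, cluster, sentiment) tuples,
    sentiment in {"positive", "negative", "neutral"}."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    chosen, per_cluster = [], {}
    for cand in ranked:
        text, score, cluster, sentiment = cand
        if per_cluster.get(cluster, 0) >= max_per_cluster:
            continue  # "no more than two suggestions from the same semantic group"
        chosen.append(cand)
        per_cluster[cluster] = per_cluster.get(cluster, 0) + 1
        if len(chosen) == k:
            break
    # Force a negative option if some pick is positive but none is negative.
    if any(c[3] == "positive" for c in chosen) and not any(c[3] == "negative" for c in chosen):
        neg = next((c for c in ranked if c[3] == "negative" and c not in chosen), None)
        if neg is not None and chosen:
            chosen[-1] = neg  # replace the weakest pick with the best negative one
    return [c[0] for c in chosen]

suggestions = pick_replies([
    ("Sounds good!",         0.93, "agree",   "positive"),
    ("Yes, works for me.",   0.90, "agree",   "positive"),
    ("Sure thing!",          0.88, "agree",   "positive"),
    ("Thanks!",              0.85, "thanks",  "positive"),
    ("No, that won't work.", 0.40, "decline", "negative"),
])
print(suggestions)  # ['Sounds good!', 'Yes, works for me.', "No, that won't work."]
```

Note how the third "agree" candidate is dropped by the cluster cap, and the otherwise low-scoring negative reply is promoted into the final list.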
The longer the reply is, the more important it is to the user that the reply is phrased in their own tone (Gawley as cited in [21]). At six words it is still acceptable to sound generic, but by ten words it becomes awkward [21]. Regardless of the word count, users are reluctant to let AI talk about emotional or otherwise intimate matters (Gawley as cited in [21]). Therefore, Smart Reply has filters on emotional statements, because "the early iterations of Smart Reply were overly affectionate. 'I love you' was the machine's most common suggested response. This was a touch awkward: because the model has no knowledge of the relationship between an email's sender and its receiver, it provides the same suggested responses whether you are corresponding with your boss or a long-lost sibling" [21] (para. 6). Evidence suggests that AI-generated responses change the expression of emotion in human conversations, raising concerns that we may lose our personal communication style as language becomes more homogeneous over time [6]. Prior research implies positivity biases in SRs [5,17]. However, experimental control over which suggestions were presented does not necessarily prove bias. It is also conceivable that after training the models, users' preferences for short, positive, and polite responses are accurately represented and reflect implicit values. "Languages, English included, appear to be inherently positive" [9] (p. 3). The team that developed the Smart Reply functionality argues that a positive attitude reflects the style of email conversations: Positive replies may be more common, and when negative responses are warranted, users may prefer less direct wording [7]. Their model forces negative suggestions when at least one positive suggestion is included and none of the top three replies are negative [7]. The issue is that we do not (yet) know where to set the typical level of positive language in a given situation. SRs could lead to an ultimately limited vocabulary. 
For example, Google's Smart Reply response options are limited to "a relatively small human-curated whitelist with limited context-dependency" [22] (p. 2288). Personalization of SRs could help remedy this in the future. However, this will be a long journey. For instance, when users of a predictive writing app emailed that they would like to meet with an investor the following week, the experimental models suggested meeting with him, but when the user entered the same wording about a nurse, the experimental models suggested meeting with her [22]. To limit the impact, developers removed all suggestions with a gender pronoun [22], which is, while great against gender bias, an obvious vocabulary restriction. Another example is LinkedIn, an employment-oriented social network, adding controls to ensure that profanity is not suggested in their SRs for member messages [13]. Overall, language is modified by algorithms that pay attention to word count (of reply suggestions), politeness (e.g., filters against profanity), gender sensitivity (e.g., no gender pronouns), response selection (e.g., forcing a variety of semantic clusters; tending to suppress emotionality), simplicity (more generic wordings; canned responses), probably positivity, and, most importantly, biased training data and often predefined response sets. The modified language of SRs has implications for human language production. We think in words. The interactive alignment model [23] argues that interlocutors become more linguistically aligned over the course of a conversation. Some researchers already pondered the limiting effects on creativity in writing [6,20,24]. This reasoning entails that by using SRs, our human-produced language would gradually adapt to the linguistic features of SRs, i.e., we would unconsciously adjust our terminology, maybe formulate shorter sentences, perhaps lose certain linguistic nuances, etc.
In addition to the general impact on language, could there be impacts on individual choices? A study addressing biases in predictive typing on mobile devices debated whether system biases might influence what people create and argued that the sentiment could be affected because "it is easy to enter via accepting the recommendation verbatim" [3] (p. 34). It found evidence that people were primed by the system's recommendations [3]. A laboratory experiment concluded that compound human-AI messages contained "marginally more positive emotion words" (p. 7) compared to the control condition without SRs [5]. However, writers often compensate for overly positive responses by reducing positive sentiment [9]. In sum, prior work shows that biases in training data lead to SR biases, but it is not yet clear if biased SRs in turn lead to biased human behavior.

CMC theories and machine agency

Early CMC theories pounced on the missing paralinguistic cues of CMC (e.g., media richness theory, social presence theory). In sum, those theories argue that CMC is inferior to face-to-face communication due to the loss of nonverbal cues. This applies to SRs: text-based communication inherently lacks paralinguistic cues. Anthropomorphic theories such as the CASA paradigm (computers are social actors) describe the tendency of people to subconsciously attribute human characteristics, feelings, or intentions to non-human entities [25][26][27][28]. While early CMC theories focus on the limitations of CMC, anthropomorphic theories expose the shortcomings of the human mind. In particular, interactions involving verbal prompts automatically elicit social responses and, in sum, "whenever there is discourse, people assume there is an underlying subject who speaks" [4] (p. 27). Anthropomorphism may occur, e.g., when a SR user talks to their smartphone and berates it for not making enough suggestions.
Later theories acknowledge the merits and affordances of CMC and stress that CMC constraints do not necessarily impair communication. Media synchronicity theory states that what matters is not the richness of a medium but its synchronicity, i.e., the extent to which people work on the same task simultaneously [29], implying that the benefit of SRs might be to keep the dialogue going with short response times (immediacy of feedback). Compensatory adaptation theory (CAT) takes this a step further, stating that interlocutors can compensate or even overcompensate for the obstacles posed by CMC through procedural structuring [30]. For example, interlocutors could compensate for the text-based character of the communication by putting more cognitive effort into writing, or by being clearer and more concise. While the SR user has a limited choice of response options and therefore cannot perform procedural structuring, their AI could be coded to do so. There is a nascent fourth wave of CMC theories as AI increasingly removes agency from human users. This school of thought will form AI-MC theories and examine the advent of machine agency (sometimes designated as AI agency, digital agency, or agentic artifact). The machine agency paradigm expands the concept of agency to automation, assuming that AI decides and acts in a self-determined manner. Thus, agency is no longer exclusively a human capacity. Machine agency asserts goal-directed behavior and actions not directly controlled by a human [2,15,16,19,28,31,32]. Some denote this as a capability to act autonomously [31,33]. That self-determined character may be challenged by the fact that decisions are made by an AI simulation rather than a conscious mind: algorithms determine machine agency, so that the latter is more of a technically pre-programmed ability to decide and act.
Although agency is sometimes understood as the ability to think or reason, and the sense of agency sometimes refers to the sense of being in charge (i.e., that our own actions do not simply happen to us), these definitional approaches are not the focus of this study. The role of technology has progressed from functional tools to agentic co-players, and this study focusses on how this progress affects human agency. We do not debate whether a non-conscious entity can have agency at all, but rather examine whether humans confer agency on an agentic artifact in an uncontrolled way. Relying on a tool does not necessarily mean that you surrender your agency to the tool. For example, relying on a simple tool like an alarm clock does not mean that I surrender my agency to the alarm clock. But if I use a sophisticated alarm clock that selects the wake-up time according to a complicated algorithm to achieve optimal sleep, then I transfer the power of decision (and thereby agency) to the alarm clock in a controlled way. In contrast to the normal alarm clock scenario, I do not know the exact outcome here, yet I trust that the machine agent will make a good enough choice. However, we are interested in whether and how relying on a tool can lead to an uncontrolled transfer of human agency.

Loss of agency theory and research hypotheses

I propose a theory on the loss of human agency caused by the use of SRs or, in a generic way, by the use of AI. This theory focusses on the transfer of agency that is forced by circumstances (such as time pressure), human weaknesses (such as complacency), and psychological priming. The theory illustrates that the use of SRs cannot be separated from the contextual contingencies in which SRs are used. The term "loss" is intended to reflect that the transfer of agency is not controlled by SR users. Machine agency is inherent in SRs because the algorithm makes suggestions on its own.
That does not yet mean that agency is transferred from the human user to the machine agent. Alas, the interactions between human agency, machine agency, and other constructs are too complex, too convoluted, too unexplored to be fully addressed in this study. The basic assumption here is that human agency and machine agency are not exogenous variables imposed on the model, but are interrelated in ways that augment and replace each other. Agentic artifacts shape their users. One of those other constructs mentioned above is priming, which draws on a stimulus-response scheme. In conceptual priming, the meaning of the input stimulus, i.e., the meaning of the prime (e.g., a word, a sentence, a SR) impacts the target (e.g., the response message). For example, if an employee is asked whether they would attend a meeting and they receive overly positive SR suggestions, they may subconsciously consider that to be the most appropriate response. When constructs such as priming limit an individual's ability to take decisions based on careful consideration, they result in a partial, involuntary give-off of decision-making power, in our case to a machine agency. Such a loss of power of decision can hardly be controlled by the user of SRs, since priming is a nonconscious process. Recall that power of decision is by definition one of the two essential dimensions of agency. Further elements are not necessarily unconscious factors, but often factors of which we are not actively aware at the moment. Overtrust in AI (e.g., in the form of complacency) or situational constraints (e.g., time pressure) may prevent the SR user from thinking at length about the best possible action and thus result in another loss of agency. Complacency refers to insufficient vigilance or vetting of AI: a SR user may not care about inaccurate answers suggested to them by SR technology because they are focusing their attention on other issues in a multitask environment.
Automation bias (AB) refers to the use of decision support systems as a shortcut tool, often in conjunction with a favorable attitude toward automated decision-making, ignoring conflicting information. Both terms describe the tendency to shed responsibility for communication, either as an attention allocation strategy or as a tendency to over-rely on automation (see [34] for a review). A laboratory experiment [35] found that AB "is the result of using automation as a heuristic replacement for more vigilant and complete information search and processing" (p. 713), i.e., the issue might be less a fundamental belief in the relative authority of automated aids, but rather that "automation is used as a decision-making short cut that prematurely shuts down situation assessment" (p. 714). Situational constraints such as lack of time in day-to-day business, a high mental workload, uncertainty etc. often force shortcuts [36][37][38]. When time is perceived to be short, cognitive processing either speeds up [39] or becomes less likely and individuals tend to rely on heuristics [36], owing to an inherent contradiction between response speed and decision accuracy [37]. Taken together, I hypothesize:

• H1: The SR options will include more positive sentiment than negative or neutral sentiment.
• H2: The use of SRs will lead to a loss of human agency through priming.
• H3: When SRs are used, situational constraints such as lack of time will lead to a loss of agency.

Note that H1 is hypothesis H1a from [5]. In broader terms, the very use of AI leads to a loss of agency. The model in Figure 3 shows the relationships between the various constructs on a theoretical plane that led to hypotheses H2 and H3 at the empirical level of research. The point in Figure 3 where three arrows converge does not represent a local point or moment, but simply shows the effect relationships of moderating variables on the dependent variable.
Situational constraints, complacency, and automation bias are moderating variables. Priming is a mediating variable. I do not form hypotheses regarding complacency and AB because I could not test them in the experiment (Sections 3 and 4). I was unable to do this because the works councils of the two companies involved did not agree to questions that might relate to personal work performance.

Experiment

Experiment design

Employees of two large companies (manufacturing and software) in Germany were asked online to respond to twelve emails, using SRs, or alternatively to formulate their own answers. Survey data in the companies had to be collected via internal company platforms due to internal data protection guidelines. The time period of the study was July 2022. The head of sales of the manufacturing company sent me the content of his last 20 emails, completely anonymized. I shortened these emails so that survey respondents would not abandon the survey due to time constraints. Then I sent these emails to a newly created email account and noted the SR suggestions: SRs were offered for 14 of the emails (see e.g., [7] for an explanation of filtering out messages that are not eligible for SRs). Twelve of these emails were not duplicates in terms of content. The survey containing these twelve emails and their matching SRs is hereafter referred to as AI-SRs. For this study, two groups of SR suggestions are needed to compare their effects. In a second step, I asked five employees of the software company to formulate three short, typical responses to the twelve email texts from their perspective. I then grouped the responses semantically and selected the three responses with the most occurrences to obtain three human-written SRs per email, henceforth referred to as H-SRs. They served as a control group, although two suggested responses in email 5 written by humans were identical to those generated by the AI.
Another design would have been to pre-sort the AI-generated SRs according to sentiment and then split them into two groups. However, AI suggestions were mostly positive, with sentiment scores ranging from 0.5 to 0.9. A third option would have been to make no suggestions at all to the control group (see [5]). However, this would have left no possibility to measure the sentiment of the SR suggestions in the control group (no suggestions means no measure of the offered sentiments). In sum, participants were randomly offered one of three surveys:

• H-SRs: Human-written SR suggestions for the twelve emails.
• AI-SRs: AI-generated SR suggestions for the twelve emails.
• AI-SRs+TC: AI-SRs, but with a time constraint of a few seconds to respond to each of the twelve emails.

The AI-SRs+TC setting was probably closest to real life in the workplace and was used to simulate the effects of the independent variable of situational constraints. Response options were always reshuffled to avoid bias due to the order of the proposed SRs. Participants were instructed to read the emails carefully and respond according to their own assessment, in case of doubt or uncertainty with answers that should be neutral to moderately agreeing. All three groups had the option to compose their own response text for each email. An open-ended question was subsequently used to inquire about the participants' experiences and assessment regarding SRs. The survey concluded with a brief demographic section (gender and age group).

Measures

The construct of agency has to be measured indirectly, since we cannot look into each other's minds. What we can measure is a skewing of decisions, which implies an impact on the SR author's agency (as power of decision). More specifically, I drew on the sentiment of written responses, whether entered through SR selection or manual typing, for two reasons.
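The random assignment to the three surveys and the reshuffling of response options can be sketched in a few lines (an illustrative reconstruction, not the survey platform's actual code; the function name, sample replies, and seed are mine):

```python
import random

SURVEYS = ["H-SRs", "AI-SRs", "AI-SRs+TC"]

def assign_and_present(suggestions, rng):
    """Randomly pick one of the three survey arms and reshuffle the
    suggested replies so that their order cannot bias selection."""
    arm = rng.choice(SURVEYS)
    order = list(suggestions)
    rng.shuffle(order)
    return arm, order

rng = random.Random(7)  # seeded only to make the sketch reproducible
arm, order = assign_and_present(["Yes, sure.", "No, sorry.", "I'll check."], rng)
```

Reshuffling per participant, rather than fixing one order, is what rules out position effects in which suggestion gets tapped.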
First, meaning is at the core of conceptual priming, and sentiment is an important part of the intention or meaning of a business message or SR. Second, sentiment may be a better measure of agency than a subjective "sense of agency" in the context of priming. The sense of human agency may be very high when human agency is actually not high; recall that priming is a nonconscious process. I used the sentiment analysis feature of the Google Cloud NLP API, which is available for German. This use of Google's tool to analyze sentiment in order to gauge priming is identical to the approach used by [3] for the same aim. The entire response, i.e., mostly a short sentence, was used to evaluate sentiment. Individual words are less appropriate here [3]. Other SR researchers used Mechanical Turk workers [15,17], two research assistants and the dictionary-based classifications of VADER and LIWC [5], or VADER combined with word shifts [9]. The independent variable of situational constraints was simulated by tight time constraints, i.e., a few seconds of process time. Its effects can be measured by the gap in sentiment of given replies between the groups AI-SRs+TC and AI-SRs.

Sample

Participants were recruited via emails sent by the head of human resources at a software company and the sales director at a manufacturing company. Participants received no compensation. The sample consisted of 346 participants. All participants wrote German. Of the participants, 57.2% (n=198) identified themselves as women and 42.8% (n=148) identified themselves as men; none declined to disclose their gender. 54.6% of participants worked in the manufacturing company, 45.4% in the software company. One participant was excluded from analysis due to technical problems. The age distribution was collected in cohorts 18-24, 25-34, 35-44, 45-54, 55-64, and 65 years and older. Each cohort had a 20-24% share, except for the 55-64 cohort, which had an 11% share; no participants were in the 65 and older cohort.
To determine whether nonresponse bias was present, I compared early responders with late responders. No significant differences were found. However, one respondent phrased their own response for each email and indicated in the open-ended section of the questionnaire that they considered SRs fundamentally inappropriate in the corporate context. All participants were debriefed about the purpose of the experiment after completion.

Results and discussion

Results

Quantitative results

Consistent with prior work [5,17] and with H1, sentiment analysis found 86% of AI-SR options included positive sentiment (and 14% negative sentiment). There was a positivity bias in H-SRs as well, but to a lesser extent (78% included positive sentiment, 5% neutral sentiment, and 17% negative sentiment). Figure 4 shows the average sentiment scores for the surveys H-SRs, AI-SRs and AI-SRs+TC. Google Cloud NLP calculates sentiment analysis values in a range from -1.0 for maximally negative to +1.0 for maximally positive. The starting points of the arrows indicate the average sentiment scores of the suggested SRs. The end points of the arrows indicate the average sentiment scores of the given answers. Without priming by the SR suggestions, the values on the right side of Figure 4 should be at the same level. Instead, the more positive SR suggestions in AI-SRs (score 0.55), with otherwise unchanged conditions compared to the suggested H-SRs (score 0.36), lead to higher sentiment scores of the given responses (0.58 in AI-SRs and 0.64 in AI-SRs+TC compared to 0.47 in H-SRs). Figure 4 shows that the offered sentiment successfully created a difference in sentiment valence between the responses given by the groups H-SRs and AI-SRs. Writers given more positive recommendations accepted or included more positive content.
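The study scored sentiment with the Google Cloud NLP API, which returns values from -1.0 (maximally negative) to +1.0 (maximally positive). As a self-contained stand-in for readers who want to experiment, here is a toy lexicon scorer that produces scores in the same range; the word lists and scoring rule are inventions for illustration only, whereas the real API uses a learned model:

```python
# Tiny illustrative lexicons; a real sentiment model is far richer.
POS = {"thanks", "great", "welcome", "happy", "good", "glad"}
NEG = {"no", "not", "unfortunately", "problem", "sorry"}

def sentiment(reply: str) -> float:
    """Return a score in [-1.0, 1.0], mirroring the Google Cloud NLP range:
    (positive hits - negative hits) / word count, clamped to the range."""
    words = reply.lower().replace("!", "").replace(".", "").replace(",", "").split()
    if not words:
        return 0.0
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, score / len(words)))
```

With this scorer, "Thanks, great!" scores 1.0, "No, sorry." scores -1.0, and a neutral reply like "I will check the document." scores 0.0, giving a feel for how per-reply scores aggregate into the group averages reported above.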
Tukey's all-pairs HSD test confirmed that the difference in sentiment means between all three groups was significant at the p=0.01 level (3 groups; df = 342; critical level 4.12): qtukey = 18.89 for H-SRs and AI-SRs+TC, qtukey = 13.05 for H-SRs and AI-SRs, and finally the lowest but still significant, qtukey = 5.84 for AI-SRs and AI-SRs+TC.

Figure 4. Sentiment scores of suggested SRs and all given responses

Next, I zoomed in on the acceptance rates for each email (Table 4 and Figure 5). The fact that H-SRs have the highest acceptance rate on average can be explained by the better quality of human-written SRs. However, this does not explain why the acceptance rate does not increase under time pressure, but decreases (see first row). A detailed breakdown of acceptance rates, i.e., email by email, revealed that emails 7 and especially 6 and 8 separated the wheat from the chaff. Email 6 asked for confirmation of arrival including time, and here participants were less likely to accept the generically worded SR suggestions and preferred responding with a specific time. Email 7 is a more complex email that indirectly asks a second question. Email 8 directly asks two questions in one body of text. It is noteworthy that participants in the AI-SRs+TC group skipped the suggested SRs 46.0% of the time with email 8. Figure 5 depicts the acceptance rates of the two groups AI-SRs and AI-SRs+TC. These two groups are best suited to gauge the effect of time pressure: priming by SR suggestions should not differ here because the stimuli are exactly the same, and the only difference is no time pressure versus time pressure. The vertical axis indicates the average acceptance rate for the SRs offered. The horizontal axis is an ordinal scale of the twelve emails, sorted by average rate of the two groups in declining order.
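The reported Tukey comparisons rest on the studentized range statistic, q = |mean_i - mean_j| / sqrt(MS_within / n) for equal-sized groups, compared against a critical value of the studentized range distribution (4.12 for 3 groups, df = 342, at the 0.01 level per the text). A pure-Python sketch with invented toy data (the group values below are not the study's data):

```python
import math
from statistics import mean

def tukey_q(group_a, group_b, all_groups):
    """Studentized range statistic q = |mean_a - mean_b| / sqrt(MS_within / n)
    for equal-sized groups. Significance requires q to exceed the critical
    value of the studentized range distribution for the given number of
    groups, degrees of freedom, and alpha."""
    n = len(group_a)
    df = sum(len(g) for g in all_groups) - len(all_groups)
    ss_within = sum((x - mean(g)) ** 2 for g in all_groups for x in g)
    ms_within = ss_within / df
    return abs(mean(group_a) - mean(group_b)) / math.sqrt(ms_within / n)

# Toy data, two equal-sized groups of invented sentiment scores:
a = [0.40, 0.50, 0.45, 0.55]
b = [0.60, 0.70, 0.65, 0.75]
q = tukey_q(a, b, [a, b])  # roughly 6.2 for this toy data
```

Note that the critical value depends on the degrees of freedom: 4.12 applies to the study's df = 342, not to this four-observation toy example.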
Figure 5. Breakdown of acceptance rates by email

When the quality of the suggested SR was satisfactory, the SR suggestions were accepted under time pressure at roughly similar rates compared to the group without time pressure (AI-SRs). In one case, the AI-SRs+TC group has a 100% acceptance rate: email 3, a note of thanks for documents sent, which was most often returned with "You're welcome!" However, Figure 5 visualizes that the acceptance rate has a threshold above 80% that, when crossed downward, causes the acceptance rate in the AI-SRs+TC group to drop faster. If we assume that the acceptance rate is a quality parameter, then, below a certain threshold, the behavior changes under time pressure. SR users in a time crunch who find that the SR suggestions may not really fit their intended message do not spend time thinking about the SR suggestions, but quickly write the answer themselves. This is especially true for faster typists (see [20]). This is noteworthy because it indicates that some participants resisted time pressure and loss of agency. The AI-SRs+TC group manifested two effects. First, there was response priming compared to the AI-SRs group (although less significant than in the pair H-SRs versus AI-SRs). And second, the tolerance for semi-good responses decreased. The acceptance rate for H-SRs (not shown in Figure 5) is generally higher and falls off more slowly than for AI-SRs.

Participant experience

Most participants pointed to the time savings, describing their impression as a "quick response on the smartphone where you don't want to type for a long time" [P9]. Shorter response times were also frequently mentioned: "You're more likely to give immediate feedback because you have to spend a lot less time" [P328].
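Read off Table 4, the penalty that time pressure imposes can be quantified as the percentage-point drop in acceptance per email. This is plain arithmetic on the rates reported in Table 4; only the dictionary layout is mine:

```python
# Acceptance rates from Table 4 (percent), AI-SRs vs. AI-SRs+TC.
rates = {                 # email: (no time pressure, time pressure)
    "all emails": (87.4, 83.0),
    "email 6":    (66.6, 59.4),
    "email 7":    (90.9, 72.9),
    "email 8":    (69.6, 54.0),
}
# Percentage-point drop under time pressure, per email.
drops = {email: round(no_tc - tc, 1) for email, (no_tc, tc) in rates.items()}
```

The three emails singled out in the text (6, 7, and 8) all show larger drops than the 4.4-point overall average, consistent with the threshold behavior described above.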
Participants saw potential usefulness such as "assistance in wording responses" [P172], "no spelling errors in short answers" [P302], "positive impact on the customer (a simple "thank you" may be enough to increase sympathy)" [P65], "facilitation for non-native speakers" [P25], "faster and friendlier relationship management" [P124]. One participant appreciated that there were "no meaningless phrases" [P15], maybe due to the rule of diverse semantic clusters. While some participants noted that the SR suggestions tended to be generic or "standardized" [P335] or only useful for "trivial answers" [P60], others echoed the statements of CAT: "clearer and more targeted answers" [P204], "short and concrete information" [P15], or "faster answers, more politeness, fewer misunderstandings" [P162]. Some participants highlighted how SRs enhance human agency, e.g., text comprehension of the incoming email: "you know immediately what the other person means" [P279], or text composition of the outgoing email: "to process a mail immediately without having to switch to 'write mode' " [P324]. Participants also indicated why they did not use the suggested SRs: "more work, so no benefits. More like saving time typing" [P96], "quick answer, although the accuracy of the answer will probably be poor in many cases" [P301], to outright rejection: "time saving, you don't have to think about it, for lazy people. [...] I would nevertheless like to make it clear that I do not support this form of answering at all, it is completely impersonal, superficial and we are degenerating more and more into robots. At least a short, friendly individual reply will probably still bring everyone together …" [P35]. Participants also confirmed complacency and AB in their own behavior. 
Comments such as "you don't have to read the emails quite so carefully" [P267] illustrate complacency, "you don't have to think about answers yourself and quicker" [P92] illustrate AB as heuristic shortcut, "less thinking about the right answer because one of the given ones seems appropriate in terms of content and language" [P5] illustrate AB as confidence in AI, or "you don't have to think about the wording of the answer (whether it is too polite, too direct, etc.)" [P276] may illustrate both constructs. One comment summarized: "quicker response because you don't have to think about how to answer first" [P89]. Some participants deliberately counteracted the loss of agency by writing the wording of the suggested SR verbatim in the text box for self-formulated responses, sometimes by copying two SR proposals into one. This behavior was also observed in two other studies; see [P10] in [9], designated as "desire for greater control" (p. 12f.), and [24], designated as "chaining multiple suggestions" (p. 9). Despite all linguistic filters, one participant limited the range of application in the DWP context: "among colleagues certainly a good idea for quick coordination. Towards customers and other 'unknowns' I don't find it appropriate" [P144]. The qualitative interviews elaborate on this aspect.

Qualitative interviews

This study consisted of two parts, the crowdsourced experiment and the qualitative interviews, with only selected participants (N=8) participating in the latter. The qualitative interview results broadly mirror the quantitative results. There was a perceived trade-off between speed of response and accuracy of response (see [37] for the speed-accuracy trade-off). Most refused trade-offs between speed of response and correctness of response (i.e., accuracy is so low that the response becomes incorrect) in a DWP context, e.g., "there is no compromise to be made on the correctness of the reply.
If a SR does not reflect the correct content, I simply don't use the SR and write a manual reply" [I6], or "I refuse to send half-correct answers, no matter how big my time crunch is. In business, you break more than it's worth" [I3]. The heuristic that intervenes here is obviously that time pressure in business should not lead to hasty, wrong decisions. Experience, expertise, and situational awareness seem to be an excellent recipe against wrong decisions under time pressure. One reason for the basic rejection of certain SRs in the workplace seemed to be the lack of formality in the wording of the proposed SRs: they seemed too informal.

Discussion

The experiment extends prior research on SRs and employs a novel set of human-written SRs as a control group because we do not know where to set the typical level of positivity (here in a German-language DWP context). Hypothesis H1 that SR suggestions would include more positive sentiment than negative or neutral sentiment is supported. Evidence supports hypothesis H2 as well. Email writers who received more positive SR suggestions wrote more positive content. Since sentiment is being used as a proxy measure for agency, the evidence amounts to a loss of agency and indicates that AI technology is already impacting human responses. The results support hypothesis H3: Under time pressure, email writers increased their average positive sentiment compared to the group with the same SR suggestions but without time constraints. However, a threshold for the acceptance of SR suggestions in everyday business is observable (see Figure 5) and plausible (see participant experience): under time pressure, a well-fitting offer is accepted more quickly, but a poor SR suggestion is rejected more quickly and entails immediate typing. When weighing reaction speed against decision accuracy, participants basically opted for the latter. Table 5 summarizes the findings.
Evidence suggests that the use of SRs leads to a loss of agency in the DWP through response priming, and under time pressure. The questionnaire allowed for termination at any time, but forced responses to the emails, either by clicking on the suggested response templates or by typing their own responses. However, in some rare cases, participants created their own additional categories. One participant noted in the blank response box of email 11 that they would not write a response at all, but would call the customer. Another participant reported that they would not answer email 3. One participant commented that there was no need to respond to email 1 because the customer would call anyway. Note that we did not hit upon a simple give-off toward machine agency in the both constraining and affording relationship between human and machine agency. The trade-off is a non-zero-sum game because it has cooperative elements. If that trade-off were a completely competitive zero-sum game, any loss of human agency would instantly increase machine agency. But there are gains despite the loss. The use of AI can enhance human agency, e.g., through procedural structuring according to CAT, or immediacy of feedback according to media synchronicity theory, both confirmed by participant experience in subsection 4.1. Another example, albeit purely conceptual, would be to include location information in One-Click-Chat so that the driver can easily provide an estimate of arrival time and accurate distance information, precomposed by SR technology. In short, the trade-off is not strictly human versus machine agency; there is also human-AI teaming. We may need to understand the interaction as working together with machine agency in a team, i.e., as a synergy of collective agency, not as a sum, with all the pros and cons of working with good and bad team members.
Because this experiment required participants to write under artificial conditions, the biggest threat to the external validity of the results is that the situation described in the emails was not detailed enough or participants did not take the experiment seriously enough. On the other hand, the situations were taken from real life and are very common; many will receive such emails every day. Participants were asked by their supervisors to answer the emails as realistically as possible. The verbal comments of the participants and their actual behavior, such as high rejection rates for certain SRs, give good reason to assume that the participants' responses do not deviate significantly from the natural situation. My sample consisted of employees in Germany. The results might not generalize to other populations, i.e., external validity cannot be guaranteed.

Conclusion

SRs constrain and deform language with algorithms that pay attention to word count, simplicity, politeness, gender, response selection, probably positivity, and, most importantly, biased training data and predefined response sets. In sum, those skews lead to SR biases (like positivity), and in a second step, this study investigated if those biased SRs in turn lead to biased human behavior. Using agency as a theoretical lens, we hit upon human behavior that was biased through response priming and time pressure. Time pressure had an additional effect: participants became pickier when SR quality fell below a threshold. Technical advances such as, most recently, ChatGPT, a language model optimized for dialogue that delivers high-quality responses, herald a more intense and ubiquitous adoption of predictive writing tools that will, in turn, have a stronger impact on users. This is a clear indication that the larger conversation about machine agency is gaining momentum and will continue to do so in the coming years.
The loss of agency model and the empirical evidence suggest that machine agency affects the content we author and thereby the behavior we generate. In coarse terms, this study investigates the trade-off in human "versus" machine agency. However, those transfers between human and machine agency are fluid; they complement, replace, and reinforce each other at the same time.

Funding: This research received no external funding.
Conflicts of Interest: The author declares no conflict of interest.

Figure 2: Smart replies in the digital workplace.
Figure 3: A loss of agency model. Arrows show the influence of variables on each other.

Table 1: Research questions and contributions.
Research Question | Contribution
RQ1: What defines SRs? | I analyze the literature to date and derive a genus-differentia definition of SRs.
RQ2: How do SRs impact language? |

Table 2: Features, outcomes, and context for SRs.
Aspect of SRs | Sources | Comment
SRs are an example of AI-MC | |

Table 3: Examples of SR suggestions with and without response bias. Source: [12] (p. 8).
Message: Did you print the document?
With response bias: Yes, I did. / Yes, it's done / No, I didn't
Without response bias: It's printed. / I have printed it. / Yes, all done.

Table 4: SR acceptance rates.
| H-SRs | AI-SRs | AI-SRs+TC
All emails | 92.1% | 87.4% | 83.0%
Email 6 | 88.9% | 66.6% | 59.4%
Email 7 | 91.6% | 90.9% | 72.9%
Email 8 | 83.3% | 69.6% | 54.0%

Table 5: Research questions and results.
Research question | Result
RQ1: What defines SRs? | SRs are a form of AI-MC that offers short, complete and plausible reply suggestions in a text-based communication, allowing the user to save time.
RQ2: How do SRs impact language in the workplace? |
Linguistic constraints imposed by allowlists, blocklists, filters, canned responses, and various biases in the training data as well as in response generation are built into action stimuli and can result in the language becoming more homogeneous over time.
RQ3: To what extent do users transfer agency? |

References

- Engen, V., Pickering, J.B., Walland, P.: Machine Agency in Human-Machine Networks; Impacts and Trust Implications. In: Kurosu, M. (ed.) Human-Computer Interaction. Novel User Experiences. pp. 96-106. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-39513-5_9
- Hancock, J.T., Naaman, M., Levy, K.: AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication. 25, 89-100 (2020). https://doi.org/10/gj4mj4
- Arnold, K.C., Chauncey, K., Gajos, K.Z.: Sentiment bias in predictive text recommendations results in biased writing. In: Proceedings of Graphics Interface. pp. 33-40 (2018). https://doi.org/10.20380/GI2018.07
- Brahnam, S.: Building character for artificial conversational agents: Ethos, ethics, believability, and credibility. PsychNology Journal. 7 (2009).
- Mieczkowski, H., Hancock, J.T., Naaman, M., Jung, M., Hohenstein, J.: AI-mediated communication: Language use and interpersonal effects in a referential communication task. Proc. ACM Hum.-Comput. Interact. 5, 1-14 (2021). https://doi.org/10/gp9p99
- Hohenstein, J., DiFranzo, D., Kizilcec, R.F., Aghajari, Z., Mieczkowski, H., Levy, K., Naaman, M., Hancock, J., Jung, M.: Artificial intelligence in communication impacts language and social relationships. arXiv:2102.05756 [cs] (2021).
- Kannan, A., Kurach, K., Ravi, S., Kaufmann, T., Tomkins, A., Miklos, B., Corrado, G., Lukacs, L., Ganea, M., Young, P., Ramavajjala, V.: Smart reply: Automated response suggestion for email. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 955-964. ACM, San Francisco, CA, USA (2016). https://doi.org/10/gp9p95
- Naeem, M.A., Linggawa, I.W.S., Mughal, A.A., Lutteroth, C., Weber, G.: A smart email client prototype for effective reuse of past replies. IEEE Access. 6, 69453-69471 (2018). https://doi.org/10/gp9p98
- Robertson, R.E., Olteanu, A., Diaz, F., Shokouhi, M., Bailey, P.: "I can't reply with that": Characterizing problematic email reply suggestions. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1-18. ACM, Yokohama, Japan (2021). https://doi.org/10/gksmfp
- Ying, Q., Bajaj, P., Deb, B., Yang, Y., Wang, W., Lin, B., Shokouhi, M., Song, X., Yang, Y., Jiang, D.: Language scaling for universal suggested replies model. arXiv preprint arXiv:2106.02232 (2021).
- Deb, B., Bailey, P., Shokouhi, M.: Diversifying reply suggestions using a matching-conditional variational autoencoder. arXiv preprint arXiv:1903.10630 (2019).
- Henderson, M., Al-Rfou, R., Strope, B., Sung, Y.-H., Lukács, L., Guo, R., Kumar, S., Miklos, B., Kurzweil, R.: Efficient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652 (2017).
- Pasternack, J., Chakravarthi, N.: Building smart replies for member messages. Available online: https://engineering.linkedin.com/blog/2017/10/building-smart-replies-for-member-messages, accessed on 15/02/2023.
- Weng, Y., Zheng, H., Bell, F., Tur, G.: OCC: A smart reply system for efficient in-app communications. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 2596-2603. ACM, Anchorage, AK, USA (2019). https://doi.org/10/gf7ndd
- Hohenstein, J., Jung, M.: AI as a moral crumple zone: The effects of AI-mediated communication on attribution and trust. Computers in Human Behavior. 106, 1-30 (2020). https://doi.org/10/gk49jg
- Jakesch, M., French, M., Ma, X., Hancock, J.T., Naaman, M.: AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. pp. 1-13. ACM, Glasgow, Scotland, UK (2019). https://doi.org/10/gjvmw4
- Hohenstein, J., Jung, M.: AI-supported messaging: An investigation of human-human text conversation with AI support. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. pp. 1-6. ACM, Montreal, QC, Canada (2018). https://doi.org/10/gp9p9v
- Williams, S.P., Schubert, P.: Designs for the digital workplace. Procedia Computer Science. 138, 478-485 (2018). https://doi.org/10/ghcpst
- Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science. 356, 183-186 (2017). https://doi.org/10/f93cpf
- Arnold, K.C., Chauncey, K., Gajos, K.Z.: Predictive text encourages predictable writing. In: Proceedings of the 25th International Conference on Intelligent User Interfaces. pp. 128-138. ACM, Cagliari, Italy (2020). https://doi.org/10/gp9p9w
- Twilley, N.: Google's new autoreply sounds great!!!! Available online: https://www.newyorker.com/tech/annals-of-technology/google-new-smart-reply-sounds-great (2015), accessed on 15/02/2023.
- Chen, M.X., Lee, B.N., Bansal, G., Cao, Y., Zhang, S., Lu, J., Tsay, J., Wang, Y., Dai, A.M., Chen, Z., Sohn, T., Wu, Y.: Gmail smart compose: Real-time assisted writing. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 2287-2295. ACM, Anchorage, AK, USA (2019). https://doi.org/10/gf7m2t
- Pickering, M.J., Garrod, S.: An integrated theory of language production and comprehension. Behavioral and Brain Sciences. 36, 329-347 (2013). https://doi.org/10/f45frn
- Buschek, D., Zürn, M., Eiband, M.: The impact of multiple parallel phrase suggestions on email input and composition behaviour of native and non-native English writers. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1-13. ACM, Yokohama, Japan (2021). https://doi.org/10/gksk4z
- Kim, J., Merrill, K., Xu, K., Sellnow, D.D.: I like my relational machine teacher: An AI instructor's communication styles and social presence in online education. International Journal of Human-Computer Interaction. 37, 1760-1770 (2021). https://doi.org/10.1080/10447318.2021.1908671
- Nass, C., Steuer, J., Tauber, E.R.: Computers are social actors. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 72-78 (1994).
- Schuetzler, R.M., Grimes, G.M., Scott Giboney, J.: The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems. 37, 875-900 (2020). https://doi.org/10/ghrmzz
- Westerman, D., Edwards, A.P., Edwards, C., Luo, Z., Spence, P.R.: I-It, I-Thou, I-Robot: The perceived humanness of AI in human-machine communication. Communication Studies. 71, 393-408 (2020). https://doi.org/10/gjvf3f
- Dennis, A.R., Valacich, J.S.: Rethinking media richness: Towards a theory of media synchronicity. In: Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (HICSS-32). 10 pp. IEEE (1999). https://doi.org/10/bj2bqh
- Kock, N., Lynn, G.S., Dow, K.E., Akgün, A.E.: Team adaptation to electronic communication media: Evidence of compensatory adaptation in new product development teams. European Journal of Information Systems. 15, 331-341 (2006). https://doi.org/10/df7838
- Baird, A., Maruping, L.M.: The next generation of research on IS use: A theoretical framework of delegation to and from agentic IS artifacts. MIS Quarterly. 45, 315-341 (2021). https://doi.org/10/gmpdk3
- Sundar, S.S.: Rise of machine agency: A framework for studying the psychology of human-AI interaction (HAII). Journal of Computer-Mediated Communication. 25, 74-88 (2020). https://doi.org/10/ggjvvq
- Ågerfalk, P.J.: Artificial intelligence as digital agency. European Journal of Information Systems. 29, 1-8 (2020). https://doi.org/10/gjfm6g
- Goddard, K., Roudsari, A., Wyatt, J.C.: Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 19, 121-127 (2012). https://doi.org/10/fndc86
- Skitka, L.J., Mosier, K., Burdick, M.D.: Accountability and automation bias. International Journal of Human-Computer Studies. 52, 701-717 (2000). https://doi.org/10.1006/ijhc.1999.0349
- Burmeister, C.P., Moskaliuk, J., Cress, U.: Ubiquitous working: Do work versus non-work environments affect decision-making and concentration? Frontiers in Psychology. 9, 1-11 (2018). https://doi.org/10/gqptt3
- Heitz, R.P.: The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience. 8 (2014). https://doi.org/10/gfw8p2
- Peters, L.H., O'Connor, E.J.: Situational constraints and work outcomes: The influences of a frequently overlooked construct. Academy of Management Review. 5, 391-397 (1980). https://doi.org/10/c9zn4d
- Kerstholt, J.H.: The effect of time pressure on decision-making behaviour in a dynamic task environment. Acta Psychologica. 86, 89-104 (1994). https://doi.org/10/fp2g5p
[]
[ "Writing and storing information in an array of magnetic vortex nanodisks using their azimuthal modes", "Writing and storing information in an array of magnetic vortex nanodisks using their azimuthal modes" ]
[ "H Vigo-Cotrina \nCentro Brasileiro de Pesquisas Físicas\nCentro Brasileiro de Pesquisas Físicas\n22290-180, 22290-180Rio de Janeiro, Rio de JaneiroRJBrazil, Brazil\n", "A P Guimarães \nCentro Brasileiro de Pesquisas Físicas\nCentro Brasileiro de Pesquisas Físicas\n22290-180, 22290-180Rio de Janeiro, Rio de JaneiroRJBrazil, Brazil\n" ]
[ "Centro Brasileiro de Pesquisas Físicas\nCentro Brasileiro de Pesquisas Físicas\n22290-180, 22290-180Rio de Janeiro, Rio de JaneiroRJBrazil, Brazil", "Centro Brasileiro de Pesquisas Físicas\nCentro Brasileiro de Pesquisas Físicas\n22290-180, 22290-180Rio de Janeiro, Rio de JaneiroRJBrazil, Brazil" ]
[]
The switching of a vortex core of a single disk in an array of a multilayer system is investigated by micromagnetic simulation. We found that the perpendicular uniaxial anisotropy (PUA) decreases the frequencies of the azimuthal mode in disks with magnetic vortex configuration. We obtained a phase diagram of magnetic field intensity vs. frequency of the azimuthal mode, as a function of the value of perpendicular uniaxial anisotropy. We demonstrated that rotating magnetic fields (CW and CCW) with frequency equal to azimuthal modes can be used to switch the vortex core of single disks in a disk array. This allows obtaining different memory states with a single array of nanodisks, and therefore writing information through the application of rotating fields.
10.1016/j.jmmm.2018.03.064
[ "https://arxiv.org/pdf/1710.10613v1.pdf" ]
96,443,274
1710.10613
c038eca9886bd97079e924e6a7a444b3cfdc9db8
Writing and storing information in an array of magnetic vortex nanodisks using their azimuthal modes

29 Oct 2017

H. Vigo-Cotrina and A. P. Guimarães
Centro Brasileiro de Pesquisas Físicas, 22290-180 Rio de Janeiro, RJ, Brazil

Keywords: magnetic vortex; azimuthal mode; vortex core switching; perpendicular anisotropy; memory states

Introduction

The magnetic vortex configuration is characterized by an in-plane curling magnetization and a core, where the magnetization points out of the plane. The curling direction defines the circulation: C = +1 (counterclockwise, CCW) and C = -1 (clockwise, CW). The core has polarity p = +1 when it points along the +z direction and p = -1 when it points along the -z direction [1].
A magnetic vortex presents a translation mode of low frequency, in the sub-gigahertz range, known as the gyrotropic motion [1,2], and two other modes of higher frequency (> 1 GHz): the azimuthal and radial modes, which have their origin in magnetostatic interactions and thus also depend on the dimensions of the disk [3-5]. Depending on the ratio of the thickness to the radius of the disk (β = L/R), the azimuthal modes exhibit a splitting in frequency [6]. These modes have CCW and CW senses of rotation [3]. Magnetic vortices have many potential applications in magnetic data storage devices [2,7-9]. For example, a vortex with p = +1 can store bit 1, and a vortex with p = -1 can store bit 0, or vice-versa. In these applications, the switching of vortex cores is a topic of great interest that has been studied for a long time [10-15]. Using rotating magnetic fields with a frequency equal to the gyrotropic frequency, it is possible to switch the vortex core polarity [15,16], but with the downside that switching only happens when the sense of rotation of the gyrotropic motion (which is determined by p) coincides with the sense of rotation of the magnetic field [10,15,17]. Another proposal found in the literature is to use rotating magnetic fields with frequencies equal to the characteristic frequencies of the azimuthal modes [11,14]. This method has the great advantage that rotating magnetic fields with both senses of rotation (CCW and CW) can be used, and it allows shorter switching times [11,14]. Switching of vortex cores using azimuthal modes allows high critical switching velocities (of the order of ∼ 800 m/s) [11], compared to those of the gyrotropic mode (∼ 330 m/s) [16]. Nanodisks with a magnetic vortex configuration are generally produced by nanolithography in the form of arrays (matrices of nanodisks) on substrates that can influence their dynamic properties.
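The sub-gigahertz gyrotropic frequency can be cross-checked against a common analytical estimate. The sketch below is illustrative only: it assumes the SI form of the side-charge-free ("two-vortices") model usually attributed to Guslienko et al., ω_G ≈ (5/9π) γ μ0 M_s β with β = L/R; that prefactor is our assumption, not a formula taken from this paper, and it is expected to be accurate only for thin disks (β << 1).

```python
import math

def gyrotropic_frequency(L, R, Ms=8.6e5, gamma=1.76e11, mu0=4e-7 * math.pi):
    """Rough analytical estimate of the vortex gyrotropic frequency (Hz).

    Assumed side-charge-free model: omega_G ~ (5 / 9*pi) * gamma * mu0 * Ms * beta,
    beta = L / R.  All inputs in SI units (gamma in rad s^-1 T^-1).
    """
    beta = L / R
    omega = (5.0 / (9.0 * math.pi)) * gamma * mu0 * Ms * beta
    return omega / (2.0 * math.pi)

# Disk used in this work: L = 20 nm, R = 250 nm (D = 500 nm)
f_gyro = gyrotropic_frequency(L=20e-9, R=250e-9)
print(f"estimated gyrotropic frequency: {f_gyro / 1e9:.2f} GHz")
```

The estimate lands in the same sub-gigahertz range as the simulated value of ≈ 0.35 GHz quoted below, which is all this kind of model is expected to deliver.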
For example, in a multilayer system, a perpendicular uniaxial anisotropy (PUA) can be induced by the interface contribution, as has already been demonstrated by Garcia et al. [18]. This PUA influences the switching processes of the vortex core [15,19]. An array of disks can be used to build an information storage device and/or logic gate circuits [8,9]. In these arrays, the polarity plays an important role, since it determines the type of logic gate to be obtained [8]. Consequently, it is necessary to search for mechanisms to control the polarity of a single disk in an array without altering the polarity of the neighboring disks. The goal of this work is to propose a novel method for selectively switching a single vortex core in a matrix of nanodisks in a multilayer system, in order to obtain the desired combinations of bits in this matrix. For this purpose, we have used micromagnetic simulation. All simulations were performed with the open-source software Mumax3 [20], with a discretization cell size of 2 × 2 × L nm^3, where L is the thickness of the disk. The material used is Permalloy (NiFe), with typical parameters [1,19,21]: saturation magnetization M_s = 8.6 × 10^5 A/m, exchange stiffness A = 1.3 × 10^-11 J/m, and damping constant α = 0.01. The perpendicular uniaxial anisotropy constant (K_z) was varied from 0 to 200 kJ/m^3; for larger values of K_z, skyrmion-type magnetic structures emerge [15,19].

Results and discussion

Isolated disk

We used disks with thickness L = 20 nm and diameter D = 500 nm. For these dimensions, the magnetic vortex configuration is stable [15]. We assumed that the vortex core is initially at its equilibrium position at the center of the disk, with polarity p = +1 and circulation C = +1.
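As a quick sanity check on the discretization quoted above, one can verify that the 2 nm in-plane cell resolves the exchange length of Permalloy computed from the stated material parameters. A minimal sketch:

```python
import math

# Permalloy parameters used in the simulations (SI units)
Ms = 8.6e5          # saturation magnetization (A/m)
A = 1.3e-11         # exchange stiffness (J/m)
mu0 = 4e-7 * math.pi

# A micromagnetic cell should not exceed the exchange length,
# l_ex = sqrt(2A / (mu0 * Ms^2)), the smallest length scale over which
# the magnetization can rotate appreciably.
l_ex = math.sqrt(2 * A / (mu0 * Ms**2))

cell_xy = 2e-9      # 2 nm in-plane cell, as in the text
assert cell_xy < l_ex, "in-plane cell must resolve the exchange length"
print(f"exchange length: {l_ex * 1e9:.1f} nm")  # ~5.3 nm for Permalloy
```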
In order to excite the azimuthal spin-wave modes, we applied an in-plane sinc-pulse magnetic field B(t) = (B_0 sin(x)/x, 0, 0), with x = 2πf(t - t_0), centered on t_0 = 1 ns, where B_0 = 1 mT is the magnetic field amplitude and f = 50 GHz is the frequency of the magnetic field pulse. The frequencies of the modes are obtained by fast Fourier transform (FFT) of the time evolution of the x-component of the magnetization. These frequencies are shown in Fig. 1. We repeated the same procedure for each value of K_z. There are three frequencies for each value of K_z: the lowest corresponds to the gyrotropic mode (f_0 ≈ 0.35 GHz), and the other two correspond to the m = -1 (clockwise) and m = +1 (counterclockwise) azimuthal modes, respectively [11,14]. The frequency of the gyrotropic mode remains almost constant with increasing K_z. The azimuthal frequencies decrease with increasing anisotropy, as shown in Fig. 1(a), because the influence of the PUA modifies the configuration of the magnetic vortex [18,19]. It is important to note that the effect of PUA on the vortex configuration is totally different from that produced by a perpendicular magnetic field (PMF): whereas PUA does not alter the gyrotropic mode, PMF does so in a manner proportional to the intensity of this field [14]. In Fig. 1(b) all the frequencies (f = ω/2π) are shown, taking negative values for m = -1 and positive values for m = +1 [14]. In order to switch the vortex core, we used an in-plane rotating magnetic field B(t) = B_0 cos(ωt) x̂ + B_0 sin(ωt) ŷ (+ω for CCW and -ω for CW), applied in bursts with a duration of 24 periods, as suggested by Kammerer et al. [11]. After the magnetic field is turned off, we monitored the micromagnetic simulation for an additional 1.5 ns with zero magnetic field, to observe possible switching events that may occur due to delayed processes [22].
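The excite-and-FFT procedure above can be sketched as a post-processing step. The magnetization trace below is a synthetic stand-in (in a real run <m_x>(t) would come from the Mumax3 output); of the three mode frequencies, only ≈ 0.35 GHz (gyrotropic) and 8.4 GHz (m = +1 at K_z = 0) are stated in the text, and the 7.4 GHz value is a hypothetical placeholder for the m = -1 mode.

```python
import numpy as np

# Time base: 50 ns sampled every 10 ps -> 20 MHz frequency resolution
dt, T = 1e-11, 50e-9
t = np.arange(0, T, dt)

# Sinc pulse used in the text: B0 = 1 mT, f = 50 GHz, centered on t0 = 1 ns.
# np.sinc(z) = sin(pi z)/(pi z), so sin(x)/x with x = 2*pi*f*(t - t0)
# is np.sinc(2*f*(t - t0)).
B0, f_exc, t0 = 1e-3, 50e9, 1e-9
B_x = B0 * np.sinc(2 * f_exc * (t - t0))

# Synthetic <m_x>(t): three damped modes (frequencies partly placeholders)
f_modes = np.array([0.35e9, 7.4e9, 8.4e9])
mx = sum(np.exp(-t / 20e-9) * np.sin(2 * np.pi * f * t) for f in f_modes)

# Mode frequencies from the FFT power spectrum, as done in the text
spec = np.abs(np.fft.rfft(mx))**2
freqs = np.fft.rfftfreq(len(t), d=dt)
peaks = [i for i in range(1, len(spec) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
         and spec[i] > 0.05 * spec.max()]
print([round(freqs[i] / 1e9, 2) for i in peaks])
```

The detected peaks recover the three input frequencies to within the 20 MHz bin width of the 50 ns window.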
We start by exploring the switching of the vortex core using f = 0.35 GHz (gyrotropic mode); we found a minimal magnetic field intensity B_0 = 1.2 mT for switching of the vortex cores over the entire range of the perpendicular uniaxial anisotropy constant (K_z) used in this work (see Section 1). In Fig. 2 the switching times (t_sw) are shown as a function of B_0 for each value of K_z used in this work, using the gyrotropic mode. We found a decrease of t_sw with increasing B_0. For K_z = 0 kJ/m^3 a switching time of approximately 15 ns is obtained (Fig. 2). This time is reduced by approximately 88% with the increase of B_0, from 15 ns (B_0 = 1.2 mT) to 1.82 ns (B_0 = 7 mT). For larger magnetic field intensities, undesirable multiple switching events appear. The same behavior is observed for K_z ≠ 0. Although t_sw decreases with increasing B_0, the critical velocity (approximately 329 m/s) that the vortex core reaches before switching is the same for all values of B_0. This is known as the universal criterion of switching, as has already been demonstrated by Lee et al. [16]. However, this critical velocity decreases when K_z ≠ 0, from 329 m/s (K_z = 0 kJ/m^3) to 200 m/s (K_z = 200 kJ/m^3), but is still independent of B_0. These values are consistent with those obtained by Fior et al. [15]. In order to obtain the magnetic field intensity needed to switch the vortex core using the azimuthal modes, we varied B_0 from 1 mT to 6 mT for the m = -1 mode, and from 1 mT to 8 mT for the m = +1 mode, in 0.2 mT steps for both the m = +1 (CCW) and m = -1 (CW) modes. The switching phase diagram (B_0 vs. frequency) is shown in Fig. 3. We slightly varied the values of the frequencies shown in Fig. 1, as suggested in Ref. [14], in order to obtain lower values of the threshold magnetic field intensity. We found three regions for both modes m = +1 and m = -1: 1) no switching, 2) single switching, and 3) multiple switching.
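The rotating-field burst used throughout, B(t) = B_0 cos(ωt) x̂ + B_0 sin(ωt) ŷ applied for 24 periods and then switched off, can be written down directly; the sign of ω selects the sense of rotation. A minimal sketch (the specific B_0 and f values below are taken from examples in the text):

```python
import numpy as np

def rotating_burst(t, B0, f, sense=+1, n_periods=24):
    """In-plane rotating field burst, zero after n_periods/f seconds.

    sense = +1 gives CCW rotation (+omega), sense = -1 gives CW (-omega).
    """
    w = sense * 2 * np.pi * f
    on = t <= n_periods / f
    Bx = np.where(on, B0 * np.cos(w * t), 0.0)
    By = np.where(on, B0 * np.sin(w * t), 0.0)
    return Bx, By

# 30 periods of time axis at f = 8.4 GHz, B0 = 4.6 mT (values from the text):
# the last 6 periods fall after the burst and the field is zero there.
t = np.linspace(0, 30 / 8.4e9, 2000)
Bx, By = rotating_burst(t, B0=4.6e-3, f=8.4e9, sense=+1)
```

The field magnitude is constant (= B_0) while the burst is on, and the chirality can be checked from the sign of Bx dBy/dt - By dBx/dt.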
We are interested in the region of single switching, with the purpose of having total control over the selectivity of the resulting polarity p. In Fig. 3 we do not show the region 1 mT < B_0 < 2 mT for the m = +1 mode, nor the region 1 mT < B_0 < 3.6 mT for the m = -1 mode, since there is no switching there for any of the values of K_z used in this work. The threshold magnetic field intensity (B_0) and the range of single switching of the vortex core are different for each value of K_z (see Fig. 3), and for the modes m = +1 and m = -1. A wider range of B_0 values resulting in single switching (green squares) is found for the m = -1 mode, in comparison with m = +1. For the m = +1 mode, multiple switching events are dominant in the phase diagram, whereas for m = -1, single switching events are more frequent. Multiple switching events appear because the applied magnetic field pumps enough energy to reverse the vortex core repeatedly between p = +1 and p = -1 [13]. For the m = +1 mode, we used a magnetic field of up to B_0 = 8 mT to obtain single switching for K_z = 200 kJ/m^3. This is different for the m = -1 mode, where a threshold of B_0 = 3.6 mT is necessary to obtain a single switching of the vortex core. All these differences between the modes m = +1 and m = -1 are due to the fact that the modes act differently in the creation of a dip, which is the first step in the switching process [11,23]: whereas the m = -1 mode leads to the formation of a single dip, the m = +1 mode leads to the formation of a double dip [11]. Fig. 4 shows the switching times (t_sw) for both modes, for the cases K_z = 0 kJ/m^3 and K_z = 100 kJ/m^3. These times decrease with increasing magnetic field intensity for the m = +1 mode; however, for m = -1, it is observed that for some values of B_0, t_sw does not follow the same trend. This can also be attributed to nonlinear dynamics. Similar behaviors were obtained by Kammerer et al. [11] for Permalloy disks with K_z = 0 kJ/m^3.
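Building the phase diagram reduces to counting polarity reversals per (B_0, frequency) run and sorting each run into the three regions named above. A schematic sketch of that bookkeeping step (run_simulation is a hypothetical placeholder for a Mumax3 run returning the core polarity p(t); only the classification logic is real code here):

```python
import numpy as np

def classify_switching(pz):
    """Classify one run from the time series of the core polarity p(t).

    Counts sign reversals of p, following the three regions of the
    phase diagram: 0 reversals -> no switching, 1 -> single switching,
    more than 1 -> multiple switching.
    """
    pz = np.sign(np.asarray(pz, dtype=float))
    reversals = int(np.count_nonzero(np.diff(pz) != 0))
    return {0: "no switching", 1: "single switching"}.get(
        reversals, "multiple switching")

# The B0-frequency sweep of the text then reduces, schematically, to:
# diagram = {(B0, f): classify_switching(run_simulation(B0, f))
#            for B0 in np.arange(1e-3, 8.2e-3, 0.2e-3)
#            for f in mode_frequencies}
print(classify_switching([+1, +1, -1, -1]))  # single switching
```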
It is important to note that although the switching time can be reduced by increasing the magnetic field intensity using the gyrotropic mode (Fig. 2), we obtain shorter switching times with lower field intensities using the azimuthal modes (Fig. 4). As mentioned earlier, we get t_sw = 1.82 ns (B_0 = 7 mT) using the gyrotropic mode for K_z = 0 kJ/m^3, but using the azimuthal modes we get shorter times, of approximately t_sw = 1.29 ns for the m = +1 mode and t_sw = 0.85 ns for the m = -1 mode. Both values of t_sw were obtained with magnetic field intensities smaller than 7 mT (Fig. 4). The nonlinear dynamics breaks the universal criterion for switching of vortex cores found for the gyrotropic mode: we have obtained different values for the critical velocities. For example, for K_z = 0 kJ/m^3, we found an average critical velocity (v_sw) over the entire region of single switching (Fig. 3) of approximately 816 m/s and 400 m/s for the modes m = +1 and m = -1, respectively. These values are higher than those found for the gyrotropic mode, and similar to those found in Ref. [11]. Table 1 shows the values of the average critical velocities for the modes m = +1 and m = -1.

Table 1: Average critical velocity for different values of K_z and modes m = +1 and m = -1.
K_z (kJ/m^3) | v_sw (m/s), m = +1 | v_sw (m/s), m = -1
0 | 816 | 400
50 | 913 | 380
100 | 686 | 381
150 | 806 | 491
200 | 601 | 375

All the critical velocities shown in Table 1, obtained using the azimuthal modes, are higher than the universal critical velocity found using the gyrotropic mode, of approximately 330 m/s. These higher velocities are responsible for the shorter switching times. The average critical velocities do not show a linear behavior with increasing K_z: for m = +1 they increase at K_z = 50 kJ/m^3, then decrease at K_z = 100 kJ/m^3, and then increase again. This behavior contrasts with the case of a PMF, which increases or decreases v_sw depending on whether the PMF is parallel or antiparallel to p [14].
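Extracting a critical velocity like those in Table 1 amounts to differentiating the sampled core trajectory (x(t), y(t)) and reading off the speed just before the first polarity reversal. A self-contained sketch of the differentiation step, validated on a synthetic circular orbit where the exact speed is |v| = 2πfR:

```python
import numpy as np

def core_speed(t, x, y):
    """Speed of the vortex core from its sampled trajectory (SI units)."""
    vx = np.gradient(x, t)
    vy = np.gradient(y, t)
    return np.hypot(vx, vy)

# Sanity check: a circular orbit of radius 50 nm at the 0.35 GHz
# gyrotropic frequency has constant speed 2*pi*f*R ~ 110 m/s.
f, R = 0.35e9, 50e-9
t = np.linspace(0, 3 / f, 3000)
x = R * np.cos(2 * np.pi * f * t)
y = R * np.sin(2 * np.pi * f * t)
v = core_speed(t, x, y)
print(f"max core speed: {v.max():.1f} m/s")
```

In a real analysis, t, x and y would be the core-position table written out by the micromagnetic run, and v_sw would be v evaluated at the last sample before p changes sign.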
This difference is expected, since PUA does not modify the gyrotropic mode, whereas PMF does. Moreover, PUA and PMF act differently on the vortex core, leading to totally different behaviors in the switching processes, as has already been demonstrated by Fior et al. [15]. Next, we used the influence of PUA and rotating magnetic fields to obtain different final states in a 2 x 2 matrix of disks.

2 x 2 matrix

We will now describe a matrix of four vortex disks, in which we can write four bits of information using rotating magnetic fields. We used an array of four identical disks, as shown in Fig. 5, with thickness L = 20 nm, diameter D = 500 nm, separated by an edge-to-edge distance x = 500 nm. Each disk has its own K_zn, with n = 1, 2, 3, 4 and K_z1 < K_z2 < K_z3 < K_z4. The initial configuration is that all disks have polarity and circulation p = C = +1. It is important to mention that the magnetostatic interaction between the disks can change the values of the magnetic field intensities required for switching [24] shown in Fig. 3. However, this does not alter the principle that a magnetic field with frequency equal to one of the modes (m = +1 or m = -1) only reverses the vortex core of the disk to which that mode corresponds. In order to obtain the different final states shown in Fig. 6, we used a global CCW rotating magnetic field B(t) = B_0 cos(ωt) x̂ + B_0 sin(ωt) ŷ acting on the entire matrix, with a duration of 24 periods. We used the following convention: a blue digit 1 indicates positive polarity p = +1, and a red digit 0 indicates negative polarity p = -1 (see Fig. 6).

Figure 6: Initial configuration of the matrix of disks and the resulting configurations obtained after applying the rotating fields. Blue number 1 and red number 0 correspond to polarity p = +1 and p = -1, respectively.

In order to obtain any of the final states that have a single red digit, we choose a frequency f equal to the azimuthal m = +1 mode of the disk of interest to be switched.
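The rotating drive field B(t) = B_0 cos(ωt) x̂ + B_0 sin(ωt) ŷ is easy to sample numerically; the following Python sketch is illustrative only (the simulations themselves were run in a micromagnetic code, and the amplitude/frequency values below merely echo the example in the text):

```python
import math

def rotating_field(t, b0, freq, ccw=True):
    """In-plane rotating field B(t) = B0*cos(wt) x + B0*sin(wt) y.

    ccw=False flips the sign of the y-component, i.e. the sense of
    rotation, which is what selects between the m = +1 (CCW) and
    m = -1 (CW) azimuthal modes for p = +1.
    """
    w = 2.0 * math.pi * freq          # angular frequency (rad/s)
    sign = 1.0 if ccw else -1.0
    bx = b0 * math.cos(w * t)
    by = sign * b0 * math.sin(w * t)
    return bx, by

# Example values from the text: f = 8.4 GHz (m = +1 mode of the
# K_z = 0 disk), B0 = 4.6 mT, applied for 24 periods.
b0, f = 4.6e-3, 8.4e9
period = 1.0 / f
duration = 24 * period
print(rotating_field(0.0, b0, f))          # (0.0046, 0.0)
print(rotating_field(period / 4, b0, f))   # ~(0.0, 0.0046)
```

In a micromagnetic simulation this function would be evaluated at every time step to set the applied Zeeman field over the whole matrix of disks.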
For example, to switch the vortex core of disk 1 (K_z1 = 0 kJ/m³), we used a frequency f = 8.4 GHz (see Fig. 1) and B_0 = 4.6 mT. The magnetic field will efficiently excite disk 1, to which the frequency f corresponds, switching the vortex core only in this disk (see videos in the Supplementary material). It is important to remark that this would be impossible using a frequency equal to the gyrotropic mode, because disks with higher K_z would switch before those with smaller values of K_z [15]. A final state with two red digits can be obtained in two steps using the azimuthal modes: first switching one of the disks, and then the second disk. This could be done in one step using the gyrotropic mode, depending on which disks the user wants to switch. For example, if one wished to switch only disks 3 and 4, the applied magnetic field would have to last as long as necessary for the switching to occur on disk 3, since the switching on disk 4 would occur earlier [15], because K_z3 < K_z4 and thus t_sw3 > t_sw4. However, if one wished to switch disk 1 (K_z1) and disk 4 (K_z4), the applied field would also switch the disks with intermediate values of K_z, namely K_z2 and K_z3. Final states with three red digits can be obtained in three steps, following a procedure similar to the two-red-digit case. Full switching of all disks is trivial and can be obtained using a frequency equal to that of the gyrotropic mode. Switching from negative polarity p = -1 to positive polarity p = +1 is also possible using the azimuthal modes, but with the sense of rotation of these modes reversed: for p = +1 we have m = +1 (CCW) and m = -1 (CW), whereas for p = -1 we have m = +1 (CW) and m = -1 (CCW) [11]. The same methodology can be used when the disks are arranged as nanopillars.
Conclusion

In this work, we initially studied the influence of PUA on the azimuthal modes in disks with magnetic vortex configuration, using micromagnetic simulations. Our results show that the azimuthal mode frequencies decrease with increasing PUA, and that the PUA modifies the intensity of the magnetic field necessary to switch the vortex core. Based on this initial study, we then demonstrated that the azimuthal modes can be used for selective switching in arrays of disks, thereby obtaining several different final-state configurations from a single array of disks. This shows the great advantage of using the azimuthal modes in comparison with the gyrotropic mode. This work also shows that when the intrinsic variable K_z is considered, the universality of the value of the critical velocity is broken, even for the gyrotropic mode. Our proposal addresses a little-studied subject, the influence of PUA on magnetic vortex dynamics, allowing information to be written in an array of disks through selective switching of vortex cores. This simple system can be expanded to larger arrays.

Figure 1: (a) and (b) Values of the gyrotropic mode frequency and azimuthal spin wave frequencies for m = +1 (counterclockwise) and m = -1 (clockwise), obtained by a fast Fourier transform (FFT) of the time evolution of the x-component of the magnetization for each value of K_z, for p = +1. Orange triangles correspond to the counterclockwise frequencies, green squares to the clockwise frequencies, and blue stars to the gyrotropic mode.

Figure 2: Switching times through the gyrotropic mode as a function of magnetic field intensity B_0 for different values of K_z.

Figure 3: Switching phase diagrams for (a) m = +1 (CCW) and (b) m = -1. Red triangles indicate no switching, green squares indicate single switching and blue squares indicate multiple switching.
Figure 4: Switching times versus magnetic field intensity for (a-b) m = +1 (CCW) and (c-d) m = -1 (CW), for K_z = 0 kJ/m³ and K_z = 100 kJ/m³.

Figure 5: Array of disks with magnetic vortex configuration and different values of K_z. These values of K_z can be obtained experimentally by increasing the thickness of the disks, as shown in ref. [18]. There is a variation of approximately 3% in the maximum value of K_z, but this change is negligible; this too was demonstrated by Fior et al. [15]. See the Supplementary material for details of how these values were obtained.

ACKNOWLEDGMENTS

The authors would like to thank the support of the Brazilian agencies CNPq and FAPERJ.

References

[1] K. Y. Guslienko, Magnetic vortex state stability, reversal and dynamics in restricted geometries, J. Nanosci. Nanotechnol. 8 (2008) 2745-2760. doi:10.1166/jnn.2008.003.
[2] A. P. Guimarães, Principles of Nanomagnetism, 2nd Edition, Springer, Cham, 2017.
[3] K. Y. Guslienko, A. N. Slavin, V. Tiberkevich, S.-K. Kim, Dynamic origin of azimuthal modes splitting in vortex-state magnetic dots, Phys. Rev. Lett. 101 (2008) 247203. doi:10.1103/PhysRevLett.101.247203.
[4] A. A. Awad, K. Y. Guslienko, J. F. Sierra, G. N. Kakazei, V. Metlushko, F. G. Aliev, Precise probing spin wave mode frequencies in the vortex state of circular magnetic dots, Appl. Phys. Lett. 96 (2010) 012503. doi:10.1063/1.3268453.
[5] M. Sproll, M. Noske, H. Bauer, M. Kammerer, A. Gangwar, G. Dieterle, M. Weigand, H. Stoll, G. Woltersdorf, C. H. Back, G. Schütz, Low-amplitude magnetic vortex core reversal by non-linear interaction between azimuthal spin waves and the vortex gyromode, Appl. Phys. Lett. 104 (2014) 012409. doi:10.1063/1.4861779.
[6] J. P. Park, P. A. Crowell, Interactions of spin waves with a magnetic vortex, Phys. Rev. Lett. 95 (2005) 167201. doi:10.1103/PhysRevLett.95.167201.
[7] S. Bohlens, B. Krüger, A. Drews, M. Bolte, G. Meier, D. Pfannkuche, Current controlled random-access memory based on magnetic vortex handedness, Appl. Phys. Lett. 93 (2008) 142508. doi:10.1063/1.2998584.
[8] H. Jung, Y.-S. Choi, K.-S. Lee, D.-S. Han, Y.-S. Yu, M.-Y. Im, P. Fischer, S.-K. Kim, Logic operations based on magnetic-vortex-state networks, ACS Nano 6 (2012) 3712-3717. doi:10.1021/nn3000143.
[9] H. Vigo-Cotrina, A. P. Guimarães, Single array of magnetic vortex disks uses in-plane anisotropy to create different logic gates, J. Magn. Magn. Mater. 441 (2017) 14-20. doi:10.1016/j.jmmm.2017.05.027.
[10] M.-W. Yoo, K.-S. Lee, D.-E. Jeong, S.-K. Kim, Origin, criterion, and mechanism of vortex-core reversals in soft magnetic nanodisks under perpendicular bias fields, Phys. Rev. B 82 (2010) 174437. doi:10.1103/PhysRevB.82.174437.
[11] M. Kammerer, M. Weigand, M. Curcic, M. Noske, M. Sproll, A. Vansteenkiste, B. Van Waeyenberge, H. Stoll, G. Woltersdorf, C. H. Back, G. Schütz, Magnetic vortex core reversal by excitation of spin waves, Nat. Commun. 2 (2011) 279. doi:10.1038/ncomms1277.
[12] M.-W. Yoo, J. Lee, S.-K. Kim, Radial-spin-wave-mode-assisted vortex-core magnetization reversals, Appl. Phys. Lett. 100 (2012) 172413. doi:10.1063/1.4705690.
[13] M. Kammerer, H. Stoll, M. Noske, M. Sproll, M. Weigand, C. Illg, G. Woltersdorf, M. Fähnle, C. Back, G. Schütz, Fast spin-wave-mediated magnetic vortex core reversal, Phys. Rev. B 86 (2012) 134426. doi:10.1103/PhysRevB.86.134426.
[14] M.-W. Yoo, S.-K. Kim, Azimuthal-spin-wave-mode-driven vortex-core reversals, J. Appl. Phys. 117 (2015) 023904. doi:10.1063/1.4905689.
[15] G. B. M. Fior, E. R. P. Novais, J. P. Sinnecker, A. P. Guimarães, F. Garcia, Indirect switching of vortex polarity through magnetic dynamic coupling, J. Appl. Phys. 119 (2016) 093906. doi:10.1063/1.4942534.
[16] K.-S. Lee, S.-K. Kim, Y.-S. Yu, Y.-S. Choi, K. Y. Guslienko, H. Jung, P. Fischer, Universal criterion and phase diagram for switching a magnetic vortex core in soft magnetic nanodots, Phys. Rev. Lett. 101 (2008) 267206. doi:10.1103/PhysRevLett.101.267206.
[17] M. Noske, A. Gangwar, H. Stoll, M. Kammerer, M. Sproll, G. Dieterle, M. Weigand, M. Fähnle, G. Woltersdorf, C. H. Back, G. Schütz, Unidirectional sub-100-ps magnetic vortex core reversal, Phys. Rev. B 90 (2014) 104415. doi:10.1103/PhysRevB.90.104415.
[18] F. Garcia, H. Westfahl, J. Schoenmaker, E. J. Carvalho, A. D. Santos, M. Pojar, A. C. Seabra, R. Belkhou, A. Bendounan, E. R. P. Novais, A. P. Guimarães, Tailoring magnetic vortices in nanostructures, Appl. Phys. Lett. 97 (2010) 022501. doi:10.1063/1.3462305.
[19] E. R. P. Novais, S. Allende, D. Altbir, P. Landeros, F. Garcia, A. P. Guimarães, Effect of perpendicular uniaxial anisotropy on the annihilation fields of magnetic vortices, J. Appl. Phys. 114 (2013). doi:10.1063/1.4824803.
[20] A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, B. Van Waeyenberge, The design and verification of MuMax3, AIP Advances 4 (2014) 107133. doi:10.1063/1.4899186.
[21] K. Y. Guslienko, B. A. Ivanov, V. Novosad, Y. Otani, H. Shima, K. Fukamichi, Eigenfrequencies of vortex state excitations in magnetic submicron-size disks, J. Appl. Phys. 91 (2002) 8037-8039. doi:10.1063/1.1450816.
[22] M. Kammerer, M. Sproll, H. Stoll, M. Noske, M. Weigand, C. Illg, M. Fähnle, G. Schütz, Delayed magnetic vortex core reversal, Appl. Phys. Lett. 102 (2013) 012404. doi:10.1063/1.4773592.
[23] K. Y. Guslienko, K.-S. Lee, S.-K. Kim, Dynamic origin of vortex core switching in soft magnetic nanodots, Phys. Rev. Lett. 100 (2008) 027203. doi:10.1103/PhysRevLett.100.027203.
[24] Y. Lu, Z. Zhang, Y. Liu, Magnetic interaction effect on the critical switching current in vortex arrays, J. Appl. Phys. 109 (2011) 103906. doi:10.1063/1.3590333.
arXiv:2008.09351; doi:10.1145/3384419.3430599
BlindSignedID: Mitigating Denial-of-Service Attacks on Digital Contact Tracing

Bo-Rong Chen (University of Illinois at Urbana-Champaign)
Yih-Chun Hu (University of Illinois at Urbana-Champaign)

Keywords: Denial-of-Service Attacks; Privacy; Digital Contact Tracing; COVID-19

Abstract. Due to the recent outbreak of COVID-19, many governments suspended outdoor activities and imposed social distancing policies to prevent the transmission of SARS-CoV-2. These measures have had a severe impact on the economy and people's daily lives. An alternative to widespread lockdowns is effective contact tracing during an outbreak's early stage. However, mathematical models (e.g., [24]) suggest that epidemic control for SARS-CoV-2 transmission with manual contact tracing is implausible. To reduce the effort of contact tracing, many digital contact tracing projects (e.g., PEPP-PT [7], DP-3T [34], TCN [8], BlueTrace [13], Google/Apple Exposure Notification [4], and East/West Coast PACT [6, 17]) are being developed to supplement manual contact tracing. However, digital contact tracing has drawn scrutiny from privacy advocates, since governments or other parties may attempt to use contact tracing protocols for mass surveillance. As a result, many digital contact tracing projects build privacy-preserving mechanisms to limit the amount of privacy-sensitive information leaked by the protocol. In this paper, we examine how these architectures resist certain classes of attacks, specifically DoS attacks, and present BlindSignedIDs, a privacy-preserving digital contact tracing mechanism based on verifiable ephemeral identifiers, to limit the effectiveness of MAC-compliant DoS attacks. In our evaluations, we showed that BlindSignedID can effectively deny bogus EphIDs, mitigating DoS attacks on local storage in which bogus EphIDs made up beyond 90% of stored EphIDs. Our example DoS attacks showed that using 4 attackers can cause gigabyte-level DoS attacks within normal working hours and days.

Introduction

In light of the success of several countries' SARS-CoV-2 containment strategies based on aggressive contact tracing, several researchers have developed approaches to digital contact tracing [4, 6, 7, 8, 13, 17, 34]. Because of the privacy concerns inherent in disclosing user location traces, several proposals include mechanisms for privacy-preserving proximity tracing. These mechanisms make use of a limited-time identity to reduce an adversary's ability to track a device from broadcast to broadcast; these identities are variously called ephemeral ID [34], Temporary ID [13], and Rolling Proximity ID [4]; in this paper, we will use the term ephemeral ID or EphID. To reduce the amount of bandwidth needed to disseminate positive identities, these ephemeral IDs are generated from a master secret in a one-way manner, such that the master secret is sufficient to derive all relevant ephemeral IDs, but several ephemeral IDs leak no information about other associated ephemeral IDs. In existing designs, a user receiving an ephemeral ID can neither associate that ephemeral ID with other ephemeral IDs from the same contact, nor verify the validity of an ephemeral ID that it receives; as a result, a user must store all of the ephemeral IDs that it encounters across the maximum incubation period (widely set at 14 days for SARS-CoV-2). Furthermore, storing these ephemeral IDs in the cloud faces significant privacy challenges; if a large fraction of users trusted a single cloud storage provider to store unencrypted ephemeral IDs, that provider would learn extensive privacy-compromising information about users' social structures. As a result, existing schemes [4, 6, 8, 17, 34] largely assume that ephemeral IDs are stored on the user's own device until the incubation period has expired.
However, because of the limited storage available on commonly used mobile devices, an attacker can send a large number of bogus ephemeral IDs, overwhelming the storage of all but high-end devices. We call this the DoS attack. In fact, the existing EphID-based designs [4, 6, 7, 8, 13, 17, 34] are widely adopted by many countries [3, 5, 7, 10, 13], which comprise approximately 12.5% [37] of the world population, making these people's mobile devices vulnerable to DoS attacks. In this paper, we propose a design for BlindSignedIDs, in which receivers can verify ephemeral IDs in place, and senders can generate only a limited number of ephemeral IDs based on a resource constraint, such as a personal phone number, that must be verified before privacy-preserving verifiable ephemeral identifiers are issued, while still providing privacy from the certificate authority through the use of blind signatures.

The rest of this paper is organized as follows: in Section 2, we make system assumptions and describe the attacker model. In Section 3, we describe the security issues in current designs of digital contact tracing. Section 4 introduces our approach in detail. We provide an analysis of BlindSignedID in Sections 5 and 6. Section 7 demonstrates our example DoS attacks and shows that BlindSignedID can effectively reduce them. Section 8 reviews current projects. We conclude in Section 9.

System Assumptions and Threat Model

System Assumptions

Each received EphID is kept in the user's local storage for at least 14 days. Each stored EphID includes additional information (e.g., encounter time and duration), which consumes a fixed number of bytes in local storage. Moreover, the user stores every received EphID regardless of encounter duration, for recording and checking exposure. Finally, the mobile devices support Bluetooth Random Private Addresses [1], so a received EphID cannot be linked to the real Bluetooth device address.
Threat Model

The attacker is able to modify the BLE protocol such that her device can advertise BLE beacon packets at the minimum interval; that is, the transmission interval is not bounded by the underlying BLE stack (e.g., the maximum and minimum intervals in BlueZ [2]). She sets the standard BLE flags with arbitrary content (i.e., spoofed EphIDs) such that other users will receive and keep the packets in local storage. Moreover, the attacker uses a powerful BLE antenna for larger transmission ranges.

Problem Environment and Statement

Existing Contact Tracing Design

In this section, we describe DP-3T [34] at a high level, to set the context for our attacks and our approaches. DP-3T generates ephemeral IDs using:

    EphID_1 || ... || EphID_n = PRG(PRF(SK_t, "broadcast key"))    (1)

where EphID_i is a 16-byte ephemeral ID, SK_t is the secret day seed, PRF is a pseudo-random function, and PRG is a pseudo-random generator. To further reduce the amount of information disseminated for a positive report, DP-3T generates the next secret day seed SK_t by hashing the previous secret day seed SK_{t-1} with a one-way hash function, so that the dissemination of one secret day seed allows contact tracing for all subsequent days. Each user generates 288 EphIDs per day, randomly reorders them, and uses Bluetooth Low Energy (BLE) to broadcast each EphID for one epoch (5 minutes). A device receiving such a beacon stores the EphID, signal strength information (such as RSSI), and the date on which the beacon was received, for a total of 36 bytes per EphID in the low-cost design and 52 bytes in the unlinkable design [34]. A receiving device must store these records for 14 days. When a user tests positive, her secret day seeds from the infectious period are uploaded to the backend, so that other users can fetch the seeds and compare them with their local records to determine whether they have been in contact with this particular user.
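Equation (1) can be instantiated, for example, with HMAC-SHA256 as the PRF. The sketch below substitutes a simple HMAC-counter construction for the PRG (DP-3T's concrete instantiation differs, so the output bytes here are illustrative, not interoperable):

```python
import hashlib
import hmac

EPOCHS_PER_DAY = 288          # one 16-byte EphID per 5-minute epoch
EPHID_LEN = 16

def next_day_seed(sk_t):
    """SK_{t+1} = H(SK_t): one-way ratchet of the secret day seed."""
    return hashlib.sha256(sk_t).digest()

def ephids_for_day(sk_t):
    """EphID_1 || ... || EphID_n = PRG(PRF(SK_t, "broadcast key"))."""
    prf_out = hmac.new(sk_t, b"broadcast key", hashlib.sha256).digest()
    stream = b""
    counter = 0
    # Illustrative PRG: expand prf_out with an HMAC counter stream.
    while len(stream) < EPOCHS_PER_DAY * EPHID_LEN:
        stream += hmac.new(prf_out, counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return [stream[i * EPHID_LEN:(i + 1) * EPHID_LEN]
            for i in range(EPOCHS_PER_DAY)]

sk0 = hashlib.sha256(b"initial seed").digest()
day0 = ephids_for_day(sk0)
day1 = ephids_for_day(next_day_seed(sk0))
assert len(day0) == 288 and len(day0[0]) == 16
assert day0[0] != day1[0]     # seeds ratchet forward day by day
```

Publishing sk_0 would let anyone regenerate every subsequent day's EphIDs, which is exactly the property the positive-report mechanism relies on, and exactly why the EphIDs themselves carry no verifiable structure for a receiver.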
Problem Statement

Although storing user-generated ephemeral IDs locally reveals no information about the user, it is vulnerable to DoS attacks. Specifically, because a user cannot link messages from different secret day seeds (and even if they could, the attacker can choose arbitrary secret day seeds), an attacker with a high-power Bluetooth antenna can stream newly generated ephemeral IDs limited only by bandwidth. BLE has a bandwidth of 1-2 Mbps, so any receiver within transmission range of an attacker will need to store (based on the overhead specified in the DP-3T paper) 1-2 GB per hour, or 8-16 GB across an 8-hour day. Even allowing for compression or elimination of certain data, the incompressible EphIDs alone will amount to between 450 MB and 900 MB per hour, or 3.6 GB to 7.2 GB across an 8-hour working day. Since many phones do not have several gigabytes of available storage, such an attacker can cause the victim's phone to run out of space. We believe that the lack of authentication of ephemeral IDs makes many privacy-preserving contact tracing approaches [4, 6, 7, 8, 13, 17, 34] vulnerable to DoS attacks.

Out-of-Scope Problems

Our approaches are designed for the attacks listed above, and not for other, similar attacks. In this section, we describe similar attacks that we do not attempt to address, and discuss the design choices that led us to draw these boundaries. Our DoS attack model targets MAC- and PHY-compliant attackers. While previous work has examined jamming attacks against the physical layer [22, 23, 27, 33] and other attacks against the MAC layer [11, 14, 15, 18, 19, 25, 28], the aim of digital contact tracing is to be immediately available on commodity hardware, which includes commodity wireless chipsets. As a result, mechanisms that require new approaches to the physical or MAC layer are non-starters for these protocols.
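The storage figures in the Problem Statement above follow directly from the link rate; a quick back-of-the-envelope check (assuming 16-byte EphIDs and the 36-byte low-cost record from DP-3T, and idealizing away framing overhead):

```python
def flood_bytes_per_hour(link_bps, record_bytes, ephid_bytes=16):
    """Storage consumed per hour if an attacker saturates the link
    with back-to-back EphIDs and every one is stored as a record."""
    ephids_per_sec = (link_bps / 8) / ephid_bytes
    return ephids_per_sec * record_bytes * 3600

# At BLE's 1 Mbps, storing full 36-byte records:
records = flood_bytes_per_hour(1_000_000, 36)
print(round(records / 1e9, 2), "GB/hour")             # 1.01 GB/hour

# EphID payload alone (the incompressible part):
raw = flood_bytes_per_hour(1_000_000, 16)
print(round(raw / 1e6), "MB/hour")                    # 450 MB/hour
print(round(8 * raw / 1e9, 1), "GB per 8-hour day")   # 3.6 GB per 8-hour day
```

Doubling the link rate to 2 Mbps doubles each figure, reproducing the 1-2 GB/hour and 450-900 MB/hour ranges quoted above.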
Furthermore, the level of complexity required to build jamming devices or MAC-layer attacks makes these attacks less practical than ones that can be mounted from any Bluetooth-compatible system. When future devices include support for more robust physical and MAC layers, our work can integrate with those approaches for greater robustness.

Proposed Approach

In this section, we describe our approach to the attacks discussed earlier. First, we examine how ephemeral IDs can be verified. Previous work [12, 26, 29, 30, 36] has developed schemes that use an attacker's resource limitations (e.g., the ability to compute computational puzzles or solve CAPTCHAs) to create fairness between normal users and automated attackers. Our approach to BlindSignedIDs builds on this work to limit the ability of adversaries to create unbounded numbers of valid ephemeral IDs. In particular, verifying a user's real identity or phone number can serve as the basis for generating a set of valid identifiers in a privacy-preserving way.

Generating verifiable ephemeral identifiers

BlindSignedIDs. We build verifiable ephemeral identifiers on subliminal-free blind signatures [21], illustrating the approach using RSA blind signatures. The signer first generates a public-private key pair for each day; we call (e_t, d_t) the verification and signing exponents used on day t - 2 to create credentials for day t. (The modulus N should also change from day to day.) The verification exponent, the modulus, and a Prefix_t (used to pad the EphID to a length suitable for signing) are all published in advance of day t - 2. On day t - 2, the user creates a main secret day seed sk_t, and uses a pseudo-random generator on sk_t to generate M secondary secret day seeds:

    SK_t_1 || ... || SK_t_M = PRG(PRF(sk_t))    (2)

(where M is chosen to reduce the chance that an attacker generates EphIDs corresponding to a secret key; we suggest M = 100) and generates M sets of EphIDs using

    {EphID_1 || ... || EphID_n}_i = PRG(PRF(SK_t_i, "broadcast key i")),  i = 1, ..., M    (3)

The user then blinds each EphID. To blind EphIDs in a manner easily verifiable by the signer, the user chooses a main blinding seed b_t and uses a pseudo-random generator on b_t to generate M secondary blinding seeds b_t_i, one for each secondary secret day seed:

    b_t_1 || ... || b_t_M = PRG(PRF(b_t))    (4)

The user then uses a pseudo-random generator on each b_t_i to compute a set of blinding values for each EphID, with

    {r_1 || ... || r_n}_i = PRG(PRF(b_t_i)),    R_ij = r_ij^{e_t}    (5)

where e_t is the signer's public exponent, and blinds the EphID:

    EphID'_ij := (Prefix_t || EphID_ij) * R_ij    (6)

Prefix_t is a value chosen for that day to pad the EphID to the size of the RSA group. For example, if we use 2048-bit RSA and a 104-bit EphID, Prefix_t is 1944 bits. In this manner, the user generates M sets of n blinded EphIDs, each of which corresponds to a single secondary secret day seed. The user then contacts the backend server, proves control of the required limited resource (e.g., her phone number), and obtains the signer's blind signatures over the selected set s:

    SD'_is := (EphID'_is)^{d_t}    (7)

where d_t is the private key for signing on day t - 2 (for use on day t). If the EphIDs are not correctly generated, the signer adds the user to a blocklist, such that the credentials she used are not valid at the signer for a certain period of time (the length of this period should depend on the possibility of reassignment; for example, longer-term blocking for national identities and social media accounts may be reasonable, but blocks for phone numbers should expire after a few months). By ensuring that EphIDs are generated correctly, we can ensure that a positive test can be represented by a single secondary secret day seed, so that anyone broadcasting verifiable ephemeral identifiers can correctly report a positive test. Figure 1 illustrates the registration process.
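The blinding, signing, and unblinding steps (Equations 5-8) follow textbook RSA blind signatures. The toy sketch below uses a small hard-coded key, far too short for real use, and omits the Prefix_t padding and the seed-derived blinding values; it only illustrates the algebra:

```python
import hashlib
import secrets
from math import gcd

# Toy RSA "day key" (illustration only; the paper uses 2048-bit keys).
P, Q = 1000003, 1000033                      # small primes
N = P * Q
E_T = 65537                                  # verification exponent e_t
D_T = pow(E_T, -1, (P - 1) * (Q - 1))        # signing exponent d_t

def blind(ephid_int):
    """User: pick r, send EphID' = EphID * r^{e_t} mod N (Eqs. 5-6)."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            return (ephid_int * pow(r, E_T, N)) % N, r

def sign_blinded(blinded):
    """Signer: SD' = (EphID')^{d_t} mod N (Eq. 7); never sees EphID."""
    return pow(blinded, D_T, N)

def unblind(sd_blinded, r):
    """User: SD = SD' * r^{-1} mod N = EphID^{d_t} mod N (Eq. 8)."""
    return (sd_blinded * pow(r, -1, N)) % N

ephid = int.from_bytes(hashlib.sha256(b"EphID").digest()[:4], "big") % N
blinded, r = blind(ephid)
sd = unblind(sign_blinded(blinded), r)
assert pow(sd, E_T, N) == ephid              # anyone can verify with e_t
```

Because the signer only ever sees EphID * r^{e_t} mod N for a random r, the signed message is information-theoretically hidden from it, which is what keeps the certificate authority from linking EphIDs to users. (`pow(x, -1, n)` for modular inverses requires Python 3.8+.)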
The user computes the signatures SD_is from the signed values SD'_is by inverting the r_is used to blind each EphID_is:

    SD_is := SD'_is * r_is^{-1} mod N
           = ((Prefix_t || EphID_is) * R_is)^{d_t} * r_is^{-1} mod N
           = (Prefix_t || EphID_is)^{d_t} * r_is^{e_t * d_t} * r_is^{-1} mod N
           = (Prefix_t || EphID_is)^{d_t} * r_is * r_is^{-1} mod N
           = (Prefix_t || EphID_is)^{d_t} mod N    (8)

All EphIDs generated and signed on day t - 2 are used on day t. We could choose an arbitrary period for signing and refreshing keys, but we choose a day, in line with DP-3T [34].

Authenticating verifiable ephemeral identifiers in Beacons

Because BLE beacons have only 31 bytes of space, of which 3 bytes are used for BLE flags, including a blind signature for an EphID in the beacon would require the use of several beacons, which could enable computational attacks on packet defragmentation. To avoid these possibilities, we propose to have the signer exchange signed EphIDs for EphIDs authenticated using TESLA [31]. TESLA uses authenticators based on symmetric cryptography to provide broadcast authentication, using the time at which information is disclosed as the mechanism for creating asymmetry. In particular, the entity wishing to authenticate messages (in this case, the signer) creates a sequence of keys using a one-way hash function, generated in the reverse of the order in which they are used, so that k_i = H(k_{i+1}), together with a schedule at which these keys will be released: t_1, t_2 = t_1 + T, ..., t_{i+1} = t_i + T, ..., as illustrated in Figure 2. The signer releases key k_i at time t_i, so if a verifier receives a message before the signer's clock has reached t_i, the verifier knows that the message is authentic if it is authenticated using key k_i. TESLA relies on loose time synchronization for security, meaning that each clock must be within a bounded margin of error ∆ of the signer's clock (in our design, T will be approximately 5 minutes, and the maximum time-synchronization error will be around 10 seconds).
Thus, if a message is received before time t_i - ∆ on the receiver's clock, the receiver knows that it is before time t_i on the signer's clock. To exchange RSA blind-signed verifiable ephemeral identifiers for TESLA-authenticated MACs corresponding to each EphID, a user requests them through a MIX [20] on day t - 1, providing a nonce, an encryption key, the EphID, the corresponding signature SD, and the time interval at which the user intends to use them on day t (these time intervals should be standardized across all requesters to minimize privacy loss). The signer verifies the signature and, if it is valid and has not been requested before, generates the authenticator (Auth) for the requested time interval. The signer then encrypts the authenticator using the encryption key and publishes the nonce and the encrypted authenticator. On day t - 1, a user can retrieve her authenticators for day t using one of three approaches: (i) full download: the user downloads the entire list of nonces and encrypted authenticators, discarding all values other than the ones chosen by the user; (ii) partial download: the user initially constructs the nonces so that they are equal in some bits; the user then requests a subset of the list by specifying the bits that are equal across all of that user's requests; (iii) individual download: a user with limited resources can also request her authenticators through an anonymous connection such as Tor [9]. Because authenticators for each EphID are requested separately through a MIX, the signer cannot link two EphIDs, except to know that EphIDs authenticated for the same time are likely to be from different users. When broadcasting a beacon, a user broadcasts the EphID and the TESLA authenticator for the current time interval. When receiving a beacon, a receiver records the EphID, the Auth, the time of receipt, and the signal strength.
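The broadcast itself fits one BLE advertisement: 3 bytes of flags plus a 28-byte payload carrying the 13-byte EphID and 13-byte Auth. The sketch below packs and unpacks such a frame; the 2-byte AD header (length byte plus a manufacturer-specific type byte) is an assumption about how the 26 payload bytes are framed, which the paper does not specify.

```python
EPHID_BYTES = AUTH_BYTES = 13

def pack_beacon(ephid, auth):
    assert len(ephid) == EPHID_BYTES and len(auth) == AUTH_BYTES
    flags = bytes([0x02, 0x01, 0x06])       # standard 3-byte BLE flags AD
    ad = bytes([27, 0xFF]) + ephid + auth   # assumed AD header + 26-byte body
    return flags + ad                       # 31 bytes total

def unpack_beacon(pkt):
    body = pkt[5:]                          # skip flags AD and assumed header
    return body[:EPHID_BYTES], body[EPHID_BYTES:]

pkt = pack_beacon(b"\x11" * EPHID_BYTES, b"\x22" * AUTH_BYTES)
```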
A beacon is kept for two time intervals; if the two keys following the time of receipt do not match the Auth, then the receiver knows that the EphID is not authenticated for the interval in which it was used (see the evaluations in Section 7.1). Finally, we use 13 bytes (104 bits) for both EphIDs and Auths to fit within the 28-byte payload limit, as shown in Figure 3.

Storing Received Beacons

When a device receives a beacon, the user stores it as (EphID_i, Auth_i, time, RSSI). When the user obtains the currently released key k_i at time t_i from the TESLA server, she first verifies k_i against k_{i-1} using the one-way hash function; she then verifies each EphID received in the previous period by checking its authenticator Auth_i. If the Auth_i is valid, the device accepts the record and keeps it in longer-term storage; authenticator verification can be performed in-place to mitigate DoS attacks. Optionally, our protocol can be combined with [32] to prevent relay and replay attacks. However, the details of preventing relay and replay attacks are beyond the scope of our protocol.

Checking Exposure

When a health authority determines that a user is a positive case, all her secondary secret-day seeds and the selected numbers s during the infectious period (typically starting 2 days before symptoms appear) are published in a blockchain, together with a sequential case number for the day of publication. Each user fetches the list of these positive cases, generates EphIDs from the secondary secret-day seeds and the selected numbers s, and compares them with the EphIDs in her local storage. If there is a match, the device can suspect contact with a positive case.

FinalTrial. We propose to use the blockchain to provide an additional defense against identifier-spoofing attacks.
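The exposure check described above amounts to re-deriving EphIDs from the published secondary secret-day seeds and intersecting them with local storage. This sketch reuses the same HMAC-SHA256/SHAKE-256 stand-ins for the abstract PRF and PRG; the seed values and the simulated contact are fabricated for illustration.

```python
import hashlib
import hmac

EPHID_BYTES = 13

def ephids_from_seed(seed, n):
    # Re-derive the EphID set exactly as the positive case generated it
    # (HMAC-SHA256 as PRF, SHAKE-256 as PRG -- assumptions, as before).
    key = hmac.new(seed, b"broadcast key", hashlib.sha256).digest()
    stream = hashlib.shake_256(key).digest(n * EPHID_BYTES)
    return {stream[i * EPHID_BYTES:(i + 1) * EPHID_BYTES] for i in range(n)}

# Local storage of received (and TESLA-verified) EphIDs.
local = {b"\xaa" * EPHID_BYTES, b"\xbb" * EPHID_BYTES}

# A positive case publishes a secondary secret-day seed; everyone re-derives.
published = ephids_from_seed(b"positive-seed-day-42", 96)
local |= {next(iter(published))}   # simulate one real contact during that day

matches = local & published
exposed = len(matches) > 0
```

Matching is a plain set intersection, so the cost per published seed is linear in the number of EphIDs that seed expands to.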
When a user generates BlindSignedIDs, she also requests blind signatures for a set of codes of the form i || n_i, where n_i is a nonce and i represents a sequential case number; these codes are signed with a daily signing key different from the signing key used for BlindSignedIDs. Specifically, the user generates sufficient codes to respond to the maximum number of positive tests that might reasonably affect her jurisdiction in a day; each such code contains a nonce n_i. She then builds a Merkle tree over these nonces by first hashing each one, then hashing adjacent pairs up to a tree root. She blinds the tree root R by calculating R * r_i^{e_t}, has it signed as s_i = (R * r_i^{e_t})^{d_t}, and unblinds the signature by calculating SP = r_i^{-1} * s_i = R^{d_t}. When a device matches positive case number i, instead of immediately notifying the user, the device posts n_i, SP, and the Merkle path required for verification to the blockchain, allowing each user to determine the number of matches corresponding to each positive case. When a single positive case experiences an excessive number of matches, the device may choose to warn the user that the match seems unlikely, or may ignore the match entirely.

Privacy Analysis

In our design, real IDs (such as phone numbers) are revealed to a signing server before the server provides EphIDs. Since EphIDs are blind-signed with a subliminal-channel-free signature, each EphID is indistinguishable, and could be associated with any real ID that was presented on the day on which it was signed.

Lemma 1. Given two signed EphIDs, EphID_1 and EphID_2, and two possible identities, A and B, the signer can gain no advantage in determining which EphID is assigned to which identity; that is, any attempt to assign EphID_i to A succeeds with probability 1/2.

The set R is sent to the signer, and the signer signs each EphID in R using the private key d.
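The FinalTrial Merkle construction (hash each nonce, then hash adjacent pairs to a root, reveal one leaf with its sibling path) can be sketched as follows; SHA-256 and duplicating the last node on odd-sized levels are assumed details the paper leaves open.

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # pad odd levels (an assumption)
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    level = [H(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        path.append((level[sib], sib < idx))  # (sibling hash, sibling-is-left)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, path, root):
    h = H(leaf)
    for sib, sib_is_left in path:
        h = H(sib + h) if sib_is_left else H(h + sib)
    return h == root

nonces = [f"nonce-{i}".encode() for i in range(8)]
root = merkle_root(nonces)
proof = merkle_path(nonces, 5)
```

In the protocol it is this root R that gets blinded, blind-signed, and unblinded to SP = R^{d_t}; a device posting a match reveals only n_i, SP, and the sibling path, never the other nonces.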
Since r_i^{ed} = r_i mod N, the signer generates a signature set SD:

SD := {EphID_1^d * r_1 mod N, ..., EphID_n^d * r_n mod N}   (10)

The signer learns that the signature set SD is associated with the user's real ID. However, consider any pair of values (x, y) in the signature set SD:

(EphID_x^d * r_x mod N, EphID_y^d * r_y mod N) ← SD   (11)

Since the blinding factors are randomly generated, they could take on any value. For example, if we consider two EphIDs, EphID_1 and EphID_2, blinded as (EphID_1 * r_1^e, EphID_2 * r_2^e), these are indistinguishable from (EphID_2 * (EphID_1 * EphID_2^{-1} * r_1^e), EphID_1 * (EphID_2 * EphID_1^{-1} * r_2^e)). In other words, for each EphID EphID_i and each blinded EphID EphID_j * r_j^e, there exists a blinding factor for EphID_i such that the signing request for EphID_j would create a signature for EphID_i. Naturally, the security of the blind-signature scheme requires that such blinding factors be computationally difficult to compute, but they exist and are equiprobable: specifically, if a signer generates N signatures on one day, then for any given EphID there are N possible blinding factors, each of which corresponds to one blind-signature request. Each such blinding factor is equally probable from the signer's point of view. When a user requests authenticators from the signer, she also includes the time interval and a nonce; the signer therefore learns only which EphIDs are used in each time interval, and the correlation between nonces and EphIDs. First, if the signer is not compromised, there is no privacy loss, because only the signer knows the correlation between EphID and nonce. Even when the signer is compromised, a user adopting the full download method reveals no information about her authenticators. However, the size of the download lists might be multiple gigabytes if there are millions of participants.
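The equivalence underlying Lemma 1 can be demonstrated with toy numbers: any blinded value explained by EphID_1 can equally be explained by EphID_2 under a different blinding factor. Exhibiting that factor requires an e-th root mod N, so only a party knowing d can compute it here; the toy modulus and values are illustrative only.

```python
# Toy parameters -- far too small to be secure.
p, q = 1000003, 1000033
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

id1, id2 = 111111111 % N, 222222222 % N
r1 = 31337
blinded = (id1 * pow(r1, e, N)) % N        # what the signer actually sees

# The alternative blinding factor r' = (id1 * id2^{-1} * r1^e)^d exists,
# but is infeasible to compute without d -- which is exactly why all
# blinding factors are equiprobable from the signer's point of view.
r_prime = pow((id1 * pow(id2, -1, N) * pow(r1, e, N)) % N, d, N)

assert (id2 * pow(r_prime, e, N)) % N == blinded
```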
The partial download method saves user resources at the cost of some privacy in case of a compromised signer; however, by revealing only a small part of the nonce (e.g., 1 byte of a 16-byte nonce), each user still retains a significant anonymity set. Finally, the individual download method loses more privacy in case of a compromised signer, but the specific amount of privacy loss depends on the mechanism by which the values are downloaded (for example, when downloading over Tor [9], the privacy properties are the same as Tor's).

Security Analysis

Mitigating DoS attacks. BlindSignedIDs mitigate DoS attacks by ensuring that each EphID stored by a user is an EphID generated for a valid user. BlindSignedID provides an in-place verification to mitigate DoS attacks on local storage, in that the user will reject an arbitrarily generated EphID that is not associated with a real identity.

Evaluations

BlindSignedID on DoS attacks

Single attacker. In this section, we evaluated BlindSignedID by sending BLE beacons from mobile devices. Figure 4 shows our experimental setup. We implemented a key-release UDP server as the TESLA server, with a key released every 5 minutes. Moreover, we built an Android application to advertise BlindSignedIDs and scan BLE beacons on two victim phones placed at a distance of around 20 cm. For the single attacker, we generated a set of 70,000 random EphIDs sent by a single Raspberry Pi 3 Model B at a distance of 1.5 m. We used the first 13 bytes of HMAC-SHA1 as the authenticator. Each random BLE beacon was broadcast with a rotation time of 20 ms. The receiver temporarily stored each EphID with the time of first receipt, duration, RSSI, and authenticator. Whenever a key release is scheduled, the receiver fetches the key over UDP, checks the key, and, for each record, attempts to verify the record.
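The partial-download trade-off above is a simple bucketing argument: revealing b bits of the nonce partitions requesters into 2^b buckets, so the expected anonymity set is the daily request volume divided by 2^b. The request volume below is a fabricated example figure.

```python
# Back-of-the-envelope anonymity sets for the partial-download option.
def expected_bucket_size(n_requests, revealed_bits):
    # Revealing b nonce bits splits n_requests into 2^b equal buckets
    # (in expectation, assuming uniformly random nonces).
    return n_requests / 2 ** revealed_bits

# e.g. 10 million authenticator requests per day, 8 revealed bits
# (1 byte of the 16-byte nonce, as in the text).
size = expected_bucket_size(10_000_000, 8)
```

With these example numbers, roughly 39,000 requests share each revealed prefix, so even a compromised signer sees each authenticator hidden inside a large bucket.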
When a record is verified, the receiver puts the time of first receipt, duration of contact, and RSSI into local storage, consuming 38 bytes per EphID. Figure 5 shows our results. Our BlindSignedID rejected all DoS records periodically created by the attacking set, preventing nearly 30,000 fake EphIDs from being stored within 30 minutes. Consequently, the original EphID designs are vulnerable to DoS attacks, and BlindSignedID can mitigate such attacks in-place. In an actual attack, the attacker can further modify the BlueZ stack to transmit BLE beacons as frequently as possible, so DoS attacks are even more severe under such circumstances.

Multiple attackers. In addition, we evaluated BlindSignedID with multiple attackers to somewhat mitigate the long interval between attacker EphIDs. We generated 4 sets of 4,320,000 random EphIDs, which were used by 2 and 4 Raspberry Pis. Figure 6 shows that 2 and 4 attackers using off-the-shelf BlueZ broadcast intervals can cause approximately 1,227,000 and 1,891,000 stored EphIDs within 8 hours, representing 645 MBytes and 1.076 GBytes across 14 8-hour days, respectively. Due to networking and processing delays, receivers do not filter all of the invalid EphIDs every 5 minutes, but the system still trends towards significant reductions in storage requirements.

Crowded environment. We performed 3 experiments in which over 14 people were organized in grid positions with spacings of 0.5 m, 1 m, and 1.5 m in a middle-size room. (These experiments were conducted in compliance with local social distancing requirements.) We used 6 Raspberry Pis as the attackers to send random EphIDs near the same power outlet. Each scenario was conducted for 1-2 minutes. Table 1 reports the numbers of received EphIDs and the real EphIDs verified by BlindSignedID. As distance increases, devices receive fewer EphIDs due to signal strength losses over longer distances.
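The multi-attacker storage figures follow roughly from 38 bytes per record, 8 attack hours per day, and a 14-day retention window; the sketch below reproduces them approximately (the small gap to the reported 645 MBytes and 1.076 GBytes presumably reflects per-record overheads not stated in the text).

```python
# Rough storage arithmetic behind the multi-attacker numbers.
BYTES_PER_RECORD = 38
DAYS = 14

def storage_mb(stored_per_8h):
    # Total storage in megabytes across DAYS days of 8-hour attacks
    return stored_per_8h * BYTES_PER_RECORD * DAYS / 1e6

two_attackers = storage_mb(1_227_000)    # paper reports ~645 MBytes
four_attackers = storage_mb(1_891_000)   # paper reports ~1.076 GBytes
```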
Also, our BlindSignedID continues to effectively reduce storage consumption, removing over 90% of the identities produced by DoS attacks.

Related Works

This section reviews several works that inspired BlindSignedID.

Digital Contact Tracing. BlindSignedID mostly follows the current design of DP-3T [34], in which a secret day seed is used to generate a set of ephemeral identifiers. Most current projects follow similar designs; however, current designs suffer from DoS attacks. Although BlindSignedID is based on DP-3T, it can be applied to a wide range of projects based on ephemeral IDs, such as [4,6,7,8,13,17,34]. Moreover, the user registration of BlueTrace [13] uses the phone number to acquire a unique, randomised user ID from the backend server. Like BlueTrace, BlindSignedID requires real IDs (e.g., phone numbers), but the BlueTrace server can learn which particular phone number is linked to which user ID. In other words, that signer can violate the user's privacy; in BlindSignedID, by contrast, the signer learns no information that can associate an ephemeral identifier with a real ID, other than the day on which the EphID was signed and the set of real IDs that authenticated on that day. The ROBERT protocol [16], a framework related to PEPP-PT [7], suggests that a proof-of-work system can be used during application registration to prevent automatic registrations, but this does not solve DoS attacks on local storage in-place, and the wide range of computational power available to different devices makes it difficult to simultaneously allow users of older mobile phones to create identities while preventing well-resourced attackers from creating thousands of identities.

Replay and Relay Attacks. [32,35] seek to prevent replay and relay attacks on DP-3T. Such attacks can be performed by broadcasting ephemeral IDs generated from published secret seeds. Since the published seeds belong to positive cases, other users receive false alerts due to such attacks.
[35] proposes an interactive protocol to prevent such attacks, but it might not be efficient for digital contact tracing. [32] further presents Delayed Authentication, a non-interactive method to prevent (long-term) replay and relay attacks: it ties a BLE broadcast beacon to time and location information through a key, and the receiver verifies potential positives with the backend server. BlueTrace [13] uses encrypted user IDs with timestamps as TempIDs for beaconing, thus mitigating long-term replay attacks (but not relay attacks) by validating the timestamps of each TempID after the user uploads all records. In summary, these proposals prevent long-term replay attacks, and potentially relay attacks, by including time information and keys in BLE beacons. However, current digital contact tracing systems still suffer from DoS attacks.

Conclusion

In this paper, we presented BlindSignedIDs, which can mitigate DoS attacks on current digital contact tracing designs. In particular, we proposed BlindSignedIDs that are verifiable without compromising the user's privacy. Anonymous identifiers do make mass surveillance difficult to conduct; however, using anonymous identifiers without verification introduces security issues into the current designs. Finally, we evaluated our BlindSignedIDs with example DoS attacks and showed that BlindSignedIDs can reduce their impact.

Figure 1: User Registration.

The user proves her identity (for example, a phone number, social media account, or national identity), and sends the set of all M·n blinded EphIDs to the signer. The signer first verifies the identity and ensures that it has not previously signed certificates for this identity on this day. The signer then chooses one set of blinded EphIDs and asks the user to reveal the blinding factors and secondary secret day seeds of all remaining sets. The user then reveals the requested secret day seeds and their corresponding blinding seeds.
If all revealed EphIDs were generated correctly, the signer blindly signs all resulting values from the selected set and sends the signed values back to the user, as shown in Equation 7.

Figure 2: TESLA key release schedule. Figure 3: BLE Beacon Format.

Proof. The signer generates an RSA key pair (e, d) and a public modulus N, and publishes the public exponent and modulus (e, N). The user generates a set of ephemeral identifiers, {EphID_1, ..., EphID_n}, and a set of random numbers raised to the public exponent e modulo N to create blinding factors, {r_1^e mod N, ..., r_n^e mod N}. She multiplies the ephemeral identifiers with the blinding factors to obtain the resulting value set R:

R := {EphID_1 * r_1^e mod N, ..., EphID_n * r_n^e mod N}   (9)

Figure 4: DoS attacks experimental setup. Figure 5: Stored EphIDs sent by a single attacker. Figure 6: Stored EphIDs sent by multiple attackers.

Table 1: DoS attacks in a crowded indoor environment.

Scenario Distance | Original EphIDs | BlindSignedID | Reduced Rate
0.5 m             | 12,122          | 181           | 96.72%
1.0 m             | 6,526           | 76            | 97.87%
1.5 m             | 2,098           | 28            | 93.59%

Acknowledgment

References

Bluetooth technology protecting your privacy. https://www.bluetooth.com/blog/bluetooth-technology-protecting-your-privacy/. Accessed: 2020-08-20.
BlueZ: le set advertising parameters cp. https://github.com/pauloborges/bluez/blob/master/lib/hci.h#L1496. Accessed: 2020-07-08.
CDC digital contact tracing tools for COVID-19. https://www.cdc.gov/coronavirus/2019-ncov/downloads/digital-contact-tracing.pdf. Accessed: 2020-06-26.
Google/Apple privacy-preserving contact tracing. https://www.apple.com/covid19/contacttracing. Accessed: 2020-05-28.
Japan releases contact-tracing app using Apple and Google tech. https://www.engadget.com/microsoft-built-japans-contacttracing-app-using-apple-and-google-tech-105556846.html. Accessed: 2020-06-23.
Pan-European privacy-preserving proximity tracing. https://www.pepp-pt.org. Accessed: 2020-06-06.
UK abandons contact-tracing app for Apple and Google model. https://www.theguardian.com/world/2020/jun/18/uk-poised-to-abandon-coronavirus-app-in-favour-of-apple-and-google-models. Accessed: 2020-06-23.
Baruch Awerbuch, Andrea Richa, and Christian Scheideler. A jamming-resistant MAC protocol for single-hop wireless networks. In Proceedings of the Twenty-Seventh ACM Symposium on Principles of Distributed Computing, pages 45-54, 2008.
Vikas Bajaj. Spammers pay others to answer security tests. https://www.nytimes.com/2010/04/26/technology/26captcha.html, April 2010. Accessed: 2020-06-19.
Jason Bay, Joel Kek, Alvin Tan, Chai Sheng Hau, Lai Yongquan, Janice Tan, and Tang Anh Quy. BlueTrace: A privacy-preserving protocol for community-driven contact tracing across borders. Government Technology Agency-Singapore, Tech. Rep., 2020.
John Bellardo and Stefan Savage. 802.11 denial-of-service attacks: Real vulnerabilities and practical solutions. In USENIX Security Symposium, volume 12, Washington DC, 2003.
Alvaro A. Cárdenas, Svetlana Radosavac, and John S. Baras. Performance comparison of detection schemes for MAC layer misbehavior. In IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications, pages 1496-1504. IEEE, 2007.
Claude Castelluccia, Nataliia Bielova, Antoine Boutet, Mathieu Cunche, Cédric Lauradoux, Daniel Le Métayer, and Vincent Roca. ROBERT: Robust and privacy-preserving proximity tracing. 2020.
Justin Chan, Dean Foster, Shyam Gollakota, Eric Horvitz, Joseph Jaeger, Sham Kakade, Tadayoshi Kohno, John Langford, Jonathan Larson, Puneet Sharma, Sudheesh Singanamalla, Jacob Sunshine, and Stefano Tessaro. PACT: Privacy sensitive protocols and mechanisms for mobile contact tracing. arXiv:2004.03544 [cs.CR], 2020.
Sang-Yoon Chang and Yih-Chun Hu. SecureMAC: Securing wireless medium access control against insider denial-of-service attacks. IEEE Transactions on Mobile Computing, 16(12):3527-3540, 2017.
Sang-Yoon Chang, Yih-Chun Hu, and Nicola Laurenti. SimpleMAC: A simple wireless MAC-layer countermeasure to intelligent and insider jammers. IEEE/ACM Transactions on Networking, 24(2):1095-1108, 2015.
David Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2):84-90, 1981.
David Chaum. Blind signatures for untraceable payments. In Advances in Cryptology, pages 199-203. Springer, 1983.
Jerry T. Chiang and Yih-Chun Hu. Cross-layer jamming detection and mitigation in wireless broadcast networks. IEEE/ACM Transactions on Networking, 19(1):286-298, 2010.
Yvo Desmedt, Rei Safavi-Naini, Huaxiong Wang, Lynn Batten, Chris Charnes, and Josef Pieprzyk. Broadcast anti-jamming systems. Computer Networks, 35(2-3):223-236, 2001.
Luca Ferretti, Chris Wymant, Michelle Kendall, Lele Zhao, Anel Nurtay, Lucie Abeler-Dörner, Michael Parker, David Bonsall, and Christophe Fraser. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science, 368(6491), 2020.
Vikram Gupta, Srikanth Krishnamurthy, and Michalis Faloutsos. Denial of service attacks at the MAC layer in wireless ad hoc networks. In MILCOM 2002 Proceedings, volume 2, pages 1118-1123. IEEE, 2002.
Markus Jakobsson and Ari Juels. Proofs of work and bread pudding protocols. In Secure Information Networks, pages 258-272. Springer, 1999.
Hedy Kiesler Markey and George Antheil. Secret communication system. US Patent 2,292,387, August 11, 1942.
Pradeep Kyasanur and Nitin H. Vaidya. Selfish MAC layer misbehavior in wireless networks. IEEE Transactions on Mobile Computing, 4(5):502-516, 2005.
Zhuotao Liu, Yushan Liu, Philipp Winter, Prateek Mittal, and Yih-Chun Hu. TorPolice: Towards enforcing service-defined access policies for anonymous communication in the Tor network. In 2017 IEEE 25th International Conference on Network Protocols (ICNP), pages 1-10. IEEE, 2017.
Bryan Parno, Dan Wendlandt, Elaine Shi, Adrian Perrig, Bruce Maggs, and Yih-Chun Hu. Portcullis: Protecting connection setup from denial-of-capability attacks. ACM SIGCOMM Computer Communication Review, 37(4):289-300, 2007.
Adrian Perrig, Ran Canetti, J. Doug Tygar, and Dawn Song. The TESLA broadcast authentication protocol. RSA CryptoBytes, 5(2):2-13, 2002.
Krzysztof Pietrzak. Delayed authentication: Preventing replay and relay attacks in private contact tracing. Cryptology ePrint Archive, Report 2020/418, 2020. https://eprint.iacr.org/2020/418.
Christina Pöpper, Mario Strasser, and Srdjan Čapkun. Anti-jamming broadcast communication using uncoordinated spread spectrum techniques. IEEE Journal on Selected Areas in Communications, 28(5):703-715, 2010.
Carmela Troncoso, Mathias Payer, Jean-Pierre Hubaux, Marcel Salathé, James Larus, Edouard Bugnion, Wouter Lueks, Theresa Stadler, Apostolos Pyrgelis, Daniele Antonioli, et al. Decentralized privacy-preserving proximity tracing. arXiv:2005.12273 [cs.CR], 2020.
Serge Vaudenay. Analysis of DP3T. Cryptology ePrint Archive, Report 2020/399, 2020. https://eprint.iacr.org/2020/399.
Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum. reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895):1465-1468, 2008.
Wikipedia. List of countries and dependencies by population. https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population, 2020. Accessed: 2020-06-26.
[ "https://github.com/pauloborges/bluez/blob/" ]
[ "Kinetic field theory: Generic effects of alternative gravity theories on non-linear cosmic density-fluctuations", "Kinetic field theory: Generic effects of alternative gravity theories on non-linear cosmic density-fluctuations" ]
[ "A Oestreicher [email protected] \nInstitute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany\n", "L Capuano [email protected] \nSISSA\nVia Bonomea 265I-34136TriesteItaly\n\nDepartment of Physics and Astronomy \"Galileo Galilei\"\nUniversity of Padova\nVia Marzolo 8I-35131PadovaItaly\n", "S Matarrese [email protected] \nDepartment of Physics and Astronomy \"Galileo Galilei\"\nUniversity of Padova\nVia Marzolo 8I-35131PadovaItaly\n", "L Heisenberg [email protected] \nInstitute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany\n\nInstitute for Theoretical Physics\nETH Zürich\nWolfgang-Pauli-Str. 27SUI-8093ZürichSwitzerland\n", "M Bartelmann [email protected] \nInstitute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany\n" ]
[ "Institute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany", "SISSA\nVia Bonomea 265I-34136TriesteItaly", "Department of Physics and Astronomy \"Galileo Galilei\"\nUniversity of Padova\nVia Marzolo 8I-35131PadovaItaly", "Department of Physics and Astronomy \"Galileo Galilei\"\nUniversity of Padova\nVia Marzolo 8I-35131PadovaItaly", "Institute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany", "Institute for Theoretical Physics\nETH Zürich\nWolfgang-Pauli-Str. 27SUI-8093ZürichSwitzerland", "Institute for Theoretical Physics\nHeidelberg University\nPhilosophenweg 12D-69120HeidelbergGermany" ]
[]
Non-linear cosmic structures contain valuable information on the expansion history of the background space-time, the nature of dark matter, and the gravitational interaction. The recently developed kinetic field theory of cosmic structure formation (KFT) allows to accurately calculate the non-linear power spectrum of cosmic density fluctuations up to wave numbers of k 10 h Mpc −1 at redshift zero. Cosmology and gravity enter this calculation via two functions, viz. the background expansion function and possibly a time-dependent modification of the gravitational coupling strength.The success of the cosmological standard model based on general relativity suggests that cosmological models in generalized theories of gravity should have observable effects differing only weakly from those in standard cosmology. Based on this assumption, we derive the functional, firstorder Taylor expansion of the non-linear power spectrum of cosmic density fluctuations obtained from the mean-field approximation in KFT in terms of the expansion function and the gravitational coupling strength. This allows us to study non-linear power spectra expected in large classes of generalized gravity theories. To give one example, we apply our formalism to generalized Proca theories.
null
[ "https://export.arxiv.org/pdf/2210.16014v1.pdf" ]
253,224,303
2210.16014
e5f3531fb875819f13510d22fbaddcb7a1f8c0a9
Kinetic field theory: Generic effects of alternative gravity theories on non-linear cosmic density-fluctuations 28 Oct 2022 A Oestreicher [email protected] Institute for Theoretical Physics Heidelberg University Philosophenweg 12D-69120HeidelbergGermany L Capuano [email protected] SISSA Via Bonomea 265I-34136TriesteItaly Department of Physics and Astronomy "Galileo Galilei" University of Padova Via Marzolo 8I-35131PadovaItaly S Matarrese [email protected] Department of Physics and Astronomy "Galileo Galilei" University of Padova Via Marzolo 8I-35131PadovaItaly L Heisenberg [email protected] Institute for Theoretical Physics Heidelberg University Philosophenweg 12D-69120HeidelbergGermany Institute for Theoretical Physics ETH Zürich Wolfgang-Pauli-Str. 27SUI-8093ZürichSwitzerland M Bartelmann [email protected] Institute for Theoretical Physics Heidelberg University Philosophenweg 12D-69120HeidelbergGermany Kinetic field theory: Generic effects of alternative gravity theories on non-linear cosmic density-fluctuations 28 Oct 2022Prepared for submission to JCAP1 Corresponding author. Non-linear cosmic structures contain valuable information on the expansion history of the background space-time, the nature of dark matter, and the gravitational interaction. The recently developed kinetic field theory of cosmic structure formation (KFT) allows to accurately calculate the non-linear power spectrum of cosmic density fluctuations up to wave numbers of k 10 h Mpc −1 at redshift zero. Cosmology and gravity enter this calculation via two functions, viz. the background expansion function and possibly a time-dependent modification of the gravitational coupling strength.The success of the cosmological standard model based on general relativity suggests that cosmological models in generalized theories of gravity should have observable effects differing only weakly from those in standard cosmology. 
Based on this assumption, we derive the functional, first-order Taylor expansion of the non-linear power spectrum of cosmic density fluctuations obtained from the mean-field approximation in KFT in terms of the expansion function and the gravitational coupling strength. This allows us to study non-linear power spectra expected in large classes of generalized gravity theories. To give one example, we apply our formalism to generalized Proca theories.

from cosmology and from the theory of gravity. It should thus be permitted to evaluate the non-linear density-fluctuation power spectrum $P^{(\mathrm{nl})}_\delta$ in a functional, first-order Taylor expansion in terms of these functions,
$$\Delta P^{(\mathrm{nl})}_\delta(k,a) = \int_{a_\mathrm{ini}}^{a}\mathrm{d}x\left[\frac{\delta P^{(\mathrm{nl})}_\delta(k,a)}{\delta E(x)}\,\Delta E(x) + \frac{\delta P^{(\mathrm{nl})}_\delta(k,a)}{\delta G(x)}\,\Delta G(x)\right], \eqno(1.1)$$
where $\Delta[E,G](x)$ are the differences between the respective functions in a generalized gravity theory relative to the standard cosmological model at scale factor $x$. The functional derivatives have to be taken within the standard cosmological model and can thus be calculated once and for all. We note that the non-linear density-fluctuation power spectrum in KFT also depends on the linear growth factor $D_+$, because this is a convenient time coordinate for KFT, but variations in $D_+$ are determined by those in $E$ and $G$ via the linear growth equation. Besides applications to modified theories of gravity, our formalism can also be applied to cosmological theories that keep general relativity as the underlying theory of gravity but introduce non-trivial dark-energy components, such as dark energy with a time-dependent equation of state. Such theories would suggest a modified expansion function $E(a)$ and can thus be implemented via the first term in (1.1). In this paper, we calculate the functional derivatives of the non-linear density-fluctuation power spectrum with respect to $E$ and $G$, starting from the mean-field approximation of KFT.
These functional derivatives will be the main result of Sect. 3. In Sect. 4, we will illustrate the results with one particular class of generalizations of general relativity, viz. the generalized Proca theories [13]. In Sect. 2, we begin by briefly reviewing the essential concepts of KFT and the mean-field equation for the non-linear power spectrum.

2 The KFT mean-field power spectrum

Kinetic field theory (KFT) is a statistical theory for the evolution of classical particle ensembles in or out of equilibrium [11, 14-16]. It defines an initial state of the ensemble by the probability distribution for phase-space positions $x^{(i)}$ to be occupied. The Hamiltonian equations of motion for the particles on the expanding cosmological background allow constructing a retarded Green's function that evolves the particle trajectories forward in time, including particle interactions. This Hamiltonian phase-space flow defines a diffeomorphic map of the initial phase-space distribution to any later time. The information on the initial state of the particle ensemble and its time evolution is encapsulated in a generating functional $Z$. By functional derivatives of $Z$ with respect to suitably incorporated source fields, statistical information on the evolved ensemble can be extracted, such as the power spectrum or higher-order spectra of evolved density fluctuations. This approach based on particle trajectories in phase space has three major advantages over more conventional, analytic methods for studying cosmic structure formation. First, particle trajectories in phase space do not cross, avoiding by construction the shell-crossing problem notorious in cosmology. Second, with the Zel'dovich approximation or related approximations [17, 18], a free reference or inertial motion can be chosen for the particles which already incorporates part of the gravitational interaction.
Third, even small perturbations of particle trajectories can lead to arbitrarily high densities, which allows entering the regime of non-linear density evolution with low-order perturbation theory. Particle interactions are described by an interaction operator acting on the generating functional $Z$. Taylor expansion of this operator opens a systematic approach to perturbation theory. Calculating the non-linear density-fluctuation power spectrum at first perturbative order returns a result which agrees with the spectrum obtained from numerical simulations within 10-20% up to $k \lesssim 10\,h\,\mathrm{Mpc}^{-1}$ at redshift zero [11]. Even better agreement, at the level of 5% between analytical and numerical non-linear power spectra, can be obtained by averaging the interaction operator in a mean-field approximation [12]. In this approach, the non-linear power spectrum $P^{(\mathrm{nl})}_\delta$ is related to the linear power spectrum $P^{(\mathrm{lin})}_\delta(k,t)$ by
$$P^{(\mathrm{nl})}_\delta(k,t) \approx \mathrm{e}^{\langle S_\mathrm{I}\rangle(k,t)}\,P^{(\mathrm{lin})}_\delta(k,t)\,, \eqno(2.1)$$
where the scale- and time-dependent, mean-field averaged interaction term is
$$\langle S_\mathrm{I}\rangle(k,t) = 3\int_0^t\mathrm{d}t'\,\frac{a}{m}\,g_\mathrm{H}(t,t')\,D_+^2\,\sigma_J^2\,G\,, \qquad g_\mathrm{H}(t,t') = \int_{t'}^{t}\frac{\mathrm{d}\bar t}{m}\,. \eqno(2.2)$$
Here, $D_+$ is the linear growth factor of cosmic density fluctuations, $g_\mathrm{H}$ is the so-called Hamilton propagator obtained by solving the equations of motion, $a$ is the scale factor, $G$ is the possibly time-dependent gravitational coupling strength normalized at some initial time, and $m$ is the effective particle mass
$$m(t) = a^3(t)\,\frac{\mathrm{d}t}{\mathrm{d}a}\,E(t)\,, \eqno(2.3)$$
where $E(t) = H(t)/H_\mathrm{i}$ is the cosmological expansion function, i.e. the Hubble function normalized by the Hubble constant at the same initial time. A suitable choice for this time is the time of cosmic recombination. For convenience, we also normalize the scale factor and the growth factor to unity at this initial time. The effective particle mass is thus unity initially and grows with time, reflecting the weakening gravitational interaction in an expanding space-time.
The quantity $\sigma_J^2$ is a second moment of the damped initial density-fluctuation power spectrum,
$$\sigma_J^2(k,t) = \frac{1}{(2\pi)^2}\int_0^\infty\mathrm{d}y\,y^2\,\tilde P^{(\mathrm{i})}(y)\,J(y/k, y_0/k)\,, \eqno(2.4)$$
effectively low-pass filtered by a filter function $J$. Since the damping depends on time, so does $\sigma_J^2$. The damped initial power spectrum is given by
$$\tilde P^{(\mathrm{i})}(y) = (1+Q_\mathrm{D})^{-1}\,P^{(\mathrm{i})}(y)\,, \eqno(2.5)$$
where
$$Q_\mathrm{D} = y^2\lambda^2(t)\,, \qquad \lambda(t) \approx \frac{t}{1+\sqrt{t/\tau}}\,\sigma_1 \quad\text{with}\quad \tau \approx 24.17\,. \eqno(2.6)$$
$\lambda$ is a damping scale and $\sigma_1$ is a moment of the density-fluctuation power spectrum,
$$\sigma_n^2 := \frac{1}{2\pi^2}\int_0^\infty\mathrm{d}k\,k^{2n-2}\,P^{(\mathrm{i})}_\delta(k)\,. \eqno(2.7)$$
For a complete derivation of these expressions, the reader is referred to [12]. As explained in [12], the power spectrum in the mean-field approximation has two open parameters, the non-linear scale $k_0$ and an effective viscosity $\nu$. These can either be estimated within KFT or chosen to optimize agreement with numerical results. For this paper, we use the optimized parameters and keep them constant throughout. The time coordinate $t$ in (2.2) is set by the linear growth factor, $t = D_+ - 1$. While this is a suitable choice for deriving the formalism, our main goal in this paper is to ask how the power spectrum changes at a given scale factor (usually $a_0 = 1000$, i.e. today). Rewriting the above expression with the scale factor as our independent variable and inserting the definition of the effective particle mass, the power spectrum takes the simple form
$$P^{(\mathrm{nl})}_\delta(k,a) \approx \mathrm{e}^{\langle S_\mathrm{I}\rangle(k,a)}\,P^{(\mathrm{lin})}_\delta(k,a) \eqno(2.8)$$
with the interaction term
$$\langle S_\mathrm{I}\rangle(k,a) = 3\int_{a_\mathrm{min}}^{a}\mathrm{d}a'\,g_\mathrm{H}(a,a')\,\frac{D_+^2\,\sigma_J^2\,G}{a'^2 E}\,, \qquad g_\mathrm{H}(a,a') = \int_{a'}^{a}\frac{\mathrm{d}\bar a}{\bar a^3 E}\,. \eqno(2.9)$$
The approximate equation (2.8) for the non-linear density-fluctuation power spectrum serves as the starting point for the present paper.
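The nested integrals in (2.9) are straightforward to evaluate numerically. The sketch below is a minimal illustration, not the paper's calibrated pipeline: it assumes a flat ΛCDM expansion function normalized today, a matter-dominated toy growth factor $D_+ \approx a$, and constant stand-ins for $\sigma_J^2$ and $G$.

```python
import numpy as np
from scipy.integrate import quad

# Toy stand-ins (assumptions, not the paper's calibrated inputs)
Om, OL = 0.3, 0.7

def E(a):
    """Flat LambdaCDM expansion function, here normalized today."""
    return np.sqrt(Om * a**-3 + OL)

def D_plus(a):
    """Matter-dominated toy growth factor, D+ ~ a."""
    return a

def g_H(a, a_prime):
    """Hamilton propagator g_H(a, a') = int_{a'}^{a} dab / (ab^3 E(ab)), eq. (2.9)."""
    val, _ = quad(lambda ab: 1.0 / (ab**3 * E(ab)), a_prime, a)
    return val

def S_I(a, a_min=1e-3, sigma_J2=1.0, G=1.0):
    """Mean-field interaction term of eq. (2.9), with constant sigma_J^2 and G."""
    integrand = lambda ap: g_H(a, ap) * D_plus(ap)**2 * sigma_J2 * G / (ap**2 * E(ap))
    val, _ = quad(integrand, a_min, a)
    return 3.0 * val

# The non-linear spectrum then follows by exponentiation, eq. (2.8):
# P_nl(k, a) = exp(S_I(k, a)) * P_lin(k, a)
```

Because the integrand is strictly positive, the toy $\langle S_\mathrm{I}\rangle$ grows monotonically with the scale factor, mirroring the build-up of non-linear power.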
3 Functional Taylor expansion of the non-linear density-fluctuation power spectrum

The theory of gravity enters into the KFT formalism in two ways: (1) through the gravitational coupling strength $G$, which becomes time- and possibly scale-dependent in some theories, and (2) through the time evolution of the background space-time in terms of the expansion function $E$. We know from observations that, if there are deviations from general relativity (GR), they have to be small. At least on small scales and for weak fields, any alternative theory of gravity (AG) would have to reproduce the predictions of GR. A successful alternative theory can thus only differ by a small amount from general relativity, and possible differences can only occur on cosmological scales or in strong fields. We can thus reasonably expect that such AG theories should predict only small corrections to the gravitational coupling and the expansion function. Under this assumption, we can express the non-linear density-fluctuation power spectrum in an alternative theory of gravity as a first-order functional Taylor expansion around its form predicted by general relativity,
$$P^{(\mathrm{nl})}_\delta[E+\delta E, G+\delta G](k,a) \approx P^{(\mathrm{nl})}_\delta[E,G] + \int_{a_\mathrm{ini}}^{a}\mathrm{d}x\,\frac{\delta P^{(\mathrm{nl})}_\delta[E,G]}{\delta E(x)}\,\Delta E(x) + \int_{a_\mathrm{ini}}^{a}\mathrm{d}x\,\frac{\delta P^{(\mathrm{nl})}_\delta[E,G]}{\delta G(x)}\,\Delta G(x)\,. \eqno(3.1)$$
This expansion can be understood in the following way: the functional derivatives $\delta P^{(\mathrm{nl})}_\delta/\delta E$ and $\delta P^{(\mathrm{nl})}_\delta/\delta G$ quantify by how much the non-linear power spectrum $P^{(\mathrm{nl})}_\delta$, evaluated at scale factor $a$, changes if the expansion function or the gravitational coupling strength are varied at some scale factor $a_\mathrm{ini} \le x \le a$ prior to $a$. We shall call $x$ the perturbance scale factor hereafter. The functions $\Delta E(x) = E_\mathrm{AG}(x) - E_\mathrm{GR}(x)$ and $\Delta G(x) = G_\mathrm{AG}(x) - G_\mathrm{GR}(x)$ are the changes in the expansion function and the gravitational coupling in the AG theory relative to GR. Since changes in $E$ and $G$ may occur at any scale factor preceding $a$, we need to integrate over the scale factor.
The functional derivatives are to be evaluated in GR and are thus independent of the specific AG theory. The only terms that need to be specified for each theory are the changes $\Delta E(x)$ and $\Delta G(x)$. We are thus able to predict the non-linear power spectrum in the mean-field approximation of KFT for any alternative theory of gravity satisfying the assumption of small deviations from GR in the background expansion and the gravitational coupling.

Functional derivatives of the growth factor

For calculating the functional derivatives of the power spectrum with respect to $E$ and $G$, we need the functional derivatives of the linear growth factor with respect to both functions. This is because the most appropriate time coordinate for KFT is $t = D_+ - 1$, which introduces an indirect dependence of the power spectrum on $E$ and $G$ through this time $t$. To determine it, we take the functional derivatives of the linear growth equation
$$D_+''(a) + \left(\frac{3}{a} + \frac{E'(a)}{E(a)}\right)D_+'(a) - \frac{3}{2}\,\frac{\Omega_\mathrm{m}(a)}{a^2}\,D_+(a) = 0 \eqno(3.2)$$
with respect to $E$ and $G$ and commute them with the derivative with respect to the scale factor $a$. KFT initial conditions are set at the time of recombination, allowing us to neglect radiation when writing down (3.2). Additionally, we assume dark-energy perturbations to be negligible. The functional derivative of the growth factor with respect to the expansion function has already been determined in [19],
$$\frac{\delta D_+(a)}{\delta E(x)} = \Theta(a-x)\,D_+(a)\left[f_E(x)\int_x^a\frac{\mathrm{d}y}{y^3 D_+^2(y)E(y)} - \frac{D_+'(x)}{D_+(x)E(x)}\right] \eqno(3.3)$$
with
$$f_E(x) = x\,D_+^2(x)\left(\Omega_\mathrm{m}^{2\gamma-1} - \frac{3}{2}\,\Omega_\mathrm{m}\right)\,, \eqno(3.4)$$
where the exponent $\gamma$ appears by writing the logarithmic derivative $\mathrm{d}\ln D_+/\mathrm{d}\ln a = \Omega_\mathrm{m}^\gamma(a)$. For the ΛCDM reference model, $\gamma \approx 6/11$ [20]. The derivative of the linear growth equation (3.2) with respect to the gravitational coupling, assuming that $\delta E/\delta G = 0$, is
$$\frac{\mathrm{d}^2}{\mathrm{d}a^2}\frac{\delta D_+(a)}{\delta G(x)} + \left(\frac{3}{a}+\frac{E'(a)}{E(a)}\right)\frac{\mathrm{d}}{\mathrm{d}a}\frac{\delta D_+(a)}{\delta G(x)} - \frac{3}{2}\,\frac{\Omega_\mathrm{m}(a)D_+(a)}{a^2}\left[\frac{\delta\ln D_+(a)}{\delta G(x)} + \frac{\delta\ln\Omega_\mathrm{m}(a)}{\delta G(x)}\right] = 0\,. \eqno(3.5)$$
Inserting the derivative
$$\frac{\delta\Omega_\mathrm{m}(a)}{\delta G(x)} = \frac{\delta}{\delta G(x)}\,\frac{8\pi G(a)\rho_\mathrm{m}}{3H^2(a)} = \frac{\Omega_\mathrm{m}(a)}{G(a)}\,\delta_\mathrm{D}(a-x) \eqno(3.6)$$
into (3.5) leads to the second-order inhomogeneous differential equation
$$\frac{\mathrm{d}^2}{\mathrm{d}a^2}\frac{\delta D_+(a)}{\delta G(x)} + \left(\frac{3}{a}+\frac{E'(a)}{E(a)}\right)\frac{\mathrm{d}}{\mathrm{d}a}\frac{\delta D_+(a)}{\delta G(x)} - \frac{3}{2}\,\frac{\Omega_\mathrm{m}(a)}{a^2}\,\frac{\delta D_+(a)}{\delta G(x)} = \frac{3}{2}\,\frac{D_+(a)}{a^2}\,\frac{\Omega_\mathrm{m}(a)}{G(a)}\,\delta_\mathrm{D}(a-x)\,. \eqno(3.7)$$
This differential equation for the function $\delta D_+/\delta G$ has the same shape as (3.2), which suggests the ansatz $\delta D_+(a)/\delta G(x) = C(a,x)\,D_+(a)$, corresponding to the familiar variation of constants. Requiring further that our result be proportional to the step function $\Theta(a-x)$ to ensure causality results in
$$\frac{\delta D_+(a)}{\delta G(x)} = \Theta(a-x)\,D_+(a)\,f_G(x)\int_x^a\frac{\mathrm{d}y}{D_+^2(y)\,y^3\,E(y)} \eqno(3.8)$$
with
$$f_G(x) = \frac{3}{2}\,\frac{\Omega_\mathrm{m}(x)}{G(x)}\,D_+^2(x)\,x\,E(x)\,. \eqno(3.9)$$
We show the logarithmic functional derivatives of the growth factor with respect to $E$ and $G$ in Fig. 1. The absolute values of both derivatives are largest for small $x$ and decrease with increasing $x$. Both derivatives increase with the scale factor $a$. This behaviour confirms the expectation that the earlier deviations occur in $E$ and $G$, and the longer they act, the larger their effect is. The derivative with respect to the gravitational coupling is strictly positive, reflecting that an increase in the coupling strength always leads to enhanced structure formation. The derivative with respect to the expansion function, on the other hand, is strictly negative because the background expansion slows down structure growth. This confirms the behaviour one would intuitively expect.

Functional derivatives of the power spectrum

Having obtained the functional dependence of the growth factor on the gravitational coupling and the expansion function, we can proceed to calculate the functional derivatives of the non-linear power spectrum with respect to $G$ and $E$. They separate into derivatives of the linear power spectrum $P^{(\mathrm{lin})}_\delta$ and of the mean interaction term $\langle S_\mathrm{I}\rangle$.
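The linear growth equation (3.2), on which all of these derivatives build, can be integrated directly once $E(a)$ is specified. A minimal sketch for an assumed flat ΛCDM background (illustrative parameter values, not the paper's fit), using matter-dominated initial conditions $D_+ = a$ at early times:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed flat LambdaCDM background (illustrative values)
Om, OL = 0.3, 0.7

def E(a):
    return np.sqrt(Om * a**-3 + OL)

def dlnE_da(a):
    """E'(a)/E(a) for the background above."""
    return -1.5 * Om * a**-4 / (Om * a**-3 + OL)

def Om_a(a):
    """Matter density parameter Omega_m(a) = Om a^-3 / E^2(a)."""
    return Om * a**-3 / E(a)**2

def rhs(a, y):
    """Growth equation (3.2) written as a first-order system y = (D+, D+')."""
    D, Dp = y
    return [Dp, -(3.0 / a + dlnE_da(a)) * Dp + 1.5 * Om_a(a) / a**2 * D]

a0 = 1e-3
# Matter-dominated initial conditions: D+ = a, D+' = 1
sol = solve_ivp(rhs, (a0, 1.0), [a0, 1.0], dense_output=True, rtol=1e-8)
D_today = sol.y[0, -1]
```

With this normalization, $D_+$ tracks $a$ deep in matter domination and falls below $a$ once dark energy takes over, which is the suppression the functional derivatives in Fig. 1 quantify perturbatively.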
The functional derivatives with respect to $X = (E, G)$ are
$$\frac{\delta P^{(\mathrm{nl})}_\delta(k,a)}{\delta X(x)} = \mathrm{e}^{\langle S_\mathrm{I}\rangle}\left[\frac{\delta P^{(\mathrm{lin})}_\delta(k,a)}{\delta X(x)} + P^{(\mathrm{lin})}_\delta(k,a)\,\frac{\delta\langle S_\mathrm{I}\rangle(k,a)}{\delta X(x)}\right]\,. \eqno(3.10)$$
We shall now calculate the functional derivatives of the linear power spectrum and of the interaction term in the next two sections, respectively.

Derivatives of the linear power spectrum

The linear power spectrum depends on $E$ and $G$ through the growth factor $D_+$, since its time evolution is given by the square of $D_+$. Its derivative with respect to $X$ is
$$\frac{\delta P^{(\mathrm{lin})}_\delta(k,a)}{\delta X(x)} = \frac{\delta}{\delta X(x)}\left[D_+^2(a)\,P^{(\mathrm{i})}_\delta(k)\right] = 2D_+(a)\,P^{(\mathrm{i})}_\delta(k)\,\frac{\delta D_+(a)}{\delta X(x)} = 2P^{(\mathrm{lin})}_\delta(k,a)\,\frac{\delta\ln D_+(a)}{\delta X(x)}\,. \eqno(3.11)$$
Equation (3.11) shows that these functional derivatives reproduce the shape of the linearly evolved power spectrum but change its amplitude by an amount proportional to the functional derivatives of the growth factor with respect to $G$ and $E$ shown in Fig. 1.

Derivatives of the interaction term

The mean interaction term $\langle S_\mathrm{I}\rangle$ depends both explicitly on the gravitational coupling and on the expansion function, and also implicitly on $G$ and $E$ through the growth factor $D_+$. We take the mean-field interaction term in the form (2.9). Taking its functional derivative with respect to $X = (G, E)$ and applying the product rule results in
$$\frac{\delta\langle S_\mathrm{I}\rangle(k,a)}{\delta X(x)} = 3\int_{a_\mathrm{min}}^{a}\mathrm{d}a'\,g_\mathrm{H}(a,a')\,\frac{D_+^2\sigma_J^2 G}{a'^2E}\left[\frac{\delta\ln g_\mathrm{H}(a,a')}{\delta X(x)} - \frac{\delta\ln E(a')}{\delta X(x)} + \frac{\delta\ln\sigma_J^2(a')}{\delta X(x)} + 2\,\frac{\delta\ln D_+(a')}{\delta X(x)} + \frac{\delta\ln G(a')}{\delta X(x)}\right]\,. \eqno(3.12)$$
Any function appearing here is understood to depend on the integration variable unless specified otherwise. To proceed further, we need the functional derivatives of $D_+$, $g_\mathrm{H}$, and $\sigma_J^2$. Having already calculated the derivatives of the growth factor, we begin with the functional derivative of $g_\mathrm{H}(a,a')$. The Hamilton propagator only depends on the expansion function $E$ and not on the effective gravitational constant $G$. Pulling the derivative into the integral, we find
$$\frac{\delta g_\mathrm{H}(a,a')}{\delta E(x)} = -\int_{a'}^{a}\frac{\mathrm{d}\bar a}{\bar a^3 E^2(\bar a)}\,\delta_\mathrm{D}(\bar a - x) = -\frac{\Theta(a-x)\,\Theta(x-a')}{x^3E^2(x)}\,. \eqno(3.13)$$
Lastly, we need the functional derivative of $\sigma_J^2$. It is not a direct function of $E$ or $G$ and depends on them only through our time variable $t = D_+ - 1$. The functional derivative thus is
$$\frac{\delta\sigma_J^2(a)}{\delta X(x)} = \dot\sigma_J^2\,\frac{\delta D_+(a)}{\delta X(x)}\,, \eqno(3.14)$$
where $\dot\sigma_J^2$ is the time derivative of $\sigma_J^2$. It can quickly be calculated using (2.5), resulting in
$$\dot\sigma_J^2 = -\frac{1}{(2\pi)^2}\int_0^\infty\mathrm{d}y\,y^4\,\sigma_\nu^2\,\lambda^2\,(1+Q_\mathrm{D})^{-2}\left(\frac{2}{t} - \frac{1}{\sqrt{t\tau}+t}\right)P^{(\mathrm{i})}(y)\,J(y/k,y_0/k)\,. \eqno(3.15)$$
Having calculated the functional derivatives of all functions appearing in the interaction term, we can now return to expression (3.12). The functional derivatives of the expansion function and of the Hamilton propagator vanish for $X = G$, while the derivative of $G$ returns a delta function. Integrating over it leaves us with
$$\frac{\delta\langle S_\mathrm{I}\rangle(k,a)}{\delta G(x)} = 3\,\Theta(a-x)\,g_\mathrm{H}(a,x)\,\frac{D_+^2(x)\,\sigma_J^2(x)}{x^2E(x)} + 3\int_{a_\mathrm{min}}^{a}\mathrm{d}a'\,g_\mathrm{H}(a,a')\,\frac{D_+^2\sigma_J^2G}{a'^2E}\left[\frac{\delta\ln\sigma_J^2(a')}{\delta G(x)} + 2\,\frac{\delta\ln D_+}{\delta G(x)}\right]\,. \eqno(3.16)$$
Taking the functional derivative of the interaction term with respect to $X = E$, the functional derivative of the gravitational coupling vanishes, and we now get a delta function for the functional derivative of $E$. Integrating over it results in
$$\frac{\delta\langle S_\mathrm{I}\rangle(k,a)}{\delta E(x)} = -3\,\Theta(a-x)\,g_\mathrm{H}(a,x)\,\frac{D_+^2(x)\,\sigma_J^2(x)\,G(x)}{x^2E^2(x)} + 3\int_{a_\mathrm{min}}^{a}\mathrm{d}a'\,g_\mathrm{H}(a,a')\,\frac{D_+^2\sigma_J^2G}{a'^2E}\left[\frac{\delta\ln g_\mathrm{H}(a,a')}{\delta E(x)} + \frac{\delta\ln\sigma_J^2(a')}{\delta E(x)} + 2\,\frac{\delta\ln D_+}{\delta E(x)}\right]\,. \eqno(3.17)$$
We show the functional derivatives of $\langle S_\mathrm{I}\rangle$ with respect to $G$ and $E$ in Figs. 2 and 3. The mean interaction term remains unchanged on the largest scales, since its functional derivative is non-zero only on moderate and small scales. Note that both derivatives are proportional to Heaviside functions, ensuring causality.
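The delta-function collapse in (3.13) can be checked numerically: perturb $E$ by a narrow, normalized bump of area $\varepsilon$ centred on $x$ and compare the linear response of $g_\mathrm{H}$ with the analytic kernel. The sketch below is our own test, with an assumed ΛCDM expansion function and illustrative numbers:

```python
import numpy as np
from scipy.integrate import quad

Om, OL = 0.3, 0.7
E = lambda a: np.sqrt(Om * a**-3 + OL)  # assumed LambdaCDM expansion function

def g_H(a, a_prime, Efun):
    """Hamilton propagator g_H(a, a') = int_{a'}^{a} dab / (ab^3 Efun(ab))."""
    # breakpoints near the bump location x = 0.6 help quad resolve it
    val, _ = quad(lambda ab: 1.0 / (ab**3 * Efun(ab)), a_prime, a,
                  points=[0.55, 0.6, 0.65], limit=200)
    return val

# Narrow Gaussian bump of area eps centred on x = 0.6, inside (a', a) = (0.3, 1)
a, a_prime, x, eps, w = 1.0, 0.3, 0.6, 1e-4, 0.01
bump = lambda ab: eps * np.exp(-0.5 * ((ab - x) / w)**2) / (w * np.sqrt(2 * np.pi))
E_pert = lambda ab: E(ab) + bump(ab)

# Finite-difference response of g_H versus the analytic kernel of eq. (3.13)
numeric = (g_H(a, a_prime, E_pert) - g_H(a, a_prime, E)) / eps
analytic = -1.0 / (x**3 * E(x)**2)  # -Theta(a-x) Theta(x-a') / (x^3 E^2(x))
```

The two values agree to well below a per cent once the bump is narrow compared to the scales over which the integrand varies, confirming both the sign and the normalization of the kernel.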
As before, the derivative with respect to $G$ is strictly positive, reflecting that an increase in the coupling strength enhances structure formation, and the derivative with respect to $E$ is strictly negative, reflecting that the background expansion slows down structure growth.

Functional derivatives of the non-linear power spectrum

Having calculated the functional derivatives of the linearly evolved power spectrum and of the mean interaction term, we can now return to the expression (3.10) for the complete functional derivative of the non-linear power spectrum. The first term in both expressions describes a pure amplitude change of the power spectrum independent of $k$, while the second term does depend on $k$ and thus changes the shape of the non-linear relative to the linear power spectrum. We illustrate this in Fig. 4. On large scales, the relative change of the power spectrum with respect to both $G$ and $E$ is constant, while it depends on $k$ on smaller scales. The $k$-independent amplitude change results from the functional derivatives of the linear power spectrum and appears because an alternative gravity theory may imply changes to the linear growth factor. Since the linear power spectrum grows as $D_+^2(a)$, the modified growth factor would then also change the overall amplitude of the power spectrum. The $k$-dependence on small scales results purely from the derivative of the mean interaction term. This part changes the shape of the non-linear power spectrum in response to modified gravity.

Non-linear power spectra in alternative gravity theories

With all necessary functional derivatives at hand, we can now write down the full expression for an alternative power spectrum as given by our functional first-order Taylor expansion,
$$P^{(\mathrm{nl})}_{\delta,\mathrm{AG}}(k,a) = P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a)\left[1 + 2\sum_{X=E,G}\int_{a_\mathrm{ini}}^{a}\mathrm{d}x\,\frac{\delta\ln D_+(a)}{\delta X(x)}\,\Delta X(x) + \sum_{X=E,G}\int_{a_\mathrm{ini}}^{a}\mathrm{d}x\,\frac{\delta\langle S_\mathrm{I}\rangle(k,a)}{\delta X(x)}\,\Delta X(x)\right]\,. \eqno(3.18)$$
As discussed above, the derivatives predict two different types of change: (1) a $k$-independent, pure amplitude change (first integral in square brackets) and (2) a $k$-dependent and thus shape-changing part (second integral in square brackets). Thus far, by construction of the Taylor expansion, the power spectra for an alternative theory of gravity and for general relativity are equal at the initial scale factor, since all integrals in (3.18) vanish there. For comparison with observations, we should rather set the amplitudes of the two spectra equal at the final scale factor $a_0$. Normalizing the spectra in this way, we only need to keep the shape-changing part. For the two spectra to reach the same amplitude today, their initial amplitudes must have differed, in order to account for their different time evolution. This difference in initial amplitude will also affect the interaction term (2.9), since it contains the moment $\sigma_J^2$, which also depends on the initial power spectrum. Assuming the amplitude change to be small, we can include it in our Taylor-expansion approach by adding another term,
$$P^{(\mathrm{nl})}_{\delta,\mathrm{AG}} \approx P^{(\mathrm{nl})}_{\delta,\mathrm{GR}} + \sum_{X=E,G}\int_{a_\mathrm{ini}}^{a}\mathrm{d}x\,\frac{\delta P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}}{\delta X(x)}\,\Delta X(x) + \frac{\partial P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}}{\partial A}\,\Delta A\,, \eqno(3.19)$$
where $\Delta A = A_\mathrm{AG} - A_\mathrm{GR}$ is the change in the amplitude of the initial power spectrum. We now evaluate this additional term. The amplitude changes with the square of the growth factor. To correct it, we thus need to determine by how much the growth factor differs in the alternative theory. We do this by the first-order Taylor approximation
$$D_{+,\mathrm{AG}}^2(a) = D_+^2(a)\left[1 + \frac{2}{D_+(a)}\sum_{X=E,G}\int_{a_\mathrm{min}}^{a}\mathrm{d}x\,\frac{\delta D_+(a)}{\delta X(x)}\,\Delta X(x)\right]\,. \eqno(3.20)$$
For the two spectra to reach the same amplitude today, the initial power spectrum for the AG theory should thus be lower by the factor
$$A_\mathrm{AG} = \left[1 + \frac{2}{D_+(a)}\sum_{X=E,G}\int_{a_\mathrm{min}}^{a}\mathrm{d}x\,\frac{\delta D_+(a)}{\delta X(x)}\,\Delta X(x)\right]^{-1} \approx 1 - \frac{2}{D_+(a)}\sum_{X=E,G}\int_{a_\mathrm{min}}^{a}\mathrm{d}x\,\frac{\delta D_+(a)}{\delta X(x)}\,\Delta X(x)\,.$$
(3.21)

Setting $A_\mathrm{GR} = 1$ for the reference model, we thus find
$$\Delta A \approx -\frac{2}{D_+(a)}\sum_{X=E,G}\int_{a_\mathrm{min}}^{a}\mathrm{d}x\,\frac{\delta D_+(a)}{\delta X(x)}\,\Delta X(x)\,. \eqno(3.22)$$
To calculate the derivative of the power spectrum with respect to $A$, we introduce an explicit amplitude factor $A$ into
$$P^{(\mathrm{nl})}_\delta(k,t) \approx \mathrm{e}^{\langle S_\mathrm{I}\rangle(k,t)}\,D_+^2\,A\,P^{(\mathrm{i})} \eqno(3.23)$$
and into the function
$$\sigma_J^2(k,t) = \frac{1}{(2\pi)^2}\int_0^\infty\mathrm{d}y\,y^2\,(1+AQ_\mathrm{D})^{-1}\,A\,P^{(\mathrm{i})}(y)\,J(y/k,y_0/k) \eqno(3.24)$$
appearing in the mean-field interaction term $\langle S_\mathrm{I}\rangle$. The derivative of the power spectrum is
$$\frac{\partial P^{(\mathrm{nl})}_\delta}{\partial A} = \frac{P^{(\mathrm{nl})}_\delta}{A} + P^{(\mathrm{nl})}_\delta\,\frac{\partial\langle S_\mathrm{I}\rangle(k,t)}{\partial A}\,, \eqno(3.25)$$
where
$$\frac{\partial\langle S_\mathrm{I}\rangle(k,t)}{\partial A} = 3\int_0^t\mathrm{d}t'\,g_\mathrm{H}(t,t')\,\frac{a}{m}\,D_+^2\,G\,\frac{\partial\sigma_J^2(k,t')}{\partial A}\,, \eqno(3.26)$$
and
$$\frac{\partial\sigma_J^2(k,t)}{\partial A} = \frac{1}{(2\pi)^2}\int_0^\infty\mathrm{d}y\,y^2\,(1+AQ_\mathrm{D})^{-1}\,A\,P^{(\mathrm{i})}(y)\left[\frac{1}{A} - Q_\mathrm{D}(1+AQ_\mathrm{D})^{-1}\right]J(y/k,y_0/k)\,. \eqno(3.27)$$
Inserting the result for $\Delta A$ into (3.19), we can summarize our result as follows:
$$P^{(\mathrm{nl})}_{\delta,\mathrm{AG}}(k,a) \approx P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a) + \sum_{X=E,G}\int_{a_\mathrm{ini}}^{a}\mathrm{d}x\left[\frac{\delta P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a)}{\delta X(x)} - 2\,\frac{\partial P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a)}{\partial A}\,\frac{\delta\ln D_+(a)}{\delta X(x)}\right]\Delta X(x) \approx P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a) + \sum_{X=E,G}\int_{a_\mathrm{ini}}^{a}\mathrm{d}x\left[\frac{\delta P^{(\mathrm{nl})}_{\delta,\mathrm{GR}}(k,a)}{\delta X(x)} + A_X(k,a,x)\right]\Delta X(x)\,. \eqno(3.28)$$
We can now study a wide range of alternative power spectra, simply by specifying the change in the gravitational coupling $\Delta G$ and the change in the expansion function $\Delta E$ suggested by alternative gravity theories.

Code and Data Release

One major advantage of our approach is that the coefficients of the functional Taylor expansion are to be evaluated in the standard ΛCDM cosmological model and can thus be calculated once and for all. Together with this paper, we provide tables of the ΛCDM power spectrum and of the coefficients under the integral in (3.28), evaluated at scale factor $a_0 = 1000$, i.e. today. We also provide a simple Python script that takes an alternative expansion function and effective gravitational constant as input, performs the remaining integration in (3.28), and returns the non-linear power spectrum.
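We have not reproduced the released script here; the sketch below shows what the remaining integration in (3.28) reduces to, assuming the tables provide the bracketed coefficients $K_X(k,x)$ on a common grid of perturbance scale factors (function and array names are ours):

```python
import numpy as np

def alternative_power_spectrum(P_GR, x, K_E, K_G, dE, dG):
    """First-order Taylor expansion, eq. (3.28):
    P_AG(k) = P_GR(k) + sum_X int dx K_X(k, x) * DeltaX(x).

    P_GR     : (nk,)    LambdaCDM non-linear spectrum at the final scale factor
    x        : (nx,)    perturbance scale factors
    K_E, K_G : (nk, nx) tabulated coefficients (functional derivative plus
                        the amplitude-correction term A_X) -- assumed layout
    dE, dG   : (nx,)    deviations Delta E(x), Delta G(x) of the AG theory
    """
    corr = np.trapz(K_E * dE[None, :] + K_G * dG[None, :], x, axis=1)
    return P_GR + corr
```

With $\Delta E = \Delta G = 0$ the GR spectrum is recovered exactly, which is a useful sanity check on any tabulated input.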
This should allow the non-linear power spectrum for a desired alternative theory of gravity to be calculated quickly. The code can be found here.

4 Application to Generalized Proca theories

As a demonstration, we will now apply our approach to one specific alternative gravity theory, viz. the generalized Proca theories [13, 21]. We choose this model since KFT has already been applied to it in an earlier paper [22], albeit with another normalization of the interaction term and the final power spectrum. General relativity is the field theory describing the dynamics of the metric tensor, whose equations of motion are the well-known Einstein equations. In four dimensions, GR is the unique theory which can be constructed from an action containing only the metric tensor and its first and second derivatives (Lovelock's theorem). One possibility for extending GR is to include additional fields in the Einstein-Hilbert action. An important class of these theories is represented by vector-tensor theories, in which an additional vector field is included. Within this family, requiring second-order equations of motion, the most general theories are the so-called generalized Proca theories. We now proceed to apply our functional Taylor expansion to this theory, with the model described in [23] for the gravitational coupling. In this case, the dark-energy density parameter reads
$$\Omega_\mathrm{DE} = 1 - \Omega_\mathrm{m} = \frac{6p_2^2(2p+2p_2-1)\beta_4 - p_2(p+p_2)(1+4p_2\beta_5)}{p_2(p+p_2)}\,y\,, \eqno(4.1)$$
with $p$, $p_2$, $\beta_4$, $\beta_5$ and $y$ parameters of the theory. The equation-of-state parameter for dark energy itself depends on $\Omega_\mathrm{DE}$ as
$$w_\mathrm{DE} = -\frac{1+s}{1+s\,\Omega_\mathrm{DE}}\,, \eqno(4.2)$$
with $s = p_2/p$.
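For orientation, eq. (4.2) is easy to tabulate. A small helper of our own (not part of the paper's code release):

```python
import numpy as np

def w_DE(Omega_DE, p, p2):
    """Dark-energy equation of state in generalized Proca theories, eq. (4.2)."""
    s = p2 / p
    return -(1.0 + s) / (1.0 + s * np.asarray(Omega_DE))

# With the paper's choice p = 2.5, p2 = 0.5 (so s = 0.2), w_DE runs from
# -(1 + s) = -1.2 at early times (Omega_DE -> 0) to -1 when Omega_DE -> 1.
```

The limiting values show why the expansion history of these models deviates from ΛCDM mainly at intermediate redshifts, where $\Omega_\mathrm{DE}$ transitions between its two asymptotes.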
The effective gravitational constant for the examined model reads
$$\frac{G_\mathrm{eff}}{G} = \frac{(p+p_2)\,F_G}{q_V u^2 - 2p_2 y\left[1 - 6\beta_4(2p+2p_2-3) + 2\beta_5(3p+2p_2-3)\right]}\,, \eqno(4.3)$$
where $q_V$ controls the amplitude of the correction to GR due to the introduction of the vector field, and $F_G$ is a function of the other parameters of the model; see [23] for the full expression. Equipped with the cosmic expansion function and the effective gravitational coupling for this model, we are now able to compute the variations $\Delta E$ and $\Delta G$ for it and to calculate the alternative power spectrum using our Taylor expansion. As in [22], we implement two different models, with the parameter choices $p = 2.5$, $p_2 = 0.5$, $\lambda = 0.86$, $\beta_4 = 10^{-4}$, $\beta_5 = 0.052$ and $p = 2.5$, $p_2 = 0.5$, $\lambda = 0.86$, $\beta_4 = 0$, $\beta_5 = 0$, respectively. For both models, we plot results for a range of values of $q_V$. The result obtained with our Taylor expansion (3.19) can be seen in Fig. 5. In order to check the validity and accuracy of our Taylor expansion, we also show results obtained by directly specifying the Proca expansion function, growth factor and gravitational constant in (2.8). The relative difference between this exact approach and our Taylor expansion turns out to be less than 0.15% in all tested cases. The results obtained this way can be seen in Fig. 6. The differences between Fig. 5 and Figs. 2 and 3 of [22] originate from the different normalization scheme for the initial power spectrum applied here. The shape of the curves shown in Fig. 5 is generic: the wave number above which the non-linear spectrum obtained in the modified gravity theory deviates from the generally-relativistic result is set by the condition $\sigma_J \approx 1$, with $\sigma_J$ defined in (2.4). The amplitude of the deviations results from two competing effects illustrated in Fig. 4: the functional derivative of the non-linear power spectrum with respect to the gravitational coupling is positive, while its derivative with respect to the expansion function is negative.
The quantitative consequences of these two terms for any modified gravity theory then depend on the specific changes $\Delta E$ and $\Delta G$, integrated together with these derivatives as shown in (3.28). Although detailed comparisons to simulations are difficult, our analytic results tend to agree well with results obtained numerically [24-26, for examples]. Comparing Figs. 5 and 6 shows that the functional Taylor expansion applied here returns highly accurate results.

Conclusion

In this paper, we have pursued the following thought: the power spectrum of non-linear cosmic density fluctuations can be analytically and accurately calculated with kinetic field theory (KFT) in a mean-field approximation. At redshift $z = 0$, the KFT result agrees at the per-cent level with typical results obtained from numerical simulations for wave numbers $k \lesssim 10\,h\,\mathrm{Mpc}^{-1}$. The cosmological background model and the theory of gravity enter into this KFT result in two ways: either through the expansion function (or dimensionless Hubble function) $E$, or through a possibly time-dependent gravitational coupling strength $G$, if the small-scale gravitational potential can still be described by the Poisson equation. For the wide class of modified gravity theories satisfying these criteria, deviations from general relativity must be small, because general relativity agrees very well with empirical data on small and large scales. Then, the effect of modified gravity theories on the non-linear density-fluctuation power spectrum can be approximated by a first-order Taylor expansion of the mean-field KFT power spectrum $P^{(\mathrm{nl})}_\delta$ in terms of the functions $E$ and $G$.
The functional derivatives of $P^{(\mathrm{nl})}_\delta$ with respect to $E$ and $G$ are to be evaluated in the ΛCDM cosmological model based on general relativity. They are thus generic for all modified gravity theories falling into the class defined above. We have worked out the functional derivatives of $P^{(\mathrm{nl})}_\delta$ with respect to $E$ and $G$ in Sect. 3 of this paper and evaluated them in ΛCDM. In Sect. 4, we have used the results to calculate the effect of one particular modified gravity theory on non-linear cosmic structure formation, namely a generalized Proca theory [13], as one example. The relative difference of the non-linear density-fluctuation power spectrum from the expectation in the ΛCDM model lifts above zero near a wave number $k \approx 1\,h\,\mathrm{Mpc}^{-1}$ defined by the filtered variance $\sigma_J$ of the linear density-fluctuation power spectrum entering into the mean-field interaction term in KFT. It reaches an amplitude of several per cent near $k \approx 10\,h\,\mathrm{Mpc}^{-1}$, in good qualitative agreement with numerical results in the literature. We argue that this scheme of a functional Taylor expansion of the non-linear density-fluctuation power spectrum can now be used to scan wide classes of modified gravity theories. To facilitate this, we provide the functional derivatives of $P^{(\mathrm{nl})}_\delta$ with respect to $E$ and $G$, evaluated in ΛCDM, in tabulated form, together with simple Python code integrating them over any deviations $\Delta E$ and $\Delta G$ that may be predicted by a modified gravity theory adapted to current observational data.

Figure 1. Logarithmic functional derivatives of the linear growth factor $D_+$ with respect to the gravitational constant $G$ (left) and the expansion function $E$ (right), plotted as functions of the perturbance scale factor $x$ and the scale factor $a$.

Figure 2. Functional derivative of the mean interaction term $\langle S_\mathrm{I}\rangle$ with respect to the logarithm of $G$, evaluated today at scale factor $a_0$, plotted as a function of the wave number $k$ for different values of the perturbance scale factor $x$ (left), and as a function of both $k$ and $x$ in a two-dimensional figure (right).

Figure 3. As Fig. 2, for the expansion function $E$ instead of the gravitational coupling $G$.

Figure 4. Logarithmic functional derivatives of the non-linear power spectrum $P^{(\mathrm{nl})}_\delta$ with respect to the gravitational constant $G$ (left) and the expansion function $E$ (right), evaluated today at scale factor $a_0$ and plotted for different perturbance scale factors $x$.

Figure 5. Relative difference between the non-linear power spectrum for the generalized Proca theories and the power spectrum for the standard cosmological model, for two different Proca models: Model 1 (left) with the parameter choices $p = 2.5$, $p_2 = 0.5$, $\lambda = 0.86$, $\beta_4 = 10^{-4}$, $\beta_5 = 0.052$, and Model 2 (right) with $p = 2.5$, $p_2 = 0.5$, $\lambda = 0.86$, $\beta_4 = 0$, $\beta_5 = 0$, both plotted for a range of values of $q_V$. Results obtained with the Taylor expansion.

Figure 6. As Fig. 5, but with results obtained in a direct approach and not via the Taylor expansion.

References

[1] V. Springel, R. Pakmor, O. Zier and M. Reinecke, Simulating cosmic structure formation with the GADGET-4 code, MNRAS 506 (2021) 2871 [arXiv:2010.03567].
[2] V. Springel, R. Pakmor, A. Pillepich, R. Weinberger, D. Nelson, L. Hernquist et al., First results from the IllustrisTNG simulations: matter and galaxy clustering, MNRAS 475 (2018) 676 [arXiv:1707.03397].
Large-scale structure of the Universe and cosmological perturbation theory. F Bernardeau, S Colombi, E Gaztañaga, R Scoccimarro, 10.1016/S0370-1573(02)00135-7astro-ph/0112551Phys. Rep. 3671F. Bernardeau, S. Colombi, E. Gaztañaga and R. Scoccimarro, Large-scale structure of the Universe and cosmological perturbation theory, Phys. Rep. 367 (2002) 1 [astro-ph/0112551]. Renormalized cosmological perturbation theory. M Crocce, R Scoccimarro, 10.1103/PhysRevD.73.063519astro-ph/0509418Phys. Rev. D. 7363519M. Crocce and R. Scoccimarro, Renormalized cosmological perturbation theory, Phys. Rev. D 73 (2006) 063519 [astro-ph/0509418]. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale. S Anselmi, M Pietroni, 10.1088/1475-7516/2012/12/013JCAP. 2012131205.2235S. Anselmi and M. Pietroni, Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale, JCAP 2012 (2012) 013 [1205.2235]. Resumming cosmological perturbations via the Lagrangian picture: One-loop results in real space and in redshift space. T Matsubara, 10.1103/PhysRevD.77.063530Phys. Rev. D. 77635300711.2521T. Matsubara, Resumming cosmological perturbations via the Lagrangian picture: One-loop results in real space and in redshift space, Phys. Rev. D 77 (2008) 063530 [0711.2521]. Matter power spectrum from a Lagrangian-space regularization of perturbation theory. P Valageas, T Nishimichi, A Taruya, 10.1103/PhysRevD.87.0835221302.4533Phys. Rev. D. 8783522P. Valageas, T. Nishimichi and A. Taruya, Matter power spectrum from a Lagrangian-space regularization of perturbation theory, Phys. Rev. D 87 (2013) 083522 [1302.4533]. The Lagrangian-space Effective Field Theory of large scale structures. R A Porto, L Senatore, M Zaldarriaga, 10.1088/1475-7516/2014/05/022JCAP. 2014221311.2168R.A. Porto, L. Senatore and M. Zaldarriaga, The Lagrangian-space Effective Field Theory of large scale structures, JCAP 2014 (2014) 022 [1311.2168]. 
Effective field theory of dark matter and structure formation: Semianalytical results. M P Hertzberg, 10.1103/PhysRevD.89.0435211208.0839Phys. Rev. D. 8943521M.P. Hertzberg, Effective field theory of dark matter and structure formation: Semianalytical results, Phys. Rev. D 89 (2014) 043521 [1208.0839]. Time-sliced perturbation theory for large scale structure I: general formalism. D Blas, M Garny, M M Ivanov, S Sibiryakov, 10.1088/1475-7516/2016/07/0521512.05807JCAP. 201652D. Blas, M. Garny, M.M. Ivanov and S. Sibiryakov, Time-sliced perturbation theory for large scale structure I: general formalism, JCAP 2016 (2016) 052 [1512.05807]. A microscopic, non-equilibrium, statistical field theory for cosmic structure formation. M Bartelmann, F Fabis, D Berg, E Kozlikin, R Lilow, C Viermann, 10.1088/1367-2630/18/4/043020New Journal of Physics. 18430201411.0806M. Bartelmann, F. Fabis, D. Berg, E. Kozlikin, R. Lilow and C. Viermann, A microscopic, non-equilibrium, statistical field theory for cosmic structure formation, New Journal of Physics 18 (2016) 043020 [1411.0806]. Kinetic field theory: Non-linear cosmic power spectra in the mean-field approximation. M Bartelmann, J Dombrowski, S Konrad, E Kozlikin, R Lilow, C Littek, 10.21468/SciPostPhys.10.6.153SciPost Physics. 101532011.04979M. Bartelmann, J. Dombrowski, S. Konrad, E. Kozlikin, R. Lilow, C. Littek et al., Kinetic field theory: Non-linear cosmic power spectra in the mean-field approximation, SciPost Physics 10 (2021) 153 [2011.04979]. L Heisenberg, 10.1088/1475-7516/2014/05/0151402.7026Generalization of the Proca Action. 201415L. Heisenberg, Generalization of the Proca Action, JCAP 2014 (2014) 015 [1402.7026]. Field Theoretic Formulation of Kinetic Theory: Basic Development. S P Das, G F Mazenko, 10.1007/s10955-012-0610-y1111.0571Journal of Statistical Physics. 149643S.P. Das and G.F. 
Mazenko, Field Theoretic Formulation of Kinetic Theory: Basic Development, Journal of Statistical Physics 149 (2012) 643 [1111.0571]. M Bartelmann, E Kozlikin, R Lilow, C Littek, F Fabis, I Kostyuk, 10.1002/andp.2018004461905.01179Cosmic Structure Formation with Kinetic Field Theory. 5311800446M. Bartelmann, E. Kozlikin, R. Lilow, C. Littek, F. Fabis, I. Kostyuk et al., Cosmic Structure Formation with Kinetic Field Theory, Annalen der Physik 531 (2019) 1800446 [1905.01179]. S Konrad, M Bartelmann, arXiv:2202.11077[2202.11077Kinetic Field Theory for Cosmic Structure Formation. arXiv e-printsS. Konrad and M. Bartelmann, Kinetic Field Theory for Cosmic Structure Formation, arXiv e-prints (2022) arXiv:2202.11077 [2202.11077]. Zel'dovich, Gravitational instability: An approximate theory for large density perturbations. Y B , A&A. 584Y.B. Zel'dovich, Gravitational instability: An approximate theory for large density perturbations., A&A 5 (1970) 84. Trajectories of point particles in cosmology and the Zel'dovich approximation. M Bartelmann, 10.1103/PhysRevD.91.0835241411.0805Phys. Rev. D. 9183524M. Bartelmann, Trajectories of point particles in cosmology and the Zel'dovich approximation, Phys. Rev. D 91 (2015) 083524 [1411.0805]. On the sensitivity of weak gravitational lensing to the cosmic expansion function. C F Schmidt, M Bartelmann, arXiv:2011.03202[2011.03202arXiv e-printsC.F. Schmidt and M. Bartelmann, On the sensitivity of weak gravitational lensing to the cosmic expansion function, arXiv e-prints (2020) arXiv:2011.03202 [2011.03202]. Model-independent determination of the cosmic growth factor. S Haude, S Salehi, S Vidal, M Maturi, M Bartelmann, 10.21468/SciPostAstro.2.1.001SciPost Astronomy. 21S. Haude, S. Salehi, S. Vidal, M. Maturi and M. Bartelmann, Model-independent determination of the cosmic growth factor, SciPost Astronomy 2 (2022) 001. 
A De Felice, L Heisenberg, R Kase, S Mukohyama, S Tsujikawa, Y Zhang, 10.1088/1475-7516/2016/06/048Cosmology in generalized Proca theories. 2016481603.05806A. De Felice, L. Heisenberg, R. Kase, S. Mukohyama, S. Tsujikawa and Y.-l. Zhang, Cosmology in generalized Proca theories, JCAP 2016 (2016) 048 [1603.05806]. Kinetic field theory applied to vector-tensor gravity. L Heisenberg, M Bartelmann, 10.1016/j.physletb.2019.07.0041901.01041Physics Letters B. 79659L. Heisenberg and M. Bartelmann, Kinetic field theory applied to vector-tensor gravity, Physics Letters B 796 (2019) 59 [1901.01041]. Effective gravitational couplings for cosmological perturbations in generalized Proca theories. A De Felice, L Heisenberg, R Kase, S Mukohyama, S Tsujikawa, Y Zhang, 10.1103/PhysRevD.94.044024Phys. Rev. D. 94440241605.05066A. De Felice, L. Heisenberg, R. Kase, S. Mukohyama, S. Tsujikawa and Y.-l. Zhang, Effective gravitational couplings for cosmological perturbations in generalized Proca theories, Phys. Rev. D 94 (2016) 044024 [1605.05066]. Systematic simulations of modified gravity: symmetron and dilaton models. P Brax, A.-C Davis, B Li, H A Winther, G.-B Zhao, 10.1088/1475-7516/2012/10/002JCAP. 201221206.3568P. Brax, A.-C. Davis, B. Li, H.A. Winther and G.-B. Zhao, Systematic simulations of modified gravity: symmetron and dilaton models, JCAP 2012 (2012) 002 [1206.3568]. Systematic simulations of modified gravity: chameleon models. P Brax, A.-C Davis, B Li, H A Winther, G.-B Zhao, 10.1088/1475-7516/2013/04/0291303.0007JCAP. 201329P. Brax, A.-C. Davis, B. Li, H.A. Winther and G.-B. Zhao, Systematic simulations of modified gravity: chameleon models, JCAP 2013 (2013) 029 [1303.0007]. N-body simulations for parametrized modified gravity. F Hassani, L Lombriser, 10.1093/mnras/staa2083MNRAS. 4971885F. Hassani and L. Lombriser, N-body simulations for parametrized modified gravity, MNRAS 497 (2020) 1885 [2003.05927].
---

Title: Elliptic Orbits with a Non-Newtonian Eccentricity
Authors: F. T. Hioe (Department of Physics, St. John Fisher College, Rochester, NY 14618), David Kuebel (Department of Physics & Astronomy, University of Rochester, Rochester, NY 14627)
Abstract: It is shown that the lowest order general relativistic correction produces elliptic orbits with a non-Newtonian eccentricity.
PDF: https://export.arxiv.org/pdf/1208.0260v1.pdf
arXiv: 1208.0260
Corpus ID: 119207399
Elliptic Orbits with a Non-Newtonian Eccentricity

1 Aug 2012

F. T. Hioe (Department of Physics, St. John Fisher College, Rochester, NY 14618) and David Kuebel (Department of Physics & Astronomy, University of Rochester, Rochester, NY 14627)

It is shown that the lowest order general relativistic correction produces elliptic orbits with a non-Newtonian eccentricity.

In a weak gravitational field, the general relativistic effect of a massive object such as a star that produces a precessional motion [1] of an otherwise Newtonian elliptical orbit of a particle (such as a planet) is well known and well noted for its historical significance. The precessional angle Δφ is approximately given by 6π(GM/hc)², where M is the mass of the star, h is the angular momentum per unit rest mass of the particle, G is the universal gravitation constant, and c is the speed of light. Like the precessional angle of a particle in a weak gravitational field, for small s ≡ GM/(hc), most lowest order general relativistic corrections are known to be of the order of s² and higher. A general relativistic correction of the order s, on the other hand, is uncommon and is the principal result that we shall present in this Note. Specifically, we shall present elliptic orbits with eccentricity given by √2 s; that is, we shall present a general relativistic effect of order s that makes circular Newtonian orbits elliptical, and the resulting elliptical orbits are non-precessing if terms of order s² and higher can be neglected. In contrast, two new examples for hyperbolic orbits in which the lowest order general relativistic corrections are of the order s² are also presented. We start with one of the analytic solutions for the orbits in the Schwarzschild geometry that we presented in our papers [2-6]. We first introduce the parameters used in the analysis.
The massive spherical object (which we call a star) of mass M sits at the origin of the coordinate system. Let the coordinates r and φ describe the position of the particle relative to the star. If [x^µ] = (t, r, θ, φ), then the worldline x^µ(τ), where τ is the proper time along the path, of a particle moving in the equatorial plane θ = π/2 satisfies the 'combined' energy equation [1]

\dot r^2 + \frac{h^2}{r^2}\left(1 - \frac{\alpha}{r}\right) - \frac{c^2\alpha}{r} = c^2(\kappa^2 - 1), \tag{1}

where the overdot represents d/dτ, α ≡ 2GM/c² is the Schwarzschild radius, h = r²\dot\varphi is identified as the angular momentum per unit rest mass of the particle, and the constant κ = E/(m₀c²) is identified as the total energy per unit rest energy of the particle, E being the total energy of the particle in its orbit and m₀ the rest mass of the particle at r = ∞. By using the dimensionless distance q ≡ r/α of the particle from the star, measured in units of the Schwarzschild radius, and another dimensionless quantity U defined by

U \equiv \frac{1}{4}\left(\frac{\alpha}{r} - \frac{1}{3}\right) = \frac{1}{4}\left(\frac{1}{q} - \frac{1}{3}\right), \tag{2}

eq. (1) reduces to the following simple form

\left(\frac{dU}{d\varphi}\right)^2 = 4U^3 - g_2 U - g_3, \tag{3}

where

g_2 = \frac{1}{12} - s^2, \qquad g_3 = \frac{1}{216} + \frac{1}{6}s^2 - \frac{1}{4}\kappa^2 s^2 \equiv \frac{1}{216} - \frac{1}{12}s^2 + \frac{1}{4}(1 - e^2)s^4, \tag{4}

and where

s \equiv \frac{GM}{hc}, \tag{5}

and

e^2 \equiv 1 + \frac{h^2 c^2(\kappa^2 - 1)}{(GM)^2} \equiv 1 + \frac{\kappa^2 - 1}{s^2}. \tag{6}

The use of the dimensionless distance q led naturally to two dimensionless parameters κ² (or e²) and s² for characterizing the orbit. As was pointed out in our previous work [2-6], the use of the parameter e² makes the correspondence to the Newtonian case much easier to see. To demonstrate this, we use eqs.
(6) and (1) to write

e^2 = \left(\frac{r^2\dot\varphi}{GM}\right)^2\left[\dot r^2 + \left(r\dot\varphi - \frac{GM}{r^2\dot\varphi}\right)^2 - \frac{2GM}{c^2}\,r\dot\varphi^2\right], \tag{7}

and compare this expression with the Newtonian eccentricity e_N, which, using the overdot to represent d/dt, t being the ordinary time, can be expressed as

e_N^2 = \left(\frac{r^2\dot\varphi}{GM}\right)^2\left[\dot r^2 + \left(r\dot\varphi - \frac{GM}{r^2\dot\varphi}\right)^2\right]. \tag{8}

Comparing the two expressions above, it is seen that e → e_N in the Newtonian limit implies the approximation τ → t and c → ∞. However, since setting c = ∞ is not consistent with reality, we proceed by stating that e → e_N if we take the approximation τ → t and

\dot r^2 + \left(r\dot\varphi - \frac{GM}{r^2\dot\varphi}\right)^2 \gg \frac{2GM}{c^2}\,r\dot\varphi^2. \tag{9}

We use the coordinates (e², s²), where −∞ ≤ e² ≤ +∞ and 0 ≤ s ≤ ∞, of a parameter space for characterizing the two regions, which we call Regions I and II, for different types of orbits [5,6]. Region I is mathematically characterized by Δ ≤ 0 and Region II is characterized by Δ > 0, where Δ is the discriminant of the cubic equation

4U^3 - g_2 U - g_3 = 0 \tag{10}

that is defined by

\Delta = 27 g_3^2 - g_2^3, \tag{11}

and where g₂ and g₃ are defined by eq. (4). For the case Δ ≤ 0, the three roots of the cubic equation (10) are all real. We call the three roots e₁, e₂, e₃ and arrange them so that e₁ > e₂ > e₃. In this paper, we are interested only in the orbit solution for which Δ ≤ 0, e₁ > e₂ ≥ U > e₃, applicable in Region I.
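As a quick numerical check of the Region I condition Δ ≤ 0, one can evaluate the discriminant (11) directly from (4). This is an illustrative sketch, with parameter values taken from the examples quoted later in the text.

```python
import numpy as np

def discriminant(e_sq, s):
    """Discriminant Delta = 27 g3^2 - g2^3 of 4U^3 - g2 U - g3 = 0 (eqs. 4, 11)."""
    g2 = 1.0 / 12.0 - s**2
    g3 = 1.0 / 216.0 - s**2 / 12.0 + 0.25 * (1.0 - e_sq) * s**4
    return 27.0 * g3**2 - g2**3

# Elliptic-type example (e = 0.8) and the e = 0 example, both in Region I.
delta_elliptic = discriminant(0.8**2, 0.0176539)
delta_e0 = discriminant(0.0, 0.0259843)
```

Both values come out negative, confirming that these orbits have three real roots and lie in Region I.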
The equation for the orbit is [2,3]

\frac{1}{q} = \frac{1}{3} + 4e_3 + 4(e_2 - e_3)\,\mathrm{sn}^2(\gamma\varphi, k). \tag{12}

The constant γ appearing in the argument, and the modulus k, of the Jacobian elliptic functions [7] are given in terms of the three roots of the cubic equation (10) by

\gamma = (e_1 - e_3)^{1/2}, \tag{13}

k^2 = \frac{e_2 - e_3}{e_1 - e_3}, \tag{14}

where e₁, e₂, e₃ are given by

e_1 = 2\left(\frac{g_2}{12}\right)^{1/2}\cos\frac{\theta}{3}, \quad e_2 = 2\left(\frac{g_2}{12}\right)^{1/2}\cos\left(\frac{\theta}{3} + \frac{4\pi}{3}\right), \quad e_3 = 2\left(\frac{g_2}{12}\right)^{1/2}\cos\left(\frac{\theta}{3} + \frac{2\pi}{3}\right), \tag{15}

and where

\cos\theta = g_3\left(\frac{27}{g_2^3}\right)^{1/2}. \tag{16}

A typical orbit given by eq. (12) (not on any one of the three boundaries) in Region I is a precessional elliptic-type orbit for e² < 1, a parabolic-type orbit for e² = 1, and a hyperbolic-type orbit for e² > 1 [5,6]. For the elliptic-type orbits (e² < 1), the maximum distance r_max (the aphelion) of the particle from the star and the minimum distance r_min (the perihelion) of the particle from the star, or their corresponding dimensionless forms q_max (= r_max/α) and q_min (= r_min/α), are obtained from eq. (12) when γφ = 0 and when γφ = K(k) respectively, where K(k) is the complete elliptic integral of the first kind [7], and they are given by

\frac{1}{q_{\max}} = \frac{1}{3} + 4e_3, \tag{17}

and

\frac{1}{q_{\min}} = \frac{1}{3} + 4e_2. \tag{18}

The geometric eccentricity ε of the orbit is defined in the range 0 ≤ ε ≤ 1 by

\varepsilon \equiv \frac{r_{\max} - r_{\min}}{r_{\max} + r_{\min}} = \frac{q_{\max} - q_{\min}}{q_{\max} + q_{\min}} = \frac{e_2 - e_3}{1/6 + e_2 + e_3}, \tag{19}

using q_max and q_min given by eqs. (17) and (18). It has been shown in ref. 3 that, in the range 0 ≤ ε < 1, ε → e from below as s → 0, and that ε = e for all values of s when ε = 1. The precessional angle Δφ is given by

\Delta\varphi = \frac{2K(k)}{\gamma} - 2\pi. \tag{20}

The Newtonian correspondence is approached by making s very small. Substituting eq. (4) into eq. (16) and expanding in a power series in s, we find

\cos\theta = 1 - 2\cdot 3^3\, e^2 s^4 - 2^2\cdot 3^3(1 + 9e^2)s^6 - 2\cdot 3^5\cdot 5(1 + 6e^2)s^8 + \cdots \tag{21}

To obtain a power series in s for θ, the point e² = 0 must be treated separately.
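The quantities above can be evaluated numerically. A small sketch (assuming numpy; the example values e = 0.8, s = 0.0176539 are the ones quoted later in the text) that builds the roots e₁ > e₂ > e₃ and the geometric eccentricity of eq. (19):

```python
import numpy as np

def cubic_roots(e_sq, s):
    """Roots e1 > e2 > e3 of 4U^3 - g2 U - g3 = 0, with g2, g3 from eq. (4)."""
    g2 = 1.0 / 12.0 - s**2
    g3 = 1.0 / 216.0 - s**2 / 12.0 + 0.25 * (1.0 - e_sq) * s**4
    return np.sort(np.roots([4.0, 0.0, -g2, -g3]).real)[::-1]

e, s = 0.8, 0.0176539                      # example quoted in the text
e1, e2, e3 = cubic_roots(e**2, s)
gamma = np.sqrt(e1 - e3)                   # eq. (13)
k2 = (e2 - e3) / (e1 - e3)                 # eq. (14), modulus squared
eps = (e2 - e3) / (1.0 / 6.0 + e2 + e3)    # geometric eccentricity, eq. (19)
```

For this example the text quotes an exact ε = 0.80015, which the root-based value reproduces closely.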
For e² > 0, we have

\theta = 2\cdot 3\sqrt{3}\, e s^2\left[1 + e^{-2}(1 + 9e^2)s^2 + \cdots\right]. \tag{22}

Expanding e₁, e₂, e₃, γ, k², sn(γφ, k) in eq. (12), ε in eq. (19), and Δφ in eq. (20) in power series in s, we find that the orbit equation (12) can be approximated for e² > 0 and for small s² by

\frac{1}{q} = 2s^2\{1 - \varepsilon\cos[(1 - \delta)\varphi]\}, \tag{23}

which, in terms of r, gives the approximate orbit equation

\frac{1}{r} = \frac{GM}{h^2}\{1 - \varepsilon\cos[(1 - \delta)\varphi]\}, \tag{24}

where ε, to the order of s², is given by

\varepsilon \simeq e + (e^{-1} - e^3)s^2, \tag{25}

and where δ, to the order of s², is given by

\delta \simeq 3s^2. \tag{26}

δ is related to the precessional angle Δφ given in eq. (20) by Δφ ≃ 2πδ ≃ 6πs² = 6π[GM/(hc)]², and it is independent of e (to the order s²) for 0 < e ≤ ∞. As an example, a general relativistic elliptic-type orbit with e = 0.8, s = 0.0176539 has an exact ε = 0.80015. The lowest order general relativistic corrections (25) and (26) yield ε ≃ 0.80023 and δ ≃ 0.000935. The corrections to ε ≃ e and the magnitude of δ are both of the order s². Thus, if we can ignore terms of order s² and higher, we recover the Newtonian orbit equation given by

\frac{1}{r} = \frac{GM}{h^2}(1 - e\cos\varphi). \tag{27}

The question that can be posed at this point is whether we can define the Newtonian limit by stating that it is the general relativistic result for small s if we ignore terms of order s² and higher. To proceed, we note that the case e = 0 is excluded from the expansion given by eq. (22); it is also clear that e² = 0 does not satisfy the condition given by eq. (9) and must be treated separately.
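The first-order formulas (25) and (26) can be checked directly against the quoted example (a minimal sketch; numpy is used only for π):

```python
import numpy as np

# Example from the text: e = 0.8, s = 0.0176539.
e, s = 0.8, 0.0176539
eps_approx = e + (1.0 / e - e**3) * s**2   # eq. (25): approximately 0.80023
delta = 3.0 * s**2                          # eq. (26): approximately 0.000935
precession = 2.0 * np.pi * delta            # Delta-phi ~ 2*pi*delta = 6*pi*s^2
```

Both values match the numbers quoted in the text to the stated precision.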
For e² = 0, the expansion for θ, instead of eq. (22), is now

\theta = \sqrt{2^3\cdot 3^3}\, s^3\left(1 + \frac{3^2\cdot 5}{2^2}s^2 + \cdots\right), \tag{28}

and the approximate orbit equation still has the form of eq. (23) or (24) with the same δ given by eq. (26) to the order of s², but with ε, instead of eq. (25), now given to the order of s³ by

\varepsilon = \sqrt{2}\,s\left(1 + \frac{9}{4}s^2\right). \tag{29}

Thus we have elliptic orbits that precess with the same angle δ given by eq. (26) but with an eccentricity equal to √2 s + 9√2 s³/4 to the order s³. If we ignore terms of order s² and higher, the orbit equation becomes

\frac{1}{r} = \frac{GM}{h^2}(1 - \sqrt{2}\,s\cos\varphi), \tag{30}

which is a (non-precessing) elliptical orbit with a non-Newtonian eccentricity that is dependent on the speed of light. To the best of our knowledge, this general relativistic elliptical orbit with an eccentricity of the order s is new and has never been noted by other authors. As an example, an elliptic orbit with eccentricity ε = 0.0368035 could come from a general relativistic orbit with e = 0 [remembering that e is defined by eq. (7) and not eq. (8)] and s = 0.0259843, for which the approximation formula (29) gives ε ≃ 0.0368032 [the first term alone gives √2 s = 0.0367474] and eq. (26) gives δ ≃ 0.0020256. The approximation ε ≃ e = 0 holds only if we ignore terms of order s. The orbit for e = 0 from general relativity becomes a Newtonian circular orbit only if we ignore terms of order s; i.e., ignoring the second order terms in s² and higher order terms is not sufficient to obtain the Newtonian limit for this case. It is clear that the entire region characterized by e² ≤ 0, or

\dot r^2 + \left(r\dot\varphi - \frac{GM}{r^2\dot\varphi}\right)^2 \le \frac{2GM}{c^2}\,r\dot\varphi^2, \tag{31}

is non-Newtonian in character.
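As a numerical cross-check of eq. (29), the "exact" geometric eccentricity of eq. (19) can be computed from the cubic roots and compared with √2 s (1 + 9s²/4). This is an illustrative sketch assuming numpy; the example value s = 0.0259843 is from the text.

```python
import numpy as np

def geometric_eccentricity(e_sq, s):
    """Exact geometric eccentricity (eq. 19) from the roots of the cubic."""
    g2 = 1.0 / 12.0 - s**2
    g3 = 1.0 / 216.0 - s**2 / 12.0 + 0.25 * (1.0 - e_sq) * s**4
    e1, e2, e3 = np.sort(np.roots([4.0, 0.0, -g2, -g3]).real)[::-1]
    return (e2 - e3) / (1.0 / 6.0 + e2 + e3)

s = 0.0259843                                     # example from the text, e = 0
eps_exact = geometric_eccentricity(0.0, s)
eps_approx = np.sqrt(2.0) * s * (1.0 + 2.25 * s**2)   # eq. (29)
```

The two values agree at the level quoted in the text (0.0368035 exact versus 0.0368032 approximate).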
This includes all circular orbits, which occur [6] on the curve s² = s′₁², for which k² = 0 and ε = 0, where s′₁² is given by

s_1'^2 = \frac{1 - 9e^2 - (1 + 3e^2)^{3/2}}{27(1 - e^2)^2}, \tag{32}

from the "vertex" V at (e², s²) = (−1/3, 1/12), where the innermost stable circular orbit (ISCO) occurs, to the origin O at (e², s²) = (0, 0), where the circular orbit has an infinite radius. This curve s′₁ defines a boundary of Region I for which the values of e² range between −1/3 and 0 and the values of s² range between 1/12 and 0. All circular orbits precess, even though the precession angle is not observable [8], and for small s the precession angle is given by Δφ ≃ 6πs² ≃ 6πGM/(c²r_c), where r_c ≃ h²/(GM) is the radius of the circular orbit; Δφ is nonzero unless the radius of the circle is infinite, which occurs on s = 0 for zero gravitational field. The values of e² along the s′₁ curve where the circular orbits occur near s = 0 are given by e² ≃ −2s′₁². For small s′₁ and for s just above s′₁ inside Region I, given by s² = s′₁² + (Δs)², it can be shown in the same manner that, to the order Δs, we have elliptic orbits similar to eq. (30), given by

\frac{1}{r} = \frac{GM}{h^2}\left[1 - \sqrt{2}\,(\Delta s)\cos\varphi\right]. \tag{33}

For the parabolic-type orbit (e² = 1), e₃ = −1/12 and the initial distance of the particle from the star is given from eq. (17) to be q_max = ∞, and eq. (19) gives ε = 1. Thus e = 1 and ε = 1 coincide for all values of s. The orbit equation is given exactly by

\frac{1}{q} = 4\left(e_2 + \frac{1}{12}\right)\mathrm{sn}^2(\gamma\varphi, k), \tag{34}

and for small s values approximately by

\frac{1}{r} = \frac{GM}{h^2}\{1 - \cos[(1 - \delta)\varphi]\}, \tag{35}

where δ is given by eq. (26) and for which the lowest order general relativistic correction to the Newtonian case is of the order s². For the hyperbolic-type orbit (e² > 1), e₃ is less than −1/12 and eq. (17) is not applicable.
Instead, a particle approaches the star from infinity along an incoming asymptote at an angle Ψ₁ to the horizontal axis given by [2,3]

\Psi_1 = \gamma^{-1}\,\mathrm{sn}^{-1}\left(\sqrt{\frac{-(1/3 + 4e_3)}{4(e_2 - e_3)}},\; k\right), \tag{36}

where γ and k are defined by eqs. (13) and (14), turns counter-clockwise about the star to its right on the horizontal axis, and leaves along an outgoing asymptote at an angle Ψ₂ given by

\Psi_2 \equiv \frac{2K(k)}{\gamma} - \Psi_1. \tag{37}

The minimum dimensionless distance q_min of the particle from the star is still given by eq. (18), as e₂ is still greater than −1/12. In the Newtonian limit for small s, Ψ₁ becomes

\Psi_1 \simeq \cos^{-1}\left(\frac{1}{e}\right) \equiv \varphi_0, \tag{38}

and the complementary angle Ψ′₁ ≡ 2π − Ψ₂ ≃ φ₀ also. If we define

\Theta_{\rm GR} \equiv \Psi_1 + \Psi_1' \tag{39}

and

\Theta_{\rm Newton} = 2\varphi_0, \tag{40}

the difference Δφ ≡ Θ_Newton − Θ_GR can be taken to be an analog of the precession angle given by eq. (20) for a hyperbolic orbit, and for small s and to the order of s², it was shown [3] to be given by

\Delta\varphi \simeq \left[6\pi - 6\varphi_0 + 2(2 + e^{-2})\sqrt{e^2 - 1}\right]s^2. \tag{41}

This result is different from the approximation used by Longuski et al. [9]. As discussed in ref. 9, an experimental test can be carried out to check this result. From eq. (18), the minimum distance r_min of the particle from the star is given approximately by

\frac{1}{r_{\min}} \simeq \frac{GM}{h^2}(e + 1)\left[1 + \frac{(e + 1)^2}{e}s^2 + \cdots\right]. \tag{42}

Equations (41) and (42) show two examples for which the lowest order general relativistic corrections to the Newtonian case are of the order s² and not s for e² > 1. To summarize the above results: for small values of s, the general relativistic correction to the Newtonian elliptic orbit is second order in s for e² > 0 but first order in s for e² ≤ 0, for which the correction is shown to appear in the eccentricity of the elliptic orbit. This division gives a new meaning and significance to the parameter e². In particular, there exist non-Newtonian elliptic orbits of eccentricity √2 s, given by eq. (30).

[1] M.P. Hobson, G. Efstathiou and A.N. Lasenby, General Relativity, Cambridge University Press, 2006, Chapters 9 and 10.
[2] F.T. Hioe, Phys. Lett. A 373, 1506 (2009).
[3] F.T. Hioe and D. Kuebel, Phys. Rev. D 81, 084017 (2010).
[4] F.T. Hioe and D. Kuebel, arXiv:1008.1964 v1 (2010).
[5] F.T. Hioe and D. Kuebel, arXiv:1010.0996 v2 (2010).
[6] F.T. Hioe and D. Kuebel, arXiv:1207.7041 v1 (2012).
[7] P.F. Byrd and M.D. Friedman, Handbook of Elliptic Integrals for Engineers and Scientists, 2nd Edition, Springer-Verlag, New York, 1971.
[8] J.L. Martin, General Relativity, Revised Edition, Prentice Hall, New York, 1996, Chapter 4.
[9] J.M. Longuski, E. Fischbach and D.J. Scheeres, Phys. Rev. Lett. 86, 2942 (2001); J.M. Longuski, E. Fischbach, D.J. Scheeres, G. Giampierri and R.S. Park, Phys. Rev. D 69, 042001 (2004).
---

Title: Sample Efficient Actor-Critic with Experience Replay (published as a conference paper at ICLR 2017)
Authors: Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas (DeepMind; CIFAR; Oxford University)
Abstract: This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.
PDF: https://arxiv.org/pdf/1611.01224v2.pdf
arXiv: 1611.01224
Corpus ID: 10296217
Published as a conference paper at ICLR 2017

SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY

Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas
DeepMind; CIFAR; Oxford University

ABSTRACT

This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method. We believe this paper is the first to address this challenge successfully at scale. More specifically, we introduce an actor critic with experience replay (ACER) that nearly matches the state-of-the-art performance of deep Q-networks with prioritized replay on Atari, and substantially outperforms A3C in terms of sample efficiency on both Atari and continuous control domains. ACER capitalizes on recent advances in deep neural networks, variance reduction techniques, the off-policy Retrace algorithm (Munos et al., 2016) and parallel training of RL agents (Mnih et al., 2016).
Yet, crucially, its success hinges on innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling network architectures, and efficient trust region policy optimization. On the theoretical front, the paper proves that the Retrace operator can be rewritten from our proposed truncated importance sampling with bias correction technique.

1 INTRODUCTION

Realistic simulated environments, where agents can be trained to learn a large repertoire of cognitive skills, are at the core of recent breakthroughs in AI (Bellemare et al., 2013; Mnih et al., 2015; Schulman et al., 2015a; Narasimhan et al., 2015; Mnih et al., 2016; Brockman et al., 2016; Oh et al., 2016). With richer realistic environments, the capabilities of our agents have increased and improved. Unfortunately, these advances have been accompanied by a substantial increase in the cost of simulation. In particular, every time an agent acts upon the environment, an expensive simulation step is conducted. Thus, to reduce the cost of simulation, we need to reduce the number of simulation steps (i.e. samples of the environment). This need for sample efficiency is even more compelling when agents are deployed in the real world.

Experience replay (Lin, 1992) has gained popularity in deep Q-learning (Mnih et al., 2015; Wang et al., 2016; Narasimhan et al., 2015), where it is often motivated as a technique for reducing sample correlation. Replay is actually a valuable tool for improving sample efficiency and, as we will see in our experiments, state-of-the-art deep Q-learning methods (Wang et al., 2016) have been up to this point the most sample efficient techniques on Atari by a significant margin. However, we need to do better than deep Q-learning, because it has two important limitations. First, the deterministic nature of the optimal policy limits its use in adversarial domains. Second, finding the greedy action with respect to the Q function is costly for large action spaces.
Policy gradient methods have been at the heart of significant advances in AI and robotics (Silver et al., 2014; Lillicrap et al., 2015; Levine et al., 2015; Mnih et al., 2016; Schulman et al., 2015a; Heess et al., 2015). Many of these methods are restricted to continuous domains or to very specific tasks such as playing Go. The existing variants applicable to both continuous and discrete domains, such as the on-policy asynchronous advantage actor critic (A3C) of Mnih et al. (2016), are sample inefficient. The design of stable, sample efficient actor critic methods that apply to both continuous and discrete action spaces has been a long-standing hurdle of reinforcement learning (RL). We believe this paper is the first to address this challenge successfully at scale.

2 BACKGROUND AND PROBLEM SETUP

Consider an agent interacting with its environment over discrete time steps. At time step t, the agent observes the n_x-dimensional state vector x_t ∈ X ⊆ R^{n_x}, chooses an action a_t according to a policy π(a|x_t) and observes a reward signal r_t ∈ R produced by the environment. We will consider discrete actions a_t ∈ {1, 2, ..., N_a} in Sections 3 and 4, and continuous actions a_t ∈ A ⊆ R^{n_a} in Section 5. The goal of the agent is to maximize the discounted return R_t = \sum_{i \ge 0} \gamma^i r_{t+i} in expectation. The discount factor γ ∈ [0, 1) trades off the importance of immediate and future rewards.

For an agent following policy π, we use the standard definitions of the state-action and state-only value functions:

Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1:\infty},\, a_{t+1:\infty}}\left[R_t \mid x_t, a_t\right] \quad \text{and} \quad V^\pi(x_t) = \mathbb{E}_{a_t}\left[Q^\pi(x_t, a_t) \mid x_t\right].

Here, the expectations are with respect to the observed environment states x_t and the actions generated by the policy π, where x_{t+1:∞} denotes a state trajectory starting at time t + 1. We also need to define the advantage function A^π(x_t, a_t) = Q^π(x_t, a_t) − V^π(x_t), which provides a relative measure of the value of each action since E_{a_t}[A^π(x_t, a_t)] = 0.
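As a concrete reading of the return definition, the discounted returns R_t for a finite rollout can be accumulated backwards (an illustrative sketch, not code from the paper):

```python
def discounted_returns(rewards, gamma):
    """R_t = sum_{i>=0} gamma^i r_{t+i}, truncated at the end of the rollout."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running   # R_t = r_t + gamma * R_{t+1}
        returns.append(running)
    return returns[::-1]
```

For example, `discounted_returns([1.0, 1.0, 1.0], 0.5)` gives `[1.75, 1.5, 1.0]`.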
The parameters θ of the differentiable policy π_θ(a_t|x_t) can be updated using the discounted approximation to the policy gradient, which, borrowing notation from Schulman et al. (2015b), is defined as:

g = \mathbb{E}_{x_{0:\infty},\, a_{0:\infty}}\left[\sum_{t \ge 0} A^\pi(x_t, a_t)\,\nabla_\theta \log \pi_\theta(a_t | x_t)\right]. \tag{1}

Following Proposition 1 of Schulman et al. (2015b), we can replace A^π(x_t, a_t) in the above expression with the state-action value Q^π(x_t, a_t), the discounted return R_t, or the temporal difference residual r_t + γV^π(x_{t+1}) − V^π(x_t), without introducing bias. These choices will however have different variance. Moreover, in practice we will approximate these quantities with neural networks, thus introducing additional approximation errors and biases. Typically, the policy gradient estimator using R_t will have higher variance and lower bias, whereas the estimators using function approximation will have higher bias and lower variance. Combining R_t with the current value function approximation to minimize bias while maintaining bounded variance is one of the central design principles behind ACER.

To trade off bias and variance, the asynchronous advantage actor critic (A3C) of Mnih et al. (2016) uses a single trajectory sample to obtain the following gradient approximation:

\hat g^{\rm a3c} = \sum_{t \ge 0} \left(\sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V^\pi_{\theta_v}(x_{t+k}) - V^\pi_{\theta_v}(x_t)\right) \nabla_\theta \log \pi_\theta(a_t | x_t). \tag{2}

A3C combines both k-step returns and function approximation to trade off variance and bias. We may think of V^π_{θ_v}(x_t) as a policy gradient baseline used to reduce variance.

In the following section, we will introduce the discrete-action version of ACER. ACER may be understood as the off-policy counterpart of the A3C method of Mnih et al. (2016). As such, ACER builds on all the engineering innovations of A3C, including efficient parallel CPU computation. ACER uses a single deep neural network to estimate the policy π_θ(a_t|x_t) and the value function V^π_{θ_v}(x_t).
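The k-step bootstrapped advantage inside eq. (2) can be sketched as follows (illustrative numpy code, not the authors' implementation; here k runs to the end of the rollout, with V(x_T) bootstrapping the tail):

```python
import numpy as np

def kstep_advantages(rewards, values, bootstrap_value, gamma):
    """Advantages sum_i gamma^i r_{t+i} + gamma^k V(x_{t+k}) - V(x_t) per step t."""
    T = len(rewards)
    advantages = np.empty(T)
    ret = bootstrap_value                  # V(x_T) bootstraps the tail
    for t in reversed(range(T)):
        ret = rewards[t] + gamma * ret     # k-step return from time t
        advantages[t] = ret - values[t]    # subtract the baseline V(x_t)
    return advantages
```

Each advantage would then multiply the corresponding grad-log-probability term in eq. (2).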
(For clarity and generality, we are using two different symbols to denote the parameters of the policy and value function, $\theta$ and $\theta_v$, but most of these parameters are shared in the single neural network.) Our neural networks, though building on the networks used in A3C, will introduce several modifications and new modules.

DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY

Off-policy learning with experience replay may appear to be an obvious strategy for improving the sample efficiency of actor-critics. However, controlling the variance and stability of off-policy estimators is notoriously hard. Importance sampling is one of the most popular approaches for off-policy learning (Meuleau et al., 2000; Jie & Abbeel, 2010; Levine & Koltun, 2013). In our context, it proceeds as follows. Suppose we retrieve a trajectory $\{x_0, a_0, r_0, \mu(\cdot|x_0), \cdots, x_k, a_k, r_k, \mu(\cdot|x_k)\}$, where the actions have been sampled according to the behavior policy $\mu$, from our memory of experiences. Then, the importance weighted policy gradient is given by:
$$\hat{g}^{\text{imp}} = \left(\prod_{t=0}^{k} \rho_t\right) \sum_{t=0}^{k} \left(\sum_{i=0}^{k} \gamma^i r_{t+i}\right) \nabla_\theta \log \pi_\theta(a_t|x_t), \tag{3}$$
where $\rho_t = \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}$ denotes the importance weight. This estimator is unbiased, but it suffers from very high variance as it involves a product of many potentially unbounded importance weights. To prevent the product of importance weights from exploding, Wawrzyński (2009) truncates this product. Truncated importance sampling over entire trajectories, although bounded in variance, could suffer from significant bias. Recently, Degris et al. (2012) attacked this problem by using marginal value functions over the limiting distribution of the process to yield the following approximation of the gradient:
$$g^{\text{marg}} = \mathbb{E}_{x_t \sim \beta,\, a_t \sim \mu}\left[\rho_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t)\right], \tag{4}$$
where $\mathbb{E}_{x_t \sim \beta,\, a_t \sim \mu}[\cdot]$ is the expectation with respect to the limiting distribution $\beta(x) = \lim_{t\to\infty} P(x_t = x \mid x_0, \mu)$ with behavior policy $\mu$.
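The contrast between the trajectory weight in equation (3) and the marginal weight in equation (4) is easy to see numerically: the former multiplies per-step ratios and can grow exponentially with trajectory length, while the latter never exceeds a single per-step ratio. The ratios below are hypothetical toy numbers.

```python
# Per-step ratios rho_t = pi(a_t|x_t) / mu(a_t|x_t) along one hypothetical trajectory,
# each only modestly larger than 1:
rhos = [2.0] * 10

# Trajectory-level weight used by the importance weighted gradient (eq. 3):
traj_weight = 1.0
for rho in rhos:
    traj_weight *= rho
print(traj_weight)  # 2**10 = 1024: one trajectory can dominate the whole estimate

# Marginal weight used by the marginal gradient (eq. 4): one ratio per sample.
print(max(rhos))  # 2.0
```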
To keep the notation succinct, we will replace E xt∼β,at∼µ [·] with E xtat [·] and ensure we remind readers of this when necessary. Two important facts about equation (4) must be highlighted. First, note that it depends on Q π and not on Q µ , consequently we must be able to estimate Q π . Second, we no longer have a product of importance weights, but instead only need to estimate the marginal importance weight ρ t . Importance sampling in this lower dimensional space (over marginals as opposed to trajectories) is expected to exhibit lower variance. Degris et al. (2012) estimate Q π in equation (4) using lambda returns: R λ t = r t + (1 − λ)γV (x t+1 ) + λγρ t+1 R λ t+1 . This estimator requires that we know how to choose λ ahead of time to trade off bias and variance. Moreover, when using small values of λ to reduce variance, occasional large importance weights can still cause instability. In the following subsection, we adopt the Retrace algorithm of Munos et al. (2016) to estimate Q π . Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic of Degris et al. (2012), and introduce a computationally efficient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section 5. MULTI-STEP ESTIMATION OF THE STATE-ACTION VALUE FUNCTION In this paper, we estimate Q π (x t , a t ) using Retrace (Munos et al., 2016). (We also experimented with the related tree backup method of Precup et al. (2000) but found Retrace to perform better in practice.) 
Given a trajectory generated under the behavior policy $\mu$, the Retrace estimator can be expressed recursively as follows:
$$Q^{\text{ret}}(x_t, a_t) = r_t + \gamma \bar{\rho}_{t+1}\left[Q^{\text{ret}}(x_{t+1}, a_{t+1}) - Q(x_{t+1}, a_{t+1})\right] + \gamma V(x_{t+1}), \tag{5}$$
where $\bar{\rho}_t$ is the truncated importance weight, $\bar{\rho}_t = \min\{c, \rho_t\}$ with $\rho_t = \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}$, $Q$ is the current value estimate of $Q^\pi$, and $V(x) = \mathbb{E}_{a \sim \pi} Q(x, a)$. Retrace is an off-policy, return-based algorithm which has low variance and is proven to converge (in the tabular case) to the value function of the target policy for any behavior policy; see Munos et al. (2016). The recursive Retrace equation depends on the estimate $Q$. To compute it, in discrete action spaces, we adopt a convolutional neural network with "two heads" that outputs the estimate $Q_{\theta_v}(x_t, a_t)$, as well as the policy $\pi_\theta(a_t|x_t)$. This neural representation is the same as in (Mnih et al., 2016), with the exception that we output the vector $Q_{\theta_v}(x_t, a_t)$ instead of the scalar $V_{\theta_v}(x_t)$. The estimate $V_{\theta_v}(x_t)$ can be easily derived by taking the expectation of $Q_{\theta_v}$ under $\pi_\theta$. To approximate the policy gradient $g^{\text{marg}}$, ACER uses $Q^{\text{ret}}$ to estimate $Q^\pi$. As Retrace uses multi-step returns, it can significantly reduce bias in the estimation of the policy gradient. To learn the critic $Q_{\theta_v}(x_t, a_t)$, we again use $Q^{\text{ret}}(x_t, a_t)$ as a target in a mean squared error loss and update its parameters $\theta_v$ with the following standard gradient:
$$\left(Q^{\text{ret}}(x_t, a_t) - Q_{\theta_v}(x_t, a_t)\right) \nabla_{\theta_v} Q_{\theta_v}(x_t, a_t). \tag{6}$$
Because Retrace is return-based, it also enables faster learning of the critic. Thus the purpose of the multi-step estimator $Q^{\text{ret}}$ in our setting is twofold: to reduce bias in the policy gradient, and to enable faster learning of the critic, hence further reducing bias.

IMPORTANCE WEIGHT TRUNCATION WITH BIAS CORRECTION

The marginal importance weights in Equation (4) can become large, thus causing instability.
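Equation (5) is naturally evaluated backwards over a sampled trajectory, as in the pseudo-code of Appendix A. A minimal sketch for a terminal trajectory follows; the rewards, value estimates and importance ratios are hypothetical toy numbers.

```python
def retrace_targets(rewards, q, v, rho, gamma, c=1.0):
    """Backward computation of the Retrace targets of equation (5) for a
    trajectory that ends in a terminal state.
    q[t] = Q(x_t, a_t), v[t] = V(x_t), rho[t] = pi(a_t|x_t) / mu(a_t|x_t)."""
    T = len(rewards)
    qret = [0.0] * T
    qret[T - 1] = rewards[T - 1]          # terminal step: no bootstrap value
    for t in range(T - 2, -1, -1):
        rho_bar = min(c, rho[t + 1])      # truncated importance weight
        qret[t] = (rewards[t]
                   + gamma * rho_bar * (qret[t + 1] - q[t + 1])
                   + gamma * v[t + 1])
    return qret

# Hypothetical two-step trajectory:
targets = retrace_targets(rewards=[1.0, 2.0], q=[0.5, 1.5], v=[0.4, 1.0],
                          rho=[1.0, 2.0], gamma=0.9)
print(targets)  # targets[1] = 2.0; targets[0] = 1 + 0.9*1*(2 - 1.5) + 0.9*1 = 2.35
```

Note how the second ratio, $\rho = 2$, is clipped to $c = 1$ before entering the recursion: large weights never multiply through the return.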
To safeguard against high variance, we propose to truncate the importance weights and introduce a correction term via the following decomposition of $g^{\text{marg}}$:
$$g^{\text{marg}} = \mathbb{E}_{x_t a_t}\left[\rho_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t)\right] = \mathbb{E}_{x_t}\left[\mathbb{E}_{a_t}\left[\bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t)\right] + \mathbb{E}_{a \sim \pi}\left(\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+ \nabla_\theta \log \pi_\theta(a|x_t) Q^\pi(x_t, a)\right)\right], \tag{7}$$
where $\bar{\rho}_t = \min\{c, \rho_t\}$ with $\rho_t = \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}$ as before. We have also introduced the notation $\rho_t(a) = \frac{\pi(a|x_t)}{\mu(a|x_t)}$, and $[x]_+ = x$ if $x > 0$ and it is zero otherwise. We remind readers that the above expectations are with respect to the limiting state distribution under the behavior policy: $x_t \sim \beta$ and $a_t \sim \mu$. The clipping of the importance weight in the first term of equation (7) ensures that the variance of the gradient estimate is bounded. The correction term (second term in equation (7)) ensures that our estimate is unbiased. Note that the correction term is only active for actions such that $\rho_t(a) > c$. In particular, if we choose a large value for $c$, the correction term only comes into effect when the variance of the original off-policy estimator of equation (4) is very high. When this happens, our decomposition has the nice property that the truncated weight in the first term is at most $c$ while the correction weight $\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+$ in the second term is at most 1. We model $Q^\pi(x_t, a)$ in the correction term with our neural network approximation $Q_{\theta_v}(x_t, a)$. This modification results in what we call the truncation with bias correction trick, in this case applied to the function $\nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t)$:
$$\hat{g}^{\text{marg}} = \mathbb{E}_{x_t}\left[\mathbb{E}_{a_t}\left[\bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^{\text{ret}}(x_t, a_t)\right] + \mathbb{E}_{a \sim \pi}\left(\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+ \nabla_\theta \log \pi_\theta(a|x_t) Q_{\theta_v}(x_t, a)\right)\right]. \tag{8}$$
Equation (8) involves an expectation over the stationary distribution of the Markov process. We can however approximate it by sampling trajectories $\{x_0, a_0, r_0, \mu(\cdot|x_0), \cdots, x_k, a_k, r_k, \mu(\cdot|x_k)\}$ generated from the behavior policy $\mu$.
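The decomposition in equation (7) rests on an exact identity: truncating the weight under $\mu$ and adding the $[(\rho - c)/\rho]_+$ correction under $\pi$ together recover the full expectation under $\pi$. A small self-contained check over hypothetical discrete distributions:

```python
# Exact check of the truncation-with-bias-correction identity:
#   E_{a~mu}[min(c, rho(a)) f(a)] + E_{a~pi}[[(rho(a)-c)/rho(a)]_+ f(a)] = E_{a~pi}[f(a)]
pi = [0.6, 0.3, 0.1]    # hypothetical target policy
mu = [0.2, 0.3, 0.5]    # hypothetical behavior policy
f  = [2.0, -1.0, 0.5]   # any bounded function of the action
c  = 1.5

rho = [p / m for p, m in zip(pi, mu)]                                  # [3.0, 1.0, 0.2]
truncated  = sum(m * min(c, r) * v for m, r, v in zip(mu, rho, f))     # clipped term
correction = sum(p * max(0.0, (r - c) / r) * v for p, r, v in zip(pi, rho, f))
target     = sum(p * v for p, v in zip(pi, f))
print(truncated + correction, target)  # both equal 0.95
```

Only the first action has $\rho > c$, so only it contributes to the correction term, exactly as described above.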
Here the terms $\mu(\cdot|x_t)$ are the policy vectors. Given these trajectories, we can compute the off-policy ACER gradient:
$$\hat{g}^{\text{acer}}_t = \bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t)\left[Q^{\text{ret}}(x_t, a_t) - V_{\theta_v}(x_t)\right] + \mathbb{E}_{a \sim \pi}\left(\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+ \nabla_\theta \log \pi_\theta(a|x_t)\left[Q_{\theta_v}(x_t, a) - V_{\theta_v}(x_t)\right]\right). \tag{9}$$
In the above expression, we have subtracted the classical baseline $V_{\theta_v}(x_t)$ to reduce variance. It is interesting to note that, when $c = \infty$, (9) recovers (off-policy) policy gradient up to the use of Retrace. When $c = 0$, (9) recovers an actor critic update that depends entirely on Q estimates. In the continuous control domain, (9) also generalizes Stochastic Value Gradients if $c = 0$ and the reparametrization trick is used to estimate its second term (Heess et al., 2015).

EFFICIENT TRUST REGION POLICY OPTIMIZATION

The policy updates of actor-critic methods often exhibit high variance. Hence, to ensure stability, we must limit the per-step changes to the policy. Simply using smaller learning rates is insufficient, as they cannot guard against the occasional large updates while maintaining a desired learning speed. Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) provides a more adequate solution. Schulman et al. (2015a) approximately limit the difference between the updated policy and the current policy to ensure safety. Despite the effectiveness of their TRPO method, it requires repeated computation of Fisher-vector products for each update. This can prove to be prohibitively expensive in large domains. In this section we introduce a new trust region policy optimization method that scales well to large problems. Instead of constraining the updated policy to be close to the current policy (as in TRPO), we propose to maintain an average policy network that represents a running average of past policies and forces the updated policy to not deviate far from this average.
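The running average of past policies is maintained by the soft parameter update $\theta_a \leftarrow \alpha \theta_a + (1 - \alpha)\theta$ described in the next subsection, i.e. an exponential moving average of the policy parameters. A minimal sketch of its behavior (scalar toy parameters):

```python
def soft_update(theta_a, theta, alpha=0.99):
    """Average policy network update: theta_a <- alpha * theta_a + (1 - alpha) * theta."""
    return [alpha * pa + (1.0 - alpha) * p for pa, p in zip(theta_a, theta)]

theta_a = [0.0]   # average-network parameter, hypothetical starting value
theta = [1.0]     # current policy parameter, held fixed here for illustration
for _ in range(100):
    theta_a = soft_update(theta_a, theta)
print(theta_a[0])  # 1 - 0.99**100, roughly 0.634: the average lags the policy
```

With $\alpha$ close to 1 the average moves slowly, so constraining updates against it penalizes fast cumulative drift of the policy rather than any single step.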
We decompose our policy network in two parts: a distribution $f$, and a deep neural network that generates the statistics $\phi_\theta(x)$ of this distribution. That is, given $f$, the policy is completely characterized by the network $\phi_\theta$: $\pi(\cdot|x) = f(\cdot|\phi_\theta(x))$. For example, in the discrete domain, we choose $f$ to be the categorical distribution with a probability vector $\phi_\theta(x)$ as its statistics. The probability vector is of course parameterised by $\theta$. We denote the average policy network as $\phi_{\theta_a}$ and update its parameters $\theta_a$ "softly" after each update to the policy parameter $\theta$: $\theta_a \leftarrow \alpha \theta_a + (1 - \alpha)\theta$. Consider, for example, the ACER policy gradient as defined in Equation (9), but with respect to $\phi$:
$$\hat{g}^{\text{acer}}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t))\left[Q^{\text{ret}}(x_t, a_t) - V_{\theta_v}(x_t)\right] + \mathbb{E}_{a \sim \pi}\left(\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+ \nabla_{\phi_\theta(x_t)} \log f(a|\phi_\theta(x_t))\left[Q_{\theta_v}(x_t, a) - V_{\theta_v}(x_t)\right]\right). \tag{10}$$
Given the averaged policy network, our proposed trust region update involves two stages. In the first stage, we solve the following optimization problem with a linearized KL divergence constraint:
$$\underset{z}{\text{minimize}} \;\; \frac{1}{2}\left\|\hat{g}^{\text{acer}}_t - z\right\|_2^2 \quad \text{subject to} \quad \nabla_{\phi_\theta(x_t)} D_{KL}\left[f(\cdot|\phi_{\theta_a}(x_t)) \,\|\, f(\cdot|\phi_\theta(x_t))\right]^T z \leq \delta \tag{11}$$
Since the constraint is linear, the overall optimization problem reduces to a simple quadratic programming problem, the solution of which can be easily derived in closed form using the KKT conditions. Letting $k = \nabla_{\phi_\theta(x_t)} D_{KL}\left[f(\cdot|\phi_{\theta_a}(x_t)) \,\|\, f(\cdot|\phi_\theta(x_t))\right]$, the solution is:
$$z^* = \hat{g}^{\text{acer}}_t - \max\left\{0, \frac{k^T \hat{g}^{\text{acer}}_t - \delta}{\|k\|_2^2}\right\} k \tag{12}$$
This transformation of the gradient has a very natural form. If the constraint is satisfied, there is no change to the gradient with respect to $\phi_\theta(x_t)$. Otherwise, the update is scaled down in the direction of $k$, thus effectively lowering the rate of change between the activations of the current policy and the average policy network. In the second stage, we take advantage of back-propagation.
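The closed form in equation (12) is cheap to evaluate: one dot product decides whether the constraint is active, followed by at most one rescaled subtraction. A minimal sketch in plain Python (toy gradient and KL-gradient vectors):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def trust_region_step(g, k, delta):
    """Closed-form solution of equation (12): if the linearized KL constraint
    k^T z <= delta is violated, scale the gradient back along k."""
    scale = max(0.0, (dot(k, g) - delta) / dot(k, k))
    return [gi - scale * ki for gi, ki in zip(g, k)]

g, k = [1.0, 2.0], [1.0, 0.0]
print(trust_region_step(g, k, delta=5.0))  # constraint inactive: g is unchanged
z = trust_region_step(g, k, delta=0.5)
print(dot(k, z))  # constraint active: z is projected so that k^T z = delta = 0.5
```

When the constraint is active, substituting $z^*$ back gives $k^T z^* = k^T g - (k^T g - \delta) = \delta$ exactly, which the second print statement confirms numerically.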
Specifically, the updated gradient with respect to φ θ , that is z * , is back-propagated through the network to compute the derivatives with respect to the parameters. The parameter updates for the policy network follow from the chain rule: ∂φ θ (x) ∂θ z * . The trust region step is carried out in the space of the statistics of the distribution f , and not in the space of the policy parameters. This is done deliberately so as to avoid an additional back-propagation step through the policy network. We would like to remark that the algorithm advanced in this section can be thought of as a general strategy for modifying the backward messages in back-propagation so as to stabilize the activations. Instead of a trust region update, one could alternatively add an appropriately scaled KL cost to the objective function as proposed by Heess et al. (2015). This approach, however, is less robust to the choice of hyper-parameters in our experience. The ACER algorithm results from a combination of the above ideas, with the precise pseudo-code appearing in Appendix A. A master algorithm (Algorithm 1) calls ACER on-policy to perform updates and propose trajectories. It then calls ACER off-policy component to conduct several replay steps. When on-policy, ACER effectively becomes a modified version of A3C where Q instead of V baselines are employed and trust region optimization is used. RESULTS ON ATARI We use the Arcade Learning Environment of Bellemare et al. (2013) to conduct an extensive evaluation. We deploy one single algorithm and network architecture, with fixed hyper-parameters, to learn to play 57 Atari games given only raw pixel observations and game rewards. This task is highly demanding because of the diversity of games, and high-dimensional pixel-level observations. Our experimental setup uses 16 actor-learner threads running on a single machine with no GPUs. We adopt the same input pre-processing and network architecture as Mnih et al. (2015). 
Specifically, the network consists of a convolutional layer with 32 8 × 8 filters with stride 4 followed by another convolutional layer with 64 4 × 4 filters with stride 2, followed by a final convolutional layer with 64 3 × 3 filters with stride 1, followed by a fully-connected layer of size 512. Each of the hidden layers is followed by a rectifier nonlinearity. The network outputs a softmax policy and Q values. When using replay, we add to each thread a replay memory that is up to 50 000 frames in size. The total amount of memory used across all threads is thus similar in size to that of DQN (Mnih et al., 2015). For all Atari experiments, we use a single learning rate adopted from an earlier implementation of A3C without further tuning. We do not anneal the learning rates over the course of training as in Mnih et al. (2016). We otherwise adopt the same optimization procedure as in Mnih et al. (2016). Specifically, we adopt entropy regularization with weight 0.001, discount the rewards with γ = 0.99, and perform updates every 20 steps (k = 20 in the notation of Section 2). In all our experiments with experience replay, we use importance weight truncation with c = 10. We consider training ACER both with and without trust region updating as described in Section 3.3. When trust region updating is used, we use δ = 1 and α = 0.99 for all experiments. To compare different agents, we adopt as our metric the median of the human normalized score over all 57 games. The normalization is calculated such that, for each game, human scores and random scores are evaluated to 1, and 0 respectively. The normalized score for a given game at time t is computed as the average normalized score over the past 1 million consecutive frames encountered until time t. For each agent, we plot its cumulative maximum median score over time. The result is summarized in Figure 1. 
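The layer sizes quoted above can be sanity-checked with standard valid-convolution arithmetic, assuming the usual 84 × 84 preprocessed frames of Mnih et al. (2015):

```python
def conv_out(size, kernel, stride):
    """Output spatial size of a valid convolution: floor((size - kernel)/stride) + 1."""
    return (size - kernel) // stride + 1

s = 84                   # preprocessed input frames are 84x84 (Mnih et al., 2015)
s = conv_out(s, 8, 4)    # 20 after the 32-filter 8x8 stride-4 layer
s = conv_out(s, 4, 2)    # 9 after the 64-filter 4x4 stride-2 layer
s = conv_out(s, 3, 1)    # 7 after the 64-filter 3x3 stride-1 layer
flat = 64 * s * s
print(flat)  # 3136 features feeding the fully-connected layer of size 512
```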
The four colors in Figure 1 correspond to four replay ratios (0, 1, 4 and 8), with a ratio of 4 meaning that we use the off-policy component of ACER 4 times after using the on-policy component (A3C). That is, a replay ratio of 0 means that we are using A3C. The solid and dashed lines represent ACER with and without trust region updating respectively. The gray and black curves are the original DQN agent (Mnih et al., 2015) and the Prioritized Replay agent of Schaul et al. (2016), respectively. As shown on the left panel of Figure 1, replay significantly increases data efficiency. We observe that when using the trust region optimizer, the average reward as a function of the number of environmental steps increases with the ratio of replay. This increase has diminishing returns, but with enough replay, ACER can match the performance of the best DQN agents. Moreover, it is clear that the off-policy actor critics (ACER) are much more sample efficient than their on-policy counterpart (A3C). The right panel of Figure 1 shows that ACER agents perform similarly to A3C when measured by wall clock time. Thus, in this case, it is possible to achieve better data efficiency without necessarily compromising on computation time. In particular, ACER with a replay ratio of 4 is an appealing alternative to either the prioritized DQN agent or A3C.

CONTINUOUS ACTOR CRITIC WITH EXPERIENCE REPLAY

Retrace requires estimates of both Q and V, but we cannot easily integrate over Q to derive V in continuous action spaces. In this section, we propose a solution to this problem in the form of a novel representation for RL, as well as modifications necessary for trust region updating.

POLICY EVALUATION

Retrace provides a target for learning $Q_{\theta_v}$, but not for learning $V_{\theta_v}$. We could use importance sampling to compute $V_{\theta_v}$ given $Q_{\theta_v}$, but this estimator has high variance. We propose a new architecture which we call Stochastic Dueling Networks (SDNs), inspired by the Dueling networks of Wang et al.
(2016), which is designed to estimate both $V^\pi$ and $Q^\pi$ off-policy while maintaining consistency between the two estimates. At each time step, an SDN outputs a stochastic estimate $\widetilde{Q}_{\theta_v}$ of $Q^\pi$ and a deterministic estimate $V_{\theta_v}$ of $V^\pi$, such that
$$\widetilde{Q}_{\theta_v}(x_t, a_t) \sim V_{\theta_v}(x_t) + A_{\theta_v}(x_t, a_t) - \frac{1}{n}\sum_{i=1}^{n} A_{\theta_v}(x_t, u_i), \quad \text{with } u_i \sim \pi_\theta(\cdot|x_t), \tag{13}$$
where $n$ is a parameter, see Figure 2. The two estimates are consistent in the sense that $\mathbb{E}_{a \sim \pi(\cdot|x_t)}\left[\mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)}\left[\widetilde{Q}_{\theta_v}(x_t, a)\right]\right] = V_{\theta_v}(x_t)$. Furthermore, we can learn about $V^\pi$ by learning $\widetilde{Q}_{\theta_v}$. To see this, assume we have learned $Q^\pi$ perfectly such that $\mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)}\left[\widetilde{Q}_{\theta_v}(x_t, a_t)\right] = Q^\pi(x_t, a_t)$; then $V_{\theta_v}(x_t) = \mathbb{E}_{a \sim \pi(\cdot|x_t)}\left[\mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)}\left[\widetilde{Q}_{\theta_v}(x_t, a)\right]\right] = \mathbb{E}_{a \sim \pi(\cdot|x_t)}\left[Q^\pi(x_t, a)\right] = V^\pi(x_t)$. Therefore, a target on $\widetilde{Q}_{\theta_v}(x_t, a_t)$ also provides an error signal for updating $V_{\theta_v}$.

Figure 2: A schematic of the Stochastic Dueling Network. In the drawing, $[u_1, \cdots, u_n]$ are assumed to be samples from $\pi_\theta(\cdot|x_t)$. This schematic illustrates the concept of SDNs but does not reflect the real sizes of the networks used.

In addition to SDNs, we also construct the following novel target for estimating $V^\pi$:
$$V^{\text{target}}(x_t) = \min\left\{1, \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}\right\}\left(Q^{\text{ret}}(x_t, a_t) - \widetilde{Q}_{\theta_v}(x_t, a_t)\right) + V_{\theta_v}(x_t). \tag{14}$$
The above target is also derived via the truncation and bias correction trick; for more details, see Appendix D. Finally, when estimating $Q^{\text{ret}}$ in continuous domains, we implement a slightly different formulation of the truncated importance weights, $\bar{\rho}_t = \min\left\{1, \left(\frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}\right)^{\frac{1}{d}}\right\}$, where $d$ is the dimensionality of the action space. Although not essential, we have found this formulation to lead to faster learning.
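The consistency property $\mathbb{E}_{a, u_{1:n} \sim \pi}[\widetilde{Q}_{\theta_v}(x_t, a)] = V_{\theta_v}(x_t)$ can be verified exactly in a discrete toy case; all numbers below are hypothetical, and the expectation over $n = 2$ advantage samples is enumerated explicitly.

```python
from itertools import product

pi = [0.25, 0.25, 0.5]   # hypothetical policy over 3 actions
A = [1.0, -3.0, 2.0]     # hypothetical advantage-head outputs A(x, a)
V = 0.7                  # deterministic value-head output V(x)

E_A = sum(p * a for p, a in zip(pi, A))   # E_pi[A(x, .)]

# Enumerate E over two i.i.d. samples u_1, u_2 ~ pi of the subtracted mean:
n = 2
E_mean_A = sum(pi[i] * pi[j] * (A[i] + A[j]) / n
               for i, j in product(range(len(pi)), repeat=n))

# Q_tilde(a) = V + A(a) - mean_i A(u_i); in expectation over a and u_1:n:
E_Q_tilde = V + E_A - E_mean_A
print(E_Q_tilde)  # equals V = 0.7: the subtracted sample mean is unbiased for E_pi[A]
```

Since each $u_i$ is an independent draw from $\pi$, the sample mean of $A(x, u_i)$ has expectation $\mathbb{E}_\pi[A]$ for any $n$, which is why the two terms cancel.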
TRUST REGION UPDATING

To adopt the trust region updating scheme (Section 3.3) in the continuous control domain, one simply has to choose a distribution $f$ and a gradient specification $\hat{g}^{\text{acer}}_t$ suitable for continuous action spaces. For the distribution $f$, we choose Gaussian distributions with fixed diagonal covariance and mean $\phi_\theta(x)$. To derive $\hat{g}^{\text{acer}}_t$ in continuous action spaces, consider the ACER policy gradient for the stochastic dueling network, but with respect to $\phi$:
$$g^{\text{acer}}_t = \mathbb{E}_{x_t}\left[\mathbb{E}_{a_t}\left[\bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t))\left(Q^{\text{opc}}(x_t, a_t) - V_{\theta_v}(x_t)\right)\right] + \mathbb{E}_{a \sim \pi}\left(\left[\frac{\rho_t(a) - c}{\rho_t(a)}\right]_+ \left(\widetilde{Q}_{\theta_v}(x_t, a) - V_{\theta_v}(x_t)\right)\nabla_{\phi_\theta(x_t)} \log f(a|\phi_\theta(x_t))\right)\right]. \tag{15}$$
In the above definition, we are using $Q^{\text{opc}}$ instead of $Q^{\text{ret}}$. Here, $Q^{\text{opc}}(x_t, a_t)$ is the same as Retrace with the exception that the truncated importance ratio is replaced with 1 (Harutyunyan et al., 2016). Please refer to Appendix B for an expanded discussion of this design choice. Given an observation $x_t$, we can sample $a'_t \sim \pi_\theta(\cdot|x_t)$ to obtain the following Monte Carlo approximation:
$$\hat{g}^{\text{acer}}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t))\left(Q^{\text{opc}}(x_t, a_t) - V_{\theta_v}(x_t)\right) + \left[\frac{\rho_t(a'_t) - c}{\rho_t(a'_t)}\right]_+ \left(\widetilde{Q}_{\theta_v}(x_t, a'_t) - V_{\theta_v}(x_t)\right)\nabla_{\phi_\theta(x_t)} \log f(a'_t|\phi_\theta(x_t)). \tag{16}$$
Given $f$ and $\hat{g}^{\text{acer}}_t$, we apply the same steps as detailed in Section 3.3 to complete the update. The precise pseudo-code of the ACER algorithm for continuous action spaces is presented in Appendix A.

RESULTS ON MUJOCO

We evaluate our algorithms on 6 continuous control tasks, all of which are simulated using the MuJoCo physics engine (Todorov et al., 2012). For descriptions of the tasks, please refer to Appendix E.1. Briefly, the tasks with action dimensionality in brackets are: cartpole (1D), reacher (3D), cheetah (6D), fish (5D), walker (6D) and humanoid (21D). These tasks are illustrated in Figure 3. To benchmark ACER for continuous control, we compare it to its on-policy counterpart both with and without trust region updating.
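With $f$ a Gaussian of fixed diagonal covariance, the statistics $\phi_\theta(x)$ are just the mean, and the gradient of the log density with respect to those statistics has the simple closed form $(a - \phi)/\sigma^2$ per dimension. The sketch below checks this against a finite difference; the action, mean and $\sigma = 0.3$ per dimension are illustrative toy values ($\sigma = 0.3$ matches the fixed standard deviation used in our continuous-control experiments).

```python
import math

def gauss_logp(a, mu, sigma):
    """log f(a | phi) for a diagonal Gaussian with mean mu and fixed std sigma."""
    return sum(-0.5 * math.log(2 * math.pi * s * s) - (ai - mi) ** 2 / (2 * s * s)
               for ai, mi, s in zip(a, mu, sigma))

def grad_logp_wrt_mu(a, mu, sigma):
    """Analytic gradient of log f with respect to the mean statistics: (a - mu)/sigma^2."""
    return [(ai - mi) / (s * s) for ai, mi, s in zip(a, mu, sigma)]

a, mu, sigma = [0.2, -0.4], [0.0, 0.1], [0.3, 0.3]
g = grad_logp_wrt_mu(a, mu, sigma)

# Central finite-difference check on the first mean coordinate:
eps = 1e-6
fd = (gauss_logp(a, [mu[0] + eps, mu[1]], sigma)
      - gauss_logp(a, [mu[0] - eps, mu[1]], sigma)) / (2 * eps)
print(g[0], fd)  # the two should agree closely
```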
We refer to these two baselines as A3C and Trust-A3C. Additionally, we also compare to a baseline with replay where we truncate the importance weights over trajectories as in (Wawrzyński, 2009). For a detailed description of this baseline, please refer to Appendix E. Again, we run this baseline both with and without trust region updating, and refer to these choices as Trust-TIS and TIS respectively. Last but not least, we refer to our proposed approach with SDN and trust region updating as simply ACER. All five setups are implemented in the asynchronous A3C framework. All the aforementioned setups share the same network architecture that computes the policy and state values. We maintain an additional small network that computes the stochastic A values in the case of ACER. We use n = 5 (using the notation in Equation (13)) in all SDNs. Instead of mixing on-policy and replay learning as done in the Atari domain, ACER for continuous actions is entirely off-policy, with experiences generated from the simulator (4 times on average). When using replay, we add to each thread a replay memory that is 5, 000 frames in size and perform updates every 50 steps (k = 50 in the notation of Section 2). The rate of the soft updating (α as in Section 3.3) is set to 0.995 in all setups involving trust region updating. The truncation threshold c is set to 5 for ACER. We use diagonal Gaussian policies with fixed diagonal covariances where the diagonal standard deviation is set to 0.3. For all setups, we sample the learning rates log-uniformly in the range [10 −4 , 10 −3.3 ]. For setups involving trust region updating, we also sample δ uniformly in the range [0.1, 2]. With all setups, we use 30 sampled hyper-parameter settings. The empirical results for all continuous control tasks are shown Figure 3, where we show the mean and standard deviation of the best 5 out of 30 hyper-parameter settings over which we searched 3 . 
For sensitivity analyses with respect to the hyper-parameters, please refer to Figures 5 and 6 in the Appendix. In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very significant margin. Here, we also find that the proposed trust region optimization method can result in huge improvements over the baselines. The high-dimensional continuous action policies are much harder to optimize than the small discrete action policies in Atari, and hence we observe much higher gains for trust region optimization in the continuous control domains. In spite of the improvements brought in by trust region optimization, ACER still outperforms all other methods, specially in higher dimensions. ABLATIONS To further tease apart the contributions of the different components of ACER, we conduct an ablation analysis where we individually remove Retrace / Q(λ) off-policy correction, SDNs, trust region, and truncation with bias correction from the algorithm. As shown in Figure 4, Retrace and offpolicy correction, SDNs, and trust region are critical: removing any one of them leads to a clear deterioration of the performance. Truncation with bias correction did not alter the results in the Fish and Walker2d tasks. However, in Humanoid, where the dimensionality of the action space is much higher, including truncation and bias correction brings a significant boost which makes the originally kneeling humanoid stand. Presumably, the high dimensionality of the action space increases the variance of the importance weights which makes truncation with bias correction important. For more details on the experimental setup please see Appendix E.4. THEORETICAL ANALYSIS Retrace is a very recent development in reinforcement learning. In fact, this work is the first to consider Retrace in the policy gradients setting. For this reason, and given the core role that Retrace plays in ACER, it is valuable to shed more light on this technique. 
In this section, we will prove that Retrace can be interpreted as an application of the importance weight truncation and bias correction trick advanced in this paper. Consider the following equation: Q π (x t , a t ) = E xt+1at+1 [r t + γρ t+1 Q π (x t+1 , a t+1 )] .(17) If we apply the weight truncation and bias correction trick to the above equation we obtain Q π (x t , a t ) = E xt+1at+1 r t + γρ t+1 Q π (x t+1 , a t+1 ) + γ E a∼π ρ t+1 (a) − c ρ t+1 (a) + Q π (x t+1 , a) . (18) By recursively expanding Q π as in Equation (18), we can represent Q π (x, a) as: Q π (x, a) = E µ   t≥0 γ t t i=1ρ i r t + γ E b∼π ρ t+1 (b) − c ρ t+1 (b) + Q π (x t+1 , b)   .(19) The expectation E µ is taken over trajectories starting from x with actions generated with respect to µ. When Q π is not available, we can replace it with our current estimate Q to get a return-based Published as a conference paper at ICLR 2017 Red lines, in all plots, represent ACER whereas green lines ACER with missing components. This study indicates that all 4 components studied improve performance where 3 are critical to success. Note that the ACER curve is of course the same in all rows. esitmate of Q π . This operation also defines an operator: BQ(x, a) = E µ   t≥0 γ t t i=1ρ i r t + γ E b∼π ρ t+1 (b) − c ρ t+1 (b) + Q(x t+1 , b)   .(20) In the following proposition, we show that B is a contraction operator with a unique fixed point Q π and that it is equivalent to the Retrace operator. Proposition 1. The operator B is a contraction operator such that BQ − Q π ∞ ≤ γ Q − Q π ∞ and B is equivalent to Retrace. The above proposition not only shows an alternative way of arriving at the same operator, but also provides a different proof of contraction for Retrace. Please refer to Appendix C for the regularization conditions and proof of the above proposition. Finally, B, and therefore Retrace, generalizes both the Bellman operator T π and importance sampling. 
Specifically, when c = 0, B = T π and when c = ∞, B recovers importance sampling; see Appendix C. CONCLUDING REMARKS We have introduced a stable off-policy actor critic that scales to both continuous and discrete action spaces. This approach integrates several recent advances in RL in a principle manner. In addition, it integrates three innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling networks and an efficient trust region policy optimization method. We showed that the method not only matches the performance of the best known methods on Atari, but that it also outperforms popular techniques on several continuous control problems. The efficient trust region optimization method advanced in this paper performs remarkably well in continuous domains. It could prove very useful in other deep learning domains, where it is hard to stabilize the training process. A ACER PSEUDO-CODE FOR DISCRETE ACTIONS Algorithm 1 ACER for discrete actions (master algorithm) // Assume global shared parameter vectors θ and θv. // Assume ratio of replay r. repeat Call ACER on-policy, Algorithm 2. n ← Possion(r) for i ∈ {1, · · · , n} do Call ACER off-policy, Algorithm 2. end for until Max iteration or time reached. Algorithm 2 ACER for discrete actions Reset gradients dθ ← 0 and dθv ← 0. Initialize parameters θ ← θ and θ v ← θv. if not On-Policy then Sample the trajectory {x0, a0, r0, µ(·|x0), · · · , x k , a k , r k , µ(·|x k )} from the replay memory. else Get state x0 end if for i ∈ {0, · · · , k} do Compute f (·|φ θ (xi)), Q θ v (xi, ·) and f (·|φ θa (xi)). if On-Policy then Perform ai according to f (·|φ θ (xi)) Receive reward ri and new state xi+1 µ(·|xi) ← f (·|φ θ (xi)) end if ρi ← min 1, f (a i |φ θ (x i )) µ(a i |x i ) . 
end for Q ret ← 0 for terminal x k a Q θ v (x k , a)f (a|φ θ (x k )) otherwise for i ∈ {k − 1, · · · , 0} do Q ret ← ri + γQ ret Vi ← a Q θ v (xi, a)f (a|φ θ (xi)) Computing quantities needed for trust region updating: g ← min {c, ρi(ai)} ∇ φ θ (x i ) log f (ai|φ θ (xi))(Q ret − Vi) + a 1 − c ρi(a) + f (a|φ θ (xi))∇ φ θ (x i ) log f (a|φ θ (xi))(Q θ v (xi, ai) − Vi) k ← ∇ φ θ (x i ) DKL [f (·|φ θa (xi) f (·|φ θ (xi)] Accumulate gradients wrt θ : dθ ← dθ + ∂φ θ (x i ) ∂θ g − max 0, k T g−δ k 2 2 k Accumulate gradients wrt θ v : dθv ← dθv + ∇ θ v (Q ret − Q θ v (xi, a)) 2 Update Retrace target: Q ret ←ρi Q ret − Q θ v (xi, ai) + Vi end for Perform asynchronous update of θ using dθ and of θv using dθv. Updating the average policy network: θa ← αθa + (1 − α)θ B Q(λ) WITH OFF-POLICY CORRECTIONS Given a trajectory generated under the behavior policy µ, the Q(λ) with off-policy corrections estimator (Harutyunyan et al., 2016) can be expressed recursively as follows: Q opc (x t , a t ) = r t + γ[Q opc (x t+1 , a t+1 ) − Q(x t+1 , a t+1 )] + γV (x t+1 ). (21) Notice that Q opc (x t , a t ) is the same as Retrace with the exception that the truncated importance ratio is replaced with 1. Algorithm 3 ACER for Continuous Actions Reset gradients dθ ← 0 and dθv ← 0. Initialize parameters θ ← θ and θ v ← θv. Sample the trajectory {x0, a0, r0, µ(·|x0), · · · , x k , a k , r k , µ(·|x k )} from the replay memory. for i ∈ {0, · · · , k} do Compute f (·|φ θ (xi)), V θ v (xi), Q θ v (xi, ai), and f (·|φ θa (xi)). Sample a i ∼ f (·|φ θ (xi)) ρi ← f (a i |φ θ (x i )) µ(a i |x i ) and ρ i ← f (a i |φ θ (x i )) µ(a i |x i ) ci ← min 1, (ρi) 1 d . 
end for Q ret ← 0 for terminal x k V θ v (x k ) otherwise Q opc ← Q ret for i ∈ {k − 1, · · · , 0} do Q ret ← ri + γQ ret Q opc ← ri + γQ opc Computing quantities needed for trust region updating: g ← min {c, ρi} ∇ φ θ (x i ) log f (ai|φ θ (xi)) Q opc (xi, ai) − V θ v (xi) + 1 − c ρ i + ( Q θ v (xi, a i ) − V θ v (xi))∇ φ θ (x i ) log f (a i |φ θ (xi)) k ← ∇ φ θ (x i ) DKL [f (·|φ θa (xi) f (·|φ θ (xi)] Accumulate gradients wrt θ: dθ ← dθ + ∂φ θ (x i ) ∂θ g − max 0, k T g−δ k 2 2 k Accumulate gradients wrt θ v : dθv ← dθv + (Q ret − Q θ v (xi, ai))∇ θ v Q θ v (xi, ai) dθv ← dθv + min {1, ρi} Q ret (xt, ai) − Q θ v (xt, ai) ∇ θ v V θ v (xi) Update Retrace target: Q ret ← ci Q ret − Q θ v (xi, ai) + V θ v (xi) Update Retrace target: Q opc ← Q opc − Q θ v (xi, ai) + V θ v (xi) end for Perform asynchronous update of θ using dθ and of θv using dθv. Updating the average policy network: θa ← αθa + (1 − α)θ Because of the lack of the truncated importance ratio, the operator defined by Q opc is only a contraction if the target and behavior policies are close to each other (Harutyunyan et al., 2016). Q(λ) with off-policy corrections is therefore less stable compared to Retrace and unsafe for policy evaluation. Q opc , however, could better utilize the returns as the traces are not cut by the truncated importance weights. As a result, Q opc could be used efficiently to estimate Q π in policy gradient (e.g. in Equation (16)). In our continuous control experiments, we have found that Q opc leads to faster learning. C RETRACE AS TRUNCATED IMPORTANCE SAMPLING WITH BIAS CORRECTION For the purpose of proving proposition 1, we assume our environment to be a Markov Decision Process (X , A, γ, P, r). We restrict X to be a finite state space. For notational simplicity, we also restrict A to be a finite action space. P : X , A → X defines the state transition probabilities and r : X , A → [−R MAX , R MAX ] defines a reward function. Finally, γ ∈ [0, 1) is the discount factor. 
Proof of Proposition 1. First we show that B is a contraction operator:

    |BQ(x, a) − Q^π(x, a)|
      = | E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ (Q(x_{t+1}, b) − Q^π(x_{t+1}, b)) ] ] |
      ≤ E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ |Q(x_{t+1}, b) − Q^π(x_{t+1}, b)| ] ]
      ≤ E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) γ (1 − P̄_{t+1}) sup_b |Q(x_{t+1}, b) − Q^π(x_{t+1}, b)| ],    (22)

where P̄_{t+1} = 1 − E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ ] = E_{b∼μ}[ ρ̄_{t+1}(b) ]. The last inequality in the above equation is due to Hölder's inequality. Continuing,

    (22) ≤ sup_{x,b} |Q(x, b) − Q^π(x, b)| E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) γ (1 − P̄_{t+1}) ]
         = sup_{x,b} |Q(x, b) − Q^π(x, b)| E_μ[ γ Σ_{t≥0} γ^t Π_{i=1}^t ρ̄_i − Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) γ P̄_{t+1} ]
         = sup_{x,b} |Q(x, b) − Q^π(x, b)| E_μ[ γ Σ_{t≥0} γ^t Π_{i=1}^t ρ̄_i − Σ_{t≥0} γ^{t+1} Π_{i=1}^{t+1} ρ̄_i ]
         = sup_{x,b} |Q(x, b) − Q^π(x, b)| (γC − (C − 1)),

where C = E_μ[ Σ_{t≥0} γ^t Π_{i=1}^t ρ̄_i ]. Since C ≥ γ^0 Π_{i=1}^0 ρ̄_i = 1 (the t = 0 term alone), we have γC − (C − 1) ≤ γ. Therefore, we have shown that B is a contraction operator.

Now we show that B is the same as Retrace. By applying the truncation and bias correction trick, we have

    E_{b∼π}[Q(x_{t+1}, b)] = E_{b∼μ}[ ρ̄_{t+1}(b) Q(x_{t+1}, b) ] + E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q(x_{t+1}, b) ].    (23)

By adding and subtracting the two sides of Equation (23) inside the summand of Equation (20), we have

    BQ(x, a) = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q(x_{t+1}, b) ]
                   + γ E_{b∼π}[Q(x_{t+1}, b)] − γ E_{b∼μ}[ ρ̄_{t+1}(b) Q(x_{t+1}, b) ]
                   − γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q(x_{t+1}, b) ] ) ]
             = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π}[Q(x_{t+1}, b)] − γ E_{b∼μ}[ ρ̄_{t+1}(b) Q(x_{t+1}, b) ] ) ]
             = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π}[Q(x_{t+1}, b)] − γ ρ̄_{t+1} Q(x_{t+1}, a_{t+1}) ) ]
             = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π}[Q(x_{t+1}, b)] − Q(x_t, a_t) ) ] + Q(x, a)
             = RQ(x, a).

In the remainder of this appendix, we show that B generalizes both the Bellman operator and importance sampling. First, we reproduce the definition of B:

    BQ(x, a) = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q(x_{t+1}, b) ] ) ].
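The truncation-and-bias-correction decomposition that drives this proof (Equation (23) in particular) is an exact identity on a finite action space, which makes it easy to check numerically. A small sketch (the function name and the toy numbers are ours):

```python
def bias_corrected_expectation(q, pi, mu, c):
    """Check E_{b~pi}[Q(b)] against the truncated off-policy term
    E_{b~mu}[min(c, rho(b)) Q(b)] plus the bias-correction term
    E_{b~pi}[[(rho(b) - c)/rho(b)]_+ Q(b)], where rho(b) = pi(b)/mu(b).

    q, pi and mu are dicts over the same finite action set; pi and mu
    are probability distributions with mu(b) > 0 everywhere.
    """
    lhs = sum(pi[b] * q[b] for b in q)
    rho = {b: pi[b] / mu[b] for b in q}
    truncated = sum(mu[b] * min(c, rho[b]) * q[b] for b in q)
    correction = sum(pi[b] * max(0.0, (rho[b] - c) / rho[b]) * q[b] for b in q)
    return lhs, truncated + correction
```

Because min(c·μ(b), π(b)) + max(0, π(b) − c·μ(b)) = π(b) pointwise, the two returned numbers agree for any c ≥ 0, which is exactly why the decomposition introduces no bias.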
When c = 0, we have that ρ̄_i = 0 for all i, so only the t = 0 summand of the sum remains, and the compensation coefficient [(ρ_1(b) − 0)/ρ_1(b)]_+ equals 1:

    BQ(x, a) = E_μ[ r_t + γ E_{b∼π}[Q(x_{t+1}, b)] ].

In this case B = T. When c = ∞, the compensation term disappears and ρ̄_i = ρ_i for all i:

    BQ(x, a) = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ_i) ( r_t + γ E_{b∼π}[ 0 × Q(x_{t+1}, b) ] ) ] = E_μ[ Σ_{t≥0} γ^t (Π_{i=1}^t ρ_i) r_t ].

In this case B is the same operator defined by importance sampling.

D  DERIVATION OF V^target

By using the truncation and bias correction trick, we can derive the following:

    V^π(x_t) = E_{a∼μ}[ min{1, π(a|x_t)/μ(a|x_t)} Q^π(x_t, a) ] + E_{a∼π}[ [(ρ_t(a) − 1)/ρ_t(a)]_+ Q^π(x_t, a) ].

We, however, cannot use the above equation as a target as we do not have access to Q^π. To derive a target, we can take a Monte Carlo approximation of the first expectation in the RHS of the above equation and replace the first occurrence of Q^π with Q^ret and the second with our current neural net approximation Q_θv(x_t, ·):

    V^target_pre(x_t) := min{1, π(a_t|x_t)/μ(a_t|x_t)} Q^ret(x_t, a_t) + E_{a∼π}[ [(ρ_t(a) − 1)/ρ_t(a)]_+ Q_θv(x_t, a) ].    (24)

Through the truncation and bias correction trick again, we have the following identity:

    E_{a∼π}[Q_θv(x_t, a)] = E_{a∼μ}[ min{1, π(a|x_t)/μ(a|x_t)} Q_θv(x_t, a) ] + E_{a∼π}[ [(ρ_t(a) − 1)/ρ_t(a)]_+ Q_θv(x_t, a) ].    (25)

Adding and subtracting both sides of Equation (25) to the RHS of (24), while taking a Monte Carlo approximation, we arrive at V^target(x_t):

    V^target(x_t) := min{1, π(a_t|x_t)/μ(a_t|x_t)} ( Q^ret(x_t, a_t) − Q_θv(x_t, a_t) ) + V_θv(x_t).

E  CONTINUOUS CONTROL EXPERIMENTS

E.1  DESCRIPTION OF THE CONTINUOUS CONTROL PROBLEMS

Our continuous control tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). For all experiments we considered an episodic setup with an episode length of T = 500 steps and a discount factor of 0.99.

Cartpole swingup. This is an instance of the classic cart-pole swing-up task. It consists of a pole attached to a cart running on a finite track.
The agent is required to balance the pole near the center of the track by applying a force to the cart only. An episode starts with the pole at a random angle and zero velocity. The reward is zero except when the pole is approximately upright (within ±5 deg) and the cart approximately at the center of the track (±0.05, for a track length of 2.4). The observations include the position and velocity of the cart, the angle and angular velocity of the pole, a sine/cosine of the angle, the position of the tip of the pole, and Cartesian velocities of the pole. The dimension of the action space is 1.

Reacher3. The agent needs to control a planar 3-link robotic arm in order to minimize the distance between the end effector of the arm and a target. Both the arm and the target position are chosen randomly at the beginning of each episode. The reward is zero except when the tip of the arm is within 0.05 of the target, where it is one. The 8-dimensional observation consists of the angles and angular velocities of all joints, as well as the displacement between the target and the end effector of the arm. The 3-dimensional actions are the torques applied to the joints.

Cheetah. The Half-Cheetah (Wawrzyński, 2009; Heess et al., 2015) is a planar locomotion task where the agent is required to control a 9-DoF cheetah-like body (in the vertical plane) to move in the direction of the x-axis as quickly as possible. The reward is given by the velocity along the x-axis and a control cost: r = v_x + 0.1‖a‖². The observation vector consists of the z-position of the torso and its x, z velocities, as well as the joint angles and angular velocities. The action dimension is 6.

Fish. The goal of this task is to control a 13-DoF fish-like body to swim to a random target in 3D space. The reward is given by the distance between the head of the fish and the target, a small penalty for the body not being upright, and a control cost.
At the beginning of an episode the fish is initialized facing in a random direction relative to the target. The 24-dimensional observation is given by the displacement between the fish and the target projected onto the torso coordinate frame, the joint angles and velocities, the cosine of the angle between the z-axis of the torso and the world z-axis, and the velocities of the torso in the torso coordinate frame. The 5-dimensional actions control the position of the side fins and the tail.

Walker. The 9-DoF planar walker is inspired by Schulman et al. (2015a) and is required to move forward along the x-axis as quickly as possible without falling. The reward consists of the x-velocity of the torso, a quadratic control cost, and terms that penalize deviations of the torso from the preferred height and orientation (i.e. terms that encourage the walker to stay standing and upright). The 24-dimensional observation includes the torso height, velocities of all DoFs, as well as sines and cosines of all body orientations in the x-z plane. The 6-dimensional action controls the torques applied at the joints. Episodes are terminated early with a negative reward when the torso exceeds upper and lower limits on its height and orientation.

Humanoid. The humanoid is a 27 degrees-of-freedom body with 21 actuators (21 action dimensions). It is initialized lying on the ground in a random configuration and the task requires it to achieve a standing position. The reward function penalizes deviations from the height of the head when standing, and includes additional terms that encourage upright standing, as well as a quadratic action penalty. The 94-dimensional observation contains information about joint angles and velocities and several derived features reflecting the body's pose.
E.2  UPDATE EQUATIONS OF THE BASELINE TIS

The baseline TIS follows the following update equations. Updates to the policy:

    min{5, Π_{i=0}^{k−1} ρ_{t+i}} ( Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V_θv(x_{t+k}) − V_θv(x_t) ) ∇_θ log π_θ(a_t|x_t);

updates to the value:

    min{5, Π_{i=0}^{k−1} ρ_{t+i}} ( Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V_θv(x_{t+k}) − V_θv(x_t) ) ∇_θv V_θv(x_t).

The baseline Trust-TIS is appropriately modified according to the trust region update described in Section 3.3.

E.3  SENSITIVITY ANALYSIS

In this section, we assess the sensitivity of ACER to hyper-parameters. In Figures 5 and 6, we show, for each game, the final performance of our ACER agent versus the choice of learning rate and the trust region constraint δ, respectively. Note that, as we are doing random hyper-parameter search, each learning rate is associated with a random δ and vice versa. It is therefore difficult to tease out the effect of either hyper-parameter independently. We observe, however, that ACER is not very sensitive to the hyper-parameters overall. In addition, smaller δ's do not seem to adversely affect the final performance, while larger δ's do in domains of higher action dimensionality. Similarly, smaller learning rates perform well, while bigger learning rates tend to hurt final performance in domains of higher action dimensionality.

Figure 6: Trust region constraint (δ) vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 trust region constraints (δ) searched over. Note that each δ is associated with a different learning rate as a consequence of random search over hyper-parameters.

E.4  EXPERIMENTAL SETUP OF ABLATION ANALYSIS

For the ablation analysis, we use the same experimental setup as in the continuous control experiments while removing one component at a time. To evaluate the effectiveness of Retrace/Q(λ) with off-policy correction, we replace both with importance sampling based estimates (following Degris et al.
(2012)), which can be expressed recursively: R_t = r_t + γ ρ_{t+1} R_{t+1}. To evaluate the Stochastic Dueling Networks, we replace them with two separate networks: one computing the state values and the other the Q values. Given Q^ret(x_t, a_t), the naive way of estimating the state values is to use the following update rule:

    ρ_t ( Q^ret(x_t, a_t) − V_θv(x_t) ) ∇_θv V_θv(x_t).

The above update rule, however, suffers from high variance. We consider instead the truncated update rule

    min{1, ρ_t} ( Q^ret(x_t, a_t) − V_θv(x_t) ) ∇_θv V_θv(x_t),

which has markedly lower variance. We update our Q estimates as before. To evaluate the effects of the truncation and bias correction trick, we change our c parameter (see Equation (16)) to ∞ so as to use pure importance sampling.

Figure 1: ACER improvements in sample (LEFT) and computation (RIGHT) complexity on Atari. On each plot, the median of the human-normalized score across all 57 Atari games is presented for 4 ratios of replay, with 0 replay corresponding to on-policy A3C. The colored solid and dashed lines represent ACER with and without trust region updating respectively. The environment steps are counted over all threads. The gray curve is the original DQN agent (Mnih et al., 2015) and the black curve is one of the Prioritized Double DQN agents from.

Figure 3: [TOP] Screen shots of the continuous control tasks. [BOTTOM] Performance of different methods on these tasks. ACER outperforms all other methods and shows clear gains for the higher-dimensionality tasks (humanoid, cheetah, walker and fish). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) significantly.

Figure 4: Ablation analysis evaluating the effect of different components of ACER. Each row compares ACER with and without one component. The columns represent three control tasks.

REFERENCES

G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint 1606.01540, 2016.

T. Degris, M. White, and R. S. Sutton. Off-policy actor-critic. In ICML, pp. 457-464, 2012.

A. Harutyunyan, M. G. Bellemare, T. Stepleton, and R. Munos. Q(λ) with off-policy corrections. arXiv preprint arXiv:1602.04951, 2016.

N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.

T. Jie and P. Abbeel. On a connection between importance sampling and the likelihood ratio policy gradient. In NIPS, pp. 1000-1008, 2010.

S. Levine and V. Koltun. Guided policy search. In ICML, 2013.

S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.

L. J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293-321, 1992.

N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab, 2000.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

V. Mnih, A. Puigdomènech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv:1602.01783, 2016.

R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.

K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In EMNLP, 2015.
Figure 5: Log learning rate vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 log learning rates considered. Note that each learning rate is associated with a different δ as a consequence of random search over hyper-parameters.

For ease of presentation, we consider only λ = 1 for Retrace. An alternative to Retrace here is Q(λ) with off-policy corrections (Harutyunyan et al., 2016), which we discuss in more detail in Appendix B.

For videos of the policies learned with ACER, please see: https://www.youtube.com/watch?v=NmbeQYoVv5g&list=PLkmHIkhlFjiTlvwxEnsJMs3v7seR5HSP-.

ACKNOWLEDGMENTS

We are very thankful to Marc Bellemare, Jascha Sohl-Dickstein, and Sébastien Racaniere for proofreading and valuable suggestions.

M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. JAIR, 47:253-279, 2013.

J. Oh, V. Chockalingam, S. P. Singh, and H. Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.

D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In ICML, pp. 759-766, 2000.

T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In ICLR, 2016.

J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015a.

J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.

D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057-1063, 2000.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems, pp. 5026-5033, 2012.

Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.

P. Wawrzyński. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22(10):1484-1497, 2009.
[ "Allometric Exponent and Randomness" ]
[ "Su Do Yi ([email protected]; Department of Physics, BK21 Physics Research Division, Sungkyunkwan University, 440-746 Suwon, Korea)", "Jun Beom Kim ([email protected]; Department of Physics, BK21 Physics Research Division, Sungkyunkwan University, 440-746 Suwon, Korea)", "Petter Minnhagen ([email protected]; Department of Physics, IceLab, Umeå University, 901 87 Umeå, Sweden)" ]
An allometric height-mass exponent γ gives an approximative power-law relation M ∝ H^γ between the average mass M and the height H, for a sample of individuals. The individuals in the present study are humans but could be any biological organism. The sampling can be for a specific age of the individuals or for an age interval. The body-mass index (BMI) is often used for practical purposes when characterizing humans, and it is based on the allometric exponent γ = 2. It is here shown that the actual value of γ is to a large extent determined by the degree of correlation between mass and height within the sample studied: no correlation between mass and height means γ = 0, whereas if there were a precise relation between mass and height, such that all individuals had the same shape and density, then γ = 3. The connection is demonstrated by showing that the value of γ can be obtained directly from three numbers characterizing the spreads of the relevant random Gaussian statistical distributions: the spread of the height and mass distributions together with the spread of the mass distribution for the average height. Possible implications for allometric relations in general are discussed.
10.1088/1367-2630/15/4/043001
[ "https://arxiv.org/pdf/1304.1601v1.pdf" ]
14,125,344
1304.1601
6b2a3caf4ebe18983ae828258ab8f0d700a1f735
Allometric Exponent and Randomness

5 Apr 2013, arXiv:1304.1601v1 [physics.soc-ph]

Su Do Yi and Jun Beom Kim (Department of Physics, BK21 Physics Research Division, Sungkyunkwan University, 440-746 Suwon, Korea); Petter Minnhagen (Department of Physics, IceLab, Umeå University, 901 87 Umeå, Sweden)

1. Introduction

Allometric relations in biology describe how a quantity Y scales with the body mass M, i.e., Y = AM^(1/γ), where γ is an allometric exponent. Allometric relations have a long history, with pioneering work by D'Arcy Thompson, On Growth and Form [1], and J. S. Huxley, Problems of Relative Growth [2].
Among others, the allometric relation for the metabolic rate B has drawn much interest: Kleiber's law [3] states that B ∼ M^p, with p = 1/γ ≈ 3/4, and has been tested in e.g. Refs. [4], [5], and [6]. For a review of allometric relations see Ref. [7], Scaling in Biology. In the present study we focus on the allometric relation between height and mass for humans. This mass (M)-height (H) relation has an even longer history, going back to the pioneering work by A. Quetelet, A Treatise on Man and the Developments of his Faculties, from 1842 [8], where the allometric relation was introduced to define a normal man, so that M/H^γ becomes a Gaussian variable. The precise definition of the allometric exponent used in the present study is ⟨M⟩_H ∝ H^γ, where ⟨M⟩_H is the average mass of the individuals of height H in the sample. Note that the allometric exponents γ = 1, 2 and 3 correspond to mass being proportional to height, body surface and body volume, respectively. For practical purposes γ = 2 is often a good approximation for humans, as shown in Ref. [9]. This approximation is the basis for the body mass index (BMI) A, given by ⟨M⟩_H = AH², provided mass is in kilograms and height in meters. More recently it has been suggested that a larger allometric exponent 2 < γ ≤ 3 should be more appropriate [10, 11, 12]. In particular, Burton in Ref. [11] suggests that γ = 2 is an underestimate caused by randomness. This is in accordance with the conclusions reached in the present investigation. The object of the present investigation is to understand the relation between the exponent γ and the randomness for a given sample of individuals. The issue is best illustrated by a specific example. Figures 1(a) and (b) show the height and mass distributions, P(H) and P(M), respectively, for 25000 children 18 years old from Hong Kong [13]. Figure 1(c) in addition shows the distribution P(M|H = ⟨H⟩) for the children of average height ⟨H⟩.
All three of these statistical distributions are to very good approximation Gaussians. This means that the variables in all three cases are randomly distributed around their respective average values. The random spreads are in all three cases characterized by the normalized standard deviations, which we denote σ̂_H, σ̂_M, and σ̂ for the random spread of height, mass, and mass-for-average-height, respectively. The relation derived in the present paper states that γ to good approximation should be given by

    γ = √(σ̂_M² − σ̂²) / σ̂_H.

From the random spreads in Fig. 1 one then finds γ = 1.63. Figure 2(a) shows that this is a very accurate prediction. This means that the allometric exponent is entirely determined by the randomness of the three distributions. Why is this so and what does it imply? These are questions which come to mind. In Sec. 2 the relation between γ and the random spreads is derived. Comparisons with data are made in Sec. 3, whereas we in Sec. 4 sum up and discuss the results.

2. Allometric exponent expressed in normalized standard deviations

The point made in the present paper is that the exponent γ can be estimated from the sole knowledge of the first and second moments of the mass and height distributions. In order to derive such a relation we assume that the mass and height distributions are approximately Gaussian. This is, as illustrated by the datasets in Fig. 1, often a fair approximation around the maxima of the distributions. It means that the probability distributions for mass and height are approximately given by

    P_M(M) = (1/√(2πσ_M²)) exp[−(M − ⟨M⟩)² / (2σ_M²)],    (1)

    P_H(H) = (1/√(2πσ_H²)) exp[−(H − ⟨H⟩)² / (2σ_H²)],    (2)

respectively. Note that these two distributions are characterized by the four explicit numbers ⟨M⟩, ⟨H⟩, σ_M and σ_H.
The degree of correlation between mass and height is then given by the mass distribution for a given height, which we likewise assume to be approximately Gaussian, given by the conditional probability

    P(M|H) = (1/√(2πσ(H)²)) exp[−(M − ⟨M⟩_H)² / (2σ(H)²)],    (3)

where ⟨M⟩_H and σ(H) are the average and the standard deviation of the mass obtained for all individuals of height H. Note that the standard deviation σ_X for a stochastic variable X is related to the first and second moments by σ_X² = ⟨X²⟩ − ⟨X⟩². A particular feature in the present context is that the distribution of mass for a given height can approximately be characterized by a constant standard deviation σ, since in practice it turns out that σ(H) is only weakly dependent on H in the close vicinity of ⟨H⟩ [see Fig. 1(d)]:

    P(M|H) = (1/√(2πσ²)) exp[−(M − ⟨M⟩_H)² / (2σ²)].    (4)

Another particular feature of the mass relation is that the average mass for a given height, ⟨M⟩_H, monotonously increases with height. We can use this one-to-one correspondence by changing the variable in Eq. (2):

    P_M(M) = ∫ d⟨M⟩_H P(M | ⟨M⟩_H) P_H(H(⟨M⟩_H)) (dH(⟨M⟩_H)/d⟨M⟩_H).    (5)

Another crucial feature of the data is that ⟨M⟩_H to some approximation is described by the power-law relation

    ⟨M⟩_H = AH^γ.    (6)

This is the allometric relation in focus and discussed in the present paper. Here A and γ are two constants. The constant A can be expressed as A = ⟨M⟩_H H^{−γ}. However, for Gaussian distributions ⟨H⟩ corresponds to the peak position of the height distribution and, since the individuals in this peak also to good approximation have the average mass, it follows that ⟨M⟩_{H=⟨H⟩} ≈ ⟨M⟩. In the following, we will consequently use the simplified estimate A = ⟨M⟩⟨H⟩^{−γ}. The argument for P_H in Eq.
(2) is

    H(⟨M⟩_H) = (⟨M⟩_H / A)^{1/γ} = ⟨H⟩ (⟨M⟩_H / ⟨M⟩)^{1/γ}.    (7)

Close to the peak of this distribution at ⟨M⟩_H = ⟨M⟩ we can use the linear approximation

    H(⟨M⟩_H) = ⟨H⟩ (⟨M⟩_H / ⟨M⟩)^{1/γ} ≈ ⟨H⟩ [1 + (⟨M⟩_H − ⟨M⟩) / (γ⟨M⟩)].    (8)

Inserting this approximation into the relation P̃(⟨M⟩_H) = P_H(H(⟨M⟩_H)) dH(⟨M⟩_H)/d⟨M⟩_H leads to the Gaussian distribution

    P̃(⟨M⟩_H) = (1/√(2πσ_H² γ² ⟨M⟩² ⟨H⟩^{−2})) exp[−(⟨M⟩_H − ⟨M⟩)² / (2σ_H² γ² ⟨M⟩² ⟨H⟩^{−2})].    (9)

Using Eq. (9) together with Eq. (4) means that the right-hand side of Eq. (5) becomes a convolution of two Gaussians. Since the convolution of two Gaussians with standard deviations σ_1 and σ_2 is a Gaussian with standard deviation σ_3 = √(σ_1² + σ_2²), it follows that

    σ_M² = σ² + σ_H² γ² ⟨M⟩² / ⟨H⟩²,

or equivalently

    γ = (⟨H⟩/⟨M⟩) √(σ_M² − σ²) / σ_H = √(σ̂_M² − σ̂²) / σ̂_H,    (10)

where we have introduced the normalized standard deviations σ̂_M = σ_M/⟨M⟩, σ̂ = σ/⟨M⟩ and σ̂_H = σ_H/⟨H⟩. Equation (10) is the central relation in the present investigation and shows that γ can be approximately obtained from the three dimensionless numbers σ̂_M, σ̂ and σ̂_H, which measure the random spread of the data in units of, respectively, the average mass and height of the individuals. Also note that γ given by Eq. (10) is what you get when an allometric relation is used as an ansatz. It does not a priori say anything about whether or not an allometric relation is a good approximation of the data.

3. Comparison with data

In the light of the above theoretical underpinning we return to the data for 25000 children 18 years old [13]. One notes in Fig. 1 that both the height and the mass distributions are to good approximation Gaussian for this dataset. The average mass for a child is ⟨M⟩ ≈ 57.7 kg and the average height ⟨H⟩ ≈ 172.7 cm. The standard deviations are σ_M ≈ 5.3 kg and σ_H ≈ 4.8 cm. Figure 1(c) shows that the distributions of mass for given heights are also Gaussian, and Fig. 1(d) shows that the standard deviation σ(H) in Eq. (3) is constant in a broad range of H around ⟨H⟩, so that Eq. (4) gives a very good description.
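The convolution argument above can be checked by simulation: draw heights from a Gaussian, assign masses by ⟨M⟩_H = AH^γ plus a constant Gaussian scatter σ as in Eq. (4), and recover γ from the spreads via Eq. (10). In the sketch below the means and spreads default to the Hong Kong values quoted above, while the residual scatter σ = 4.6 kg is our own illustrative choice (no numeric value of σ is quoted in this text):

```python
import math
import random

def recover_gamma(gamma=1.63, mean_h=172.7, sd_h=4.8, mean_m=57.7,
                  sd_res=4.6, n=200_000, seed=0):
    """Simulate the generative model of Sec. 2 and invert Eq. (10)."""
    rng = random.Random(seed)
    a_const = mean_m / mean_h ** gamma          # A = <M><H>^(-gamma)
    hs = [rng.gauss(mean_h, sd_h) for _ in range(n)]
    ms = [a_const * h ** gamma + rng.gauss(0.0, sd_res) for h in hs]
    mh = sum(hs) / n
    mm = sum(ms) / n
    sh = math.sqrt(sum((h - mh) ** 2 for h in hs) / n)
    sm = math.sqrt(sum((m - mm) ** 2 for m in ms) / n)
    # Eq. (10): gamma = (<H>/<M>) * sqrt(sigma_M^2 - sigma^2) / sigma_H
    return (mh / mm) * math.sqrt(sm ** 2 - sd_res ** 2) / sh
```

Since σ_H/⟨H⟩ ≈ 0.03 here, the linearization in Eq. (8) is very accurate and the recovered exponent typically lands within a few percent of the input γ.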
As shown in Sec. 2, under these conditions the relation given by Eq. (10) applies. This relation states that if there is a power-law relation between average mass and height, ⟨M⟩_H = AH^γ, then the best prediction for the given information is

    ⟨M⟩_H = ⟨M⟩ (H/⟨H⟩)^{√(σ̂_M² − σ̂²)/σ̂_H}.    (11)

Table 1 gives the average height and mass (in cm and kg, respectively) together with the three normalized standard deviations for the statistical distributions: σ̂_H, σ̂_M and σ̂. The resulting power-law exponent predicted by γ_th = √(σ̂_M² − σ̂²)/σ̂_H, as well as γ_ex obtained by direct fitting to the data [see Fig. 2(a)], is also listed. The agreement between γ_th and γ_ex is very precise and confirms that there really exists a relation between the spreads and the power-law exponent. The question is what it implies.

Figure 2: Note that the allometric exponents for the Hong Kong data set in (a) and the Swedish data set in (c) are significantly different, whereas they are almost identical within the ordered data representation given by (b) and (d). These features are explained in the present paper.

In order to get an idea of what this means, we note that if σ = 0 then there exists a one-to-one function between M and H and, according to Eq. (10), we get

    γ = σ̂_M / σ̂_H.    (12)

Changing σ̂ in the Hong Kong children data to σ̂ = 0 changes the prediction for γ_th from 1.63 to 3.26 (compare Table 1 and Fig. 3). We can test this prediction against the children data by re-ordering, so that the children are assigned masses which strictly follow the heights of the children. For this re-ordered data, σ is indeed zero and, as shown in Fig. 2(b), this gives a direct demonstration of the connection between spread and power-law exponent.

Table 2 gives various Pearson's r-coefficients computed for various pairs of quantities. For the Hong Kong children data (first row of the table) the height-mass correlation (H vs M) is significantly positive, implying that in general the taller child has the heavier mass. It is to be noted that the correlations for M/H³ and M/H² deviate much from zero, while M/H^{γ_ex} and M/H^{γ_th} exhibit neutral correlation with the height, suggesting that our estimate γ_th ≈ 1.63 describes the data much better than the conventionally used BMI value γ = 2. Likewise, in the second row for the Swedish data, the correlations between H and M/H³, and between H and M/H², are, respectively, negative and positive, implying an exponent between 2 and 3. This is again in agreement with our analysis and theory.

In order to rule out that there is anything accidental or fortuitous about the results presented, we have investigated a second data set in the same way. This second data set gives height and mass for Swedish children between 13.5 and 19 years old (more precisely, between 5000 and 7000 days old, containing in total 11327 data points) [14]. The results are presented in Figs. 2(c) and (d), with parameters given in the second rows of Tables 1 and 2. From Table 1 one can see that the average height and mass for these two datasets are roughly the same. However, since the Swedish children data span a longer age period than the Hong Kong data, the standard deviations for height, σ̂_H, and mass, σ̂_M, are larger by about a factor 2. This is of course because children grow more during a longer period. Yet the ratio between the standard deviations, σ̂_M/σ̂_H, is closely equal for the two data sets (3.26 and 3.25, respectively). This ratio is in fact γ_{σ=0} and, as seen from Table 1 and Figs. 2(b) and (d), γ_{σ=0} gives very precise estimates of the allometric exponents γ_ordered. The fact that the exponents γ_ordered are very nearly the same for the two data sets suggests that, in this particular aspect, children from Hong Kong and Sweden are very similar.
Also for the Swedish data set there is good agreement between the experimental allometric exponent γ_ex and the prediction γ_th from Eq. (10) (compare Fig. 2(c) and Table 1). However, there is a significant difference between the allometric exponents γ_ex for the two data sets: γ_ex = 1.63 and 2.35 for the Hong Kong and Swedish children, respectively. The close agreement between γ_ex and the prediction γ_th for both data sets suggests that the difference in value of the allometric exponent γ_ex can be attributed to a relatively larger spread in mass for children of average height in the Hong Kong data compared to the Swedish data. From this point of view it is a sampling difference rather than some difference in trait between a Hong Kong and a Swedish individual. A possible explanation could be that, since Hong Kong, compared to Sweden, has for a long time been a human hotspot with an influx of people of great variety in both genetic and cultural background, this has resulted in a relatively larger spread of mass for a given height in a particular age interval.

Suppose instead that each individual is characterized by an allometric relation with exponent γ₀ relating his mass M to the average height for individuals of mass M. This means that if you pick an individual with mass M, then you know that his most likely height is given by the allometric relation with γ₀. If in addition there were no randomness in the height, then you would know his height for certain, and the result is given by Fig. 2(b). But if there is randomness in the height, caused by many factors, so that the probability of the height for an individual is described by a Gaussian probability distribution, then the allometric exponent becomes smaller, and you can end up with something like Fig. 2(a) instead. The difference is that Fig. 2(a) describes the collective data set, whereas Fig. 2(b) corresponds to an allometric relation on an individual basis. In Figs. 2(a) and (b) it is precisely the random spread which causes the decrease of the allometric exponent from 3.29 to 1.63.
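The decrease of the collective exponent caused by random spread can be reproduced with a minimal Monte Carlo sketch (a bivariate log-normal toy model with illustrative parameters, not the real data): the log-log regression slope of the collective data equals γ_th from Eq. (10), while rank-ordering the data recovers the larger exponent σ̃_M/σ̃_H.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sH, s = 100_000, 0.03, 0.08   # illustrative spreads

# Latent trait z sets the height; the (log-)mass follows the height with
# an assumed "individual" exponent 2.0 plus an independent spread s at
# fixed height (the conditional spread of Fig. 1(c)).
z = rng.normal(0.0, 1.0, n)
lnH = sH * z
lnM = 2.0 * sH * z + rng.normal(0.0, s, n)

sM = float(np.std(lnM))                                 # total mass spread
gamma_coll = float(np.polyfit(lnH, lnM, 1)[0])          # collective slope, ~2.0
gamma_ord = float(np.polyfit(np.sort(lnH),
                             np.sort(lnM), 1)[0])       # re-ordered data, ~sM/sH ~ 3.33
gamma_rec = float(np.sqrt(sM**2 - s**2) / sH)           # Eq. (10), ~2.0
print(round(gamma_coll, 2), round(gamma_ord, 2), round(gamma_rec, 2))
```

In this toy model the variance identity σ̃_M² = γ² σ̃_H² + σ̃² holds, so Eq. (10) recovers the collective exponent from the spreads, and the re-ordered (σ̃ = 0) exponent is σ̃_M/σ̃_H, just as for the real data in Figs. 2(a) and (b).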
So you can no longer associate the allometric exponent γ with some unique growth property of the individuals. Rather, the spread in Fig. 1(c) tells you how much the masses and heights of individuals are random and uncorrelated. This is also reflected by the prediction given by Eq. (10), which decreases from its maximum value γ = σ̃_M/σ̃_H to zero with increasing random spread σ̃, as illustrated in Fig. 3. Some further insight is given by the comparison between the Hong Kong children and the Swedish children: the allometric exponent for the Hong Kong children is significantly smaller than for the Swedish children. According to the present analysis, this difference can be traced to the relatively larger spread in mass for a given height in the Hong Kong data set. One possible explanation for this difference in spread could be a sampling difference: compared to Sweden, Hong Kong has been a historical hotspot, leading to a greater variety of people from both a genetic and a cultural point of view. Such greater variety is also likely to cause a larger variety in mass for a given height.

The present analysis is quite general, and its implication is likely to have a wider range of applicability within allometric relations than just the illustrative example of the mass-height relation for humans discussed here: it urges caution against attributing too much specific cause to the precise value of an allometric exponent. The crucial point is that the allometric exponent for an individual is, because of randomness, not the same as the allometric exponent of the collective data set.

Figure 3. Plot of γ_th obtained from Eq. (10). The standard deviations σ̃_M and σ̃_H from the children data are used to plot γ_th as a function of the standard-deviation ratio σ̃/σ̃_H. Note that, since γ_{σ̃=0} = σ̃_M/σ̃_H is almost identical for the Hong Kong and Swedish data, both predictions are obtained from the same curve.
The black dot represents the predicted maximum value γ_{σ̃=0} for σ̃ = 0. The cross and triangle represent the predictions of γ_th for the Hong Kong and the Swedish children data, respectively.

Figure 1. (a) The height distribution P(H) and (b) the mass distribution P(M) for 25000 Hong Kong children 18 years old [13]. (c) The conditional probability distribution P(M|H = ⟨H⟩) obtained for 3670 children whose heights are in the interval [171.82, 173.62] around ⟨H⟩ = 172.72. In (a)-(c) the crosses are the data and the full-drawn curves are the corresponding Gaussian approximations. The numbers of bins are 81 for (a) and (b), and 31 for (c). H in (a) and M in (b) and (c) are in units of cm and kg. (d) The standard deviation σ(H) of the distribution P(M|H) in Eq. (3) as a function of height H. The horizontal line shows that σ(H) is independent of H in a range around the average height ⟨H⟩ ≈ 172.72 cm.

Figure 2. Allometric relations for the Hong Kong data in (a) and (b) and for the Swedish data in (c) and (d). (a) Log-log plot of the average mass ⟨M⟩_H as a function of height H. Symbols correspond to the average over a length interval of 1.26 cm. The data fall on a straight line in accordance with the allometric relation ⟨M⟩_H ∝ H^γ. The value of γ determined by the least-squares fit to the data is γ_ex = 1.63. The straight line is the prediction of Eq. (11) in terms of the random spreads given by Eq. (10). The prediction is very accurate for this data set. (b) Log-log plot of the ordered data M_ordered(H) as a function of H. The data are well represented by M_ordered ∝ H^γ with γ_ordered = 3.29. (c) and (d): the same as (a) and (b) for the Swedish data. The straight lines are least-squares fits to the data, giving γ_ex = 2.35 and γ_ordered = 3.30, respectively. These are again in good agreement with the predictions given in Table 1. When σ̃ = 0 there is a one-to-one function from H to ⟨M⟩_H.
The height distribution in terms of ⟨M⟩_H is then just P_H(H(⟨M⟩_H)). This means that there exists a precise relation between the three distributions [Eqs. (1), (2), and (4)].

Table 1. Summary of 25000 data points for children 18 years old from Hong Kong [13] (first row) and of 11300 data points for Swedish children in the age interval 13.5-19 years [14] (second row). The average height ⟨H⟩ and the average mass ⟨M⟩ are in units of cm and kg. The normalized dimensionless standard deviations for the height, σ̃_H, for the mass, σ̃_M, and for the mass distribution at average height, σ̃, are listed. The theoretical prediction γ_th from Eq. (10) and γ_ex, obtained from the least-squares fits to the data presented in Figs. 2(a) and (c), are in good agreement. γ_{σ̃=0} from Eq. (10) with σ̃ = 0 and γ_ordered, obtained from the least-squares fits to the data in Figs. 2(b) and (d), also agree with each other. Note the close agreement between fitted and predicted values of the allometric exponents γ in all cases.

            ⟨H⟩      ⟨M⟩     σ̃_H      σ̃_M      σ̃       γ_th    γ_ex    γ_{σ̃=0}   γ_ordered
Hong Kong   172.72   57.68   0.0280   0.0912   0.079   1.63    1.63    3.26      3.29
Sweden      170.48   60.52   0.0573   0.1862   0.130   2.33    2.35    3.25      3.30

Table 2. Pearson's correlation coefficient r between the height H and various quantities (M, M/H³, M/H², M/H^γ_th, and M/H^γ_ex), computed for the Hong Kong children [13] (first row) and the Swedish data [14] (second row). The height-mass correlation (r for H vs M) is of course significantly positive in both cases. For the Hong Kong data the correlations between H and M/H³ and between H and M/H² are both negative, implying that the exponents 2 and 3 are overestimates; for the Swedish data they are, respectively, negative and positive, implying an exponent between 2 and 3.

            H vs M   H vs M/H³   H vs M/H²   H vs M/H^γ_th   H vs M/H^γ_ex
Hong Kong   0.50     -0.43       -0.12       0.01            0.01
Sweden      0.64     -0.21       0.14        0.03            0.02

Discussion

The implication of these results becomes clearer when comparing Fig. 2(a) and Fig. 2(b).
Both represent data with the same two Gaussian distributions for mass and height given in Fig. 1. The difference is that the data in Fig. 2(a) also have a spread of mass for individuals with a given height, as shown in Fig. 1(c). For the artificial data in Fig. 2(b) there is no such spread. The data in Fig. 2(b) represent a true allometric relation between mass and height of the form M ∝ H^γ: as soon as you pick a person with a certain height, you also know his mass to a very good approximation. Since γ for this artificial data set is 3.29, it means that these artificial people either all have the same shape but get a bit denser with increasing height, or that they just become somewhat disproportionately fatter with increasing height. The point is that in this case you can relate the allometric exponent to some property of the individual. However, for the real data in Fig. 2(a) this becomes more problematic. This is because, for a given height, the individuals have a random mass distributed about the mean, as shown in Fig. 1(c). This random spread can have a multitude of different causes, such as availability of food, climate, diseases, genetics, etc. Different individuals are affected by this multitude of causes in different ways.

An alternative way of describing this is as follows: suppose you have a data set like the one discussed in the present paper, and suppose that each individual can be characterized by the allometric relation M ∝ H_M^γ₀, where H_M is the average height for individuals of mass M.

References

[1] Thompson D W 1917 On Growth and Form (Cambridge: Cambridge University Press)
[2] Huxley J S 1932 Problems of Relative Growth (London: Methuen)
[3] Kleiber M 1932 Body size and metabolism Hilgardia 6 315-353
[4] Dodds P S, Rothman D H and Weitz J S 2001 Re-examination of the 3/4-law of metabolism J. Theor. Biol. 209 9-27
[5] White C R and Seymour R S 2003 Mammalian basal metabolic rate is proportional to body mass^{2/3} Proc. Natl. Acad. Sci. USA 100 4046-4049
[6] Simini F, Anfodillo T, Carrer M, Banavar J R and Maritan A 2010 Self-similarity and scaling in forest communities Proc. Natl. Acad. Sci. USA 107 7658-7662
[7] Brown J H and West G B (eds) 2000 Scaling in Biology (New York: Oxford University Press)
[8] Quetelet M A 1842 A Treatise on Man and the Development of his Faculties (New York: Burt Franklin, 1968 reprint)
[9] Keys A, Fidanza F, Karvonen M J, Kimura N and Taylor H L 1972 Indices of relative weight and obesity J. Chronic Dis. 25 329-343
[10] MacKay N J 2010 Scaling of human body mass with height: the body mass index revisited J. Biomech. 43 764-766
[11] Burton R F 2010 Human allometry: adult bodies are more nearly geometrically similar than regression analysis has suggested Med. Hypotheses 74 15-17
[12] Burton R F 2011 Scaling of adult human body mass with height J. Biomech. 44 1216
[13] Data from the Hong Kong Growth Survey 1993 (http://www.socr.ucla.edu)
[14] Dr Bo Werner, private communication (2012)
Locality, Latency and Spatial-Aware Data Placement Strategies at the Edge

Nikhil Sreekumar, Abhishek Chandra ([email protected]) and Jon B. Weissman
Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN

arXiv:2212.01984 (https://export.arxiv.org/pdf/2212.01984v2.pdf)

Abstract: The vast data deluge at the network's edge is raising multiple challenges for the edge computing community. One of them is identifying the edge storage servers where data from edge devices/sensors have to be stored to ensure low-latency access services for emerging edge applications. Existing data placement algorithms mainly focus on locality, latency, and zoning to select edge storage servers under multiple environmental constraints. This paper uses a data placement framework to compare distance-based, latency-based, and spatial-awareness-based data placement strategies, which all share a decision-making system with similar constraints. Based on simulation experiments, we observed that the spatial-awareness-based strategy can provide a quality of service on par with the latency-based strategy and better than the distance-based strategy.
I. INTRODUCTION

The International Data Corporation (IDC) expects 55.9 billion connected devices by 2025, generating 79.4 ZB of data [1]. This data deluge is increasing the data gravity at the network's edge, resulting in the emergence of edge applications [2]. Deploying applications near the source of data generation ensures low data-access latency and low service latency with improved quality of service. For example, in the case of AR/VR applications, the MTP (Motion-to-Photon) latency should be less than 20 ms for an immersive experience [3].
Most of the data generated can be stored at the edge, utilized by edge applications, and then either sent to the cloud for persistent storage or discarded. This temporary buffering strategy can decrease traffic congestion on the wide-area backhaul link while giving data owners more control over what data are stored on the public cloud. However, multiple challenges must be addressed to ensure quality of service to users when storing data at the edge [4], [5]. The heterogeneity of edge storage nodes, limited elasticity, node churn, user mobility, the time-sensitive compute value of data, and privacy open multiple doors to edge storage research. Furthermore, with the introduction of compliance laws like the GDPR [6], it is essential for some data (for example, health and trading data) to be stored persistently within a region. The storage nodes at the edge are heterogeneous in terms of storage capacity, network bandwidth, and request-handling capacity, and the limited elasticity and node churn make matters even more challenging. Placing data under these environmental conditions so as to provide the best quality of service for application users by minimizing the average latency is not a trivial task. In the rest of the paper we will call data-generation sources Producers, data-consuming applications Consumers, and edge storage servers Hosts. Multiple consumers can subscribe to the data generated by a producer, and a single consumer can subscribe to the data of multiple producers. Depending on consumer demand and the available storage capacity, new replica hosts may have to be created on the fly to continue service. Existing data placement systems focus on single replication [7] and multi-replication [8], [9] of data using distance-based, latency-based, and geo-aware [10], [11] strategies. However, the host's request-handling capacity also needs to be considered while making data placement decisions.
Also, using location, latency, and spatial awareness separately may increase the end-to-end latency and the decision-making time. We propose the following enhancements, which can significantly impact the quality of service:

1) The data placement decision should consider the ingress and egress load capacity per host node for replica selection (producer/consumer load constraint in Section II-A).
2) The density of producers and consumers in a region should be considered when selecting a replica (centroid-based host selection in Section III).
3) A combined use of geo-location, latency, and spatial awareness can prune the potential host search space, resulting in less decision-making time (spatial-awareness-based strategy in Section III).

We propose a data placement framework that accommodates the above enhancements and explores the following questions:

• How will the distance-based and latency-based selection of hosts for data placement affect the end-to-end latency observed by consumers?
• Can we combine the features of the distance-based and latency-based strategies with spatial awareness to create a better data placement policy?
• With varying application workloads, how will the average number of replicas generated per producer vary across the three strategies?
• With an increasing number of hosts, producers, and consumers, can the distance-based, latency-based, and spatial-awareness-based strategies scale?

The main contributions of this paper are:

• A problem formulation for data placement that considers the storage capacity, the producer/consumer load thresholds, and on-demand replication of hosts to minimize the overall average end-to-end latency observed by consumers.
• A data placement framework that uses three strategies (distance-based, latency-based, and spatial-awareness-based) under similar system constraints.
• A comparison of all three strategies using simulation experiments, in which we made the following observations:

1) Using distance to identify a potential set of host nodes may not always be beneficial, as the selected nodes can incur high latency.
2) Combining the properties of the location-based and latency-based strategies can improve the latency observed by consumers.
3) Spatial awareness can be used to prune the potential host search space, reducing the decision-making time for replica creation, which can scale with increasing consumer demand.

Section II gives a brief overview of the system model and the data placement formulation. The proposed data placement framework that uses the three data placement strategies is described in Section III. A comparison across the data placement strategies is presented in Section IV using simulation experiments. We conclude in Section VI with a brief discussion of future work and the major findings of the paper.

II. SYSTEM MODEL

The system architecture is shown in Figure 1. There are a set of data generators (producers), a set of edge storage nodes (hosts), and a set of application tasks (consumers) in the system. Producers generate data and store/stream it to a host, where a consumer will collect the data. Gateways connect producers to the internet and vice versa. Multiple consumers can subscribe to a producer's data, and consumers can subscribe to multiple producers' data. The hosts vary in storage capacity and in ingress and egress request-handling capacity. Based on a ping latency experiment (Figure 2) on AWS local servers, on-premise servers, and volunteers' servers, we could see that high-resource servers (AWS and on-premise servers) have low latency. We focus on a setting in which high-resource servers have lower latency, as this is one plausible scenario.

Fig. 2. Ping latency test.
AWS local servers and on-premise servers have lower ping latency than volunteer nodes (Fig. 2). In future work, we plan to identify how the different strategies will perform in different network and resource settings.

A producer can communicate directly with a host once it is identified as a suitable location for data storage. A host can communicate with any other host node within a region. The producer generates data and sends it to a suitable host for storage. If there are pending consumer requests at the host, the data is stored and immediately shared with the consumers. We consider long-running edge application services: for example, a car insurance company perusing accident videos to validate insurance claims, or a city planning system that wants to rearrange traffic over the coming days based on hourly data. It can also be an application like augmented-reality-based robotic surgery handled by multiple surgeons, which requires the same data to be streamed to all involved doctors to avoid life-threatening situations. Such AR/medical data will have to be stored at the host node in case someone wants to check back on a procedure carried out some time ago. Two types of latency arise here: data-retrieval latency (car insurance or city planning, where the data is already present at the host) and end-to-end latency (augmented-reality surgery, where the data is streaming). In this paper, we focus on end-to-end latency scenarios.

A. Problem Formulation

Consider sets of producers (P), hosts (H) and consumers (C) with sizes p, h and c, respectively. Each producer p_i sends data of size datasize_{p_i} to at most r hosts to meet the demands of subscribed consumers. The value of r may vary from producer to producer; here, however, we take r = replica_threshold for all producers. A consumer c_k can subscribe to more than one producer. For simplicity, we assume the data transfer unit is b bytes.
Each host h_j has a storage capacity cap_{h_j}, a producer load threshold plt_{h_j} (number of concurrent producer connections), and a consumer load threshold clt_{h_j}. Each consumer maintains a binary subscription list csub of size (1 × p), where csub_{ki} = 1 if c_k subscribes to p_i, and csub_{ki} = 0 otherwise. The binary matrix phc of size (p × h × c) represents the paths from a producer to a consumer via a host: phc_{ijk} = 1 if there exists a path from producer p_i to consumer c_k via host h_j, and phc_{ijk} = 0 otherwise.

• Load constraint: a producer p_i or a consumer c_k can use a host h_j only if the addition of a new connection stays within the producer or consumer load threshold, respectively:

Σ_{i=1}^{p} (Σ_{k=1}^{c} phc_{ijk} > 0) ≤ plt_{h_j}   (1)

Σ_{k=1}^{c} (Σ_{i=1}^{p} phc_{ijk} > 0) ≤ clt_{h_j}   (2)

• Storage constraint: a host h_j can store data from a producer p_i only if datasize_{p_i} is less than the available storage capacity cap_{h_j} of the host:

Σ_{i=1}^{p} datasize_{p_i} · (Σ_{k=1}^{c} phc_{ijk} > 0) ≤ cap_{h_j}   (3)

• Single-path constraint: there exists a single path from a producer p_i to a consumer c_k via some host h_j, provided c_k is subscribed to p_i, i.e., csub_{ki} = 1:

Σ_{j=1}^{h} (phc_{ijk} · csub_{ki}) = 1   (4)

• On-demand constraint: the number of replicas allotted to a producer p_i must be greater than or equal to 1. Once there are no more resources to share, any incoming replica request is declined:

Σ_{j=1}^{h} (Σ_{k=1}^{c} phc_{ijk} > 0) ≥ 1   (5)

Given the above constraints, we need to ensure that the selected path phc_{ijk} for data transfer from a producer p_i to a consumer c_k via a host h_j has low latency.
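The constraints (1)-(5) and the path latency of Eqs. (6)-(7) below can be checked on a toy instance; the following sketch uses illustrative numbers (not values from the paper) on a 2-producer, 2-host, 2-consumer system:

```python
import numpy as np

# phc[i, j, k] = 1 iff data flows from producer i to consumer k via host j.
phc = np.zeros((2, 2, 2), dtype=int)
phc[0, 0, 0] = 1        # p0 -> h0 -> c0
phc[1, 1, 1] = 1        # p1 -> h1 -> c1

datasize = np.array([4096, 8192])     # bytes per producer
b = 1024                              # transfer unit (bytes)
cap = np.array([1 << 20, 1 << 20])    # storage capacity per host (bytes)
plt = np.array([2, 2])                # producer load threshold per host
clt = np.array([3, 3])                # consumer load threshold per host
csub = np.array([[1, 0], [0, 1]])     # csub[k, i] = 1 iff c_k subscribes to p_i
lat_ph = np.array([[5.0, 9.0], [9.0, 8.0]])   # lat_ij, producer->host (ms/chunk)
lat_hc = np.array([[4.0, 9.0], [9.0, 6.0]])   # lat_jk, host->consumer (ms/chunk)

prod_on_host = phc.sum(axis=2) > 0    # (p, h): producer i uses host j
cons_on_host = phc.sum(axis=0) > 0    # (h, c): consumer k uses host j

ok = (
    (prod_on_host.sum(axis=0) <= plt).all()            # Eq. (1)
    and (cons_on_host.sum(axis=1) <= clt).all()        # Eq. (2)
    and (datasize @ prod_on_host <= cap).all()         # Eq. (3)
    and all(phc[i, :, k].sum() == 1                    # Eq. (4)
            for k in range(2) for i in range(2) if csub[k, i])
    and (prod_on_host.sum(axis=1) >= 1).all()          # Eq. (5)
)

# Eq. (6): per-path latency; objective (7): average over active paths.
chunks = datasize / b
lat = (lat_ph[:, :, None] + lat_hc[None, :, :]) * chunks[:, None, None] * phc
avg_latency = lat.sum() / phc.sum()
print(ok, avg_latency)    # True 74.0 ((5+4)*4 and (8+6)*8, averaged)
```

The feasibility flag and the objective value are what an exact solver would evaluate for each candidate assignment of phc.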
If the latency of transferring b units of data between p_i and h_j is lat_{ij}, and that between h_j and c_k is lat_{jk}, then the total latency of the path is

lat^{phc}_{ijk} = (lat_{ij} + lat_{jk}) · (datasize_{p_i}/b) · phc_{ijk}   (6)

Objective: our data placement algorithm aims to find the hosts where data from the producers can be stored for consumption by consumers while providing the minimum average end-to-end latency. This means filling the binary matrix phc by minimizing (6) averaged over all producers, hosts, and consumers:

Minimize  [Σ_{i=1}^{p} Σ_{j=1}^{h} Σ_{k=1}^{c} lat^{phc}_{ijk}] / [Σ_{i=1}^{p} Σ_{j=1}^{h} Σ_{k=1}^{c} phc_{ijk}]
subject to constraints (1)-(5).   (7)

B. Optimization Solution

The objective (7) can be represented as a Mixed Integer Programming problem. However, the formulation is similar to [7], which is NP-hard. Therefore, as the number of actors in the system increases, the execution time will also increase, which is unsuitable for latency-sensitive applications. Hence, in the next section, we propose a data placement framework that uses three strategies to scale the solution.

III. DATA PLACEMENT FRAMEWORK

To scale the optimization problem with an increasing number of producers, hosts, and consumers, we propose a data placement framework that respects all the constraints mentioned in Section II-A. The framework centers on a decision-making system called the Matchmaker, deployed on a dedicated, stable edge server. Upon entering the system, producers, hosts, and consumers register with the Matchmaker. Producers share an estimate of the data size that will be sent to the selected host. Hosts share their producer load threshold, consumer load threshold, and total storage capacity. Finally, consumers share their subscription lists.
All three also share their geo-location with the Matchmaker. The Matchmaker consists of a server selection module and a network monitoring module (Figure 3). The server selection module selects the appropriate edge storage server according to the distance-based, latency-based, or spatial-awareness-based data placement strategy. The network module periodically monitors the network links across producers, hosts, and consumers; this information is then used by the server selection module.

Initial load conditioning: when producers first join the framework, the server selection module allocates no more than half the producer load per host node. This restriction is imposed to balance load across the host nodes rather than concentrating it on a few of them. If a host node with high storage and producer-load capacity were selected for most producers, there is a chance that the consumers of a few producers would take over the entire consumer load available on that host. This takeover can lead to wasted storage space and overhead from creating new replicas. The restriction enforced by the server selection module mitigates the takeover to some extent.

Centroid-based host selection: once initial load conditioning is complete, every producer has a host allotted. As demand increases, a new host node must be selected for data replication. For the end-to-end latency scenario, both producer and consumer information is essential to decide where data should be placed. We therefore consider the centroid of the geo-locations of all existing consumers and the producer to select potential host nodes. In this way, depending on the density of consumers, the host node selection moves toward locations with many consumers, while the producer is still taken into account. Once the centroid is identified, a data placement strategy is run on the potential host nodes to find a replica.
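The centroid computation above can be sketched in a few lines (coordinates are illustrative (x, y) pairs rather than real geo-data); note how a consumer cluster pulls the centroid, and hence the replica search region, toward itself:

```python
# Centroid of the producer's location and its subscribed consumers' locations,
# used to bias replica placement toward consumer-dense areas.
def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

producer = (0.0, 0.0)
consumers = [(4.0, 0.0), (4.0, 2.0), (4.0, -2.0)]   # cluster to the east
cx, cy = centroid([producer] + consumers)
print(cx, cy)   # (3.0, 0.0): pulled toward the consumer cluster
```

The producer is included in the point set, so the centroid never collapses entirely onto the consumers.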
The latency-based strategy does not require centroid-based selection, as it always picks the low-latency, high-resource host nodes for replication. It is therefore near-optimal among the three strategies.

A. Data Placement Strategies

1) Distance-based selection: host nodes in close physical proximity to the centroid are chosen to store the data from a producer.
2) Latency-based selection: host nodes in close network proximity to the producer/consumer are chosen. Centroid-based selection does not apply to the latency-based strategy, as it mostly picks the best host node with low latency and high capacity.
3) Spatial-heuristic-based selection: in a dense edge environment, distance-based and latency-based selection may incur higher decision-making latency. The overhead can be decreased if we can prune the search space. In this strategy, we prune the host search space using a spatial data structure, the R-Tree [12]. The host nodes within the vicinity of the centroid are first selected using the spatial data structure; then the best host with low latency to the producer and consumer is selected for data storage.

In all the above strategies, we consider the load and storage capacity of the edge server before making the selection.

The overall workflow is shown in Figure 4. (1) Producers register with the Matchmaker by providing their location, id, and estimated storage size, and hosts send their location, storage capacity, and producer/consumer load thresholds (2). The Matchmaker identifies the best host nodes and relays the information back to the producer (3); more than one producer can store data on a host node. The producer can now directly contact the host to store the data (4). The consumer registers with the Matchmaker by providing its location, application id, and subscription list (5). Based on the subscription list, the Matchmaker identifies the best host nodes and shares the information with the consumer (6). The consumer directly connects with the hosts for data transfer (7).
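The R-Tree pruning in strategy 3) ranks branches by the standard MINDIST measure between the query point and a Minimum Bounding Rectangle; a minimal sketch (function name and coordinates are illustrative):

```python
# MINDIST between a query point and a Minimum Bounding Rectangle (MBR):
# the squared distance to the nearest point of the box, 0 if inside.
def min_dist(point, mbr):
    """point: (x, y); mbr: ((xmin, ymin), (xmax, ymax))."""
    lo, hi = mbr
    d = 0.0
    for p, l, h in zip(point, lo, hi):
        if p < l:
            d += (l - p) ** 2
        elif p > h:
            d += (p - h) ** 2
    return d

mbr = ((0.0, 0.0), (2.0, 2.0))
print(min_dist((1.0, 1.0), mbr))   # 0.0  (point inside the MBR)
print(min_dist((5.0, 1.0), mbr))   # 9.0  (3 units east of the box)
print(min_dist((4.0, 4.0), mbr))   # 8.0  (nearest corner: 2^2 + 2^2)
```

Branches with smaller MINDIST to the centroid are visited first, which is what lets the spatial strategy skip most of the host search space.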
The consumer directly connects with the hosts for data transfer 7 . host ← DPStrategy(loc, prod id, cons id) 10: InformProducerofHost(prod id, host) 11: if is replica then 12: InformConsumerofHost(cons id, prod id, host) 13: end if 14: end procedure The server selection module in the Matchmaker runs the data placement strategies in SelectHost (algorithm 1) in two cases. a Producer enters the system for the first time and b Dynamic replication caused by overload on a host node. In algorithm 1, depending on whether the procedure call is for a replica or initial host for the producer, the location of interest is calculated line 4-8 . Based on the location, either distance-based or spatial-awareness-based strategy is called. For latency-based strategy, location is not used line 9 . Once Algorithm 2 Distance/Latency based Strategy 1: procedure DISTANCE/LATENCYSTRATEGY(loc, prod id, cons id) 2: potential hosts ← SortHostByLocationOrLatency(loc, prod id, cons id) 3: host ← SelectBestViableHost(prod id, potential hosts) 4: return host 5: end procedure a host is selected, the producer and consumer are notified so that they can send and receive data respectively line 10-13 . For distance-based and latency-based strategies (algorithm 2), the Matchmaker first orders host nodes based on their sum of distance or latency from producer and consumer line 2 . The best node is selected based on available storage capacity, and load line 3 . In cases where all the resources are exhausted, the consumer replica requests are currently denied as immediate data sharing is not possible, given that applications like live streaming deem it necessary. The time complexity for distance/latency-based host selection is O(nlogn), where n is the number of host nodes under consideration. The network module provides latency information observed across the nodes. 
For the spatial-awareness-based strategy (Algorithm 3), the Matchmaker first checks whether loc lies inside one of the Minimum Bounding Rectangles (MBRs) of the R-Tree; if a host is present, the best one is selected and returned (lines 4-7). Otherwise, the search extends to all the nearest MBRs, identified using MinDist [13] across the branches of the R-Tree; if a host is present, the best one is selected and returned (lines 9-14). If neither search finds a host, the search expands incrementally outward from loc in concentric zones until the farthest host node is reached; if a host is present, the best one is selected and returned (lines 16-22). There is a chance that all the replicas are overloaded during this search; if so, the consumer requests are declined. The time complexity of the spatial-awareness-based strategy is O(log_M n), where M is the maximum number of children per node in the R-Tree and n is the number of host nodes under consideration. A depiction of MinDist and the concentric search is shown in Figure 5.

The Matchmaker assigns a host node for each consumer subscription by calling ConsumerHostSelect (Algorithm 4). Initially, the Matchmaker checks if any existing host allocated to a subscribed producer is available. The information of the available host with the shortest distance or latency is sent to the consumer (lines 4-6); otherwise, a call to SelectHost (Algorithm 1) is made to create a new replica for the producer data (line 8).

IV. EVALUATION

A. Experimental Setup

The simulation experiments are run on a Linux machine with 64GB RAM and 24 cores.
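The MinDist metric [13] that the spatial strategy uses to rank R-Tree branches can be sketched as follows. The two-dimensional MBR representation ((xmin, ymin), (xmax, ymax)) and the helper names are our assumptions for the illustration.

```python
def min_dist_sq(point, mbr):
    """Squared minimum distance from `point` to an axis-aligned MBR.

    Zero when the point lies inside the rectangle, so MBRs containing
    the query location always rank first.
    """
    (xmin, ymin), (xmax, ymax) = mbr
    total = 0.0
    for p, lo, hi in ((point[0], xmin, xmax), (point[1], ymin, ymax)):
        if p < lo:
            total += (lo - p) ** 2   # point lies below the lower bound
        elif p > hi:
            total += (p - hi) ** 2   # point lies above the upper bound
    return total

def nearest_mbrs(point, mbrs, k=3):
    """The k MBRs with the smallest MinDist, i.e. the branches searched next."""
    return sorted(mbrs, key=lambda m: min_dist_sq(point, m))[:k]
```

Ranking by MinDist is what lets the strategy prune: branches whose rectangles cannot possibly contain a closer host are never descended into.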
Algorithm 4 Host Selection for Consumer
1: procedure CONSUMERHOSTSELECT(cons id)
2:     prods ← GetSubscribedList(cons id)
3:     for prod id ∈ prods do
4:         nodes ← GetAssignedHosts(prod id)
5:         nodes ← SortByDistanceOrLatency(nodes, cons id)
6:         host ← SelectBestViableHostByLoad(nodes)
7:         if host.empty() then

Based on the ping latency information in Section II, we select the 5ms-10ms, 10ms-15ms, and 15ms-20ms latency ranges for high-capacity, medium-capacity, and low-capacity storage host nodes, respectively. The locations associated with producers, hosts, and consumers were taken from the Social IoT real-time dataset [14]. The storage capacity of the host nodes is in the range 32GB-1TB (proportional to latency). Each host can support a load (producers + consumers) in the range 40-80; high-capacity host nodes have a higher load threshold, followed by medium-capacity and low-capacity host nodes. The producer load is one-third of the total load, and the remainder is the consumer load on the host. Producers can generate data in the range 1GB-32GB. Consumer arrivals follow a Poisson distribution with a mean inter-arrival time of 5ms. Each producer sends chunks of size 1024 bytes to hosts until it reaches the data size to be generated. The R-Tree parameters M (maximum number of children within a node) and m (minimum number of children within a node) are set to 40 and 20, respectively.

B. Simulation experiments

1) End-to-end latency: End-to-end latency provides a measure of the quality of service. In this experiment, we simulate 50 hosts, 100 producers, and a varying number of consumers (200-800). It can be seen in Figure
6 that the average end-to-end latency remains almost the same across all the (host, producer, consumer) configurations. This is because dynamic replication occurs far less often than chunks are transferred; a detailed look at the replica overhead is given in Section IV-B3. The distance-based strategy takes more time, as it does not consider the selected host's latency; there is also a chance that the chosen host node has less resource capacity, leading to more replications. The latency-based and spatial-based strategies can both find the best host node within the search space in this environmental setting, so one would expect spatial-based to have lower latency. However, with so few hosts, the host selection time at the Matchmaker contributes little to the average end-to-end latency. We will look at a scenario where spatial-based outperforms latency-based in Section IV-B4.

Fig. 6. Average end-to-end latency. The number of hosts and producers is set to 50 and 100, respectively. Distance-based looks only at location information, leading to the selection of nodes with high latency. In contrast, latency-based and spatial-based consider latency, resulting in low end-to-end latency.

2) Average replica count per producer: For the same simulation scenario discussed above, the average count of replicas per producer is shown in Figure 7. Latency-based and spatial-based outperform distance-based in all the configurations. Distance-based can choose host nodes that have less storage and a lower load threshold, which results in new replicas being created more often. Spatial-based shows a replica count similar to latency-based; however, there is a chance that spatial-based selects host nodes with comparatively fewer resources, which leads to more replicas, as seen in the (50, 100, 800) configuration.
3) Average replication overhead per consumer: The replication overhead is calculated as the elapsed time from when a consumer sends a request to the Matchmaker for a host corresponding to a producer until it receives the first chunk of data. When a replica is identified, the Matchmaker requests the producer to start a new connection with the replica and begin sharing data. Once the host receives the data from the producer, it stores it and immediately sends it to the consumer. Although the replication overhead would be expected to differ across the three strategies, the observed average overhead is almost the same (Figure 8). The reason for this behavior can be seen in the overhead histogram in Figure 9. Distance-based selects a large number of high-latency hosts for data placement; however, at some point it finds a fairly good number of low-latency hosts in the given environmental setting. Depending on the number of requests the Matchmaker receives, the decision-making time varies due to contention, which lowers the average replica overhead. Latency-based and spatial-based select low-latency hosts initially, leading to fewer replica calls; once the existing replicas are exhausted, both move to high-latency hosts. This progression from low-latency to high-latency hosts produces the observed replica overhead.

Fig. 7. Average replica count per producer. The number of hosts and producers is set to 50 and 100, respectively. In the given environmental setting, distance-based may end up selecting hosts with less resource capacity, which leads to the creation of more replicas.

Fig. 8. Average replication overhead per consumer. The number of hosts and producers is set to 50 and 100, respectively. Given the low host count and the high number of chunks transferred, the average overhead across all the strategies is observed to be the same.
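The overhead definition above amounts to timing the request-to-first-chunk path. A minimal sketch, where `request_host` and `receive_first_chunk` are stand-ins for the real Matchmaker and data-transfer calls (not actual APIs of the system):

```python
import time

def measure_replication_overhead(request_host, receive_first_chunk):
    """Elapsed time from the consumer's host request until the first chunk arrives."""
    start = time.perf_counter()
    host = request_host()        # ask the Matchmaker for a host/replica
    receive_first_chunk(host)    # block until the first data chunk is delivered
    return time.perf_counter() - start
```

In the simulation, these samples are what populate the overhead histogram of Figure 9.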
The dip in the histogram around 40-60ms for latency-based and spatial-based shows that a few replicas were selected with less contention at the Matchmaker. As the number of hosts, producers, and consumers increases, we believe the overhead difference will become more evident, as shown in Section IV-B4.

4) Average consumer replica selection time: This simulation experiment considers three configurations with 5000 hosts, 50 producers, and a varying number of consumers (100-1000). The experiment aims to observe the average replica selection time per consumer at the Matchmaker. It can be seen in Figure 10 that the time taken to identify a replica for a consumer is lowest for the spatial-based strategy, compared to latency-based and distance-based. The reason for this reduction is the pruning of the search space for identifying potential host candidates in spatial-based. Latency-based makes a near-optimal selection of hosts in the given setting, leading to less replication and hence less contention when accessing the shared information, resulting in a lower selection time than distance-based.

Based on the simulation results, the distance-based strategy incurs more end-to-end latency than latency-based and spatial-based; the same trend is observed for the average number of replicas per producer. For a sparse edge environment, using latency-based or spatial-based can be beneficial. For a dense edge environment, the spatial-based strategy's search-space pruning can reduce the host selection time per consumer, resulting in less replica overhead, and hence it can scale with consumer demand.

V. RELATED WORK

Over the past five years, multiple approaches have been proposed for data placement at the edge. iFogStor [7] models data placement as GAP [15], an NP-hard problem. Its main goal is to identify a single replica storage node where data from a producer can be kept so as to minimize the overall latency between producers and consumers.
As the solution cannot scale, a heuristic based on geographical zoning is proposed. Single replication of data may not always be suitable, as increased requests can result in network and storage throttling; moreover, if there is an inter-regional flow of data, the zone-specific solution may be sub-optimal. To solve this problem, iFogStorG [16] proposed a divide-and-conquer approach: it divides the entire edge infrastructure into several separate and balanced parts to minimize data flow across parts. Within each part, the iFogStor approach is run to get a local decision, and the local decisions are then combined to obtain a suitable global placement. Here, too, only a single replica is associated with a producer, making it unsuitable for high-request scenarios. To resolve the single-replica issue, iFogStorM [8] was proposed. The model adds a constraint allowing more than one replica per producer. Like iFogStor, iFogStorM cannot be solved in polynomial time; hence the authors proposed the MultiCopyStorage heuristic, which greedily allows a consumer to select the low-latency node among the replicas. It also curbs the replica count when increasing it does not significantly impact the overall latency. One issue with this technique is that the number of replicas may be too high for an already constrained edge environment. Also, maintaining replicas far away from the consumer may not be required, as edge application users are mostly co-located (autonomous vehicles, AR/VR games). [9] introduces iFogStorS for small infrastructures, which uses the shortest path between producers and consumers, and iFogStorP for large infrastructures, which uses the P-median [17] to place P replicas. Compared to MultiCopyStorage, which sends updates from the producer to all replicas in parallel, iFogStorS/P sends data to one replica, which in turn updates the others. Scientific workflows usually have massive generated datasets stored across multiple cloud data centers, leading to high transmission delay.
[18] proposes a genetic, self-adaptive, discrete particle swarm optimization data placement strategy (GA-DPSO) that utilizes both the cloud and the edge; the approach does not consider the highly heterogeneous nature of edge nodes. [19] considers the storage capacity at each edge site and the data transmission cost across cloud nodes to make a placement decision; it proposes a discrete particle swarm optimization with differential evolution to identify the locations where data shared across multiple scientific workflows can be placed so as to minimize transmission time. In [10], the switches and the data indices of unstructured data are associated with coordinates in a virtual space, and each data index is stored on servers connected to the switch closest to it in the virtual space. In [11], inspired by [10], data is placed at the center of a dense network in a virtual space to ensure a shorter distance to all areas in a region, followed by popularity-based replica placement. [20] jointly places tasks and data, where each block of data is assigned a popularity value to help decide on data placement. FogStore [21], a key-value store, places a set of replicas within the vicinity of the clients and another set of replicas away from the clients to ensure fault tolerance; in addition, it provides differential consistency for data depending on the situation awareness of applications. DataFog [22] is a data management platform for IoT which uses spatial proximity to identify the location of replicas; like FogStore, it also keeps a few replicas in remote locations for fault tolerance. EdgeKV [23] is a decentralized storage system for general-purpose tasks with fault tolerance, reliability guarantees, and strong consistency; distributed hash tables are used in EdgeKV to identify locations to store data across different edge nodes. Data placement is also a well-researched topic in distributed systems [24]-[33].
These works mainly focus on reducing network latency, proximity of dedicated servers to clients, data popularity, adapting to dynamic workloads, partitioning data to match server sizes, and dynamically configuring replicas based on application requirements. Many existing distributed databases [34]-[36] use consistent hashing to store data across different nodes in a load-balanced manner.

VI. CONCLUSION

Data placement at the edge is a significant challenge that must be addressed to meet the demands of edge applications. Selecting the host node by utilizing location, latency, and spatial awareness can lead to less decision-making time and reduce the end-to-end latency observed by the end user. We compared three data placement strategies, distance-based, latency-based, and spatial-based, using a data placement framework under the same system constraints. The simulation experiments showed that the spatial-based strategy can achieve low end-to-end latency, average replica count, and decision-making time compared to the distance-based strategy. Furthermore, spatial-based is on par with latency-based in terms of end-to-end latency and the average number of replicas in most cases. We also saw that the spatial-based strategy can identify new replica locations with low overhead as the number of consumers increases in a dense edge environment, meaning it can scale with consumer demand.

Fig. 1. System architecture.

Fig. 3. Matchmaker: the decision-making component.

Fig. 4. System workflow.

Fig. 5. Host search: (a) search in the MBR of the centroid/producer; (b) search in all nearby MinDist MBRs; (c) search in unvisited MBRs, expanding the search concentrically from the centroid/producer.

Fig. 9. Replica overhead distribution. The replica overhead is distributed across all bins in distance-based, leading to similar overhead as latency-based and spatial-based.

Fig. 10. Average consumer replica selection time overhead.
The number of hosts and producers is set to 5000 and 50, respectively. The replica selection overhead at the Matchmaker increases with the number of consumers concurrently requesting replicas. Spatial-based can focus on the pruned search space, resulting in less time.

Algorithm 3 Spatial Strategy
1: procedure SPATIALSTRATEGY(loc, prod id, cons id)
2:     host ← {}
3:
4:     nodes ← SD.GetMBRNodes(loc)    ▷ SD is the R-Tree structure
5:     if ¬nodes.empty() then
6:         host ← SelectHostFromCandidates(nodes, prod id, cons id)
7:     end if
8:
9:     if host.empty() then
10:        nodes ← SD.FindAllNearestMBRNodesByMinDist(loc)
11:        if ¬nodes.empty() then
12:            host ← SelectHostFromCandidates(nodes, prod id, cons id)
13:        end if
14:    end if
15:
16:    if host.empty() then
17:        nodes ← {}
18:        while nodes.empty() & ¬boundReached do
19:            nodes ← SD.ConcentricSearch(id, loc, sz)
20:        end while
21:        host ← SelectHostFromCandidates(nodes, prod id, cons id)
22:    end if
23:    return host
24: end procedure
25:
26: procedure SELECTHOSTFROMCANDIDATES(nodes, prod id, cons id)
27:    potential hosts ← SortByLatency(loc, prod id, cons id)
28:    host ← SelectBestViableHost(potential hosts, prod id)
29:    return host
30: end procedure

The placement of data in the presence of producer mobility, fairness of replica creation based on application requirements, and the inclusion of application-specific latency thresholds in the problem formulation are avenues we will explore in the future. We also plan to investigate different types of workloads, as well as edge server traces for compute, storage, and network, to ensure the data placement strategies are adaptable across different environmental settings.

REFERENCES

[1] D. Reinsel, J. Gantz, and J. Rydning, "Data age 2025: The evolution of data to life-critical," Don't Focus on Big Data, vol. 2, 2017.
[2] M. Satyanarayanan, G. Klas, M. Silva, and S. Mangiante, "The seminal role of edge-native applications," in 2019 IEEE International Conference on Edge Computing (EDGE). IEEE, 2019, pp. 33-40.
[3] Q. T. Inc., "Making immersive virtual reality possible in mobile," https://www.qualcomm.com/media/documents/files/whitepaper-making-immersive-virtual-reality-possible-in-mobile.pdf, 2016, online: accessed 12-Feb-2023.
[4] A. Trivedi, L. Wang, H. Bal, and A. Iosup, "Sharing and caring of data at the edge," in 3rd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 20), 2020.
[5] N. Sreekumar, A. Chandra, and J. Weissman, "Position paper: Towards a robust edge-native storage system," in 2020 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2020, pp. 285-292.
[6] P. A. Bonatti and S. Kirrane, "Big data and analytics in the age of the GDPR," in 2019 IEEE International Congress on Big Data (BigDataCongress). IEEE, 2019, pp. 7-16.
[7] M. I. Naas, P. R. Parvedy, J. Boukhobza, and L. Lemarchand, "iFogStor: an IoT data placement strategy for fog infrastructure," in 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC). IEEE, 2017, pp. 97-104.
[8] T. Huang, W. Lin, Y. Li, L. He, and S. Peng, "A latency-aware multiple data replicas placement strategy for fog computing," Journal of Signal Processing Systems, vol. 91, no. 10, pp. 1191-1204, 2019.
[9] M. I. Naas, L. Lemarchand, P. Raipin, and J. Boukhobza, "IoT data replication and consistency management in fog computing," Journal of Grid Computing, vol. 19, no. 3, pp. 1-25, 2021.
[10] J. Xie, C. Qian, D. Guo, M. Wang, S. Shi, and H. Chen, "Efficient indexing mechanism for unstructured data sharing systems in edge computing," in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. IEEE, 2019, pp. 820-828.
[11] X. Wei and Y. Wang, "Popularity-based data placement with load balancing in edge computing," IEEE Transactions on Cloud Computing, 2021.
[12] A. Guttman, "R-trees: A dynamic index structure for spatial searching," in Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data, 1984, pp. 47-57.
[13] N. Roussopoulos, S. Kelley, and F. Vincent, "Nearest neighbor queries," in Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, 1995, pp. 71-79.
[14] C. Marche, L. Atzori, V. Pilloni, and M. Nitti, "How to exploit the social internet of things: Query generation model and device profiles' dataset," Computer Networks, p. 107248, 2020.
[15] D. G. Cattrysse and L. N. Van Wassenhove, "A survey of algorithms for the generalized assignment problem," European Journal of Operational Research, vol. 60, no. 3, pp. 260-272, 1992.
[16] M. I. Naas, L. Lemarchand, J. Boukhobza, and P. Raipin, "A graph partitioning-based heuristic for runtime IoT data placement strategies in a fog infrastructure," in Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 2018, pp. 767-774.
[17] N. Mladenović, J. Brimberg, P. Hansen, and J. A. Moreno-Pérez, "The p-median problem: A survey of metaheuristic approaches," European Journal of Operational Research, vol. 179, no. 3, pp. 927-939, 2007.
[18] B. Lin, F. Zhu, J. Zhang, J. Chen, X. Chen, N. N. Xiong, and J. L. Mauri, "A time-driven data placement strategy for a scientific workflow combining edge computing and cloud computing," IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4254-4265, 2019.
[19] X. Du, S. Tang, Z. Lu, J. Wet, K. Gai, and P. C. Hung, "A novel data placement strategy for data-sharing scientific workflows in heterogeneous edge-cloud computing environments," in 2020 IEEE International Conference on Web Services (ICWS). IEEE, 2020, pp. 498-507.
[20] C. Li, J. Bai, and J. Tang, "Joint optimization of data placement and scheduling for improving user experience in edge computing," Journal of Parallel and Distributed Computing, vol. 125, pp. 93-105, 2019.
[21] R. Mayer, H. Gupta, E. Saurez, and U. Ramachandran, "FogStore: Toward a distributed data store for fog computing," in 2017 IEEE Fog World Congress (FWC). IEEE, 2017, pp. 1-6.
[22] H. Gupta, Z. Xu, and U. Ramachandran, "DataFog: Towards a holistic data management platform for the IoT age at the network edge," in USENIX Workshop on Hot Topics in Edge Computing (HotEdge 18), 2018.
[23] K. Sonbol, Ö. Özkasap, I. Al-Oqily, and M. Aloqaily, "EdgeKV: decentralized, scalable, and consistent storage for the edge," Journal of Parallel and Distributed Computing, vol. 144, pp. 28-40, 2020.
[24] A. C. Veitch, E. Riedel, S. J. Towers, J. Wilkes et al., "Towards global storage management and data placement," in HotOS. Citeseer, 2001, p. 184.
[25] Z. Zhang, M. Mahalingam, Z. Xu, and W. Tang, "Scalable, structured data placement over p2p storage utilities," in Proceedings of the 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS 2004). IEEE, 2004, pp. 244-251.
[26] T. Kosar and M. Livny, "Stork: Making data placement a first class citizen in the grid," in Proceedings of the 24th International Conference on Distributed Computing Systems. IEEE, 2004, pp. 342-349.
[27] H. Wang, P. Liu, and J.-J. Wu, "A QoS-aware heuristic algorithm for replica placement," in 2006 7th IEEE/ACM International Conference on Grid Computing. IEEE, 2006, pp. 96-103.
[28] A. Brinkmann, S. Effert, F. M. auf der Heide, and C. Scheideler, "Dynamic and redundant data placement," in 27th International Conference on Distributed Computing Systems (ICDCS'07). IEEE, 2007, pp. 29-29.
[29] B. A. Alqaralleh, C. Wang, B. B. Zhou, and A. Y. Zomaya, "Effects of replica placement algorithms on performance of structured overlay networks," in 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007, pp. 1-8.
[30] Y. He, R. Lee, Y. Huai, Z. Shao, N. Jain, X. Zhang, and Z. Xu, "RCFile: A fast and space-efficient data placement structure in MapReduce-based warehouse systems," in 2011 IEEE 27th International Conference on Data Engineering. IEEE, 2011, pp. 1199-1208.
[31] S. Zaman and D. Grosu, "A distributed algorithm for the replica placement problem," IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 9, pp. 1455-1468, 2011.
[32] H. I. Abdalla, "An efficient approach for data placement in distributed systems," in 2011 Fifth FTRA International Conference on Multimedia and Ubiquitous Engineering. IEEE, 2011, pp. 297-301.
[33] W. Dai, I. Ibrahim, and M. Bassiouni, "A new replica placement policy for Hadoop distributed file system," in 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). IEEE, 2016, pp. 262-267.
[34] S. Sivasubramanian, "Amazon DynamoDB: a seamlessly scalable non-relational database service," in Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, 2012, pp. 729-730.
[35] A. Lakshman and P. Malik, "Cassandra: a decentralized structured storage system," ACM SIGOPS Operating Systems Review, vol. 44, no. 2, pp. 35-40, 2010.
[36] "Voldemort project," https://www.project-voldemort.com/voldemort/, 2013, online: accessed 12-Feb-2023.
[]
[]
[ "Joseph Y Halpern [email protected] \nCornell University Computer Science Department Ithaca\n14853NY\n" ]
[ "Cornell University Computer Science Department Ithaca\n14853NY" ]
[]
Bounded Rationality: One way of capturing bounded rationality is in terms of agents who have limited computational power. In economics, this line of research goes back to the work of Neyman [1985] andRubinstein [1986], who focused on finitely repeated prisoner's dilemma. In n-round finitely repeated prisoner's dilemma, there are 2 2 n −1 strategies (since a strategy is a function from histories to {cooperate, defect}, and there are clearly 2 n − 1 histories of length < n). Finding a best response to a particular move can thus potentially be difficult. Clearly people do not find best responses by doing extensive computation. Rather, they typically rely on simple heuristics, such as "Tit for Tat"[Axelrod 1984]. Such heuristics can often be captured by finite automata; both Neyman and Rubinstein thus focus on finite automata playing repeated prisoners dilemma. Two computer scientists, Papadimitriou and Yannakakis [1994], showed that if both players in an n-round prisoners dilemma are finite automata with at least 2 n−1 states, then the only equilibrium is the one where they defect in every round. This result says that a finite automaton with exponentially many states can compute best responses in prisoners dilemma. We can then model bounded rationality by restricting the number of states of the automaton. Neyman [1985] showed, roughly speaking, that if the two players in n-round prisoner's dilemma are modeled by finite automata with a number of states in the interval [n 1/k , n k ] for some k, then collaboration can be approximated in equilibrium; more precisely, if the payoff for (cooperate,cooperate) is (3,3) there is an equilibrium in the repeated game where the average payoff per round is greater than 3 − 1 k for each player. 
Papadimitriou and Yannakakis [1994] sharpen this result by showing that if at least one of the players has fewer than 2 cǫn states, where c ǫ = ǫ 12(1+ǫ) , then for sufficiently large n, then there is an equilibrium where each player's average payoff per round is greater than 3 − ǫ. Thus, computational limitations can lead to cooperation in prisoner's dilemma.
Computer Science and Game Theory: A Brief Survey
Joseph Y. Halpern ([email protected])
Computer Science Department, Cornell University, Ithaca, NY 14853
29 Mar 2007; February 1, 2008
Introduction

There has been a remarkable increase in work at the interface of computer science and game theory in the past decade. Game theory forms a significant component of some major computer science conferences (see, for example, [Kearns and Reiter 2005; Sandholm and Yokoo 2003]); leading computer scientists are often invited to speak at major game theory conferences, such as the World Congress on Game Theory 2000 and 2004. In this article I survey some of the main themes of work in the area, with a focus on the work in computer science. Given the length constraints, I make no attempt at being comprehensive, especially since other surveys are also available, including [Halpern 2003; Linial 1994; Papadimitriou 2001], and a comprehensive survey book will appear shortly [Nisan et al. 2007]. The survey is organized as follows. I look at the various roles of computational complexity in game theory in Section 2, including its use in modeling bounded rationality, its role in mechanism design, and the problem of computing Nash equilibria. In Section 3, I consider a game-theoretic problem that originated in the computer science literature, but should be of interest to the game theory community: computing the price of anarchy, that is, the cost of using a decentralized solution to a problem. In Section 4, I consider interactions between distributed computing and game theory; Section 5 looks at the problem of implementing mediators. I conclude in Section 6 with a discussion of a few other topics of interest.

Complexity Considerations

The influence of computer science in game theory has perhaps been most strongly felt through complexity theory.
I consider some of the strands of this research here. There are numerous basic texts on complexity theory that the reader can consult for more background on notions like NP-completeness and finite automata, including [Hopcroft and Ullman 1979; Papadimitriou 1994a]. There have been a number of other attempts to use complexity-theoretic ideas from computer science to model bounded rationality; see Rubinstein [1998] for some examples. However, it seems that there is much more work to be done here.

Computing Nash Equilibrium: Nash [1950] showed that every finite game has a Nash equilibrium in mixed strategies. But how hard is it to actually find that equilibrium? On the positive side, there are well-known algorithms for computing Nash equilibrium, going back to the classic Lemke-Howson [1964] algorithm, with a spate of recent improvements (see, for example, [Govindan and Wilson 2003; Blum et al. 2003; Porter et al. 2004]). Moreover, for certain classes of games (for example, symmetric games [Papadimitriou and Roughgarden 2005]), there are known to be polynomial-time algorithms. On the negative side, many questions about Nash equilibrium are known to be NP-hard. For example, Gilboa and Zemel [1989] showed that, for a game presented in normal form, deciding whether there exists a Nash equilibrium where each player gets a payoff of at least r is NP-complete. (Interestingly, Gilboa and Zemel also show that deciding whether there exists a correlated equilibrium [Aumann 1987] where each player gets a payoff of at least r is computable in polynomial time. In general, questions regarding correlated equilibrium seem easier than the analogous questions for Nash equilibrium; see also [Papadimitriou 2005; Papadimitriou and Roughgarden 2005] for further examples.)
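For very small games the positive side is easy to see directly: in a 2x2 bimatrix game, a fully mixed equilibrium (when one exists) follows from the indifference conditions, with no need for the general algorithms cited above. A minimal sketch, with hypothetical names:

```python
from fractions import Fraction

def mixed_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game via the
    indifference conditions: the row player's mix p makes the column
    player indifferent between her two columns, and the column player's
    mix q makes the row player indifferent between his two rows.
    Assumes a fully mixed equilibrium exists (denominators nonzero)."""
    A = [[Fraction(x) for x in row] for row in A]
    B = [[Fraction(x) for x in row] for row in B]
    # p = Pr[row 1], solving p*B[0][0] + (1-p)*B[1][0]
    #                      = p*B[0][1] + (1-p)*B[1][1].
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # q = Pr[column 1], by the symmetric condition on A.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: zero-sum, unique equilibrium at (1/2, 1/2).
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(mixed_2x2(A, B))   # (Fraction(1, 2), Fraction(1, 2))
```

The hardness results above say that nothing this simple scales to general normal-form games.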
Chu and Halpern [2001] prove similar NP-completeness results if the game is represented in extensive form, even if all players have the same payoffs (a situation which arises frequently in computer science applications, where we can view the players as agents of some designer, and take the payoffs to be the designer's payoffs). There is also a compendium of hardness results for various questions one can ask about Nash equilibria. Nevertheless, there is a sense in which it seems that the problem of finding a Nash equilibrium is easier than typical NP-complete problems, because every game is guaranteed to have a Nash equilibrium. By way of contrast, for a typical NP-complete problem like propositional satisfiability, whether or not a solution exists is not known in advance: a given propositional formula may simply not be satisfiable. Using this observation, it can be shown that if finding a Nash equilibrium is NP-complete, then NP = coNP. Recent work has, in a sense, completely characterized the complexity of finding a Nash equilibrium in normal-form games: it is a PPAD-complete problem [Chen and Deng 2006; Daskalakis, Goldberg, and Papadimitriou 2006]. PPAD stands for "polynomial parity argument (directed case)"; see [Papadimitriou 1994b] for a formal definition and examples of other PPAD problems. It is believed that PPAD-complete problems are not solvable in polynomial time, but are simpler than NP-complete problems, although this remains an open problem. See [Papadimitriou 2007] for an overview of this work.

Algorithmic Mechanism Design: The problem of mechanism design is to design a game such that the agents playing the game, motivated only by self-interest, achieve the designer's goals. This problem has much in common with the standard computer science problem of designing protocols that satisfy certain specifications (for example, designing a distributed protocol that achieves Byzantine agreement; see Section 4). Work on mechanism design has traditionally ignored computational concerns.
But Kfir-Dahav, Monderer, and Tennenholtz [2000] show that, even in simple settings, optimizing social welfare is NP-hard, so that perhaps the most common approach to designing mechanisms, applying the Vickrey-Clarke-Groves (VCG) procedure [Clarke 1971; Groves 1973; Vickrey 1961], is not going to work in large systems. We might hope that, even if we cannot compute an optimal mechanism, we might be able to compute a reasonable approximation to it. However, as Nisan and Ronen [2000, 2001] show, in general, replacing a VCG mechanism by an approximation does not preserve truthfulness. That is, even though truthfully revealing one's type is an optimal strategy in a VCG mechanism, it may no longer be optimal in an approximation. Following Nisan and Ronen's work, there has been a spate of papers either describing computationally tractable mechanisms or showing that no computationally tractable mechanism exists for a number of problems, ranging from task allocation [Archer and Tardos 2001; Nisan and Ronen 2001] to cost-sharing for multicast trees [Feigenbaum et al. 2000] (where the problem is to share the cost of sending, for example, a movie over a network among the agents who actually want the movie) to finding low-cost paths between nodes in a network [Archer and Tardos 2002]. The problem that has attracted perhaps the most attention is combinatorial auctions, where bidders can bid on bundles of items. This becomes of particular interest in situations where the value to a bidder of a bundle of goods cannot be determined by simply summing the value of each good in isolation. To take a simple example, the value of a pair of shoes is much higher than that of the individual shoes; perhaps more interestingly, an owner of radio stations may value having a license in two adjacent cities more than the sum of the individual licenses.
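In the single-item case, the VCG procedure reduces to Vickrey's familiar second-price rule: the highest bidder wins and pays the externality she imposes on the others, which is just the second-highest bid. A toy sketch, with hypothetical names:

```python
def vcg_single_item(bids):
    """VCG for a single item: the highest bidder wins and pays the
    welfare the other bidders lose because of her presence.  For one
    item this is the second-highest bid (Vickrey's second-price rule).
    `bids` maps bidder name to reported value."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # Others' best welfare without the winner, minus their welfare with
    # her present (zero, since she takes the item).
    payment = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, payment

print(vcg_single_item({'a': 10, 'b': 7, 'c': 3}))  # ('a', 7)
```

Truth-telling is dominant here precisely because the winner's payment does not depend on her own bid; the hardness results above concern settings where the welfare-maximizing allocation itself is intractable to compute.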
Combinatorial auctions are of great interest in a variety of settings, including spectrum auctions, airport time slots (i.e., takeoff and landing slots), and industrial procurement. There are many complexity-theoretic issues related to combinatorial auctions. For a detailed discussion and references, see [Cramton et al. 2006]; I briefly discuss a few of the issues involved here. Suppose that there are n items being auctioned. Simply for a bidder to communicate her bids to the auctioneer can take, in general, exponential time, since there are 2^n bundles. In many cases, we can identify a bid on a bundle with the bidder's valuation of the bundle. Thus, we can try to carefully design a bidding language in which a bidder can communicate her valuations succinctly. Simple information-theoretic arguments can be used to show that, for every bidding language, there will be valuations that require length at least 2^n to express in that language. Thus, the best we can hope for is to design a language that can represent the "interesting" bids succinctly. See [Nisan 2006] for an overview of various bidding languages and their expressive power. Given bids from each of the bidders in a combinatorial auction, the auctioneer would like to determine the winners. More precisely, the auctioneer would like to allocate the items in the auction so as to maximize his revenue. This problem, called the winner determination problem, is NP-complete in general, even in relatively simple classes of combinatorial auctions with only two bidders making rather restricted bids. Moreover, it is not even polynomial-time approximable, in the sense that there is no constant d and polynomial-time algorithm such that the algorithm produces an allocation that gives revenue that is at least 1/d of optimal.
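A brute-force solver makes the blow-up in winner determination visible: it enumerates every set of pairwise-disjoint bids, which is exponential in the number of bids. The sketch below (hypothetical names) also exhibits the complementarity in the shoes example, where the bundle bid beats selling the items separately:

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination for a combinatorial auction:
    choose a set of pairwise-disjoint bids maximizing revenue.  `bids`
    is a list of (bundle, price) pairs.  Exponential in the number of
    bids, as expected for an NP-complete problem."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            bundles = [set(b) for b, _ in subset]
            # Disjoint iff no item is counted twice across the bundles.
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):
                value = sum(p for _, p in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set

# Complements: the pair of shoes is worth more than the parts, so
# accepting the bundle bid beats selling the two shoes separately.
bids = [(('shoe_L',), 2), (('shoe_R',), 2), (('shoe_L', 'shoe_R'), 7)]
print(winner_determination(bids))  # (7, [(('shoe_L', 'shoe_R'), 7)])
```

The practical solvers mentioned below replace this exhaustive search with branch-and-bound and integer-programming techniques.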
On the other hand, there are algorithms that provably find a good solution, seem to work well in practice, and, if they seem to be taking too long, can be terminated early, usually with a good feasible solution in hand. See [Lehmann et al. 2006] for an overview of the results in this area.

In most mechanism design problems, computational complexity is seen as the enemy. There is one class of problems in which it may be a friend: voting. One problem with voting mechanisms is that of manipulation by voters. That is, voters may be tempted to vote strategically rather than ranking the candidates according to their true preferences, in the hope that the final outcome will be more favorable. This situation arises frequently in practice; in the 2000 election, American voters who preferred Nader to Gore to Bush were encouraged to vote for Gore, rather than "wasting" a vote on Nader. The classic Gibbard-Satterthwaite theorem [Gibbard 1973; Satterthwaite 1975] shows that, if there are at least three alternatives, then in any nondictatorial voting scheme (i.e., one where it is not the case that one particular voter dictates the final outcome, irrespective of how the others vote), there are preferences under which an agent is better off voting strategically. The hope is that, by constructing the voting mechanism appropriately, it may be computationally intractable to find a manipulation that will be beneficial. While finding manipulations for majority voting (the candidate with the most votes wins) is easy, there are well-known voting protocols for which manipulation is hard in the presence of three or more candidates. Summaries of these results, with further pointers to the literature, are available.

Communication Complexity: Communication complexity [Kushilevitz and Nisan 1997] studies how much communication is needed for a set of n agents to compute the value of a function f: Θ_1 × ... × Θ_n → X, where each agent i knows θ_i ∈ Θ_i. To see the relevance of this to economics, consider, for example, the problem of mechanism design.
Most mechanisms in the economics literature are designed so that agents truthfully reveal their preferences (think of θ_i as characterizing agent i's preferences here). However, in some settings, revealing one's full preferences can require a prohibitive amount of communication. For example, in a combinatorial auction of m items, revealing one's full preferences may require revealing what one would be willing to pay for each of the 2^m − 1 possible bundles of items. Even if m = 30, this requires revealing more than one billion numbers. This leads to an obvious question: how much communication is required by various mechanisms? Nisan and Segal [2005] show that a standard approach for conducting combinatorial auctions, where prices are listed, agents are expected to make demands based on these prices, and then prices are adjusted (according to some pre-specified rule) based on demand, requires an exponential amount of communication for a certain class of valuations. This is among the first preliminary steps towards understanding the communication complexity of mechanisms; the general problem remains wide open.

The Price of Anarchy

In a computer system, there are situations where we may have a choice between invoking a centralized solution to a problem or a decentralized solution. By "centralized" here, I mean that each agent in the system is told exactly what to do and must do so; in the decentralized solution, each agent tries to optimize his own selfish interests. Of course, centralization comes at a cost. For one thing, there is a problem of enforcement. For another, centralized solutions tend to be more vulnerable to failure. On the other hand, a centralized solution may be more socially beneficial. How much more beneficial can it be?
Koutsoupias and Papadimitriou [1999] formalized this question by considering the ratio of the social welfare of the centralized solution to the social welfare of the Nash equilibrium with the worst social welfare (assuming that the social welfare function is always positive). They called this ratio the price of anarchy, and proved a number of results regarding the price of anarchy for a scheduling problem on parallel machines. Since the original paper, the price of anarchy has been studied in many settings, including traffic routing [Roughgarden and Tardos 2002], facility location games (e.g., where is the best place to put a factory) [Vetta 2002], and spectrum sharing (how should channels in a WiFi network be assigned) [Halldórsson et al. 2004]. To give a sense of the results, consider the traffic-routing context of Roughgarden and Tardos [2002]. Suppose that the travel time on a road increases in a known way with the congestion on the road. The goal is to minimize the average travel time for all drivers. Given a road network and a given traffic load, a centralized solution would tell each driver which road to take. For example, there could be a rule that cars with odd-numbered license plates take road 1, while those with even-numbered plates take road 2, to minimize congestion on either road. Roughgarden and Tardos show that the price of anarchy is unbounded if the travel time can be a nonlinear function of the congestion. On the other hand, if it is linear, they show that the price of anarchy is at most 4/3. The price of anarchy is but one way of computing the "cost" of using a Nash equilibrium. Others have been considered in the computer science literature.
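The 4/3 bound for linear latencies is already attained in the classic two-road (Pigou-style) example from the routing literature, which can be checked numerically. A minimal sketch, with hypothetical names:

```python
def average_cost(x):
    """Two-road routing with a unit mass of traffic: latency x on
    road 1 (congestion-sensitive) and constant latency 1 on road 2.
    `x` is the fraction of traffic on road 1; returns average travel
    time x * l1(x) + (1 - x) * l2(1 - x) = x^2 + (1 - x)."""
    return x * x + (1 - x)

# Nash: road 1 is never worse than road 2, so all traffic takes it
# (x = 1) and every driver travels for 1 unit of time.
nash = average_cost(1.0)
# Centralized optimum: minimize x^2 + (1 - x) over x in [0, 1];
# a fine grid search finds the minimizer x = 1/2 with cost 3/4.
opt = min(average_cost(i / 1000) for i in range(1001))
print(nash / opt)  # ratio is 4/3, the tight bound for linear latencies
```

The unboundedness for nonlinear latencies can be seen by replacing x with x^d and letting d grow: selfish traffic still all takes road 1, while the optimum cost tends to 0 relative to it.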
For example, Tennenholtz [2002] compares the safety level of a game (the optimal amount that an agent can guarantee himself, independent of what the other agents do) to what the agent gets in a Nash equilibrium, and shows that, for interesting classes of games, including load-balancing games and first-price auctions, the ratio between the safety level and the Nash equilibrium payoff is bounded. For example, in the case of first-price auctions, it is bounded by the constant e.

Game Theory and Distributed Computing

Distributed computing and game theory are interested in much the same problems: dealing with systems where there are many agents, facing uncertainty, and having possibly different goals. In practice, however, there has been a significant difference in emphasis in the two areas. In distributed computing, the focus has been on problems such as fault tolerance, asynchrony, scalability, and proving correctness of algorithms; in game theory, the focus has been on strategic concerns. I discuss here some issues of common interest. To understand the relevance of fault tolerance and asynchrony, consider the Byzantine agreement problem, a paradigmatic problem in the distributed systems literature. In this problem, there are assumed to be n soldiers, up to t of which may be faulty (the t stands for traitor); n and t are assumed to be common knowledge. Each soldier starts with an initial preference, to either attack or retreat. (More precisely, there are two types of nonfaulty agents: those that prefer to attack, and those that prefer to retreat.) We want a protocol that guarantees that (1) all nonfaulty soldiers reach the same decision, and (2) if all the nonfaulty soldiers have the same initial preference, then that is the decision they reach. The problem was introduced by Pease, Shostak, and Lamport [1980], and has been studied in detail since then; Chor and Dwork [1989], Fischer [1983], and Linial [1994] provide overviews.
Whether the Byzantine agreement problem is solvable depends in part on what types of failures are considered, on whether the system is synchronous or asynchronous, and on the ratio of n to t. Roughly speaking, a system is synchronous if there is a global clock and agents move in lockstep; a "step" in the system corresponds to a tick of the clock. In an asynchronous system, there is no global clock. The agents in the system can run at arbitrary rates relative to each other. One step for agent 1 can correspond to an arbitrary number of steps for agent 2 and vice versa. Synchrony is an implicit assumption in essentially all games. Although it is certainly possible to model games where player 2 has no idea how many moves player 1 has taken when player 2 is called upon to move, it is not typical to focus on the effects of synchrony (and its lack) in games. On the other hand, in distributed systems, it is typically a major focus. Suppose for now that we restrict to crash failures, where a faulty agent behaves according to the protocol, except that it might crash at some point, after which it sends no messages. In the round in which an agent fails, the agent may send only a subset of the messages that it is supposed to send according to its protocol. Further suppose that the system is synchronous. In this case, the following rather simple protocol achieves Byzantine agreement:

• In the first round, each agent tells every other agent its initial preference.

• For rounds 2 to t + 1, each agent tells every other agent everything it has heard in the previous round. (Thus, for example, in round 3, agent 1 may tell agent 2 that it heard from agent 3 that its initial preference was to attack, and that it (agent 3) heard from agent 2 that its initial preference was to attack, and it heard from agent 4 that its initial preference was to retreat, and so on.
This means that messages get exponentially long, but it is not difficult to represent this information in a compact way so that the total communication is polynomial in n, the number of agents.)

• At the end of round t + 1, if an agent has heard from any other agent (including itself) that its initial preference was to attack, it decides to attack; otherwise, it decides to retreat.

Why is this correct? Clearly, if all agents are correct and want to retreat (resp., attack), then the final decision will be to retreat (resp., attack), since that is the only preference that agents hear about (recall that for now we are considering only crash failures). It remains to show that if some agents prefer to attack and others to retreat, then all the nonfaulty agents reach the same final decision. So suppose that i and j are nonfaulty and i decides to attack. That means that i heard that some agent's initial preference was to attack. If it heard this first at some round t′ < t + 1, then i will forward this message to j, who will receive it and thus also attack. On the other hand, suppose that i heard it first at round t + 1 in a message from i_{t+1}. Thus, this message must be of the form "i_t said at round t that ... that i_2 said at round 2 that i_1 said at round 1 that its initial preference was to attack." Moreover, the agents i_1, ..., i_{t+1} must all be distinct. Indeed, it is easy to see that i_k must crash in round k before sending its message to i (but after sending its message to i_{k+1}), for k = 1, ..., t, for otherwise i must have gotten the message from i_k, contradicting the assumption that i first heard at round t + 1 that some agent's initial preference was to attack. Since at most t agents can crash, it follows that i_{t+1}, the agent that sent the message to i, is not faulty, and thus sends the message to j. Thus, j also decides to attack. A symmetric argument shows that if j decides to attack, then so does i.
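The (t + 1)-round full-information protocol and its correctness argument can be exercised in simulation. The sketch below uses hypothetical names and a simplified crash model (an agent crashes in a given round after sending only to a given subset of recipients); it checks that the surviving agents still agree even when a crashing agent reaches only one other agent:

```python
def byzantine_crash(prefs, crash_schedule, t):
    """Simulate the (t + 1)-round full-information protocol under crash
    failures.  `prefs[i]` is agent i's initial preference ('A'ttack or
    'R'etreat); `crash_schedule[i] = (round, recipients)` says agent i
    crashes in that round after sending only to `recipients`.  Returns
    the decision of each agent that never crashes."""
    n = len(prefs)
    known = [{(i, prefs[i])} for i in range(n)]   # (agent, preference) pairs
    crashed = set()
    for rnd in range(1, t + 2):                   # rounds 1 .. t + 1
        inbox = [set() for _ in range(n)]
        for i in range(n):
            if i in crashed:
                continue
            sched = crash_schedule.get(i)
            if sched and sched[0] == rnd:
                recipients = sched[1]             # partial send, then crash
                crashed = crashed | {i}
            else:
                recipients = range(n)
            for j in recipients:
                inbox[j] |= known[i]
        for j in range(n):
            known[j] |= inbox[j]
    # Decision rule: attack iff some agent's initial preference was attack.
    return {i: ('A' if any(p == 'A' for _, p in known[i]) else 'R')
            for i in range(n) if i not in crashed}

# 5 agents, t = 1: agent 0 prefers attack but crashes in round 1 after
# telling only agent 1.  The extra round lets agent 1 relay the news,
# so all surviving agents agree on attack.
decisions = byzantine_crash(['A', 'R', 'R', 'R', 'R'], {0: (1, [1])}, t=1)
print(decisions)  # {1: 'A', 2: 'A', 3: 'A', 4: 'A'}
```

With only one round (t = 0 against one actual crash), the same scenario would leave agent 1 deciding attack and the rest retreat, which is exactly why the protocol needs t + 1 rounds.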
It should be clear that the correctness of this protocol depends on both the assumptions made: crash failures and synchrony. Suppose instead that Byzantine failures are allowed, so that faulty agents can deviate in arbitrary ways from the protocol; they may "lie", send deceiving messages, and collude to fool the nonfaulty agents in the most malicious ways. In this case, the protocol will not work at all. In fact, it is known that agreement can be reached in the presence of Byzantine failures iff t < n/3, that is, iff fewer than a third of the agents can be faulty [Pease et al. 1980]. The effect of asynchrony is even more devastating: in an asynchronous system, it is impossible to reach agreement using a deterministic protocol even if t = 1 (so that there is at most one failure) and only crash failures are allowed [Fischer et al. 1985]. The problem in the asynchronous setting is that if none of the agents have heard from, say, agent 1, they have no way of knowing whether agent 1 is faulty or just slow. Interestingly, there are randomized algorithms (i.e., behavior strategies) that achieve agreement with arbitrarily high probability in an asynchronous setting [Ben-Or 1983; Rabin 1983]. Byzantine agreement can be viewed as a game where, at each step, an agent can either send a message or decide to attack or retreat. It is essentially a game between two teams, the nonfaulty agents and the faulty agents, whose composition is unknown (at least by the correct agents). To model it as a game in the more traditional sense, we could imagine that the nonfaulty agents are playing against a new player, the "adversary". One of the adversary's moves is that of "corrupting" an agent: changing its type from "nonfaulty" to "faulty". Once an agent is corrupted, what the adversary can do depends on the failure type being considered.
In the case of crash failures, the adversary can decide which of a corrupted agent's messages will be delivered in the round in which the agent is corrupted; however, it cannot modify the messages themselves. In the case of Byzantine failures, the adversary essentially gets to make the moves for agents that have been corrupted; in particular, it can send arbitrary messages. Why has the distributed systems literature not considered strategic behavior in this game? Crash failures are used to model hardware and software failures; Byzantine failures are used to model random behavior on the part of a system (for example, messages getting garbled in transit), software errors, and malicious adversaries (for example, hackers). With crash failures, it does not make sense to view the adversary's behavior as strategic, since the adversary is not really viewed as having strategic interests. While it would certainly make sense, at least in principle, to consider the probability of failure (i.e., the probability that the adversary corrupts an agent), this approach has by and large been avoided in the literature because it has proved difficult to characterize the probability distribution of failures over time. Computer components can perhaps be characterized as failing according to an exponential distribution (see [Babaoglu 1987] for an analysis of Byzantine agreement in such a setting), but crash failures can be caused by things other than component failures (faulty software, for example); these can be extremely difficult to characterize probabilistically. The problems are even worse when it comes to modeling random Byzantine behavior. With malicious Byzantine behavior, it may well be reasonable to impute strategic behavior to agents (or to an adversary controlling them). However, it is often difficult to characterize the payoffs of a malicious agent. The goals of the agents may vary from that of simply trying to delay a decision to that of causing disagreement. 
It is not clear what the appropriate payoffs should be for attaining these goals. Thus, the distributed systems literature has chosen to focus instead on algorithms that are guaranteed to satisfy the specification without making assumptions about the adversary's payoffs (or nature's probabilities, in the case of crash failures). Recently, there has been some work adding strategic concerns to standard problems in distributed computing (see, for example, [Halpern and Teague 2004]) as well as adding concerns of fault tolerance and asynchrony to standard problems in game theory (see, for example, [Eliaz 2002; Monderer and Tennenholtz 1999a; Monderer and Tennenholtz 1999b] and the definitions in the next section). This seems to be an area that is ripe for further developments. One such development is the subject of the next section.

Implementing Mediators

The question of whether a problem in a multiagent system that can be solved with a trusted mediator can be solved by just the agents in the system, without the mediator, has attracted a great deal of attention in both computer science (particularly in the cryptography community) and game theory. In cryptography, the focus on the problem has been on secure multiparty computation. Here it is assumed that each agent i has some private information x_i. Fix functions f_1, ..., f_n. The goal is to have agent i learn f_i(x_1, ..., x_n) without learning anything about x_j for j ≠ i beyond what is revealed by the value of f_i(x_1, ..., x_n). With a trusted mediator, this is trivial: each agent i just gives the mediator its private value x_i; the mediator then sends each agent i the value f_i(x_1, ..., x_n). Work on multiparty computation [Goldreich et al. 1987; Shamir et al. 1981; Yao 1982] provides conditions under which this can be done.
In game theory, the focus has been on whether an equilibrium in a game with a mediator can be implemented using what is called cheap talk, that is, just by players communicating among themselves (cf. [Barany 1992; Ben-Porath 2003; Heller 2005; Urbano and Vila 2002; Urbano and Vila 2004]). As suggested in the previous section, the focus in the computer science literature has been on doing multiparty computation in the presence of possibly malicious adversaries, who do everything they can to subvert the computation, while in the game theory literature, the focus has been on strategic agents. In recent work, Abraham et al. [2006, 2007] considered deviations both by rational players, who have preferences and try to maximize them, and by players who can be viewed as malicious, although it is perhaps better to think of them as rational players whose utilities are not known by the other players or the mechanism designer. I briefly sketch their results here.

The idea of tolerating deviations by coalitions of players goes back to Aumann [1959]; more recent refinements have been considered by Moreno and Wooders [1996]. Aumann's definition is essentially the following.

Definition 1. σ is a k-resilient′ equilibrium if, for all sets C of players with |C| ≤ k, it is not the case that there exists a strategy τ such that u_i(τ_C, σ_{−C}) > u_i(σ) for all i ∈ C.

As usual, the strategy (τ_C, σ_{−C}) is the one where each player i ∈ C plays τ_i and each player i ∉ C plays σ_i. As the prime notation suggests, this is not quite the definition we want to work with. The trouble with this definition is that it suggests that coalition members cannot communicate with each other beyond agreeing on what strategy to use. Perhaps surprisingly, allowing communication can prevent certain equilibria (see [Abraham et al. 2007] for an example).
Since we should expect coalition members to communicate, the following definition seems to capture a more reasonable notion of resilient equilibrium. Let the cheap-talk extension of a game Γ be, roughly speaking, the game where players are allowed to communicate among themselves in addition to performing the actions of Γ, and the payoffs are just as in Γ.

Definition 2. σ is a k-resilient equilibrium in a game Γ if σ is a k-resilient′ equilibrium in the cheap-talk extension of Γ (where we identify the strategy σ_i in the game Γ with the strategy in the cheap-talk game where player i never sends any messages beyond those sent according to σ_i).

A standard assumption in game theory is that utilities are (commonly) known; when we are given a game we are also given each player's utility. When players make decisions, they can take other players' utilities into account. However, in large systems, it seems almost invariably the case that there will be some fraction of users who do not respond to incentives the way we expect. For example, in a peer-to-peer network like Kazaa or Gnutella, it would seem that no rational agent should share files. Whether or not you can get a file depends only on whether other people share files; on the other hand, there seem to be disincentives for sharing (the possibility of lawsuits, use of bandwidth, etc.). Nevertheless, people do share files. However, studies of the Gnutella network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [Adar and Huberman 2000]. One reason that people might not respond as we expect is that they have utilities that are different from those we expect. Alternatively, the players may be irrational, or (if moves are made using a computer) they may be playing using a faulty computer and thus not able to make the move they would like, or they may not understand how to get the computer to make the move they would like.
Whatever the reason, it seems important to design strategies that tolerate such unanticipated behaviors, so that the payoffs of the users with "standard" utilities are not affected by the nonstandard players using different strategies. This can be viewed as a way of adding fault tolerance to equilibrium notions.

Definition 3 A joint strategy σ is t-immune if, for all T ⊆ N with |T| ≤ t, all joint strategies τ, and all i ∉ T, we have u_i(σ_−T, τ_T) ≥ u_i(σ).

The notions of t-immunity and k-resilience address different concerns. For t-immunity, we consider the payoffs of the players not in T; for resilience, we consider the payoffs of the players in C. It is natural to combine both notions. Given a game Γ, let Γ_τ^T be the game that is identical to Γ except that the players in T are fixed to playing strategy τ.

Definition 4 σ is a (k, t)-robust equilibrium if σ is t-immune and, for all T ⊆ N such that |T| ≤ t and all joint strategies τ, σ_−T is a k-resilient strategy of Γ_τ^T.

To state the results of Abraham et al. [2006, 2007] on implementing mediators, three games need to be considered: an underlying game Γ, an extension Γ_d of Γ with a mediator, and a cheap-talk extension Γ_CT of Γ. Assume that Γ is a normal-form Bayesian game: each player has a type from some type space with a known distribution over types, and the utilities of the agents depend on the types and actions taken. Roughly speaking, a cheap-talk game implements a game with a mediator if it induces the same distribution over actions in the underlying game, for each type vector of the players. With this background, I can summarize the results of Abraham et al. [2006, 2007]:

• If n > 3k + 3t, a (k, t)-robust strategy σ with a mediator can be implemented using cheap talk (that is, there is a (k, t)-robust strategy σ′ in a cheap-talk game such that σ and σ′ induce the same distribution over actions in the underlying game).
Moreover, the implementation requires no knowledge of other agents' utilities, and the cheap-talk protocol has bounded running time that does not depend on the utilities.

• If n ≤ 3k + 3t then, in general, mediators cannot be implemented using cheap talk without knowledge of other agents' utilities. Moreover, even if other agents' utilities are known, mediators cannot, in general, be implemented without having a (k + t)-punishment strategy (that is, a strategy that, if used by all but at most k + t players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy) nor with bounded running time.

• If n > 2k + 3t, then mediators can be implemented using cheap talk if there is a punishment strategy (and utilities are known), in finite expected running time that does not depend on the utilities.

• If n ≤ 2k + 3t then mediators cannot, in general, be implemented, even if there is a punishment strategy and utilities are known.

• If n > 2k + 2t and there are broadcast channels then, for all ε, mediators can be ε-implemented (intuitively, there is an implementation where players get utility within ε of what they could get by deviating) using cheap talk, with bounded expected running time that does not depend on the utilities.

• If n ≤ 2k + 2t then mediators cannot, in general, be ε-implemented, even with broadcast channels. Moreover, even assuming cryptography and polynomially-bounded players, the expected running time of an implementation must depend on the utility functions of the players and ε.

• If n > k + 3t then, assuming cryptography and polynomially-bounded players, mediators can be ε-implemented using cheap talk, but if n ≤ 2k + 2t, then the running time depends on the utilities in the game and ε.

• If n ≤ k + 3t, then even assuming cryptography, polynomially-bounded players, and a (k + t)-punishment strategy, mediators cannot, in general, be ε-implemented using cheap talk.
• If n > k + t then, assuming cryptography, polynomially-bounded players, and a public-key infrastructure (PKI), we can ε-implement a mediator.

The proofs of these results make heavy use of techniques from computer science. All the possibility results showing that mediators can be implemented use techniques from secure multiparty computation. The result showing that if n ≤ 3k + 3t, then we cannot implement a mediator without knowing utilities, even if there is a punishment strategy, uses the fact that Byzantine agreement cannot be reached if t ≥ n/3; the impossibility result for n ≤ 2k + 3t also uses a variant of Byzantine agreement. A related line of work considers implementing mediators assuming stronger primitives (which cannot be implemented in computer networks); see [Izmalkov et al. 2005; Lepinski et al. 2004] for details.

Other Topics

There are many more areas of interaction between computer science and game theory than I have indicated in this brief survey. I briefly mention a few others here:

• Interactive epistemology: Since the publication of Aumann's [1976] seminal paper, there has been a great deal of activity in trying to understand the role of knowledge in games, and in providing epistemic analyses of solution concepts (see [Battigalli and Bonanno 1999] for a survey). In computer science, there has been a parallel literature applying epistemic logic to reason about distributed computation. One focus of this work has been on characterizing the level of knowledge needed to solve certain problems. For example, to achieve Byzantine agreement, common knowledge of an initial value among the nonfaulty agents is necessary and sufficient. More generally, in a precise sense, common knowledge is necessary and sufficient for coordination. Another focus has been on defining logics that capture the reasoning of resource-bounded agents.
This work has ranged from logics for reasoning about awareness, a topic that has been explored in both computer science and game theory (see, for example, [Dekel, Lipman, and Rustichini 1998; Fagin and Halpern 1988; Halpern 2001; Halpern and Rêgo 2006; Heifetz, Meier, and Schipper 2006; Modica and Rustichini 1994; Modica and Rustichini 1999]), to logics for capturing algorithmic knowledge, an approach that takes seriously the assumption that agents must explicitly compute what they know. See [Fagin et al. 1995] for an overview of the work on epistemic logic in computer science.

• Network growth: If we view networks as being built by selfish players (who decide whether or not to build links), what will the resulting network look like? How does the growth of the network affect its functionality? For example, how easily will influence spread through the network? How easy is it to route traffic? See [Fabrikant et al. 2003; Kempe et al. 2003] for some recent computer science work in this burgeoning area.

• Efficient representation of games: Game theory has typically focused on "small" games, often 2- or 3-player games that are easy to describe, such as prisoner's dilemma, in order to understand subtleties regarding basic issues such as rationality. To the extent that game theory is used to tackle larger, more practical problems, it will become important to find efficient techniques for describing and analyzing games. By way of analogy, 2^n − 1 numbers are needed to describe a probability distribution on a space characterized by n binary random variables. For n = 100 (not an unreasonable number in practical situations), it is impossible to write down the probability distribution in the obvious way, let alone do computations with it. The same issues will surely arise in large games. Computer scientists use graphical approaches, such as Bayesian networks and Markov networks [Pearl 1988], for representing and manipulating probability measures on large spaces.
Similar techniques seem applicable to games; see, for example, [Koller and Milch 2001; La Mura 2000; Kearns et al. 2001], and [Kearns 2007] for a recent overview. Note that representation is also an issue when we consider the complexity of problems such as computing Nash or correlated equilibria. The complexity of a problem is a function of the size of the input, and the size of the input (which in this case is a description of the game) depends on how the input is represented.

• Learning in games: There has been a great deal of work in both computer science and game theory on learning to play well in different settings (see [Fudenberg and Levine 1998] for an overview of the work in game theory). One line of research in computer science has involved learning to play optimally in a reinforcement learning setting, where an agent interacts with an unknown (but fixed) environment. The agent then faces a fundamental tradeoff between exploration and exploitation. The question is how long it takes to learn to play well (i.e., to get a reward within some fixed ε of optimal); see [Brafman and Tennenholtz 2002; Kearns and Singh 1998] for the current state of the art. A related question is that of efficiently finding a strategy that minimizes regret, that is, a strategy that is guaranteed to do not much worse than the best strategy would have done in hindsight (even knowing what the opponent would have done). See [Blum and Mansour 2007] for a recent overview of work on this problem.

Footnotes: (1) ... if all the soldiers are nonfaulty and their initial preferences are identical, then the final decision agrees with their initial preferences. (The condition simply prevents the obvious trivial solutions, where the soldiers attack no matter what, or retreat no matter what.) (2) Much of the discussion in this section is taken from [Halpern 2003]. (3) Much of the discussion in this section is taken from [Abraham et al. 2007].
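Returning to the mediator-implementation results summarized earlier, the thresholds on n, k, and t can be navigated with a small sketch. This is my own summary of the stated bounds, not code from Abraham et al.; the function name and the regime descriptions are invented for illustration, and the if-chain simply follows the order in which the results were listed.

```python
def mediator_regime(n, k, t):
    """Map (n, k, t) to the strongest positive result listed in the
    survey of Abraham et al.'s bounds. Purely a lookup of the stated
    thresholds, not a proof or an implementation."""
    if n > 3 * k + 3 * t:
        return "cheap talk; no knowledge of utilities needed; bounded running time"
    if n > 2 * k + 3 * t:
        return "cheap talk, given a punishment strategy and known utilities"
    if n > 2 * k + 2 * t:
        return "epsilon-implementation using cheap talk with broadcast channels"
    if n > k + 3 * t:
        return "epsilon-implementation assuming cryptography and polynomially-bounded players"
    if n > k + t:
        return "epsilon-implementation assuming cryptography, polynomially-bounded players, and a PKI"
    return "no implementation result stated"

# Example: with k = 2 rational deviators and t = 2 'malicious' players,
# 13 players suffice for the strongest guarantee, 12 do not.
print(mediator_regime(13, 2, 2))
print(mediator_regime(12, 2, 2))
```

Note that the regimes are not nested in general: for example, when k < t the cryptographic bound k + 3t can exceed the broadcast bound 2k + 2t, so the chain above just reports the first listed result that applies.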
References

Abraham, I., D. Dolev, R. Gonen, and J. Halpern (2006). Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In Proc. 25th ACM Symposium on Principles of Distributed Computing, pp. 53-62.
Abraham, I., D. Dolev, and J. Halpern (2007). Lower bounds on implementing robust and resilient mediators. Unpublished manuscript.
Adar, E. and B. Huberman (2000). Free riding on Gnutella. First Monday 5(10).
Archer, A. and É. Tardos (2001). Truthful mechanisms for one-parameter agents. In Proc. 42nd IEEE Symposium on Foundations of Computer Science, pp. 482-491.
Archer, A. and É. Tardos (2002). Frugal path mechanisms. In Proc. 13th ACM-SIAM Symposium on Discrete Algorithms, pp. 991-999.
Aumann, R. (1959). Acceptable points in general cooperative n-person games. Contributions to the Theory of Games, Annals of Mathematical Studies IV, 287-324.
Aumann, R. J. (1976). Agreeing to disagree. Annals of Statistics 4(6), 1236-1239.
Aumann, R. J. (1987). Correlated equilibrium as an expression of Bayesian rationality. Econometrica 55, 1-18.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Babaoglu, O. (1987). On the reliability of consensus-based fault-tolerant distributed computing systems. ACM Trans. on Computer Systems 5, 394-416.
Barany, I. (1992). Fair distribution protocols or how the players replace fortune. Mathematics of Operations Research 17, 327-340.
Battigalli, P. and G. Bonanno (1999). Recent results on belief, knowledge and the epistemic foundations of game theory. Research in Economics 53(2), 149-225.
Ben-Or, M. (1983). Another advantage of free choice: completely asynchronous agreement protocols. In Proc. 2nd ACM Symp. on Principles of Distributed Computing, pp. 27-30.
Ben-Porath, E. (2003). Cheap talk in games with incomplete information. Journal of Economic Theory 108(1), 45-71.
Blum, A. and Y. Mansour (2007). Learning, regret minimization, and equilibria. In N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani (Eds.), Algorithmic Game Theory. Cambridge, U.K.: Cambridge University Press.
Blum, B., C. R. Shelton, and D. Koller (2003). A continuation method for Nash equilibria in structured games. In Proc. Eighteenth International Joint Conference on Artificial Intelligence (IJCAI '03), pp. 757-764.
Brafman, R. I. and M. Tennenholtz (2002). R-MAX: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3, 213-231.
Chen, X. and X. Deng (2006). Settling the complexity of 2-player Nash equilibrium. In Proc. 47th IEEE Symposium on Foundations of Computer Science.
Chor, B. and C. Dwork (1989). Randomization in Byzantine agreement. In Advances in Computing Research 5: Randomness and Computation, pp. 443-497. JAI Press.
Chu, F. and J. Y. Halpern (2001). A decision-theoretic approach to reliable message delivery. Distributed Computing 14, 359-389.
Clarke, E. H. (1971). Multipart pricing of public goods. Public Choice 11, 17-33.
Conitzer, V., J. Lang, and T. Sandholm (2003). How many candidates are needed to make elections hard to manipulate. In Theoretical Aspects of Rationality and Knowledge: Proc. Ninth Conference (TARK 2003), pp. 201-214.
Conitzer, V. and T. Sandholm (2003). Complexity results about Nash equilibria. In Proc. Eighteenth International Joint Conference on Artificial Intelligence (IJCAI '03), pp. 765-771.
Cramton, P., Y. Shoham, and R. Steinberg (Eds.) (2006). Combinatorial Auctions. Cambridge, Mass.: MIT Press.
Daskalakis, C., P. Goldberg, and C. H. Papadimitriou (2006). The complexity of computing a Nash equilibrium. In Proc. 38th ACM Symposium on Theory of Computing, pp. 71-78.
Dekel, E., B. Lipman, and A. Rustichini (1998). Standard state-space models preclude unawareness. Econometrica 66, 159-173.
Eliaz, K. (2002). Fault-tolerant implementation. Review of Economic Studies 69(3), 589-610.
Fabrikant, A., A. Luthra, E. Maneva, C. H. Papadimitriou, and S. Shenker (2003). On a network creation game. In Proc. 22nd ACM Symposium on Principles of Distributed Computing, pp. 347-351.
Fagin, R. and J. Y. Halpern (1988). Belief, awareness, and limited reasoning. Artificial Intelligence 34, 39-76.
Fagin, R., J. Y. Halpern, Y. Moses, and M. Y. Vardi (1995). Reasoning about Knowledge. Cambridge, Mass.: MIT Press. A revised paperback edition was published in 2003.
Feigenbaum, J., C. Papadimitriou, and S. Shenker (2000). Sharing the cost of multicast transmissions (preliminary version). In Proc. 32nd ACM Symposium on Theory of Computing, pp. 218-227.
Fischer, M. J. (1983). The consensus problem in unreliable distributed systems. In M. Karpinski (Ed.), Foundations of Computation Theory, Lecture Notes in Computer Science, Volume 185, pp. 127-140. Berlin/New York: Springer-Verlag.
Fischer, M. J., N. A. Lynch, and M. S. Paterson (1985). Impossibility of distributed consensus with one faulty processor. Journal of the ACM 32(2), 374-382.
Fudenberg, D. and D. Levine (1998). The Theory of Learning in Games. MIT Press.
Gibbard, A. (1973). Manipulation of voting schemes. Econometrica 41, 587-602.
Gilboa, I. and E. Zemel (1989). Nash and correlated equilibrium: some complexity considerations. Games and Economic Behavior 1, 80-93.
Goldreich, O., S. Micali, and A. Wigderson (1987). How to play any mental game. In Proc. 19th ACM Symp. on Theory of Computing, pp. 218-229.
Govindan, S. and R. Wilson (2003). A global Newton method to compute Nash equilibria. Journal of Economic Theory 110(1), 65-86.
Groves, T. (1973). Incentives in teams. Econometrica 41, 617-631.
Halldórsson, M. M., J. Y. Halpern, L. Li, and V. Mirrokni (2004). On spectrum sharing games. In Proc. 23rd ACM Symposium on Principles of Distributed Computing, pp. 107-114.
Halpern, J. Y. (2001). Alternative semantics for unawareness. Games and Economic Behavior 37, 321-339.
Halpern, J. Y. (2003). A computer scientist looks at game theory. Games and Economic Behavior 45(1), 114-132.
Halpern, J. Y. and L. C. Rêgo (2006). Reasoning about knowledge of unawareness. In Principles of Knowledge Representation and Reasoning: Proc. Tenth International Conference (KR '06), pp. 6-13. Full version available at arxiv.org/cs.LO/0603020.
Halpern, J. Y. and V. Teague (2004). Rational secret sharing and multiparty computation: extended abstract. In Proc. 36th ACM Symposium on Theory of Computing, pp. 623-632.
Heifetz, A., M. Meier, and B. Schipper (2006). Interactive unawareness. Journal of Economic Theory 130, 78-94.
Heller, Y. (2005). A minority-proof cheap-talk protocol. Unpublished manuscript.
Hopcroft, J. E. and J. D. Ullman (1979). Introduction to Automata Theory, Languages and Computation. New York: Addison-Wesley.
Izmalkov, S., S. Micali, and M. Lepinski (2005). Rational secure computation and ideal mechanism design. In Proc. 46th IEEE Symp. on Foundations of Computer Science, pp. 585-595.
Kearns, M. (2007). Graphical games. In N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani (Eds.), Algorithmic Game Theory. Cambridge, U.K.: Cambridge University Press.
Kearns, M., M. L. Littman, and S. P. Singh (2001). Graphical models for game theory. In Proc. Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI 2001), pp. 253-260.
Kearns, M. and S. P. Singh (1998). Near-optimal reinforcement learning in polynomial time. In Proc. 15th International Conference on Machine Learning, pp. 260-268.
Kearns, M. J. and M. K. Reiter (Eds.) (2005). ACM Conference on Electronic Commerce (EC '05). New York: ACM. See www.informatik.uni-trier.de/ ley/db/conf/sigecom/sigecom2005.html for contents.
Kempe, D., J. Kleinberg, and É. Tardos (2003). Maximizing the spread of influence through a social network. In Proc. Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137-146.
Kfir-Dahav, N. E., D. Monderer, and M. Tennenholtz (2000). Mechanism design for resource-bounded agents. In International Conference on Multiagent Systems, pp. 309-316.
Koller, D. and B. Milch (2001). Structured models for multiagent interactions. In Theoretical Aspects of Rationality and Knowledge: Proc. Eighth Conference (TARK 2001), pp. 233-248.
Koutsoupias, E. and C. H. Papadimitriou (1999). Worst-case equilibria. In Proc. 16th Conference on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science, Volume 1563, pp. 404-413. Berlin: Springer-Verlag.
Kushilevitz, E. and N. Nisan (1997). Communication Complexity. Cambridge, U.K.: Cambridge University Press.
La Mura, P. (2000). Game networks. In Proc. Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI 2000), pp. 335-342.
Lehmann, D., R. Müller, and T. Sandholm (2006). The winner determination problem. In P. Cramton, Y. Shoham, and R. Steinberg (Eds.), Combinatorial Auctions. Cambridge, Mass.: MIT Press.
Lemke, C. E. and J. J. T. Howson (1964). Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics 12, 413-423.
Lepinski, M., S. Micali, C. Peikert, and A. Shelat (2004). Completely fair SFE and coalition-safe cheap talk. In Proc. 23rd ACM Symp. on Principles of Distributed Computing, pp. 1-10.
Linial, N. (1994). Games computers play: game-theoretic aspects of computing. In R. J. Aumann and S. Hart (Eds.), Handbook of Game Theory, Volume II, pp. 1340-1395. Amsterdam: North-Holland.
Modica, S. and A. Rustichini (1994). Awareness and partitional information structures. Theory and Decision 37, 107-124.
Modica, S. and A. Rustichini (1999). Unawareness and partitional information structures. Games and Economic Behavior 27(2), 265-298.
Monderer, D. and M. Tennenholtz (1999a). Distributed games. Games and Economic Behavior 28, 55-72.
Monderer, D. and M. Tennenholtz (1999b). Distributed games: from mechanisms to protocols. In Proc. Sixteenth National Conference on Artificial Intelligence (AAAI '99), pp. 32-37.
Moreno, D. and J. Wooders (1996). Coalition-proof equilibrium. Games and Economic Behavior 17(1), 80-112.
Nash, J. (1950). Equilibrium points in n-person games. Proc. National Academy of Sciences 36, 48-49.
Neyman, A. (1985). Bounded complexity justifies cooperation in finitely repeated prisoner's dilemma. Economic Letters 19, 227-229.
Nisan, N. (2006). Bidding languages for combinatorial auctions. In Combinatorial Auctions. Cambridge, Mass.: MIT Press.
Nisan, N. and A. Ronen (2000). Computationally feasible VCG mechanisms. In Second ACM Conference on Electronic Commerce (EC '00), pp. 242-252.
Nisan, N. and A. Ronen (2001). Algorithmic mechanism design. Games and Economic Behavior 35, 166-196.
Nisan, N., T. Roughgarden, É. Tardos, and V. Vazirani (Eds.) (2007). Algorithmic Game Theory. Cambridge, U.K.: Cambridge University Press.
Nisan, N. and I. Segal (2005). Exponential communication inefficiency of demand queries. In Theoretical Aspects of Rationality and Knowledge: Proc. Tenth Conference (TARK 2005), pp. 158-164.
Papadimitriou, C. H. (1994a). Computational Complexity. Reading, Mass.: Addison Wesley.
Papadimitriou, C. H. (1994b). On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences 48(3), 498-532.
Papadimitriou, C. H. (2001). Algorithms, games, and the internet. In Proc. 33rd ACM Symposium on Theory of Computing, pp. 749-753.
Papadimitriou, C. H. (2005). Computing correlated equilibria in multiplayer games. In Proc. 37th ACM Symposium on Theory of Computing, pp. 49-56.
Papadimitriou, C. H. (2007). The complexity of finding Nash equilibria. In N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani (Eds.), Algorithmic Game Theory. Cambridge, U.K.: Cambridge University Press.
Papadimitriou, C. H. and T. Roughgarden (2005). Computing equilibria in multi-player games. In Proc. 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 82-91.
Papadimitriou, C. H. and M. Yannakakis (1994). On complexity as bounded rationality. In Proc. 26th ACM Symposium on Theory of Computing, pp. 726-733.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. San Francisco: Morgan Kaufmann.
Pease, M., R. Shostak, and L. Lamport (1980). Reaching agreement in the presence of faults. Journal of the ACM 27(2), 228-234.
Porter, R., E. Nudelman, and Y. Shoham (2004). Simple search methods for finding a Nash equilibrium. In Proc. Twenty-First National Conference on Artificial Intelligence (AAAI '04), pp. 664-669.
Rabin, M. O. (1983). Randomized Byzantine generals. In Proc. 24th IEEE Symp. on Foundations of Computer Science, pp. 403-409.
Roughgarden, T. and É. Tardos (2002). How bad is selfish routing? Journal of the ACM 49(2), 236-259.
Rubinstein, A. (1986). Finite automata play the repeated prisoner's dilemma. Journal of Economic Theory 39, 83-96.
Rubinstein, A. (1998). Modeling Bounded Rationality. Cambridge, Mass.: MIT Press.
Sandholm, T. and M. Yakoo (Eds.) (2003). The Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2003). ACM. See http://www.informatik.uni-trier.de/ ley/db/conf/atal/aamas2003.html for the contents.
Satterthwaite, M. (1975). Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory 10, 187-217.
Shamir, A., R. L. Rivest, and L. Adelman (1981). Mental poker. In D. A. Klarner (Ed.), The Mathematical Gardner, pp. 37-43. Boston, Mass.: Prindle, Weber, and Schmidt.
Tennenholtz, M. (2002). Competitive safety analysis: robust decision-making in multi-agent systems. Journal of A.I. Research 17, 363-378.
Urbano, A. and J. E. Vila (2002). Computational complexity and communication: coordination in two-player games. Econometrica 70(5), 1893-1927.
Urbano, A. and J. E. Vila (2004). Computationally restricted unmediated talk under incomplete information. Economic Theory 23(2), 283-320.
Vetta, A. (2002). Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions. In Proc. 43rd IEEE Symposium on Foundations of Computer Science, pp. 416-425.
Vickrey, W. (1961). Counterspeculation, auctions and competitive sealed tenders. Journal of Finance 16, 8-37.
Yao, A. (1982). Protocols for secure computation (extended abstract). In Proc. 23rd IEEE Symp. on Foundations of Computer Science, pp. 160-164.
[]
CLNet: Complex Input Lightweight Neural Network designed for Massive MIMO CSI Feedback

Sijie Ji and Mo Li, Fellow, IEEE

DOI: 10.1109/lwc.2021.3100493

Abstract: Unleashing the full potential of massive MIMO in FDD mode by reducing the overhead of CSI feedback has recently garnered attention. Numerous deep-learning-based massive MIMO CSI feedback approaches have demonstrated their efficiency and potential. However, most existing methods improve accuracy at the cost of computational complexity, and their accuracy decreases significantly as the CSI compression rate increases. This paper presents CLNet, a novel neural network tailored for the CSI feedback problem based on the intrinsic properties of CSI. CLNet proposes a forged complex-valued input layer to process signals and utilizes an attention mechanism to enhance the performance of the network. Experiment results show that CLNet outperforms the state-of-the-art method with an average accuracy improvement of 5.41% in both outdoor and indoor scenarios, with on average 24.1% less computational overhead. Code is available at GitHub.¹
arXiv:2102.07507v3 [cs.IT] 28 Apr 2023

Index Terms-Massive MIMO, FDD, CSI feedback, deep learning, complex neural network, attention mechanism, lightweight model.

I. INTRODUCTION

The massive multiple-input multiple-output (MIMO) technology is considered one of the core technologies of the next-generation communication system, e.g., 5G. By equipping a large number of antennas, the base station (BS) can sufficiently utilize spatial diversity to improve the channel capacity. In particular, by enabling beamforming, a 5G BS can concentrate signal energy on a specific user equipment (UE) to achieve a higher signal-to-noise ratio (SNR), less interference leakage and, hence, higher channel capacity. However, beamforming can be conducted by the BS only when it has the channel state information (CSI) of the downlink at hand [1].
In the frequency division duplexing (FDD) mode that most contemporary cellular systems operate in, channel reciprocity does not exist. Therefore, the UE has to explicitly feed back the downlink CSI to the BS, and the pilot-aided training overhead grows quadratically with the number of transmitting antennas, which might overturn the benefit of massive MIMO itself [2]. Thus, CSI compression is needed before the feedback to reduce the overhead. Traditional compressive sensing (CS) based methods rely heavily on channel sparsity and are limited by their efficiency in iteratively reconstructing the signals. Their performance is highly dependent on the wireless channel [3], and thus they are not a desirable approach considering the diversified use cases of 5G networks.

The recent rapid development of deep learning (DL) technologies provides another possible solution for efficient CSI feedback in FDD massive MIMO systems. Instead of relying on sparsity, the DL approaches utilize the auto-encoder framework [4]. The encoder learns a map to a low-dimensional compressed space and the decoder reconstructs the original data in a single run, without labeled data. This naturally overcomes the limits of CS-based approaches in channel sparsity and operation efficiency. The first DL-based method, CsiNet [5], explored and demonstrated the efficiency of deep learning in CSI feedback. CsiNet significantly outperforms the traditional CS-based methods (LASSO, BM3D-AMP and TVAL3) under various compression rates. Based on CsiNet, most of the subsequent DL-based methods utilize more powerful DL building blocks to achieve better performance at the cost of computational overhead.

(This work is supported by Singapore MOE Tier 2 under grant T2EP20220-0011. Sijie Ji and Mo Li are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. Emails: [email protected], [email protected]. ¹https://github.com/SIJIEJI/CLNet)
CsiNet-LSTM [6] and Attention-CSI [7] introduced LSTMs, which significantly increase the computational overhead. CsiNet+ [8] comprehensively surveyed recent DL-based methods and proposed a parallel multiple-rate compression framework; its computational overhead is approximately 7x that of the original CsiNet [9]. Recently, some methods have started to reduce the complexity, for example JCNet [10] and BCsiNet [11]; however, their performance is also reduced. So far, only CRNet [12] has outperformed CsiNet without increasing the computational complexity. Moreover, CSI and signals in general are represented as complex envelopes, which carry physical meaning that is overlooked by previous works; only [13] considered this problem, by adopting a complex-valued three-dimensional convolutional neural network [14]. However, as the complex kernel is hard to optimize through back-propagation, the network is hard to train and the computational complexity inevitably increases greatly. Considering the limited computational resources and storage at the UE side, this letter proposes CLNet, a tailored DL network for the CSI feedback problem that can cope with complex numbers yet remain lightweight. CLNet outperforms CRNet with 5.41% higher accuracy and 24.1% less complexity on average. The main contributions are summarized as follows:

• CLNet proposes a simple yet effective way to organically integrate the real and imaginary parts of the CSI into real-valued neural network models.
• CLNet adopts a spatial attention mechanism to let the DL model focus on the more informative, clustered signal parts.

II. SYSTEM MODEL AND PRELIMINARY

Consider a single-cell FDD system using massive MIMO with N_t antennas at the BS, where N_t ≫ 1, and N_r antennas at the UE side (N_r = 1 for simplicity).
The received signal y ∈ C^{N_c×1} can be expressed as

    y = Ax + z,                                                        (1)

where N_c indicates the number of subcarriers, x ∈ C^{N_c×1} contains the transmitted symbols, and z ∈ C^{N_c×1} is complex additive Gaussian noise. A can be expressed as diag(h_1^H p_1, ..., h_{N_c}^H p_{N_c}), where h_i ∈ C^{N_t×1} and p_i ∈ C^{N_t×1}, i ∈ {1, ..., N_c}, are the downlink channel coefficients and the beamforming precoding vector for subcarrier i, respectively. In order to derive the beamforming precoding vector p_i, the BS needs knowledge of the corresponding channel coefficients h_i, which are fed back by the UE.

Suppose the downlink channel matrix is H = [h_1 ... h_{N_c}]^H, which contains N_c N_t elements. The number of parameters that need to be fed back is then 2 N_c N_t, counting the real and imaginary parts of the CSI, which is proportional to the number of antennas. The channel matrix H is often sparse in the angular-delay domain: by a 2D discrete Fourier transform (DFT), the spatial-frequency-domain CSI can be converted into the angular-delay domain,

    H' = F_c H F_t^H,                                                  (2)

where F_c and F_t are DFT matrices of dimension N_c × N_c and N_t × N_t, respectively. In the angular-delay-domain channel matrix H', every element corresponds to a certain path delay and a certain angle of arrival (AoA). Only the first N_a rows of H' contain useful information; the remaining rows, which represent paths with larger propagation delays, are made up of near-zero values and can be omitted without much information loss. Let H_a denote the informative rows of H'. H_a is input into the UE's encoder to produce the codeword v according to a given compression ratio η:

    v = f_E(H_a, Θ_E),                                                 (3)

where f_E denotes the encoding process and Θ_E represents the parameter set of the encoder.
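The conversion of Eq. (2) and the truncation to H_a can be sketched in numpy. The sizes and the random channel below are illustrative stand-ins, not the paper's dataset; with unitary DFT matrices the transform itself is lossless, and only the row truncation discards information:

```python
import numpy as np

def dft_matrix(n):
    """Unitary n x n DFT matrix."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

N_c, N_t, N_a = 256, 32, 32            # illustrative dimensions
rng = np.random.default_rng(0)
H = rng.standard_normal((N_c, N_t)) + 1j * rng.standard_normal((N_c, N_t))

F_c, F_t = dft_matrix(N_c), dft_matrix(N_t)
H_prime = F_c @ H @ F_t.conj().T       # Eq. (2): angular-delay domain
H_a = H_prime[:N_a, :]                 # keep only the first N_a (informative) rows

# Before truncation the transform is unitary, hence perfectly invertible:
H_back = F_c.conj().T @ H_prime @ F_t
assert np.allclose(H_back, H)
```

In the real dataset the discarded rows are near-zero, so the truncation loses little; for this random stand-in channel it would not, which is exactly why the sparsity assumption matters.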
Once the BS receives the codeword v, the decoder reconstructs the channel by

    Ĥ_a = f_D(v, Θ_D),                                                 (4)

where f_D denotes the decoding process and Θ_D represents the parameter set of the decoder. The entire feedback process can therefore be expressed as

    Ĥ_a = f_D(f_E(H_a, Θ_E), Θ_D).                                     (5)

The goal of CLNet is to minimize the difference between the original H_a and the reconstructed Ĥ_a, which can be expressed formally as finding the encoder and decoder parameter sets satisfying

    (Θ̂_E, Θ̂_D) = arg min_{Θ_E, Θ_D} ‖H_a − f_D(f_E(H_a, Θ_E), Θ_D)‖²₂.   (6)

III. CLNET DESIGN

This section presents the design of CLNet and its key components. Figure 1 depicts the overall architecture, in which traditional convolution blocks are omitted for simplicity. Overall, CLNet is an encoder-decoder framework with four main building blocks tailored for the CSI feedback problem.

The performance of a CSI feedback scheme depends highly on the compression part, the encoder: the less information the compression loses, the higher the decompression accuracy that can be obtained. Due to the limited computing power and storage space of the UE, deepening the encoder network is not practical. Therefore, CLNet leverages the physical characteristics of CSI to achieve a lightweight yet informative encoder through two tailored blocks. First, the CSI is a channel frequency response with complex values that describe the channel coefficients of different signal paths. Previous DL-based CSI feedback methods treat the real and imaginary parts of the CSI separately; instead, the input CSI in CLNet first goes through a forged complex-valued input layer that embeds the real and imaginary parts together to preserve the physical information of the CSI (Section III-A). Second, different signal paths exhibit cluster effects of different resolutions in the angular-delay domain, corresponding to different angles of arrival and different path delays.
Thus, we introduce a CBAM block [15] that serves as spatial-wise attention to force the neural network to focus on those clusters and suppress the unnecessary parts (Section III-B). Since the encoder becomes more powerful, the decoder can be correspondingly more lightweight; CLNet therefore modifies the CRBlocks [12] in the decoder by reducing the filter size from 1×9 to 1×3. To further reduce the computational cost, CLNet adopts the hard-Sigmoid activation function, which is more hardware-friendly than the conventional Sigmoid activation function (Section III-C).

A. Forged Complex-valued Input

The CSI consists of complex-valued channel coefficients:

    H(t) = Σ_{k=1}^{N} a_k(t) e^{−jθ_k(t)},                            (7)

where N is the number of signal paths, and a_k(t) and θ_k(t) indicate the signal attenuation and propagation phase rotation of the k-th path at time t, respectively. The BS relies on the physical meaning of the CSI to conduct beamforming: the norm of the real and imaginary parts describes the channel's attenuation of the signal, and the ratio of the real to the imaginary part describes the channel's phase rotation of the signal. A typical deep learning neural network, however, is designed with real-valued inputs, operations, and representations. Existing DL-based CSI feedback methods simply separate the real and imaginary parts of the complex-valued CSI into two independent channels of an image as the neural network input, which may destroy the original physical property of each complex-valued channel coefficient. Specifically, as Figure 2(a) depicts, a conventional 3×3 kernel entangles the real and imaginary parts of neighboring elements in H_a; as a result, nine complex CSI coefficients are interpolated into one synthesized value. Mathematically, let F_tr : H_a → I be this convolutional transformation.
Here, H_a ∈ R^{N_a×N_a×2} is a 3D tensor, extended from its 2D version by an additional dimension that separately expresses the real and imaginary parts, and I ∈ R^{N_a×N_a×C}, where C indicates the number of convolutional filters applied to learn different weighted representations. The output of F_tr is I = [i_1, i_2, ..., i_C], i_c ∈ R^{N_a×N_a}. Let a_n + b_n j denote a CSI coefficient and w_n the learnable weight of a convolutional filter f. The 3×3 convolution operation is essentially the sum of two multiplications:

    i_1(1,1) = [a_1, ..., a_9]·[w_1, ..., w_9] + [b_1, ..., b_9]·[w_1, ..., w_9].   (8)

In this way, the real and imaginary parts of the same complex-valued signal are decoupled and different CSI entries are mixed, losing the original physical information carried by the channel matrix. The insight of CLNet is that, by utilizing a 1×1 point-wise convolution, the real and imaginary parts of each complex-valued coefficient can be explicitly embedded together:

    i_1(1,1) = [a_1]·[w_1] + [b_1]·[w_1],                              (9)

where the ratio between a and b is preserved, maintaining the phase information, while the amplitude of the signal is scaled by w. Since the CNN shares the weight w, the amplitude of the whole CSI matrix is scaled by the same w, so the relative amplitude across subchannels is also preserved.

The output i_c is essentially a weighted representation of the original H_a, and different filters learn different weighted representations, among which some may be more important than others. Based on this, CLNet further adopts the SE block [16], which serves as channel-wise attention in the forged complex-valued input layer. It helps the neural network model the relationships among these weighted representations so as to focus on the important features and suppress the unnecessary ones. A diagram of the SE block is shown in Figure 2(b) with annotation F_se. The output I first goes through the F_sq transformation, global average pooling, to obtain the channel-wise statistics descriptor z ∈ R^C.
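The phase-preservation claim of Eq. (9), and the mixing of Eq. (8), can be checked numerically. This is a small numpy sketch with an illustrative random channel, not the network itself:

```python
import numpy as np

rng = np.random.default_rng(1)
H_a = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
w = 0.7                                    # one shared 1x1 filter weight

# Forged complex-valued input: the same weight hits Re and Im (Eq. (9)),
# which is the same as multiplying each complex coefficient by w.
out = w * H_a.real + 1j * (w * H_a.imag)

# Phase (the a/b ratio) is untouched; amplitude is uniformly scaled by w.
assert np.allclose(np.angle(out), np.angle(H_a))
assert np.allclose(np.abs(out), w * np.abs(H_a))

# A 3x3 kernel instead sums nine neighbouring coefficients (Eq. (8)):
# the result is one real number carrying no single coefficient's phase.
k = rng.standard_normal((3, 3))
patch = H_a[:3, :3]
mixed = np.sum(k * patch.real) + np.sum(k * patch.imag)
```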
The squeeze transformation F_sq expands the neural network's receptive field to the whole angular-delay domain to obtain global statistical information, compensating for the insufficient local receptive field of the 1×1 convolution used in the first step of the forged complex-valued input layer. After that, the channel descriptor z goes through the excitation transformation F_ex, a gated layer with sigmoid activation, to learn the nonlinear interactions as well as the non-mutually-exclusive relationships between channels:

    s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z)),                     (10)

where δ is the ReLU function, W_1 ∈ R^{(C/2)×C} and W_2 ∈ R^{C×(C/2)}. F_ex explicitly models the inter-channel dependencies based on z and obtains the calibrated s, an attention vector that summarizes the characteristics of all C channels, including intra-channel and inter-channel dependencies. Before being fed into the next layer, each channel of I is scaled by the corresponding attention value,

    Ĩ_{:,:,i} = F_scale(s, I) = s_i I_{:,:,i},  i ∈ {1, 2, ..., C},    (11)

such that Ĩ ∈ R^{N_a×N_a×C} is the final output of the forged complex-valued input layer, which preserves the physical information of the CSI while capturing dynamics through the channel-wise attention mechanism.

B. Attention Mechanism for Informative Encoder

In the angular-delay domain, the channel coefficients exhibit clusters of different resolutions, corresponding to the distinguishable paths that arrive with specific delays and AoAs. To pay more attention to such clusters, CLNet employs a CBAM block [15] that serves as spatial-wise attention to distinguish them with weights in the spatial domain, as Figure 3 illustrates. Based on the cluster effect in the angular-delay domain, the spatial-wise attention uses generated spatial statistical descriptors as the basis for assigning weights, forcing the network to focus more on the distinguishable propagation paths.
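The channel-attention computation of Eqs. (10)-(11) can be written in a few lines of numpy. W1 and W2 are random stand-ins for learned weights, and C = 8 is an illustrative channel count; the real layer learns these parameters end-to-end:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(I, W1, W2):
    """Squeeze (global average pool), excite (FC-ReLU-FC-sigmoid), then scale."""
    z = I.mean(axis=(0, 1))                   # F_sq: channel descriptor, shape (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0)) # F_ex, Eq. (10)
    return I * s                              # F_scale, Eq. (11): broadcast over channels

rng = np.random.default_rng(2)
C = 8
I = rng.standard_normal((32, 32, C))
W1 = rng.standard_normal((C // 2, C))         # (C/2) x C, reduction ratio 2
W2 = rng.standard_normal((C, C // 2))         # C x (C/2)
out = se_block(I, W1, W2)
assert out.shape == I.shape
```

Because every attention value s_i lies in (0, 1), each channel is only ever down-weighted, never amplified, which is the "suppress the unnecessary ones" behaviour described above.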
First, two pooling operations, average-pooling and max-pooling, are applied across the channel dimension C of the input F_i to generate two 2D feature maps, F_avg ∈ R^{N_a×N_a×1} and F_max ∈ R^{N_a×N_a×1}. CLNet concatenates the two feature maps into a compressed spatial feature descriptor F_dsc ∈ R^{N_a×N_a×2} and convolves it with a standard convolution layer to produce a 2D spatial attention mask F_mask ∈ R^{N_a×N_a×1}. The mask is activated by a Sigmoid and then multiplied with the original feature maps F_i to obtain the spatially attended output F_o:

    F_o = CBAM(F_i) = F_i ⊗ σ(f_c([AvgPool(F_i); MaxPool(F_i)]))
        = F_i ⊗ σ(f_c([F_avg; F_max])).                                (12)

With spatial-wise attention, CLNet focuses the neural network on the more informative signal propagation paths in the angular-delay domain.

C. Reduction of Computational Cost

The often-used Sigmoid activation function contains an exponential operation:

    σ(x) = 1 / (1 + e^{−x}) = e^x / (e^x + 1).                         (13)

To reduce the computation time, CLNet replaces the Sigmoid with its piece-wise linear analogue, the hard-Sigmoid, denoted hσ [17]:

    hσ(x) = min(max(x + 3, 0), 6) / 6.                                 (14)

Fig. 4: Comparison between Sigmoid and hard-Sigmoid functions.

Figure 4 compares the excitation curves of the hard-Sigmoid and Sigmoid functions. The hard-Sigmoid induces no discernible degradation in accuracy but benefits from the computational advantage of entailing no exponential calculations. In practice, the hard-Sigmoid fits most software and hardware frameworks and can mitigate the potential numerical quantization loss introduced by different hardware.

IV. EVALUATION

This section presents the detailed experiment settings and a comparison with the state-of-the-art (SOTA) DL-based CSI feedback approaches, in terms of accuracy and computational overhead.
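As a quick numerical check of the Section III-C substitution, the hard-Sigmoid of Eq. (14) stays within about 0.07 of the Sigmoid of Eq. (13) over the whole real line, while needing no exponential:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # Eq. (13)

def hard_sigmoid(x):
    # Eq. (14): piece-wise linear, no exponential
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

x = np.linspace(-8.0, 8.0, 1001)
assert hard_sigmoid(0.0) == 0.5 == sigmoid(0.0)
assert hard_sigmoid(3.0) == 1.0 and hard_sigmoid(-3.0) == 0.0
# The two curves stay close everywhere:
assert np.max(np.abs(hard_sigmoid(x) - sigmoid(x))) < 0.1
```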
1) Data Generation: To ensure a fair performance comparison, we use the same dataset as provided in the first work on DL-based massive MIMO CSI feedback [5], which is also used in later studies on this problem [6], [9], [8], [12], [7]. The generated CSI matrices are converted to the angular-delay domain, H_a ∈ R^{32×32×2}, by 2D-DFT. The 150,000 independently generated CSI samples are split into three parts: 100,000 for training, 30,000 for validation, and 20,000 for testing.

2) Training Scheme and Evaluation Metric: The normalized mean square error (NMSE) between the original H_a and the reconstructed Ĥ_a is used to evaluate the network accuracy:

    NMSE = E{‖H_a − Ĥ_a‖²₂ / ‖H_a‖²₂}.                                 (15)

The complexity is measured in FLOPs (the number of floating-point operations). The model was trained with a batch size of 200 and 8 workers on a single NVIDIA 2080Ti GPU. The number of epochs is set to 1000, as recommended in previous work [12], [8]. To further ensure fairness, we fix the random seed.

3) CLNet Overall Performance: Table I shows the overall performance comparison among the proposed CLNet and related CSI feedback networks. Regarding complexity, the LSTM-based networks (CSINet+ and Attn-CSI) generally require approximately five to seven times more computational resources than the CNN-based networks (CSINet, CRNet² and CLNet). Furthermore, because an LSTM relies on the previous output as the input of its hidden layer and cannot share parameters for parallel computation, it is difficult to reduce its complexity even when the compression rate increases. As seen from Table I, CLNet is the lightest among all these networks. Compared with the SOTA CRNet, CLNet significantly reduces the computational complexity, with 24.1% fewer FLOPs on average; the FLOPs of CLNet are 18.00%, 22.35%, 25.20%, 26.50% and 28.36% less than those of CRNet at compression ratios η of 1/64, 1/32, 1/16, 1/8 and 1/4, respectively.
As the compression ratio increases, the reduction in computational complexity grows. Turning to accuracy, in Table I the best results among the lightweight networks are shown in bold and the best results among all networks in italics. The results show that CLNet consistently outperforms the other lightweight networks at all compression ratios in both indoor and outdoor scenarios, with a 5.41% overall average improvement over the SOTA CRNet³. In indoor scenarios, CLNet obtains an average performance increase of 6.61%, with the largest increase of 21.00% at compression ratio η = 1/4. In outdoor scenarios, the average NMSE improvement is 4.21%, with the largest increase of 10.44% at η = 1/32. Compared with the heavyweight networks, CLNet still achieves the best results at a compression ratio of 1/4, outperforming the second-place CSINet+ by 6.54% and 3.87% in the indoor and outdoor scenarios, respectively. CLNet also achieves the best indoor result at a compression ratio of 1/64.

4) Ablation Study: Considering the limited interpretability of deep neural networks, we further conduct an ablation study to better quantify the gains of the proposed forged complex-valued input layer and spatial-attention mechanism. The number of epochs for the ablation studies is set to 500, in indoor scenarios; the remaining settings are the same as discussed in §IV-1) and 2). The baseline is CRNet with conventional convolution.

(²Note that the FLOPs reported in the CRNet paper were corrected by [13]. ³We reproduce CRNet following the open-source code at https://github.com/Kylin9511/CRNet; the higher performance reported in their paper comes from training for 2500 epochs.)
As Table II shows, simply changing the first layer from a conventional convolution to a 1×1 convolution, as in the forged complex-valued input layer, makes the accuracy surpass the baseline at all compression ratios with an average improvement of 10.964%, which demonstrates the efficacy of appropriately preserving the complex notation. Adding the SE block slightly improves the accuracy further, although there is no improvement at η = 1/8. The last two columns show that the spatial attention alone slightly improves the accuracy at low compression rates; when combined with the SE block, the accuracy is further improved by 3.058% on average.

5) Encoder Complexity: Table III reveals that the CLNet encoder is actually slightly heavier than that of CRNet. However, the BS may need to execute several different models at the same time, so a relatively light decoder is also beneficial. In terms of storage space, CLNet and CRNet are roughly the same.

V. CONCLUSION

This article studies the CSI feedback problem for massive MIMO in FDD mode, a key technology of 5G communication systems. Based on an understanding of the physical properties of CSI data, a novel customized deep learning framework, CLNet, is proposed. The forged complex-valued input layer preserves the amplitude and phase information of the signal and is enhanced with attention mechanisms. The hard-Sigmoid function is adopted to eliminate exponential calculations. Overall, CLNet achieves 5.41% higher accuracy than the state-of-the-art CRNet with 24.10% less computational overhead.

Fig. 1: The encoder and decoder architecture of CLNet.
Fig. 2: Diagrammatic comparison of the conventional convolution and the CLNet forged complex-valued input layer.
Fig. 3: Operation illustration of the spatial-wise attention of CLNet.

TABLE I: NMSE (dB) and complexity comparison between the CSI feedback networks and the proposed CLNet. Each cell gives FLOPs and NMSE (indoor/outdoor); "n/r" means the performance is not reported.

Method        η=1/4                  η=1/8                  η=1/16                 η=1/32                 η=1/64
CLNet         4.05M, -29.16/-12.88   3.01M, -15.60/-8.29    2.48M, -11.15/-5.56    2.22M, -8.95/-3.49     2.09M, -6.34/-2.19
CRNet         5.12M, -24.10/-12.57   4.07M, -15.04/-7.94    3.55M, -10.52/-5.36    3.29M, -8.90/-3.16     3.16M, -6.23/-2.19
CSINet [5]    5.41M, -17.36/-8.75    4.37M, -12.70/-7.61    3.84M, -8.65/-4.51     3.58M, -6.24/-2.81     3.45M, -5.84/-1.93
CSINet+ [8]   24.57M, -27.37/-12.40  23.52M, -18.29/-8.72   23.00M, -14.14/-5.73   22.74M, -10.43/-3.40   22.61M, n/r
Attn-CSI [7]  24.72M, -20.29/-10.43  22.62M, n/r            21.58M, -10.16/-6.11   21.05M, -8.58/-4.57    20.79M, -6.32/-3.27

TABLE II: NMSE (dB) comparison of the ablation study.

η      Baseline   1×1 Conv   1×1 Conv+SE   1×1 Conv+CBAM   1×1 Conv+SE+CBAM
1/4    -21.702    -27.694    -27.903       -28.142         -28.984
1/8    -13.037    -15.171    -15.167       -15.321         -15.487
1/16   -10.212    -11.013    -11.231       -10.684         -11.217
1/32   -8.443     -8.525     -8.732        -8.613          -8.885
1/64   -6.023     -6.145     -6.201        -6.086          -6.297

TABLE III: Detailed complexity of CRNet and CLNet.

                   Encoder at UE          Decoder at BS
η      Method      FLOPs (M)  #params     FLOPs (M)  #params
1/4    CLNet       1.34       1.049M      2.71       1.052M
       CRNet       1.20       1.049M      3.92       1.053M
1/64   CLNet       0.36       65.954K     1.73       69.210K
       CRNet       0.22       65.720K     2.94       70.386K

REFERENCES
[1] T. L. Marzetta, "Noncooperative cellular wireless with unlimited numbers of base station antennas," IEEE Transactions on Wireless Communications, vol. 9, no. 11, pp. 3590-3600, 2010.
[2] L. Lu, G. Y. Li, A. L. Swindlehurst, A. Ashikhmin, and R. Zhang, "An overview of massive MIMO: Benefits and challenges," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 5, pp. 742-758, 2014.
[3] P. Kyritsi, D. C. Cox, R. A. Valenzuela, and P. W. Wolniansky, "Correlation analysis based on MIMO channel measurements in an indoor environment," IEEE Journal on Selected Areas in Communications, vol. 21, no. 5, pp. 713-720, 2003.
[4] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, 2006.
[5] C.-K. Wen, W.-T. Shih, and S. Jin, "Deep learning for massive MIMO CSI feedback," IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 748-751, 2018.
[6] T. Wang, C.-K. Wen, S. Jin, and G. Y. Li, "Deep learning-based CSI feedback approach for time-varying massive MIMO channels," IEEE Wireless Communications Letters, vol. 8, no. 2, pp. 416-419, 2018.
[7] Q. Cai, C. Dong, and K. Niu, "Attention model for massive MIMO CSI compression feedback and recovery," in 2019 IEEE Wireless Communications and Networking Conference (WCNC), 2019, pp. 1-5.
[8] J. Guo, C.-K. Wen, S. Jin, and G. Y. Li, "Convolutional neural network-based multiple-rate compressive sensing for massive MIMO CSI feedback: Design, simulation, and analysis," IEEE Transactions on Wireless Communications, vol. 19, no. 4, pp. 2827-2840, 2020.
[9] Z. Lu, X. Zhang, H. He, J. Wang, and J. Song, "Binarized aggregated network with quantization: Flexible deep learning deployment for CSI feedback in massive MIMO system," arXiv preprint arXiv:2105.00354, 2021.
[10] C. Lu, W. Xu, S. Jin, and K. Wang, "Bit-level optimized neural network for multi-antenna channel quantization," IEEE Wireless Communications Letters, vol. 9, no. 1, pp. 87-90, 2019.
[11] Z. Lu, J. Wang, and J. Song, "Binary neural network aided CSI feedback in massive MIMO system," IEEE Wireless Communications Letters, 2021.
[12] Z. Lu, J. Wang, and J. Song, "Multi-resolution CSI feedback with deep learning in massive MIMO system," 2020, pp. 1-6.
[13] Y. Zhang, J. Wang, J. Sun, B. Adebisi, H. Gacanin, G. Gui, and F. Adachi, "CV-3DCNN: Complex-valued deep learning for CSI prediction in FDD massive MIMO systems," IEEE Wireless Communications Letters, vol. 10, no. 2, pp. 266-270, 2020.
[14] C. Trabelsi, O. Bilaniuk, Y. Zhang, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal, "Deep complex networks," arXiv preprint arXiv:1705.09792, 2017.
[15] S. Woo, J. Park, J.-Y. Lee, and I. So Kweon, "CBAM: Convolutional block attention module," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
[16] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141.
[17] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan et al., "Searching for MobileNetV3," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 1314-1324.
Title: Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents

Authors: Rui Meng ([email protected]), Khushboo Thaker ([email protected]), Lei Zhang, Yue Dong, Xingdi Yuan, Tong Wang, Daqing He

Affiliations: School of Computing and Information, University of Pittsburgh; Mila, McGill University; Microsoft Research, Montréal

Venue: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing

DOI: 10.18653/v1/2021.acl-short.137
PDF: https://www.aclanthology.org/2021.acl-short.137.pdf
arXiv: 2106.00130

Abstract: Faceted summarization provides briefings of a document from different perspectives. Readers can quickly comprehend the main points of a long document with the help of a structured outline. However, little research has been conducted on this subject, partially due to the lack of large-scale faceted summarization datasets. In this study, we present FacetSum, a faceted summarization benchmark built on Emerald journal articles, covering a diverse range of domains. Different from traditional document-summary pairs, FacetSum provides multiple summaries, each targeted at specific sections of a long document, including the purpose, method, findings, and value. Analyses and empirical results on our dataset reveal the importance of bringing structure into summaries. We believe FacetSum will spur further advances in summarization research and foster the development of NLP systems that can leverage the structured information in both long texts and summaries.
Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents

Rui Meng, Khushboo Thaker, Lei Zhang, Yue Dong, Xingdi Yuan, Tong Wang, Daqing He
School of Computing and Information, University of Pittsburgh; Mila, McGill University; Microsoft Research, Montréal

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, August 1-6, 2021

Introduction

Text summarization is the task of condensing a long piece of text into a short summary without losing salient information. Research has shown that a well-structured summary can effectively facilitate comprehension (Hartley et al., 1996; Hartley and Sydes, 1997). A case in point is the structured abstract, which consists of multiple segments, each focusing on a specific facet of a scientific publication (Hartley, 2014), such as background, method, conclusions, etc. The structure therein can provide much additional clarity for improved comprehension and has long been adopted by databases and publishers such as MEDLINE and Emerald. Despite these evident benefits of structure, summaries are often framed as a linear, structure-less sequence of sentences in the flourishing array of summarization studies (Nallapati et al., 2017; See et al., 2017; Paulus et al., 2018; Grusky et al., 2018; Narayan et al., 2018; Sharma et al., 2019; Lu et al., 2020; Cachola et al., 2020).

[Figure 1 — example structured abstract from FacetSum:]
Title: Emotion in enterprise social media systems
Purpose: The purpose of this paper is to investigate enterprise social media systems and quantify gender and status influences on emotional content presented in these systems.
Method: Internal social media messages were collected from a global software company running an enterprise social media system. An indirect observatory test using Berlo's "source-message-channel-receiver" model served as a framework to evaluate sender, message, channel and receiver for each text. These texts were categorized by gender and status using text analytics with SAP SA to produce sentiment indications.
Findings: Results reveal women use positive language 2.1 times more than men. Senior managers express positive language 1.7 times more than non-managers, and feeling rules affect all genders and statuses, but not necessarily as predicted by theory. Other findings show that public messages contained less emotional content, and women expressed more positivity to lower-status colleagues. Men expressed more positivity to those in higher positions. Many gender and status stereotypes found in face-to-face studies are also present in digital enterprise social networks.
Value: This study offers a behavioral measurement approach free from validity issues found in self-reported surveys, direct observations and interviews. The collected data offered new perspectives on existing social theories within a new environment of computerized, enterprise social media.
Keywords: Social media, Gender, Communication, Computer-mediated

We postulate that a primary reason for this absence of structure lies in the lack of a high-quality, large-scale dataset with structured summaries. In fact, existing studies in faceted summarization (Huang et al., 2020; Tauchmann et al., 2018; Jaidka et al., 2016; Contractor et al., 2012; Kim et al., 2011; Jaidka et al., 2018; Stead et al., 2019) are often conducted with rather limited amounts of data that are grossly insufficient to meet today's ever-growing model capacity. We aim to address this issue by proposing the FacetSum dataset. It consists of 60,024 scientific articles collected from Emerald journals, each associated with a structured abstract that summarizes the article from distinct aspects including purpose, method, findings, and value. Scale-wise, we empirically show that the dataset is sufficient for training large-scale neural generation models such as BART (Lewis et al., 2020) for adequate generalization. In terms of quality, each structured abstract in FacetSum is provided by the original author(s) of the article, who are arguably in the best position to summarize their own work.
We also provide quantitative analyses and baseline performances on the dataset with mainstream models in Sections 2 and 3.

FacetSum for Faceted Summarization

The FacetSum dataset is sourced from journal articles published by Emerald Publishing [1] (Figure 1). Unlike many publishers, Emerald imposes explicit requirements that authors summarize their work from multiple aspects (Emerald, 2021): Purpose describes the motivation, objective, and relevance of the research; Method enumerates specific measures taken to reach the objective, such as experiment design, tools, methods, protocols, and datasets used in the study; Findings present major results such as answers to the research questions and confirmation of hypotheses; and Value highlights the work's value and originality [2]. Together, these facets give rise to a comprehensive and informative structure in the abstracts of the Emerald articles, and by extension, to FacetSum's unique ability to support faceted summarization.

General Statistics

We collect 60,532 publications from Emerald Publishing spanning 25 domains. From a summarization perspective, these differences imply that FacetSum may pose significantly increased modeling and computation challenges due to the increased lengths of both the source and the target. Moreover, the wide range of research domains (Figure 3, Appendix D) may also introduce much linguistic diversity w.r.t. vocabulary, style, and discourse. Therefore, compared to existing scientific publication datasets that focus only on specific academic disciplines (Cohan et al., 2018; Cachola et al., 2020), FacetSum can also be used to assess a model's robustness to domain shift and systematic generalization. To facilitate assessment of generalization, we reserve a dev set and a test set, each consisting of 6,000 randomly sampled data points; the remaining data are intended as the training set. We ensure that the domain distribution is consistent across all three subsets.
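A domain-consistent split of this kind can be sketched as follows. This is a minimal sketch under assumed field names (`records` as dicts with a `"domain"` key); the authors' released split should be used in practice rather than re-sampling.

```python
import random
from collections import defaultdict

def stratified_split(records, dev_size=6000, test_size=6000, seed=0):
    """Sample dev/test so that each domain's share matches its share in
    the whole corpus; the rest becomes the training set."""
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for rec in records:
        by_domain[rec["domain"]].append(rec)
    n = len(records)
    dev, test, train = [], [], []
    for items in by_domain.values():
        rng.shuffle(items)
        d = round(dev_size * len(items) / n)   # this domain's quota in dev
        t = round(test_size * len(items) / n)  # ... and in test
        dev.extend(items[:d])
        test.extend(items[d:d + t])
        train.extend(items[d + t:])
    return train, dev, test
```

Per-domain rounding can leave the subset sizes a few examples off the exact targets; a production split would redistribute the remainder.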
Besides, we intentionally leave out Open-Access papers as another test set, to facilitate researchers who do not have full Emerald access [3].

Structural Alignment

In this section, we focus our analysis on one of the defining features of FacetSum: its potential to support faceted summarization. Specifically, we investigate how the abstract structure (i.e., facets) aligns with the article structure. Given an abstract facet A and its corresponding article S, we quantify this alignment by

    S_A = { argmax_{s_i ∈ S} Rouge-1(s_i, a_j) : a_j ∈ A }    (1)

Semantically, S_A consists of the sentence indices in S that best align with each sentence in A.

Sentence-level Alignment

We first plot the tuples {(s_i, i/|S|) : i ∈ S_A}, where s_i is the i-th sentence in S, and |S| is the number of sentences in S. Intuitively, the plot density around position i/|S| entails the degree of alignment between the facet A and the article S at that position [4]. With 10,000 articles randomly sampled from FacetSum, Figure 2 exhibits distinct differences in the density distribution among the facets in FacetSum. For example, with A = Purpose, resemblance is clearly skewed towards the beginning of the articles, while Findings are mostly positioned towards the end; the Method distribution is noticeably more uniform than the others. These patterns align well with intuition, and are further exemplified by the accompanying density histograms.

[Figure 2: Oracle sentence distribution over a paper. X-axis: 10,000 papers sampled from FacetSum, sorted by full-text length from long to short; y-axis: normalized position in a paper. Each sub-figure's density histogram is shown on its right.]

Section-level Alignment

We now demonstrate how different abstract facets align with different sections in an article.
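Both the sentence-level and section-level measures build on Equation (1). A minimal sketch, using a simple unigram-overlap F1 as a stand-in for a full Rouge-1 implementation:

```python
from collections import Counter

def rouge1_f(cand_tokens, ref_tokens):
    """Unigram-overlap F1, a stand-in for Rouge-1."""
    if not cand_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(cand_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def oracle_alignment(doc_sents, facet_sents):
    """Equation (1): for each facet sentence a_j, collect the index of the
    document sentence s_i maximizing Rouge-1(s_i, a_j)."""
    doc_toks = [s.lower().split() for s in doc_sents]
    return {
        max(range(len(doc_sents)),
            key=lambda i, a=a: rouge1_f(doc_toks[i], a.lower().split()))
        for a in facet_sents
    }
```

The returned set of indices corresponds to S_A; dividing each index by the sentence count gives the normalized positions plotted in Figure 2.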
Following the conventional structure of scientific publications (Suppe, 1998; Rosenfeldt et al., 2000), we first classify sections into Introduction, Method, Result and Conclusion using keyword matching on the section titles [5]. Given a section S^i ⊆ S and an abstract facet A^j ⊆ A, we define the section-level alignment g(S^i, A^j) as Rouge-1(cat(S^i_{A^j}), cat(A^j)), where cat(·) denotes sentence concatenation and S^i_{A^j} is defined by Equation (1). Table 2 is populated by varying A^j and S^i across the rows and columns, respectively. Full denotes the full paper or abstract (concatenation of all facets). We also include the concatenation of introduction and conclusion (denoted I+C) as a possible value for S^i, due to its demonstrated effectiveness as a summary in prior work (Cachola et al., 2020). The larger numbers on the diagonal (in red) empirically confirm a strong alignment between FacetSum facets and their sectional counterparts in articles. We also observe a significant performance gap between using I+C and using the full paper as S^i. One possible reason is that the summaries in FacetSum (particularly Method and Findings) may contain more detailed information beyond the introduction and conclusion. This suggests that for some facets in FacetSum, simple tricks to condense full articles do not always work; models instead need to comprehend and retrieve relevant texts from full articles in a more sophisticated manner.

Experiments and Results

We use FacetSum to benchmark a variety of summarization models, from state-of-the-art supervised models to unsupervised and heuristics-based models. We also provide the scores of a sentence-level extractive oracle system (Nallapati et al., 2017). We report Rouge-L in this section and include Rouge-1/2 results in Appendix E.

Unsupervised Models vs. Heuristics

We report performances of unsupervised and heuristic summarization methods (see Table 3).
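The Rouge-L scores reported throughout are LCS-based F-measures. A self-contained sketch for intuition (not the official scorer, which also handles stemming and multi-sentence summaries):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f(candidate, reference):
    """Sentence-level Rouge-L F1 over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    m = lcs_len(c, r)
    if m == 0:
        return 0.0
    precision, recall = m / len(c), m / len(r)
    return 2 * precision * recall / (precision + recall)
```

Unlike Rouge-1/2, the LCS rewards in-order matches without requiring them to be contiguous, which is why it is the headline metric for fluent summaries.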
Tailoring to the unique task of generating summaries for a specific facet, we use only the section (defined in Section 2.2) corresponding to a facet as model input. Evaluation is also performed on the concatenation of all facets (column Full), which resembles the traditional research abstract. Lead-K/Tail-K are two heuristic models that extract the first/last k sentences from the source text. We observe that the heuristic models do not perform well on Full, where the unsupervised models achieve decent performance. Nevertheless, all models perform poorly on summarizing individual facets, and the unsupervised models fail to consistently outperform the simple heuristics. The inductive biases of those models may not be good indicators of summary sentences for a specific facet: a possible reason is that they are good at locating the overall important sentences of a document, but cannot differentiate the sentences belonging to each facet, even when we alleviate this by using the corresponding section as input.

Supervised Models

As the supervised baseline, we adopt the BART model (Lewis et al., 2020), which has recently achieved state-of-the-art performance on abstractive summarization tasks with scientific articles (Cachola et al., 2020). We propose two training strategies for the BART model, adapting it to handle the unique challenge of faceted summarization in FacetSum. In BART, we train the model to generate the concatenation of all facets, joined by special tokens that indicate the start of a specific facet (e.g., |PURPOSE| to indicate the start of the Purpose summary). During evaluation, the generated text is split into multiple facets based on the special tokens, and each facet is compared against the corresponding ground-truth summary. In BART-Facet, we train the model to generate one specific facet given the source text and an indicator specifying which facet to generate.
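As a concrete illustration, training pairs for the two strategies might be assembled as below. The |PURPOSE|-style delimiters follow the paper; the <FACET>-prefix format for BART-Facet is an assumption, not the authors' released code:

```python
# Facet order follows the paper: purpose, method, findings, value.
FACETS = ["PURPOSE", "METHOD", "FINDINGS", "VALUE"]

def make_bart_example(source, facet_summaries):
    """Strategy 1 (BART): one target with all facets, delimited by
    special tokens such as |PURPOSE| (token format as in the paper)."""
    target = " ".join(f"|{f}| {facet_summaries[f]}" for f in FACETS)
    return source, target

def make_bart_facet_examples(source, facet_summaries):
    """Strategy 2 (BART-Facet): one example per facet; a prepended tag
    (hypothetical <FACET> format) tells the model which facet to generate."""
    return [(f"<{f}> {source}", facet_summaries[f]) for f in FACETS]
```

At evaluation time, strategy 1 requires splitting the generated text on the facet tokens before scoring, while strategy 2 yields one shorter, facet-specific target per example.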
Inspired by CATTS (Cachola et al., 2020), we prepend section tags at the beginning of each training input to generate summaries for a particular facet (see implementation details in Appendix C).

Empirically, supervised models outperform unsupervised baselines by a large margin (Table 3). Comparing the two training strategies, BART-Facet outperforms BART significantly. While BART performs comparably on Purpose, its performance decreases drastically for subsequent facets, possibly due to current models' inadequacy with long targets: the model performs decently at the beginning of generation (≈40 on Purpose), where the dependency is relatively easy to handle, but output quality degrades quickly towards the end (≈5 on Value). With I+C as the source text, both training strategies exhibit much better results than using the full paper. This is opposite to the observation in Table 2, potentially due to a limitation of current NLG systems: the length of the source text has a crucial impact on model performance. Even with the much-extended positional embeddings in our models (10,000 tokens), we suspect other issues such as long-term dependencies may lead to this discrepancy, which warrants further investigation.

We introduce FacetSum to support research on faceted summarization, which targets summarizing scientific documents from multiple facets. We provide extensive analyses and results to investigate the characteristics of FacetSum. Our observations call for the development of models capable of handling very long documents and outputting controlled text. Specifically, we will consider exploring the following topics in future work: (1) incorporating methods for long-document processing, such as reducing input length by extracting key sentences (Pilault et al., 2020) or segments (Zhao et al., 2020); (2) examining the possibility of building a benchmark for systematic generalization (Bahdanau et al., 2018) with the domain categories; and (3) automatically structuring traditional abstracts (Huang et al., 2020) with FacetSum.

C Implementation Details

To make BART take the full text as input, we extend the positional embedding to 10,000 tokens. This is required to leverage the long paper texts in FacetSum, whose average length is about 6,000 words. Experiments with the unsupervised baselines are implemented with Sumy (Belica, 2021) and the official code of HipoRank; we tune the hyperparameters of HipoRank on the validation set. The BART experiments are fine-tuned using Fairseq (Ott et al., 2019) with a learning rate of 3e-5, a batch size of 1, a maximum of 10,000 tokens per batch, and an update frequency of 4. We fine-tune all models for 20,000 steps on a single NVIDIA Tesla V100 16GB and report the results of the last checkpoint; the small batch size is a consequence of the large input size. For inference, we use a beam size of 4 and a maximum length of 500/200 tokens for BART/BART-Facet, respectively.

D Domains Covered by FacetSum

In Figure 3, we show the distribution of domain categories in FacetSum.

E Full Results

In Table 6, we show the full results of the extractive oracle system (first row in Table 3). In Table 7, we provide the full results of all other models (heuristic, unsupervised, and supervised models in Table 3).

F Example of Outputs by BART and BART-Facet

Purpose: The purpose of this paper is to advance the information society research by examining and better understanding the impact of the adoption of information and communication technologies (ICT) within households on improving sustainability. The purpose of this paper is to answer the question: does the adoption of information and communication technologies (ICT) within households contribute to sustainability? For this purpose, the study focuses on addressing the following specific questions: what criteria clearly and deeply describe the ICT adoption within households? What are the criteria that clearly describe sustainability within households, and are there statistically significant correlations between ICT and sustainability? The purpose of this paper is to answer the question: does the adoption of information and communication technologies (ICT) within households contribute to sustainability?

Method: The survey questionnaire was used and data collected from 679 Polish households were statistically analyzed to understand the phenomena of ICT adoption and sustainability, as well as to identify correlations between them. A survey was carried out in Poland. The data were collected through a structured questionnaire and analyzed using structural equation modeling (SEM). The study is based on a quantitative research design. The data were collected through a questionnaire survey among households in Poland. The survey was carried out in two phases. In the first phase, the ICT adoption within households was measured using a structured questionnaire, and in the second phase the data were analyzed using structural equation modeling (SEM) to test the hypothesized relationships.

Findings: The research findings reveal that the ICT adoption is well described by the ICT outlay, information culture, ICT management and ICT quality, whereas sustainability is composed of ecological, economic, socio-cultural and political sustainability. Furthermore, the ICT quality, ICT management and information culture have a significant impact on sustainability, whereas the ICT outlay does not have such an impact. Results show that ICT is a key enabler of sustainability in households. The results also show that there are statistically significant correlations between the IIT adoption within the households and sustainability. The results show that the adoption of ICT within households is positively related to sustainability. The results also show that there are statistically significant correlations between the ICT adoption within households and sustainability.

Value: The paper provides and verifies a new theoretical model of sustainable information society to depict various dimensions shaping the ICT adoption and their impact on different types of sustainability in the context of households. This study is the first to empirically investigate the impact of ICT on sustainability. The findings of this study will be complementary with findings concerning the contribution of IIT to sustainability in enterprises and allow for advancement in the sustainable information society (SIS) research. This study contributes to the literature by providing a deeper understanding of the ICT adoption within households and the contribution of ICT to sustainability in transition economies, i.e. the former European Eastern Bloc countries.

BART-Facet provides more information for Method and fewer errors than BART (e.g., "IIT" is a typo of "ICT"). However, both models tend to copy text directly from the source; for example, both outputs for Purpose can be found in the introduction of the paper.

Figure 1: An example of the proposed FacetSum dataset. Each facet of the structured abstract summarizes different sections of the paper.

Table 1: Statistics of the FacetSum dataset. Table 1 lists some descriptive statistics of the dataset. Since FacetSum is sourced from journal articles, texts therein are naturally expected to be longer compared to other formats of scientific publications. In addition, although each facet is more succinct than a traditional, structure-less abstract, a full-length abstract containing all facets can be considerably longer.

Table 2: Scores of sentence alignment in Rouge-L.

Table 3: Model performance on FacetSum (Rouge-L).
See Table 6 and Table 7 in Appendix E for full results. Bold text indicates the best scores on the FacetSum test split in each column.

References

Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2018. Systematic generalization: What is required and can it be learned? In International Conference on Learning Representations.

Mišo Belica. 2021. sumy: Automatic text summarizer. https://github.com/miso-belica/sumy.

Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel S. Weld. 2020. TLDR: Extreme summarization of scientific documents. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 4766-4777.

Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621.

Ed Collins, Isabelle Augenstein, and Sebastian Riedel. 2017. A supervised approach to extractive summarisation of scientific papers. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 195-205, Vancouver, Canada. Association for Computational Linguistics.

Danish Contractor, Yufan Guo, and Anna Korhonen. 2012. Using argumentative zones for extractive summarization of scientific articles. In Proceedings of COLING 2012, pages 663-678.

Yue Dong, Andrei Romascanu, and Jackie C. K. Cheung. 2020. HipoRank: Incorporating hierarchical and positional information into graph-based unsupervised long document extractive summarization. arXiv preprint arXiv:2005.00513.

Emerald. 2021. Writing an article abstract. https://www.emeraldgrouppublishing.com/how-to/authoring-editing-reviewing/write-article-abstract. [Online; accessed 26-January-2021].

Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.

Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 19-25.

Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719.

James Hartley. 2014. Current findings from research on structured abstracts: an update. Journal of the Medical Library Association: JMLA, 102(3):146.

James Hartley and Matthew Sydes. 1997. Are structured abstracts easier to read than traditional ones? Journal of Research in Reading, 20(2):122-136.

James Hartley, Matthew Sydes, and Anthony Blurton. 1996. Obtaining information accurately and quickly: are structured abstracts more efficient? Journal of Information Science, 22(5):349-356.

Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.

Ting-Hao Huang, Chieh-Yang Huang, Chien-Kuang Cornelia Ding, Yen-Chia Hsu, and C. Lee Giles. 2020. CODA-19: Using a non-expert crowd to annotate research aspects on 10,000+ abstracts in the COVID-19 Open Research Dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.

Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the CL-SciSumm 2016 shared task. In Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL), pages 93-102.

Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2018. Insights from CL-SciSumm 2016: the faceted scientific document summarization shared task. International Journal on Digital Libraries, 19(2-3):163-171.

Su Nam Kim, David Martinez, Lawrence Cavedon, and Lars Yencken. 2011. Automatic classification of sentences to support evidence based medicine. In BMC Bioinformatics, volume 12, pages 1-10. BioMed Central.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.

Yao Lu, Yue Dong, and Laurent Charlin. 2020. Multi-XScience: A large-scale dataset for extreme multi-document summarization of scientific articles. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8068-8074.

Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3075-3081.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.

Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations.

Jonathan Pilault, Raymond Li, Sandeep Subramanian, and Chris Pal. 2020. On extractive and abstractive neural document summarization with transformer language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9308-9319, Online. Association for Computational Linguistics.

Franklin L. Rosenfeldt, John T. Dowling, Salvatore Pepe, and Meryl J. Fullerton. 2000. How to write a paper for publication. Heart, Lung and Circulation, 9(2):82-87.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.

Eva Sharma, Chen Li, and Lu Wang. 2019. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204-2213.

Connor Stead, Stephen Smith, Peter Busch, and Savanid Vatanasakdakul. 2019. Emerald 110k: A multidisciplinary dataset for abstract sentence classification. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, pages 120-125.

Frederick Suppe. 1998. The structure of a scientific paper. Philosophy of Science, 65(3):381-405.

Christopher Tauchmann, Thomas Arnold, Andreas Hanselowski, Christian M. Meyer, and Margot Mieskes. 2018. Beyond generic summarization: A multi-faceted hierarchical summarization corpus of large heterogeneous data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion. Information Processing & Management, 43(6):1606-1618.

Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, and Dragomir R. Radev. 2019. ScisummNet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):7386-7393.

Yao Zhao, Mohammad Saleh, and Peter J Liu. 2020. SEAL: Segment-wise extractive-abstractive long-form text summarization. arXiv preprint arXiv:2006.10213.

A Keyword List for Identifying Paper Sections

Table 4: Keywords for identifying paper sections used in Section 2.2.

Category      Keywords
Introduction  intro, purpose
Method        design, method, approach
Result        result, find, discuss, analy
Conclusion    conclu, future

B Most Frequent Words in Each Abstract Facet

Table 5: Top five frequent verbs/nouns/adjectives in each facet of structured abstract. We preprocess the text with lowercasing, stemming and stopword removal and extract part-of-speech tags using Spacy (Honnibal et al., 2020).

Facet     Verb       Noun          Adjective
Purpose   aim        paper         social
          examin     purpos        new
          investig   studi         organiz
          explor     manag         differ
          develop    research      public
Method    base       studi         structur
          conduct    data          qualit
          collect    analysi       differ
          test       model         empir
          develop    paper         social
Findings  found      result        signific
          indic      studi         posit
          suggest    manag         social
          provid     effect        differ
          identifi   relationship  higher
Value     provid     studi         new
          contribut  paper         social
          develop    research      differ
          base       manag         empir
          examin     literatur     import
In Table 8, we show an example of the generated faceted summaries by BART and BART-Facet for the same paper, compared against the ground-truth faceted abstract.

Table 6: Full results (Rouge-1/2/L) of the extractive oracle system (Nallapati et al., 2017) on FacetSum. Bold text indicates the best scores in the lower four rows in each column.

R1/R2/RL      Full               Purpose            Method             Findings           Value
Full body     64.92/33.75/60.39  57.35/30.24/49.42  53.30/26.40/45.58  59.30/33.25/52.42  53.39/26.84/45.55
IC body       58.82/28.42/54.17  53.60/27.13/45.73  43.13/17.08/35.64  52.03/25.90/44.86  48.97/22.84/41.09
Intro body    53.32/22.96/48.59  52.51/26.48/44.66  41.27/16.05/34.03  44.67/17.49/37.10  44.65/17.80/36.47
Method body   52.05/20.52/47.35  45.16/16.61/36.84  48.60/21.67/41.00  44.77/17.69/37.67  40.94/13.55/32.94
Result body   56.85/23.79/51.97  47.90/18.07/38.96  42.31/14.46/34.41  53.71/26.32/46.44  44.93/16.91/36.66
Conclu body   55.26/25.26/50.58  47.76/18.88/38.94  40.53/13.84/32.83  51.81/25.81/44.73  46.14/19.66/38.10

Table 8: Outputs by BART and BART-Facet on different facets. Both models are able to generate reasonable summaries given the specified facet.

Footnotes:
The data has been licensed to researchers at subscribing institutions to use (including data mining) for noncommercial purposes. See detailed policies at https://www.emerald.com/
2 There are three optional facets (about research, practical and social implications) that are missing from a large number of articles and hence omitted in this study.
Both the split information of FacetSum and the code for scraping and parsing the data are available at https://github.com/hfthair/emerald_crawler
We use the relative position i/|S| so that all positions are commensurate across multiple documents.
5 To ensure close-to-perfect precision, we choose keywords that are as specific and prototypical to each section as possible (listed in Appendix A). The resulting recall is around 0.7, i.e.
about 70% of sections can be correctly retrieved with the title-keyword matching method. We find 2,751 (out of 6,000) test samples for which all four sections are matched successfully. Though far from perfect, we believe this size is sufficient for the significance of subsequent analyses.

Table 7: Full results (Rouge-1/2/L) of different models on FacetSum. Bold text indicates the best scores on

BART-Facet I+C  50.62/20.97/47.09  49.59/28.70/43.47  34.61/11.82/29.07  36.42/12.63/30.97  35.37/11.75/28.90
BART-Facet I+C  48.31/22.63/51.32  49.59/28.69/43.66  35.82/12.84/30.16  37.46/14.02/32.22
[ "https://github.com/miso-belica/" ]
[ "Improving the Robustness of Summarization Models by Detecting and Removing Input Noise", "Improving the Robustness of Summarization Models by Detecting and Removing Input Noise" ]
[ "Kundan Krishna \nwork done while at Google Research\nCarnegie Mellon University\n\n", "Yao Zhao \nGoogle Research\n\n", "Jie Ren \nGoogle Research\n\n", "Balaji Lakshminarayanan \nGoogle Research\n\n", "Jiaming Luo \nGoogle Research\n\n", "Mohammad Saleh \nGoogle Research\n\n", "Peter J Liu [email protected] \nGoogle Research\n\n" ]
[ "work done while at Google Research\nCarnegie Mellon University\n", "Google Research\n", "Google Research\n", "Google Research\n", "Google Research\n", "Google Research\n", "Google Research\n" ]
[]
The evaluation of abstractive summarization models typically uses test data that is identically distributed as training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively understudied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.
10.48550/arxiv.2212.09928
[ "https://export.arxiv.org/pdf/2212.09928v1.pdf" ]
254,877,111
2212.09928
d2d90f6a687e56a2dcf6442386537d68e61f66ad
Improving the Robustness of Summarization Models by Detecting and Removing Input Noise

Kundan Krishna (Carnegie Mellon University; work done while at Google Research), Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J Liu [email protected] (Google Research)

Abstract

The evaluation of abstractive summarization models typically uses test data that is identically distributed as training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively understudied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.

Introduction

Despite rapid progress in abstractive summarization in recent years (Lewis et al., 2020; Raffel et al., 2020b; Zhang et al., 2020), virtually all works have tested models using test data which is identically distributed as the training data, and little attention has gone into studying their robustness to input distribution shift caused by input noise. Data from different domains which have been addressed in summarization research may contain noise of different types.
For example, when summarizing a news article on a web page, there can be embedded elements such as ads or tweets which may be included as part of the article due to erroneous text extraction. A system summarizing chatroom conversations might encounter artifacts such as URLs, or sometimes even code shared between participants. If the text to be summarized is acquired by scanning a document, noise can be introduced in the form of OCR errors (Jing et al., 2003). However, the impact of different kinds of noise on modern abstractive summarization systems, and ways to accurately detect and remove that noise, remain largely unknown.

In this work, we study how noise in the input affects the output generated by summarization models, and propose a method to detect and remove it. We synthetically inject 4 types of noise to 4 abstractive summarization datasets with diverse styles (Narayan et al., 2018; Kim et al., 2019; Gliwa et al., 2019; See et al., 2017), and quantify the drop in aggregate metrics for the output summaries (Section 3). We also study how the quality of generated summaries varies with factors such as the amount of noise and the size of the models. For our experiments, we use PEGASUS (Zhang et al., 2020) models, Transformer-based pre-trained models which deliver competitive performance across abstractive summarization benchmarks.

We present a method to detect and remove noisy spans in the input, which works without prior knowledge of the noise type or access to its samples, yet can recover a large fraction of the drop in output quality resulting from noise addition (Section 4). Our approach for detecting noisy spans is based on variations of the out-of-distribution (OOD) detection technique proposed by Ren et al. (2022), the Relative Mahalanobis Distance OOD Score, which uses the embeddings computed by the summarization model's encoder. Our approach does not require any additional training or use of external models, hence it is relatively efficient.
Figure 1 shows our method's impact on a sample noisy document. Finally, we investigate how different parts of the model architecture cause the drop in output quality upon adding noise to the input (Section 5). We attribute the performance drop to two phenomena: (i) corruption of the representations of non-noisy input tokens computed by the encoder, due to contextualization with neighboring noise; and (ii) distraction of the decoder, such that it assigns non-zero attention to the representations of noisy input tokens. To quantify their contribution to the drop in output quality, we perform an ablation where we remove the encoder embeddings of the noisy tokens before running the decoder, hence eliminating the effect of decoder distraction. We find that in a majority of cases this leads to partial recovery in output quality, suggesting that generally both factors are responsible to some extent for the poor output summaries.

Figure 1: Effect of noise addition and filtering on the model-generated summary for a sample document. Random URLs are injected into the original document as noise. The color indicates the value of our proposed OOD score for a text span: red represents positive and blue represents negative OOD scores, with saturation proportional to the magnitude. Removing the detected noisy parts from the input and feeding it to the summarization model results in a summary closer to the ground truth.

In summary, we make the following contributions:

• We quantify the impact of various kinds of noise on pretrained Transformer-based summarization models, demonstrating drops in output quality up to 12 ROUGE-1 points.

• We show that this noise can be detected using adaptations of a recently proposed out-of-distribution detection method, without ever being exposed to it in advance. Our approach can recover much of the performance drop (sometimes as large as 11 ROUGE-1 points), improving robustness and safety for real-world model deployment.
• We examine how different parts of the model's computation are affected by the introduction of input noise, leading to the generation of inferior summaries.

Related Work

Research on the behavior of summarization models on noisy inputs is quite sparse. Jing et al. (2003) investigated how the performance of extractive summarization models is impacted by noise due to OCR errors while summarizing scanned documents. More recently, Meechan-Maddon (2019) studied the effect of noise in the form of ASR errors on abstractive summarization models based on convolutional neural networks. In contrast, we experiment with pre-trained Transformer models which are now preferred in popular use due to their superior performance (Lewis et al., 2020; Zhang et al., 2020; Raffel et al., 2020b), and address a wide variety of noise types and summarization datasets.

The effect of noisy inputs has also been studied for NLP tasks other than summarization, such as machine translation (Niu et al., 2020) and question answering (Peskov et al., 2019). Multiple works across machine translation (Karpukhin et al., 2019; Vaibhav et al., 2019), question answering (Peskov et al., 2019) and summarization (Jing et al., 2003) have used synthetic noise to create noisy inputs. Similar to these works, we also create synthetic noisy inputs, due to the lack of a dataset with naturally occurring labeled noise. One distinguishing aspect of our work is that our noise detection/removal method works without exposing the model to the noise during training, which is closer to practical scenarios where unknown types of noise can be encountered after a model is deployed.

Figure 2 (caption, continued): In (c) and (d) we also show the quality after noise removal (the shaded area). Quality is measured as the geometric mean of ROUGE-1/2/L scores and averaged over the non-varying factors. We set the noise amount to 0.5 in (c) and (d).

Impact of noise addition

We inject noisy text spans in between sentences of the clean articles.
The insert position of each noisy text span is sampled independently and uniformly at random (see Figure 7 in Appendix for an example). Overall, we consider the following choices of a noisy text span:

• Code - a random line of code from a corpus of Python programs (Husain et al., 2019). Code may be shared in professional chatrooms.

• Emoji - randomly sampled emojis taken from the version 15 release on unicode.org. Emojis can be found in conversations and social media posts.

• URL - a random URL from the first 1% of the validation set of the Colossal Clean Crawled Corpus (C4) (Raffel et al., 2020b). URLs can be referenced in news articles or mentioned in chatrooms.

• Randomsent - a random sentence from the first 1% of the validation set of the C4 corpus.

We experiment with different amounts of noise added to the input, treated as a hyperparameter. We measure the amount of noise as the number of noisy tokens added to the input divided by the total number of tokens in the input after noise addition. We experiment with 4 different datasets: XSUM (Narayan et al., 2018), CNN/DailyMail (See et al., 2017), SAMSum (Gliwa et al., 2019) and RedditTIFU-long (Kim et al., 2018). Our datasets span a variety of domains: the first two deal with summarizing news articles, while the remaining two consider summarizing conversations and social media posts respectively. For all experiments with each summarization dataset, we use PEGASUS models (Zhang et al., 2020) finetuned on that dataset. We evaluate the performance of the models using ROUGE scores (Lin, 2004) of the summaries they generate.

Effect of noise amount: We compare four different levels of noise: 5%, 10%, 25%, and 50% (50% means the amount of noise tokens is equal to the amount of clean tokens). As shown in Figure 2, we see a near-monotonic decrease in output quality as more noise is added to the data.
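The injection procedure described above (insert spans at uniformly random positions until the target noise fraction is reached) can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function name and arguments are assumptions.

```python
import random

def inject_noise(sentences, noise_spans, noise_amount, seed=0):
    """Insert noisy spans between sentences until `noise_amount`
    (noisy tokens / total tokens after injection) is reached.
    Positions are sampled independently and uniformly at random."""
    rng = random.Random(seed)
    out = list(sentences)
    clean_tokens = sum(len(s.split()) for s in sentences)
    noisy_tokens = 0
    # keep adding spans while the noise fraction is below the target
    while noisy_tokens / max(clean_tokens + noisy_tokens, 1) < noise_amount:
        span = rng.choice(noise_spans)
        out.insert(rng.randint(0, len(out)), span)  # uniform insert position
        noisy_tokens += len(span.split())
    return out, noisy_tokens / max(clean_tokens + noisy_tokens, 1)
```

Because the loop checks the fraction before each insertion, the achieved noise amount slightly overshoots the requested one.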
In Figure 2a, we group results by dataset while averaging across model sizes and noise types. This reveals that some datasets are more robust to noise than others (e.g. CNN/DailyMail is most robust), and the relative trends in performance drops remain similar across different noise amounts. In Figure 2b, we group the performance drops by noise type while averaging across datasets and model sizes. We see a clear gap between the drops for Code and Randomsent vs Emoji and URL, with the gap widening as the noise amount is increased.

Effect of noise type: In general, the models are more robust to URLs and emojis, and less robust to the Randomsent and Code noise types, as demonstrated by the performance drops (averaged across model sizes) shown in Figure 2c. We suspect that some of this could be due to the presence of URLs and emojis in the training dataset itself, due to which the model may have learned to be robust to those noise types. In addition, from Figure 2c we see that models trained on different datasets have varying sensitivity to different kinds of noise. For example, SAMSum is notoriously susceptible to Randomsent noise, leading to a drop of about 10 ROUGE-1 points averaged across model sizes (Table 5 in Appendix), whereas for CNN/DailyMail, Code is the most harmful type of noise.

Effect of model size: We compare PEGASUS models of 3 different sizes (numbers of parameters): Small (50M), Base (200M), and Large (500M). As shown by the performance drops (averaged over noise types) in Figure 2d, one might expect larger models to be less susceptible to noise, but this does not seem to be the case in general, and simply scaling up models may not solve robustness. In some cases, large models can still suffer a loss of over 10 ROUGE-1 points with the addition of noise (see Table 5 in Appendix).

A qualitative analysis of the summaries generated for noisy inputs revealed that there exist some frequent bad summaries which are generated by the models for many noisy inputs.
This is observed for models finetuned on the XSUM and RedditTIFU-long datasets, while for the other two datasets we did not observe such a pattern. We show five of the most frequently generated summaries for XSUM and RedditTIFU-long in Table 1. We see that the generated summary (for noisy inputs) is often just punctuation marks such as a period or a semicolon. Notably, for the XSUM dataset, some of the frequently generated bad summaries were also present as ground truth summaries in the train set. For example, "All images are copyrighted." was the ground truth summary for 39 articles in the train set. This suggests that upon encountering input noise, the model can fall back to behaving like an unconditioned language model and generating high-frequency sequences from the train set.

4 Noise detection and quality recovery

4.1 Noise detection

Ren et al. (2022) studied various methods for detecting OOD inputs for conditional language generation tasks, including summarization. They showed that the proposed embedding-based OOD detection method, Relative Mahalanobis Distance (RMD), worked well. Specifically, given an input sequence x = x_1 . . . x_t, the method obtains the input embedding z = (1/t) Σ_i h_i by averaging the encoder's final-layer hidden state vectors h_i corresponding to the input sequence tokens x_i. The OOD score is defined as the difference between two Mahalanobis distances (MD),

S(x) := RMD(z) := MD_in(z) − MD_0(z),    (1)

where MD_in(z) = (z − µ)^T Σ^{−1} (z − µ) measures the distance from z to the Gaussian N(µ, Σ) fitted with in-domain samples of z, and MD_0(z) = (z − µ_0)^T Σ_0^{−1} (z − µ_0) measures the distance to the Gaussian N(µ_0, Σ_0) fitted using samples of z obtained using background data. The in-domain Gaussian distribution is fitted using the in-domain training set, and the background distribution is fitted using the same number of examples from C4 (Raffel et al., 2020a), which represents a large and broad set of domains. In our experiments we use 10,000 examples to fit each distribution.
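Eq. (1) can be sketched with NumPy. The function names here (`fit_gaussian`, `rmd_score`) are illustrative rather than from the paper's code, and the toy 2-dimensional embeddings stand in for encoder hidden-state averages:

```python
import numpy as np

def fit_gaussian(embs, eps=1e-6):
    """Fit mean and (regularized) precision matrix of a Gaussian to
    a (num_samples, dim) array of embeddings."""
    mu = embs.mean(axis=0)
    centered = embs - mu
    cov = centered.T @ centered / len(embs)
    prec = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))
    return mu, prec

def mahalanobis(z, mu, prec):
    d = z - mu
    return float(d @ prec @ d)

def rmd_score(z, in_params, bg_params):
    """Eq. (1): RMD(z) = MD_in(z) - MD_0(z); positive suggests OOD."""
    return mahalanobis(z, *in_params) - mahalanobis(z, *bg_params)
```

An embedding close to the in-domain mean gets a negative score (in-domain), while one close to the background distribution gets a positive score.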
The RMD score is regarded as a background contrastive score that indicates how close the input sequence is to the in-domain data compared to the background domains. A negative score suggests relatively in-domain, while a positive score suggests out-of-domain (OOD). Instead of computing a single OOD score for the entire input document sequence as in Ren et al. (2022), in this work we focus on detecting smaller sub-parts of OOD noise within the input document sequence. We propose three variants:

Leaveout-Sentence (LO-Sent) In this case, we compute the OOD scores of the input with and without a sentence in it. The negative of the change in the OOD score after removing the sentence denotes the OOD score of that sentence. Intuitively, if removing the sentence decreases the overall OOD score, that sentence is assigned a positive OOD score, and vice versa.

S_LO-Sent(x_{i:j}) = S(x_{1:t}) − S(x_{1:(i−1);(j+1):t})    (2)

Leaveout-Token (LO-Tok) This is very similar to LO-Sent, except that instead of removing a sentence, we remove a token at a time and hence get OOD scores for each token,

S_LO-Tok(x_i) = S(x_{1:t}) − S(x_{1:(i−1);(i+1):t}).    (3)

Sentencewise (Sent) Instead of computing the score based on embeddings averaged over the tokens in the whole input document sequence (consisting of multiple sentences), we fit Gaussian distributions at the sentence level by averaging the token embeddings in a sentence, z_{i:j} = (1/(j−i+1)) Σ_{k=i}^{j} h_k. We use the sentence embeddings from in-domain data and C4 data to fit the two Gaussian distributions, N(µ^sent, Σ^sent) and N(µ^sent_0, Σ^sent_0).

S_sent(x_{i:j}) = MD^sent_in(z_{i:j}) − MD^sent_0(z_{i:j})    (4)

where MD^sent_in and MD^sent_0 are MDs to N(µ^sent, Σ^sent) and N(µ^sent_0, Σ^sent_0) respectively.

GPT-2 likelihood We also experiment with a simple language model baseline that generates the noisiness scores based on the average negative log-likelihood (NLL) of tokens in a sentence, as given by the pretrained GPT-2 model.
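The leave-out variants of Eqs. (2) and (3) share one pattern: score a unit by how much the document-level OOD score changes when that unit is removed. A generic sketch, where `ood_score` is a hypothetical stand-in for the document-level score S(·) of Eq. (1):

```python
def lo_sent_scores(sentences, ood_score):
    """Eq. (2): a sentence's score is the drop in the document-level
    OOD score S(.) when that sentence is left out. A positive score
    means removing the sentence makes the document look more in-domain."""
    full = ood_score(sentences)
    return [full - ood_score(sentences[:i] + sentences[i + 1:])
            for i in range(len(sentences))]

def lo_tok_scores(tokens, ood_score):
    """Eq. (3): the same idea, leaving out one token at a time."""
    full = ood_score(tokens)
    return [full - ood_score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]
```

Note that LO-Tok re-scores the document once per token, which is why it is the most expensive of the variants.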
Intuitively, a higher value of NLL signifies that a token is unlikely to occur given the past context, which should hold true in the case of noisy tokens with clean past context.

S_GPT2(x_{i:j}) = −(1/(j−i+1)) Σ_{k=i}^{j} log p_G(x_k | x_{<k})    (5)

where p_G(x_k | x_{<k}) is the probability assigned by the GPT-2 model to token x_k given the previous tokens.

To calculate the performance of models at noise detection, we compare the assigned OOD score for each token with its ground truth label, and we compute ROC AUC scores for comparison. For the two sentence-level scores, S_LO-Sent(x_{i:j}) and S_sent(x_{i:j}), we assign each token's OOD score to be the sentence-level OOD score of the sentence which contains that token. We compute evaluation metrics in two ways: (i) on a per-example basis, where the AUC score is computed for each example and the scores are then averaged across the dataset; and (ii) on an overall basis, where all the predictions across the entire dataset are pooled together before computing a single AUC score.
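The two AUC aggregation modes can be sketched in plain Python. `roc_auc` below is a standard rank-based implementation (probability that a random positive outscores a random negative, ties counting one half); the example dicts and function names are illustrative, not from the paper:

```python
def roc_auc(scores, labels):
    """Rank-based ROC AUC; assumes both classes are present."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def overall_auc(examples):
    """(ii) pool all token predictions across the dataset, one AUC."""
    scores = [s for ex in examples for s in ex["scores"]]
    labels = [l for ex in examples for l in ex["labels"]]
    return roc_auc(scores, labels)

def per_example_auc(examples):
    """(i) one AUC per example, then average across examples."""
    aucs = [roc_auc(ex["scores"], ex["labels"]) for ex in examples]
    return sum(aucs) / len(aucs)
```

The two modes can disagree: per-example AUC ignores score calibration across documents, while overall AUC is sensitive to it.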
After noise filtering, we can recover a large part of the drop in ROUGE scores that occurred due to the added noise. In cases of large drop such as the Randomsent noise type with XSUM and SAM-Sum datasets, we can recover 4-6 and 6-7 points respectively depending on the model size (Table 3). We also present aggregate trends of recovery of output quality using our filtering approach in Figure 2c and 2d. We can see that we recover over half of the drop in the performance on 9 out of 16 combinations of datasets and noise types (Figure 2c), with the best performance observed on XSUM and SAMSum datasets and the worst on CNN/DailyMail. The method also succeeds in recovering performance across all 3 model sizes (Figure 2d). We experimented with various thresholding strategies such as setting thresholds to be constant (a) Increase in summary quality after filtering with different thresholding approaches, for different datasets and noise types (b) Precision and recall for noise detection across different filtering experiments with varying thresholds, with the resulting change in output quality Figure 3: Change in output quality for different thresholding techniques (a) and its correlation with the precision and recall of noise detection (b). The changes in summary quality are illustrated by color (blue shows increase and red shows decrease, saturation denotes magnitude clipped to range [0,5]) irrespective of the dataset or model (e.g. 0), or to be equal to a different percentile value (other than 99%) of the OOD scores produced by the model used on clean data. We also tried choosing the optimal threshold based on F1-score of noise detection on a hold-out validation set (assuming a scenario where we have access to labeled noisy samples). We tried 6 thresholding techniques in total, compared in Figure 3a. Setting a constant threshold of 0 provides gains in some cases but in other cases makes the model outputs worse, due to filtering out useful non-noisy content. 
To prevent this, one can use a very high threshold such as 500, which practically eliminates cases of further drop in performance (Figure 3a), but the performance gains produced in that case are small because less noise is filtered. The best approach turns out to be setting the threshold to the 99th percentile of the clean-data OOD scores, which produces different thresholds for different models and leads to the highest average gain in output quality among the strategies tried, with minimal cases of further degradation. Surprisingly, optimizing the threshold based on the F1-score of noise detection on a validation set also reduces the output quality in many cases, suggesting that F1-score may not be the best predictor of the quality of the summary produced after filtering.

We conduct noise filtering for each of our experimental setups (all datasets, noise types and amounts, model sizes) with three thresholds (0, 200 and 500) and compare the resulting change in summary quality with the precision and recall of the noise detection in Figure 3b. We find that a precision lower than around 0.7 usually leads to a drop in summary quality, even if the recall is nearly perfect, suggesting that almost all noise has been removed. This suggests that precision is more important than recall for improving summary quality.

Investigating causes of loss in performance

There are two distinct mechanisms which can lead to the worsening of generated summaries upon addition of input noise. The first is the corruption of the encoder's representation of useful clean tokens. The encoder transformer uses self-attention over input tokens to generate their contextualized representations. In cases where noise is present in the input, self-attention can distort the encoder representations of clean tokens. The second mechanism is the distraction of the decoder, such that it assigns non-zero attention to the noisy tokens' embeddings and this impairs its computation.
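Precision and recall of the sentence-level detector, as analyzed in Figure 3b, can be computed with a small helper (illustrative code, not from the paper; inputs are per-sentence boolean flags):

```python
def precision_recall(predicted_noisy, truly_noisy):
    """Detection metrics: `predicted_noisy` marks sentences whose OOD
    score exceeded the threshold, `truly_noisy` is the ground truth."""
    tp = sum(1 for p, t in zip(predicted_noisy, truly_noisy) if p and t)
    fp = sum(1 for p, t in zip(predicted_noisy, truly_noisy) if p and not t)
    fn = sum(1 for p, t in zip(predicted_noisy, truly_noisy) if not p and t)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall
```

Raising the threshold trades recall for precision, which, per the observation above, is usually the right trade for summary quality.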
Even if there is no corruption in the embeddings of clean tokens, the embeddings of noisy tokens can receive non-zero cross-attention from the decoder and influence its generation. If neither of these two phenomena occurred, the generated summaries for the noisy and clean variants of any input would be the same. In this section we investigate the contribution of these two factors to the degradation of output quality.

5.1 Are the clean token embeddings corrupted by the presence of noise?

We observe that the OOD scores of the clean tokens increase after the addition of noise. In Figure 4, we show an example of this for the XSUM dataset after adding Code noise, where the OOD scores are computed using the Sent method. This suggests that the distribution of clean tokens' embeddings moves farther from the in-domain distribution (learnt from clean in-domain data) relative to the background distribution (learnt from the C4 corpus) after the addition of noise. We made this observation for different datasets and noise types, although the extent of the increase in OOD scores varies across them.

Figure 4: Distribution of OOD scores of (i) clean tokens before adding noise (doc), (ii) clean tokens after adding noise (doc-post-noise), and (iii) noisy tokens after adding them (noise) (using the base model size and 0.5 noise amount).

5.2 How much performance can be recovered by preventing distraction of the decoder?

We design an ablation experiment to measure how the performance drop would change if there were no distraction of the decoder by embeddings of noisy tokens. Any drop in output quality in such a setup is attributable only to the corruption of the clean tokens' encoder representations. We remove the embeddings of the (ground truth) noisy tokens after passing the noisy input through the encoder of the PEGASUS model, and then use the decoder to generate the summary using only the remaining embeddings (Figure 5).
Since the removal is done after passing the whole input through the self-attention layers of the encoder, the clean tokens' embeddings are already distorted, and the decoder has to generate the summary using these distorted embeddings. The difference from the usual scenario is that the decoder does not have to include the noisy tokens' embeddings in its computation. We find that this mostly leads to an increase in output quality compared to when the noisy token embeddings are not removed (Figure 6). The biggest improvements come for the XSUM and SAMSum datasets, whereas for the CNN/DailyMail dataset no improvement is seen for any of the 4 noise types. Surprisingly, for the RedditTIFU-long dataset with the URL and Randomsent noise types, removing the noisy tokens' embeddings decreases the ROUGE scores further, suggesting that retaining those embeddings is somehow useful for the decoder.

The above ablation study highlights the necessity of running the encoder twice: once to compute OOD scores to detect noise, and then again to compute the encoder representations of the input after removing noisy tokens. While one could save computation time by reusing the encoder embeddings of the clean tokens computed during OOD scoring and feeding them to the decoder for generation, results from the ablation suggest that this would give sub-optimal performance recovery (Figure 6).

Conclusion and Future Work

In this work, we quantified the impact that noisy inputs can have on the output quality of summarization models, for a variety of datasets and noise types. We then proposed a method to detect and remove noise from the input without using any extra models, training, or prior information about noise types, and demonstrated its efficacy. One direction for future work is to investigate what makes certain models more susceptible to specific noise types.
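The ablation of Section 5.2 (drop the noisy tokens' encoder states before decoding) amounts to boolean masking of the encoder output. A NumPy sketch, where `encoder_states` stands in for the PEGASUS encoder output and the mask is assumed to come from the ground-truth noise labels:

```python
import numpy as np

def drop_noisy_states(encoder_states, noisy_mask):
    """Keep only the hidden states of clean tokens. The kept states were
    already contextualized with the noise inside the encoder, so any
    remaining quality drop is attributable to that corruption alone."""
    keep = ~np.asarray(noisy_mask, dtype=bool)
    return encoder_states[keep]
```

The decoder then cross-attends only to the returned states, eliminating the "decoder distraction" mechanism while leaving encoder corruption in place.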
Another interesting direction would be to carry out experiments for noise filtering with real-world noisy data rather than using synthetically generated noisy examples.

Figure 2: Change in output quality upon addition of noise to inputs, while varying different factors: noise amount in (a) and (b), noise type in (c), and model size in (d). (Panel titles, partially recovered: "... of noise amount by noise type"; (c) "Effect of noise type by dataset"; (d) "Effect of model size by dataset".)

Figure 5: Ablation experiment where the decoder does not have to process the noisy tokens' embeddings.

Figure 6: Performance drops for different datasets and noise types, with the shaded area showing drops when the noisy tokens' encoder embeddings are removed before running the decoder (using the base model size and 0.5 noise amount).

Table 1: The frequencies of the most commonly generated summaries on noisy versions of the XSUM and RedditTIFU-long validation sets (Noisy) and their frequencies before adding noise (Clean) (using the base model size and the Code noise type with noise amount set to 0.5).

XSUM:
. (period) - Noisy 145, Clean 1
A chronology of key events: - Noisy 108, Clean 0
All images are copyrighted. - Noisy 62, Clean 7
All pictures are copyrighted. - Noisy 9, Clean 4

RedditTIFU-long:
: (colon) - Noisy 230, Clean 0
** - Noisy 68, Clean 2
i'm a f**king idiot. - Noisy 16, Clean 3
i'm an idiot.
15 22 The following is a summary of key events: 5 0 ] 13 0 Table 2 : 2Performance of different methods for noise detection aggregated across datasets (using the base model size and 0.5 noise amount )Method Overall AUC Per-example AUC Code Emoji Randomsent URL Code Emoji Randomsent URL LO-Tok 77.10 84.25 73.63 85.41 78.52 84.17 74.74 86.83 LO-Sent 88.04 88.83 85.43 95.66 89.46 87.94 87.00 96.08 Sent 89.37 82.73 90.65 90.64 91.70 82.80 93.83 93.64 GPT-2 78.20 55.29 81.19 62.44 77.90 54.96 80.00 60.71 N (µ, Σ) fitted with in-domain samples of z, and MD 0 Table 3 : 3ROUGE scores (geometric mean of 1/2/L) on clean input and changes when adding different kinds of noise, and after the noise is filtered out using Sent method (Noise amount: 0.5) Add Filter Add Filter Add Filter Add FilterModel size Clean Code Emoji Randomsent URL XSum Small 31.66 21.43 27.50 23.28 31.33 22.28 28.44 25.50 30.30 Base 35.18 27.64 32.01 30.03 34.49 26.28 32.32 26.87 33.97 Large 37.18 35.86 36.89 36.36 36.83 31.68 35.09 35.81 36.77 CNN-Dailymail Small 31.96 25.27 23.37 31.24 31.46 30.01 30.38 29.69 30.39 Base 33.09 26.27 25.39 32.53 32.70 31.31 31.53 30.74 31.25 Large 33.44 29.60 30.99 33.11 33.02 31.97 32.36 32.03 32.67 Samsum Small 37.96 33.00 36.80 36.83 36.73 28.11 35.18 34.17 37.31 Base 39.74 36.95 38.89 39.18 38.97 31.96 37.51 36.89 39.47 Large 41.63 38.80 40.91 41.46 41.42 31.85 38.58 39.19 40.81 Reddit-TIFU Small 15.51 11.53 13.55 12.97 15.21 13.40 14.70 13.41 14.09 Base 17.54 12.16 14.55 13.33 14.42 14.18 16.62 15.71 16.23 Large 18.15 13.33 16.06 14.89 15.76 13.92 17.32 15.96 16.88 a single AUC score. We show the scores averaged across the 4 datasets in Table 2. In general, the LO-Tok method performs the worst of the three OOD-based methods, while Sent and LO-Sent per- form comparably. Comparing the GPT-2 baseline with LO-Tok, GPT-2 performs clearly better for Randomsent, comparably for Code, and clearly worse for Emoji and URL noise types. 
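As a concrete reference for the token-level scores compared above, the leave-one-out formulation of Eq. (3) can be sketched as follows; `seq_score` stands in for the summarization model's sequence-level score (e.g. the summary log-likelihood given the input), and the toy scorer below is purely illustrative, not the paper's model.

```python
from typing import Callable, List

def lo_tok_scores(tokens: List[str],
                  seq_score: Callable[[List[str]], float]) -> List[float]:
    """Eq. (3): S_LO-Tok(x_i) = S(x_{1:t}) - S(x with token i removed).

    If removing a token *raises* the model score, that token was hurting
    the model and is likely out-of-distribution; such tokens receive the
    lowest (most negative) leave-one-out scores.
    """
    full = seq_score(tokens)
    return [full - seq_score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

# Toy stand-in scorer (an assumption for illustration): it penalizes
# non-alphabetic "noise" tokens, so removing one raises the score.
def toy_score(tokens: List[str]) -> float:
    return -sum(1.0 for t in tokens if not t.isalpha())

scores = lo_tok_scores(["the", "cat", "@@", "sat"], toy_score)
# The noisy token "@@" receives the minimum leave-one-out score.
```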
However, GPT-2 lags behind LO-Sent and Sent for all noise types. Between Sent and LO-Sent, Sent performs better for Code and Randomsent, and LO-Sent performs better for the Emoji and URL noise types. Given its simplicity, we use the Sent method for OOD detection in the rest of the paper.

A Appendix

A.1 Selection of shorter inputs to avoid truncation

In our experiments, we exclude datapoints whose inputs are longer than a certain threshold. This is done to avoid any truncation of the input (including inputs with added noise) when feeding it into the model. Since adding noise to the input increases its length, some clean tokens might otherwise be pushed beyond the maximum allowed input length and hence removed when the input is truncated. In such a scenario, removing noisy tokens before feeding the sequence into the model would also cause such clean tokens to be fed into the model again, because they can now be accommodated within the input length limit. When measuring the benefit of noise filtering, the benefit from the removal of noisy tokens would then be confounded with the benefit from such "resurrection" of clean tokens. To avoid this, we only retain inputs whose length stays within the limit even after the addition of noise. Since the maximum noise amount we use in our experiments is 0.5, we only retain datapoints that have no more than half of the maximum number of tokens allowed as input to the model (Table 4).
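The selection rule of Appendix A.1 reduces to a simple length filter; a minimal sketch, where whitespace tokenization is an illustrative stand-in for the models' subword tokenizers (the function name and example limit are assumptions):

```python
def fits_after_noise(text: str, max_input_len: int) -> bool:
    """Retain a datapoint only if its input occupies at most half of the
    model's input length limit, so that even after adding noise at the
    maximum noise amount of 0.5 nothing is truncated.  Whitespace
    tokenization here is a stand-in for a subword tokenizer."""
    return len(text.split()) <= max_input_len // 2

docs = ["a short document", "word " * 400]
kept = [d for d in docs if fits_after_noise(d, max_input_len=512)]
# Inputs longer than 256 whitespace tokens are excluded from the dataset.
```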
Cost-Efficient Deployment of a Reliable Multi-UAV Unmanned Aerial System

Nithin Babu (Student Member, IEEE), Petar Popovski (Fellow, IEEE), and Constantinos B. Papadias (Fellow, IEEE)

Abstract: In this work, we study the trade-off between the reliability and the investment cost of an unmanned aerial system (UAS) consisting of a set of unmanned aerial vehicles (UAVs) carrying radio access nodes, called portable access points (PAPs), deployed to serve a set of ground nodes (GNs). Using the proposed algorithm, a given geographical region is equivalently represented as a set of circular regions, where each circle represents the coverage region of a PAP. Then, the steady-state availability of the UAS is analytically derived by modelling it as a continuous-time birth-death Markov decision process (MDP). Numerical evaluations show that the investment cost required to guarantee a given steady-state availability to a set of GNs can be reduced by considering the traffic demand and the distribution of the GNs.

Index Terms: Portable access points, cost efficiency, reliability, Markov decision process.

DOI: 10.1109/vtc2022-fall57202.2022.10013015; arXiv:2208.14503 (Aug 2022)
I. INTRODUCTION

Unmanned aerial vehicles (UAVs) carrying radio access nodes, hereafter referred to as portable access points (PAPs), deployed to provide temporary cellular service in remote areas or to assist in emergencies, have gained considerable attention lately [1]. The architecture and Quality-of-Service (QoS) requirements for such a system have been proposed in the 3GPP work item [2]. Despite the constraint of limited onboard energy, the mobile feature of a PAP may enhance the received signal-to-noise ratio (SNR) values through better communication channels compared to a conventional fixed-infrastructure system.
Since the received SNR values are functions of the aerial locations of the PAPs, efficient PAP deployment planning is of paramount importance. In [1], the authors summarize the works that have considered UAV placement problems from an energy-efficiency perspective, whereas [3] outlines the works that position UAVs to maximize communication-related parameters such as coverage area and throughput. [4] proposes a general probabilistic line-of-sight (LoS)/non-LoS air-to-ground channel model and determines the optimal altitude that maximizes the coverage region. The authors of [5] propose a graph-based algorithm to improve the throughput by jointly optimizing the user association, UAV altitude, and transmission direction, whereas in [6] we propose an energy-efficient 3D deployment of a multi-PAP system. The existing works [1]-[6] maximize their respective performance metrics under a constant traffic demand. However, in practice, the traffic demands from the users are not constant, and this varying nature of the traffic demand can be efficiently exploited by an unmanned aerial system (UAS) to reduce the investment cost: if the rate at which new service requests arrive is much lower than the time required by a PAP to complete the current request (low traffic intensity), then instead of deploying an additional PAP, the mobile feature of the PAP can be used to assign the same PAP to serve the new request after serving the current one, thereby reducing the investment cost.

[Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected]. This version of the work has been accepted for publication in VTC2022-Fall, Workshops. This work is supported by the project PAINLESS, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 812991.]
Hence, the main challenge is to find the optimal number of PAPs required to ensure the availability of at least one idle PAP when a new service request is generated; this is captured by a reliability metric called the steady-state availability of the system: the higher the number of PAPs, the higher the steady-state availability. In this work, we determine the minimum investment cost, a function of the number of PAPs, required to guarantee a given availability threshold in different traffic-intensity scenarios by exploiting the trade-off between the number of PAPs and the system reliability, which, to the best of our knowledge, has not been considered in the literature. Section II explains the system model and definitions. In Section III, the given geographical region is first represented as a set of the minimum number of circular cells, each representing the coverage region of a PAP, using circle packing theory; then, in Section III-A, a reliability analysis of the UAS is performed to determine the optimal number of PAPs required to guarantee a given steady-state availability threshold by modeling the system as a birth-death Markov process. Our main findings from the numerical evaluations are discussed in Section IV.

II. SYSTEM MODEL

A set of N stationary ground nodes (GNs), uniformly distributed over a circular geographical region of radius R, is considered to be served by a set of PAPs. As shown in Fig. 1, the known locations of all the GNs and the PAPs are registered with a centralised controller unit called the UAS traffic management (UTM) unit [2]. The deployment and positioning of the PAPs according to the traffic demand from the GNs are controlled by the UTM through the PAP-UTM links. Moreover, the PAPs are assumed to be equipped with a directional antenna of half-power beamwidth 2θ, with antenna gain in direction (ψ, ω) given by

G_p(ψ, ω) = G_0/θ²,  for −θ ≤ ψ ≤ θ and −θ ≤ ω ≤ θ,
          ≈ 0,       otherwise,      (1)

where G_0 ≈ 2.2846 [5].
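Eq. (1) translates directly into code; a minimal sketch with angles in radians and G_0 = 2.2846 as quoted in the text (function and variable names are illustrative):

```python
import math

G0 = 2.2846  # antenna gain constant quoted after Eq. (1)

def antenna_gain(psi: float, omega: float, theta: float) -> float:
    """Directional PAP antenna gain of Eq. (1): a flat gain G0 / theta^2
    inside the half-power beamwidth 2*theta in both angular directions,
    and (approximately) zero outside it."""
    if -theta <= psi <= theta and -theta <= omega <= theta:
        return G0 / theta ** 2
    return 0.0

# Inside the beam the gain is constant; outside it vanishes.
theta = math.radians(70)            # suburban half-beamwidth of Section IV
g_in = antenna_gain(0.0, 0.0, theta)
g_out = antenna_gain(2 * theta, 0.0, theta)
```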
All the GNs are equipped with an omnidirectional antenna and are allocated orthogonal channels. Consequently, the given geographical area is divided into cells of circular shape representing the possible coverage regions of the PAPs. A cell in which at least one GN is active is called an active cell, whereas a cell in which no GN requires PAP service is called an idle cell. When a GN from a previously idle cell requires the service of a PAP, the request is sent to the UTM through the corresponding GN-UTM link, and the UTM sends an activation command to an idle PAP through the PAP-UTM link. The radius of the coverage region (and hence the vertical coordinate of a PAP) is jointly determined by the uplink and downlink quality-of-service (QoS) constraints, and the horizontal-plane coordinates of the PAPs are taken as those of the centers of the cells.

1) Channel model: The path loss between a GN located at a distance r_i from the center of a cell and the corresponding PAP hovering at an altitude h_p is taken as the probabilistic mean of the line-of-sight (LoS) and non-LoS (N-LoS) path-loss components [6]:

L(r_i, h_p) = P_l × L_l + (1 − P_l) × L_nl
            = [(r_i² + h_p²)/g_0] × [η_nl² + P_l (η_l² − η_nl²)],      (2)

where the first factor is the free-space path loss (FSPL) and the bracketed factor is the mean additional path loss η_m(φ_i); g_0 represents the channel gain at a reference distance of 1 m; P_l = 1/{1 + a exp[−b(φ_i − a)]} is the LoS probability, with a and b being the environment-dependent parameters given in [4] and φ_i = (180/π) tan⁻¹(h_p/r_i); η_l and η_nl are the mean values of the additional path loss due to shadowing for LoS and N-LoS links, respectively.

2) PAP coverage region: A GN is considered to be in the coverage region of a PAP if it satisfies minimum downlink and uplink SNR values. The SNR received at the i-th GN from a PAP (downlink) is G_p P / [L(r_i, h_p) σ²], where P and σ² are the power allocated to a GN and the noise power, respectively. From (2), L(R_p^d, h_p) ≥ L(r_j, h_p) for all r_j ≤ R_p^d.
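Eq. (2) can be evaluated numerically; a minimal sketch, where the helper names are illustrative and the parameter values a = 4.83, b = 0.43, η_l = 1.01, η_nl = 11.22 are the suburban values quoted in Section IV:

```python
import math

def los_probability(phi_deg: float, a: float, b: float) -> float:
    """LoS probability P_l = 1 / (1 + a*exp(-b*(phi - a))),
    with the elevation angle phi in degrees."""
    return 1.0 / (1.0 + a * math.exp(-b * (phi_deg - a)))

def path_loss(r, h, g0, a, b, eta_l, eta_nl):
    """Eq. (2): free-space-like term (r^2 + h^2)/g0 multiplied by the
    mean additional path loss eta_m(phi)."""
    phi_deg = math.degrees(math.atan2(h, r))
    p_l = los_probability(phi_deg, a, b)
    eta_m = eta_nl ** 2 + p_l * (eta_l ** 2 - eta_nl ** 2)
    return (r ** 2 + h ** 2) / g0 * eta_m

# With the suburban parameters, path loss grows with ground distance at a
# fixed altitude, which is the monotonicity used for the edge-GN SNR.
pl_near = path_loss(100.0, 100.0, 1.42e-4, 4.83, 0.43, 1.01, 11.22)
pl_far = path_loss(200.0, 100.0, 1.42e-4, 4.83, 0.43, 1.01, 11.22)
```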
Hence, for a given downlink QoS constraint Γ_d, considering the SNR at an edge GN, the downlink coverage radius is determined as

R_p^d(Γ_d, θ) = sqrt[ G_p g_0 P sin²θ / (Γ_d σ² η_m(θ)) ].      (3)

For the uplink, each GN chooses its transmit power according to the uplink power control specified in the 3GPP technical report [7]. The transmit power of the i-th GN (in Watts) is then given by

P_i = P_a L(r_i, h_p),      (4)

where P_a is the target received power at the PAP. Accordingly, the received SNR from every GN at the PAP is the same: (G_p P_a)/σ². If Γ_u is the minimum uplink QoS threshold, then P_a = Γ_u σ²/G_p. Additionally, the average power transmitted by a GN must not exceed the maximum power P_max, which determines the uplink coverage radius R_p^u: from P_a L(R_p^u, h_p) ≤ P_max,

R_p^u(Γ_u, θ) = sqrt[ G_p g_0 P_max sin²θ / (Γ_u σ² η_m(θ)) ].      (5)

Therefore, the coverage radius of a PAP is R_p(Γ_d, Γ_u, θ) = min[ R_p^d(Γ_d, θ), R_p^u(Γ_u, θ) ], and the corresponding hovering height of the PAP is h_p = R_p(Γ_d, Γ_u, θ) tanθ.

III. COST VS. RELIABILITY

In this section, we analyze the trade-off between the investment cost and the reliability of the UAS. The investment cost is determined by the total number of PAPs required to guarantee a given reliability threshold to the GNs. The given circular geographical region needs to be covered by a set of PAP coverage regions of radius R_p(Γ_d, Γ_u, θ). If R > R_p(Γ_d, Γ_u, θ), multiple cells are needed to cover the given region.
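The coverage-radius expressions (3) and (5) can be checked numerically; a minimal sketch using the suburban parameter values quoted in Section IV (function and argument names are illustrative, not from the paper):

```python
import math

def eta_m(phi_deg, a, b, eta_l, eta_nl):
    """Mean additional path loss eta_m(phi) from Eq. (2), phi in degrees."""
    p_los = 1.0 / (1.0 + a * math.exp(-b * (phi_deg - a)))
    return eta_nl ** 2 + p_los * (eta_l ** 2 - eta_nl ** 2)

def coverage_radii(theta_deg, g0, p_dl, p_max, gam_d, gam_u, sigma2,
                   a, b, eta_l, eta_nl):
    """Eqs. (3) and (5): downlink and uplink radii; the PAP coverage
    radius is their minimum, and h_p = R_p * tan(theta) follows."""
    th = math.radians(theta_deg)
    gp = 2.2846 / th ** 2                  # in-beam antenna gain of Eq. (1)
    em = eta_m(theta_deg, a, b, eta_l, eta_nl)
    rd = math.sqrt(gp * g0 * p_dl * math.sin(th) ** 2 / (gam_d * sigma2 * em))
    ru = math.sqrt(gp * g0 * p_max * math.sin(th) ** 2 / (gam_u * sigma2 * em))
    return min(rd, ru), rd, ru

# Suburban parameters of Section IV: theta = 70 deg, g0 = 1.42e-4,
# P = 1 mW, P_max = 1 W, Gamma_d = Gamma_u = 100, sigma^2 = 1.25e-14 W.
R_p, R_d, R_u = coverage_radii(70, 1.42e-4, 1e-3, 1.0, 100.0, 100.0,
                               1.25e-14, 4.83, 0.43, 1.01, 11.22)
h_p = R_p * math.tan(math.radians(70))
# With P << P_max the downlink constraint binds, so R_p = R_p^d, a few
# hundred meters for these parameters.
```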
With the value of R_p(Γ_d, Γ_u, θ) determined in Section II, the problem of finding the minimum number of required cells and their locations is solved using a regular-hexagon-based 7-circle multi-level circle packing, as detailed in Algorithm 1: the maximum radius of a circle that can be covered by 7 equi-radius circles of radius R_p(Γ_d, Γ_u, θ) is R = 2 R_p(Γ_d, Γ_u, θ); one of these 7 smaller circles is concentric with the given circular region of radius R, and the centers of the remaining 6 circles lie on the vertices of a regular hexagon of side length √3 R_p(Γ_d, Γ_u, θ) [8]. If (x_{l−1}, y_{l−1}) is the center of the region to be covered, then the centers {(x_j, y_j)} of the 7 smaller circles of radius r_l that cover the given region have the coordinates

x_j = x_{l−1} + r_l √3 cos(2πj/6),  ∀ j ∈ {0, 1, ..., 5},      (6)
y_j = y_{l−1} + r_l √3 sin(2πj/6),  ∀ j ∈ {0, 1, ..., 5},      (7)
(x_6, y_6) = (x_{l−1}, y_{l−1}).      (8)

Algorithm 1 arranges the 7-circle packing pattern over multiple levels, as shown in Fig. 2: in the first level, the centers of 7 smaller circles of radius R/2 are determined using step 4 (the set of blue circles); if the radius of the smaller circles is less than or equal to the coverage radius of a PAP, the packing stops, otherwise the packing continues to cover each smaller circle obtained from the previous level (the set of green circles). This continues until the radius of the circle to be covered is less than or equal to R_p(Γ_d, Γ_u, θ). In step 10, the circles representing cells without any uncovered GNs (the dotted green circles) are discarded to obtain the minimum number of cells; step 11 outputs the total number of cells, n = |C_l|.

Remark 1: The number of levels required to cover a given geographical region of radius R using Algorithm 1 is l_max = ⌈log₂(R / R_p(Γ_d, Γ_u, θ))⌉.
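Eqs. (6)-(8) and the level structure of Algorithm 1 can be sketched as follows; this is a minimal illustration that omits the GN-based pruning of step 10, and the function names and example radii are assumptions:

```python
import math

def seven_circle_centers(xc, yc, r):
    """Eqs. (6)-(8): centers of 7 circles of radius r covering a circle
    of radius 2r centered at (xc, yc); six centers lie on a regular
    hexagon of circumradius sqrt(3)*r, the seventh is concentric."""
    centers = [(xc + r * math.sqrt(3) * math.cos(2 * math.pi * j / 6),
                yc + r * math.sqrt(3) * math.sin(2 * math.pi * j / 6))
               for j in range(6)]
    centers.append((xc, yc))
    return centers

def pack(R, Rp):
    """Multi-level packing of Algorithm 1 without step 10's pruning:
    halve the cell radius each level until it drops to at most Rp,
    which takes ceil(log2(R/Rp)) levels (Remark 1)."""
    centers, r = [(0.0, 0.0)], R
    while r > Rp:
        r /= 2.0
        centers = [c for (x, y) in centers
                   for c in seven_circle_centers(x, y, r)]
    return centers, r

# Example: R = 800 m and a PAP coverage radius of 388 m need two levels,
# giving 7^2 = 49 candidate cells of radius 200 m before pruning.
centers, r = pack(800.0, 388.0)
```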
A. Reliability Analysis and Optimal Number of PAPs

In this section, we find the optimal number of PAPs required to guarantee a given reliability threshold to GNs distributed over n cells. Let A(t) be the event that a PAP is available for serving the request from a new active cell at time t; the steady-state availability is then defined as

A = lim_{t→∞} P[A(t)],      (9)

where P[A(t)] is the probability of A(t); here, the unavailability (1 − A) is due only to busy PAPs, not to the wireless environment. We consider the n cells determined using Algorithm 1 as n sources that generate n independent Poisson inputs with common intensity λ. Also, the service times of the PAPs at the cells are considered to be independent, exponentially distributed random variables with parameter κ. The deployment of an additional PAP is required only when a new request originates from an idle cell. Additionally, the probability that more than one idle cell becomes active (or vice versa) at the same instant is assumed to be negligible. Let u be the number of PAPs available in the UAS to serve the n cells; the considered system can then be modelled as the birth-death Markov process shown in Fig. 3, with state space Z = {0, 1, 2, ..., u} representing the number of serving PAPs and transition rates

λ_j = (n − j)λ,  ∀ j ∈ {0, 1, 2, ..., (u − 1)},      (10)
κ_j = jκ,  ∀ j ∈ {1, 2, ..., u}.      (11)

Let {p_j, j ∈ Z} be the steady-state probability distribution of the considered Markov process. From Fig. 3, the state equations are [9]

κ p_1 = nλ p_0,      (12)
(n − j + 1)λ p_{j−1} + (j + 1)κ p_{j+1} = [jκ + (n − j)λ] p_j,
(n − u + 1)λ p_{u−1} = uκ p_u.      (13)

From (12), p_1 = nδ p_0, where δ = λ/κ. By successive substitution we get

p_j = C(n, j) δ^j p_0,  ∀ j ∈ Z,      (14)

where C(n, j) = n!/[j!(n − j)!].
For a finite R, the number of cells is finite; hence, the stationary state probabilities must satisfy the normalizing condition Σ_{j=0}^{u} p_j = 1:

p_0 + Σ_{j=1}^{u} C(n, j) δ^j p_0 = 1,      (15)
p_0 = 1 / [1 + Σ_{j=1}^{u} C(n, j) δ^j].      (16)

Let F be the event that an active cell does not find an available PAP; then, using (14) and (16),

P(F) = p_u = C(n, u) δ^u / [1 + Σ_{j=1}^{u} C(n, j) δ^j].      (17)

Hence, the steady-state availability of the system is

A = 1 − P(F) = 1 − C(n, u) δ^u / [1 + Σ_{j=1}^{u} C(n, j) δ^j].      (18)

Thus, for a given availability threshold ρ, the optimal number of PAPs required is obtained using (18) as

u_opt = min{ (u | A ≥ ρ), n }.      (19)

The degree of PAP utilization, defined as the ratio of the mean number of deployed PAPs to the total number of PAPs, is given by

η(u_opt) = Σ_{j=1}^{u_opt} j p_j / u_opt = Σ_{j=1}^{u_opt} j C(n, j) δ^j / { u_opt [1 + Σ_{j=1}^{u_opt} C(n, j) δ^j] }.      (20)

IV. NUMERICAL EVALUATION AND CONCLUSION

In this section, we present our main findings from the numerical evaluations. The simulation parameters are g_0 = 1.42 × 10⁻⁴, Γ_d = Γ_u = 100, σ² = 1.25 × 10⁻¹⁴ W, P = 1 mW, and P_max = 1 W. Fig. 4 shows the variation of the average number of cells required (rounded to the nearest integer) as a function of the number of GNs for suburban and urban deployment scenarios; the corresponding (a, b, η_l, η_nl, θ) parameters are (4.83, 0.43, 1.01, 11.22, 70°) and (9.6, 0.16, 1.12, 10, 52°), respectively [4]. The averaging is done over the minimum number of cells obtained using Algorithm 1 for 600 independent realizations of the GN placement. From the figure, for a given number of GNs, the number of cells required increases with the radius of both the suburban and the urban region, because the GNs are more widely distributed in a larger geographical region.
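The closed-form expressions (16)-(20) used in this evaluation can be computed directly; the sketch below is an illustrative stand-in for the authors' evaluation code (function names and the example values of n, δ, and ρ are assumptions):

```python
from math import comb

def availability(n: int, u: int, delta: float) -> float:
    """Eq. (18): steady-state availability with n cells, u PAPs,
    and traffic intensity delta = lambda / kappa."""
    z = 1.0 + sum(comb(n, j) * delta ** j for j in range(1, u + 1))
    return 1.0 - comb(n, u) * delta ** u / z

def u_opt(n: int, delta: float, rho: float) -> int:
    """Eq. (19): smallest u with A >= rho, capped at one PAP per cell."""
    for u in range(1, n + 1):
        if availability(n, u, delta) >= rho:
            return u
    return n

def utilization(n: int, u: int, delta: float) -> float:
    """Eq. (20): mean number of deployed PAPs over the fleet size u."""
    z = 1.0 + sum(comb(n, j) * delta ** j for j in range(1, u + 1))
    num = sum(j * comb(n, j) * delta ** j for j in range(1, u + 1))
    return num / (u * z)

# For n = 10 cells at low traffic intensity (delta = 0.1), far fewer
# PAPs than cells already meet a 0.99 availability threshold.
n_cells = 10
u_low = u_opt(n_cells, delta=0.1, rho=0.99)
```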
Additionally, for a given QoS threshold, the number of cells required to cover any N GNs in the urban region is higher than in the suburban region: this is because the radius of the coverage region of a PAP decreases as we move from a suburban to an urban region, due to the increase in the number and density of buildings (captured by η_m(θ)). Fig. 5 is plotted considering 10 cells (R = 600 m, N = 15, urban); it shows the variation of the availability and the PAP utilization factor, given by (18) and (20), as a function of the normalized investment cost, where the normalized investment cost is taken as the ratio of u_opt to the total number of cells. As seen in the figure, the steady-state availability increases as more PAPs are deployed (higher investment cost). However, for low traffic-intensity scenarios (δ = 0.1), a steady-state availability of 1 is achievable with a number of PAPs equal to half the number of cells. This benefit of investment-cost reduction diminishes with an increase in the traffic intensity: since new service requests then arrive faster than the service durations, the probability of finding an idle PAP to serve the request from a new active cell is very small, demanding a deployment scheme that assigns one dedicated PAP per cell. Fig. 6 is plotted considering a suburban deployment scenario with 25 GNs; it shows the variation of the average normalized cost incurred to guarantee a given availability threshold to the GNs for different traffic-intensity scenarios. As expected, the average number of cells required increases with R. Both the availability threshold and the traffic intensity determine the investment cost; the cost can be reduced either if the intensity of the traffic demand from the GNs is low or if the availability threshold is loose. Moreover, the cost reduction is more noticeable when the GNs are widely separated. Hence, from Fig. 5 and Fig.
6, the number of PAPs required should be economically determined by considering not only the area of the geographical region but also the traffic intensity and the distribution of the GNs in the considered region. For a given geographical region with a set of GNs, the minimum number of cells required is determined using Algorithm 1, and depending on the type of service required by the GNs and the traffic intensity, a suitable availability threshold can be selected, which in turn decides the optimal number of required PAPs and the corresponding investment cost. The determination of the optimal number of PAPs under a general distribution of the traffic intensity is left as future work.

Fig. 1: System setup.
Fig. 2: Position of cells for R = 800 m.
Fig. 3: State transition diagram of the UAS.
Fig. 4: Average number of cells vs. number of GNs.
Fig. 6: Normalized cost vs. radius of the region.

Algorithm 1: Optimal Number of Cells (excerpt). Input: R, R_p(Γ_d, Γ_u, θ), l = 0, C_l = {(0, 0)}. While true: l = l + 1; r_l = r_{l−1}/2 (with r_0 = R); for each c_{l−1} = (x_{l−1}, y_{l−1}) ∈ C_{l−1}, find the set C_l of centers of PAP coverage circles using (6)-(8); if r_l ≤ R_p(Γ_d, Γ_u, θ), stop. For given R, Γ_d, Γ_u, and θ, steps 2 to 9 take a running time of O(l_max) and at most (7^{l_max} − c)(7^{l_max} − c − 1) pair checks; hence the complexity of Algorithm 1 is at worst O(l_max + 7^{2 l_max}).

N. Babu and C. B. Papadias are with the Research, Technology and Innovation Network (RTIN), The American College of Greece, Greece (e-mail: [email protected], [email protected]). N. Babu, C. B. Papadias and P. Popovski are with the Department of Electronic Systems, Aalborg University, Denmark (e-mail: [email protected], [email protected], [email protected]).

REFERENCES
[1] S. Shakoor, Z. Kaleem, M. I. Baig, O. Chughtai, T. Q. Duong, and L. D.
ON THE STRUCTURE OF THE NEHARI SET ASSOCIATED TO A SCHRÖDINGER-POISSON SYSTEM WITH PRESCRIBED MASS: OLD AND NEW RESULTS

Gaetano Siciliano, Kaye Silva
In this paper we apply the fibering method of Pohozaev to a Schrödinger-Poisson system, with prescribed L² norm of the unknown, in the whole R³. The method makes clear the role played by the exponents p = 3, p = 8/3 and p = 10/3. Besides showing how old results can be obtained in a unified way, we also exhibit new ones.
DOI: 10.1007/s11856-023-2477-9
arXiv: 2007.01718v1 [math.AP], 3 Jul 2020
PDF: https://arxiv.org/pdf/2007.01718v1.pdf
1. Introduction

It is well-known that the following Schrödinger equation (where all the physical constants are normalised to unity)

(1.1)  i ∂_t ψ = −Δ_x ψ + q (|·|^{−1} ∗ |ψ|²) ψ − λ |ψ|^{8/3} ψ,  ψ : R³ × R → C,

has a relevant role in many physical models. Here i is the imaginary unit, Δ_x is the Laplacian with respect to the spatial variables, ∗ is the convolution in x, and q, λ are positive parameters. Two types of potential, different in nature, appear in the equation: the first is the Hartree (or Coulomb) potential V_H(·, t) = |·|^{−1} ∗ |ψ(·, t)|², which is nonlocal; the second is the Slater approximation of the exchange term, given by |ψ|^{8/3} ψ, which is local, although nonlinear. In this context q and λ are also called, respectively, the Poisson constant and the Slater constant. The nonlocal potential can be seen as "generated" by the wave function ψ itself, in virtue of the Poisson equation −Δ_x V_H = 4π |ψ|². A particular feature of (1.1) is that, due to the invariance under U(1) gauge transformations and under time translations, by the Noether theorem the quantities

M(ψ)(t) = ∫ |ψ(x, t)|² dx

and

E(ψ)(t) = (1/2) ∫ |∇_x ψ(x, t)|² dx + (q/4) ∫ (|·|^{−1} ∗ |ψ(·, t)|²) |ψ(x, t)|² dx − (3λ/8) ∫ |ψ(x, t)|^{8/3} dx

are conserved in time on the solutions ψ.
In physical terms these are called, respectively, the mass and the energy of the solution. Since the Hartree potential and the Slater term enter the energy functional with opposite signs, they are in competition, and a different behaviour of E is expected depending on the values of the parameters q and λ. For more physical details and the derivation of (1.1) see e.g. the seminal works [6,12,16,18,23] and the references therein. We mention that the above equation has also been derived in the framework of Abelian gauge theories in [4], where it is called a Schrödinger-Poisson system.

In this work we consider the problem of finding standing wave solutions

ψ(x, t) = u(x) e^{−iℓt},  u : R³ → R,  ℓ ∈ R,

to equation (1.1) under the mass constraint (as justified by the mass conservation law) and with the exponent 8/3 replaced by a more general p. More specifically, we consider the problem of finding ℓ ∈ R and u ∈ H¹(R³) satisfying

(1.2)  −Δu + q φ_u u − λ |u|^{p−2} u = ℓu in R³,  ∫ u² = r

(from now on all the integrals are over R³ and dx is omitted), where
• q, λ, r > 0 are given parameters,
• p ∈ (2, 6),
• φ_u ∈ D^{1,2}(R³) is the unique solution of the Poisson equation −Δφ = 4πu² in R³, which, for u ∈ H¹(R³), can be represented as φ_u = |·|^{−1} ∗ u².

In particular we are interested in finding ground state solutions u, namely solutions with minimal energy in the sense specified below. The usual way to attack the problem is by variational methods. Indeed, the weak solutions of equation (1.2) are easily seen to be critical points of the energy functional

E(u) := E_{q,λ}(u) = (1/2) ∫|∇u|² + (q/4) ∫φ_u u² − (λ/p) ∫|u|^p,

constrained to the L²(R³) sphere

S_r = { u ∈ H¹(R³) : ∫ u² = r },

as follows from the Lagrange multiplier rule; in this case ℓ ∈ R is the Lagrange multiplier associated with the critical point. This problem thus fits into the question of finding critical points of the energy on the mass constraint (see [5]).
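To see the competition between the three terms of E concretely, the functional can be evaluated in closed form on a Gaussian trial family. The specific trial function, the widths and the choice q = λ = 1 below are illustrative assumptions, not taken from the paper; the closed forms are standard Gaussian integrals, cross-checked here by radial quadrature.

```python
import numpy as np
from scipy.integrate import quad

def terms(m, s, p):
    """Closed-form integrals for the Gaussian trial function
    u(x) = sqrt(m) * (2*pi*s**2)**(-3/4) * exp(-|x|^2 / (4*s**2)),
    which satisfies the mass constraint  int u^2 = m."""
    kin = 3.0*m/(4.0*s**2)                                         # int |grad u|^2
    cou = m**2/(s*np.sqrt(np.pi))                                  # int phi_u u^2
    lp = m**(p/2) * (2*np.pi*s**2)**(-0.75*(p - 2)) * (2.0/p)**1.5  # int |u|^p
    return kin, cou, lp

def E(m, s, p, q=1.0, lam=1.0):
    kin, cou, lp = terms(m, s, p)
    return 0.5*kin + 0.25*q*cou - (lam/p)*lp

# Cross-check two of the closed forms by radial quadrature.
m, s, p = 2.0, 1.3, 2.8
u = lambda x: np.sqrt(m)*(2*np.pi*s**2)**(-0.75)*np.exp(-x**2/(4*s**2))
du = lambda x: -x/(2*s**2)*u(x)
kin_num = 4*np.pi*quad(lambda x: x**2*du(x)**2, 0, np.inf)[0]
lp_num = 4*np.pi*quad(lambda x: x**2*u(x)**p, 0, np.inf)[0]
```

Under the dilation s → s/t the three terms scale like t², t and t^{3(p−2)/2}, which is exactly the structure of the fiber map ϕ_{r,u} introduced later in Section 4.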
An interesting problem is the search for ground state solutions, namely the minima of E on S_r, since they give rise to stable standing waves for the evolution problem (1.1). The problem is not trivial, since the behaviour of E depends on q, λ and p, but the value of r also plays a major role. The search for minima was addressed by Lions in the celebrated papers [15], where he studied the problem from a mathematical point of view and established, roughly speaking, that the existence of minimisers is equivalent to the strict sub-additivity inequality

(1.3)  inf_{S_r} E < inf_{S_s} E + inf_{S_{r−s}} E,  0 < s < r.

In turn, this is equivalent to showing that dichotomy does not occur when one tries to apply the concentration-compactness principle of Lions (vanishing is ruled out since inf_{S_r} E < 0). As shown in recent papers, inequality (1.3) does hold in certain intervals that depend on the value of p; see e.g. Bellazzini and Siciliano [2,3], Catto and Lions [8], Sanchez and Soler [20], Jeanjean and Luo [11], Catto et al. [7] and Colin and Watanabe [9]. We point out that in the last decades equations like (1.2), even without the mass constraint, have been extensively studied due to the mathematical challenges raised by the nonlocal term φ_u and its competition with the local nonlinearity. The aim of this paper is to establish, by using the fibration method of Pohozaev developed in [19] and the notion of extremal values introduced by Il'yasov [10], a general framework which permits us to search for global minimizers of E over components of a suitable Nehari type set. These components are shown to be differentiable manifolds and natural constraints for the energy functional. The method proposed in this work clarifies the relation between minimizers of E restricted to S_r and the parameters q, λ, p and r. In particular, it sheds some light on the role of the special exponents p appearing in the Schrödinger-Poisson system: p = 8/3, p = 3, p = 10/3.
Moreover, it relates the strict sub-additivity inequality to topological properties of certain natural curves that cross the Nehari manifolds as r varies. Indeed, besides recovering known results, we also obtain new ones and interesting estimates. We think that this fibering approach can be used to solve other problems involving a suitable constraint (different from the L²-norm) as well. To conclude this section, we point out that the fibration method, together with the notion of extremal values (which identifies regions of parameters in which the Nehari manifold method can be applied to prove existence of solutions), has recently been used successfully for other types of equations as well; see [21,22,24].

2. Statements of the results

In this paper we obtain four types of results, all based on the introduction of a suitable Nehari-type set for the functional E restricted to S_r and on its properties. Before stating our results we introduce some notation. First of all, by standard methods it is easy to see (see Proposition 4.2) that every critical point of E restricted to S_r belongs to the Nehari-type set

N_r := N_{r,q,λ} = { u ∈ S_r : ∫|∇u|² + (q/4) ∫φ_u u² − (3(p−2)/(2p)) λ ∫|u|^p = 0 }.

Indeed, this set has already been introduced in [1,11]. However, by means of the fibering method we are able to decompose N_r into the subsets

N_r^+ = { u ∈ N_r : ∫|∇u|² − (3(p−2)(3p−8)/(4p)) λ ∫|u|^p > 0 },
N_r^0 = { u ∈ N_r : ∫|∇u|² − (3(p−2)(3p−8)/(4p)) λ ∫|u|^p = 0 },
N_r^− = { u ∈ N_r : ∫|∇u|² − (3(p−2)(3p−8)/(4p)) λ ∫|u|^p < 0 },

so that N_r = N_r^+ ∪ N_r^0 ∪ N_r^−. Even more, we show that, whenever nonempty, N_r^+ and N_r^− are differentiable manifolds of codimension 2 in H¹(R³) and natural constraints for the functional E restricted to S_r (see Theorem 4.9). One of the main ingredients in our proofs is the analysis of the functional

R_p(u) = (∫|∇u|²)^{(3p−8)/(4(p−3))} (q ∫φ_u u²)^{(10−3p)/(4(p−3))} / (λ ∫|u|^p)^{1/(2(p−3))},  u ∈ S_1,  p ≠ 3.
This functional is obtained with the help of Pohozaev's fibration method and is the so-called nonlinear Rayleigh quotient introduced by Il'yasov in [10]. Its topological properties are related to the existence and non-existence of solutions for our problem; although not in this form, this functional was already used in [7]. See also [9], where the nonlinear Rayleigh quotient was found by fixing r > 0 and considering q as a parameter; this, however, differs from our approach, whose main goal is to analyse the topological properties of the Nehari set also as r varies.

(I) Our first results concern new inequalities which we were not able to find in the literature. They are obtained by exploring the functional R_p.

Theorem 2.1. For each p ∈ [10/3, 6) there exists a constant C_{q,λ,p} > 0 such that

λ ∫|u|^p ≤ C_{q,λ,p} (∫|∇u|²)^{(3p−8)/2} (∫u²)^{2(p−3)} (q ∫φ_u u²)^{(10−3p)/2},  for all u ∈ H¹(R³) \ {0}.

We remark that similar inequalities are known in the literature; see Catto et al. [7] for the case p ∈ [8/3, 10/3]. We also have the following inequality, whose proof is more involved than that of the previous theorem.

Theorem 2.2. For each p ∈ (2, 3) there exists a constant C_{q,λ,p} > 0 such that

λ ∫|u|^p ≥ C_{q,λ,p} (∫u²)^{2(p−3)} (q ∫φ_u u²)^{(10−3p)/2} (∫|∇u|²)^{(3p−8)/2},  for all u ∈ H¹(R³) \ {0}.

(II) The second type of results deals with the structure of N_r and its consequences. The situation differs in the cases p ≠ 3 and p = 3, and related existence and non-existence results are obtained.

The case p ≠ 3. For each p ∈ (2, 6) \ {3}, define the infimum of E over a subset of the Nehari set by

I_r := I_{r,q,λ} = inf{ E(u) : u ∈ N_r^+ ∪ N_r^0 }.

In particular, inf_{S_r} E ≤ I_r. With our approach we are able to show the following.

Theorem 2.3. There holds:
i) If p ∈ (2, 8/3), then N_r = N_r^+ ≠ ∅ and −∞ < I_r = inf_{S_r} E < 0 for all r > 0.
ii) If p ∈ (10/3, 6), then N_r = N_r^− ≠ ∅ and I_r = −∞ for all r > 0.
iii) If p = 8/3, then N_r = N_r^+ ≠ ∅ and −∞ < I_r = inf_{S_r} E < 0 for all r > 0.
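Both sides of the inequality in Theorem 2.1 are invariant under the rescalings u ↦ su and u ↦ t^{3/2} u(t·), which fixes the exponents uniquely. As a sanity check on this bookkeeping, the quotient of the two sides can be evaluated on the Gaussian family (closed-form integrals; an illustrative choice with q = λ = 1): it must be a constant, independent of mass and width.

```python
import numpy as np

def integrals(m, s, p):
    """int |grad u|^2, int phi_u u^2, int |u|^p for the Gaussian
    u(x) = sqrt(m)*(2*pi*s**2)**(-3/4)*exp(-|x|^2/(4*s**2)), with int u^2 = m."""
    A = 3.0*m/(4.0*s**2)
    B = m**2/(s*np.sqrt(np.pi))
    C = m**(p/2) * (2*np.pi*s**2)**(-0.75*(p - 2)) * (2.0/p)**1.5
    return A, B, C

def quotient(m, s, p):
    # (lhs of Theorem 2.1) / (rhs without the constant), with q = lam = 1
    A, B, C = integrals(m, s, p)
    return C / (A**((3*p - 8)/2) * m**(2*(p - 3)) * B**((10 - 3*p)/2))

p = 3.5
vals = [quotient(m, s, p) for m in (0.5, 1.0, 4.0) for s in (0.3, 1.0, 2.5)]
```

Since the Gaussian family is generated from a single profile by exactly these two rescalings, any deviation of `vals` from a constant would signal a wrong exponent.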
iv) If p = 10/3, then there exists a constant K_GN > 0 such that N_r ≠ ∅ if, and only if, (5/(3 K_GN)) (1/λ) < r^{2/3}. In case N_r ≠ ∅ we have N_r = N_r^− and I_r = −∞.
v) If p ∈ (8/3, 3), then N_r^+, N_r^0, N_r^− are non-empty and I_r < 0 for all r > 0.
vi) If p ∈ (3, 10/3), then there exist 0 < r* < r*_0 such that
1) N_r^+, N_r^− are non-empty if, and only if, r > r*;
2) N_r^0 ≠ ∅ if, and only if, r ≥ r*.
Moreover, if r > r*_0, then I_r = inf_{S_r} E < 0, while if r ∈ [r*, r*_0], then I_r ≥ 0.

Theorem 2.3 (or parts of it) can be found in most of the works cited so far; in particular, we refer the reader to [7,11], where some of the computations appear explicitly. Our contribution here is to show how, within a general framework, all these results can be connected with the partition of the Nehari set N_r into N_r^+, N_r^0, N_r^−. Moreover, we give a characterisation of the quantities r*, r*_0 in terms of R_p, namely

r* = (4(10 − 3p)/(3p − 8))^{(3p−10)/(4(p−3))} (4p/(3(p − 2)(3p − 8)))^{1/(2(p−3))} inf_{w ∈ S_1} R_p(w),

r*_0 = (2(10 − 3p)/(3p − 8))^{(3p−10)/(4(p−3))} (p/(3p − 8))^{1/(2(p−3))} inf_{w ∈ S_1} R_p(w)

(see (5.1)), and it will be evident that they are related to geometrical properties of the Nehari set (see Proposition 4.5). Note that (as already known) for p ∈ [10/3, 6) we have I_r = −∞, which is equivalent to N_r = N_r^− and also suggests a mountain pass geometry (see Bellazzini et al. [1]). Observe that items iv) and vi) of Theorem 2.3 also give non-existence results for critical points of E over S_r, depending on r. Indeed, since every critical point of E constrained to S_r belongs to N_r, if N_r is empty then there are no critical points at all; therefore, as an immediate consequence of Theorem 2.3, we infer

Corollary 1. The functional E constrained to S_r has no critical points if:
i) p = 10/3 and (5/(3 K_GN)) (1/λ) ≥ r^{2/3};
ii) p ∈ (3, 10/3) and r < r*.
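As a consistency check on the displayed formulas for r* and r*_0 (as transcribed here), note that the common factor inf R_p cancels in the ratio r*_0 / r*, which must exceed 1 on (3, 10/3) for the ordering r* < r*_0 claimed above to hold. A quick numerical sweep, under that transcription:

```python
import numpy as np

def ratio(p):
    """r*_0 / r* from the two displayed prefactors (inf R_p cancels)."""
    # first bracket: (2(10-3p)/(3p-8)) / (4(10-3p)/(3p-8)) = 1/2
    f1 = 0.5**((3*p - 10)/(4*(p - 3)))
    # second bracket: (p/(3p-8)) / (4p/(3(p-2)(3p-8))) = 3(p-2)/4
    f2 = (3*(p - 2)/4)**(1/(2*(p - 3)))
    return f1*f2

ps = np.linspace(3.01, 3.33, 33)
ok = all(ratio(p) > 1 for p in ps)
```

The ratio tends to 1 as p → 10/3 (both exponents degenerate) and stays above 1 inside the interval, consistent with 0 < r* < r*_0 in Theorem 2.3 vi).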
Note that the results in Theorem 2.3 and Corollary 1 are independent of q > 0, and the only case in which λ plays a role is p = 10/3.

The case p = 3. Here the situation changes drastically, in the sense that r no longer plays a role and the properties of the Nehari sets depend on q and λ. To make this dependence clear, we use the notation N_{q,λ}, I_{q,λ}, . . . instead of the previous N_r, I_r, . . .. We prove the following.

Theorem 2.4. Let p = 3 and r > 0. For each fixed q > 0, there exist positive constants λ*_q < λ*_{0,q} such that
i) N_{q,λ}^+, N_{q,λ}^− are non-empty if, and only if, λ > λ*_q. Moreover, if λ > λ*_q, then N_{q,λ}^0 = ∅.
ii) If λ > λ*_{0,q}, then I_{q,λ} = inf_{S_r} E < 0, while if λ ∈ (0, λ*_{0,q}), then I_{q,λ} ≥ 0.

Similarly to the quantities r*_0, r*, the quantities λ*_{0,q}, λ*_q have a geometrical interpretation and are given by

λ*_{0,q} = (9/2)^{1/2} q^{1/2} inf_{w ∈ S_1} (∫|∇w|² ∫φ_w w²)^{1/2} / ∫|w|³,  λ*_q = 2 q^{1/2} inf_{w ∈ S_1} (∫|∇w|² ∫φ_w w²)^{1/2} / ∫|w|³.

We observe that, unlike in item vi) of Theorem 2.3, in Theorem 2.4 we were not able to describe the behaviour of N_{q,λ*_{0,q}}. This is due to the fact that u ∈ N_{q,λ*_{0,q}} if, and only if, u is a minimiser of the quotient (∫|∇w|² ∫φ_w w²)^{1/2} / ∫|w|³ on S_1. That this quotient is bounded away from zero is due to Lions [14]; however, it is an open problem whether it admits a minimiser. As was pointed out in [7], the minimisers of that quotient are also (up to a constant) global minimisers of I_{q,λ*_{0,q}}, and I_{q,λ*_{0,q}} = 0. As before, from Theorem 2.4 we can deduce a non-existence result.

Corollary 2. Let p = 3 and r, q > 0. The functional E constrained to S_r has no critical points if λ ∈ (0, λ*_q).

(III) Our third type of result complements [3, Theorem 4.1]. Indeed, in [3] (see also [7]) the authors proved, among other things, that for small r there exist minimisers for E over S_r.
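Two small checks on the p = 3 thresholds: the quotient defining λ*_q and λ*_{0,q} is dilation invariant (p = 3 is exactly the exponent for which this happens), which can be seen on the Gaussian family (an illustrative choice), and the two thresholds differ by the universal factor (9/2)^{1/2}/2 > 1, so λ*_q < λ*_{0,q} automatically.

```python
import numpy as np

def quotient(s):
    """( int |grad w|^2 * int phi_w w^2 )^(1/2) / int |w|^3 for the
    unit-mass Gaussian  w(x) = (2*pi*s**2)**(-3/4)*exp(-|x|^2/(4*s**2))."""
    A = 3.0/(4.0*s**2)                               # int |grad w|^2
    B = 1.0/(s*np.sqrt(np.pi))                       # int phi_w w^2
    C = (2*np.pi*s**2)**(-0.75) * (2.0/3.0)**1.5     # int |w|^3  (p = 3)
    return np.sqrt(A*B)/C

vals = [quotient(s) for s in (0.2, 1.0, 7.0)]        # dilation invariance
factor = np.sqrt(9/2)/2                              # lambda*_{0,q} / lambda*_q
```

The invariance is why, for p = 3, the mass r drops out of the picture and only q and λ matter.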
With our approach we are able to give a quantitative estimate, in terms of R_p, of the "smallness" of r which guarantees the existence of minimisers.

(IV) The fourth type of result deals with the existence of global minimisers with positive energy when p ∈ (3, 10/3), a case which has not been treated in the literature. Here the inequality inf_{S_r} E < 0 is no longer true for r ∈ [r*, r*_0]. Moreover, inf_{S_r} E = 0 if 0 < r ≤ r*_0 and it is not achieved. We extend these results by showing the existence of local minimisers for E on S_r with positive energy when r belongs to a neighbourhood of r*_0. Unfortunately, we are not able to cover the whole range (3, 10/3). Our result is

Theorem 2.6. There exists p_0 ∈ (3, 10/3) such that if p ∈ (p_0, 10/3), then
i) the function [r*, ∞) ∋ r ↦ I_r is decreasing, I_{r*_0} = 0 and I_r > 0 for r ∈ [r*, r*_0);
ii) for each r ∈ [r*, r*_0) there exists u ∈ N_r^+ ∪ N_r^0 such that I_r = E(u);
iii) there exists ε > 0 such that if r ∈ (r*_0 − ε, r*_0), then there exists u ∈ N_r^+ such that I_r = E(u).

Indeed, we find explicitly the number p_0 = (73 + √145)/27, which is new in the literature. We believe it is an interesting problem to study the case p ∈ (3, p_0].

Organisation of the paper. The paper is organised as follows. In Section 3 we study the Rayleigh quotient R_p in depth; indeed, most of the results are based on its properties. We then give the proofs of Theorem 2.1 and Theorem 2.2. In Section 4 we introduce the set N_r and describe, via the fibering method, its subsets N_r^+, N_r^−, N_r^0, on which we study the energy functional E. In particular we show that N_r^+, N_r^− are differentiable manifolds and natural constraints for E (see Theorem 4.9). In Section 5 we study these sets in depth, depending on the parameters q, λ, p, r; this analysis allows us to prove our second type of results, namely Theorem 2.3 and Theorem 2.4.
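The explicit threshold p_0 = (73 + √145)/27 appearing in Theorem 2.6 can be located numerically; it sits strictly inside the interval (3, 10/3):

```python
import math

p0 = (73 + math.sqrt(145))/27   # the number p0 from Theorem 2.6
assert 3 < p0 < 10/3            # p0 ≈ 3.1497 lies in (3, 10/3)
```

So the uncovered range (3, p_0] mentioned above is roughly (3, 3.15].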
In Section 6 we establish the sub-additivity condition for I_r, which serves as a prerequisite for the subsequent section. In Section 7 we prove Theorem 2.5 and Theorem 2.6. Section 8 is devoted to showing how our methods recover the results in [1]. In the final Section 9 we give new estimates concerning I_{r_1} and I_{r_2}, for r_1 < r_2, obtained by means of the fibering approach.

Notation. Throughout the paper we denote by ‖·‖_p the L^p-norm in R³, and we use o_n(1) to denote a vanishing sequence. Given a function u and t > 0, we set u_t(x) = t^{3/2} u(tx); note that ∫u² = ∫u_t².

3. The nonlinear Rayleigh quotient R_p

Let us start with a simple and general result, whose proof is straightforward and hence omitted.

Proposition 3.1. Suppose that b ≠ 0, ce − bf ≠ 0, (bd − ae)/(ce − bf) > 0, (af − cd)/(ce − bf) > 0, A, B, C > 0 and p ∈ (2, 6) \ {3}. Then the system

(3.1)  a t A + b r B + c r^{p/2−1} t^{(3p−8)/2} C = 0,
       d t A + e r B + f r^{p/2−1} t^{(3p−8)/2} C = 0,

admits a unique solution (r, t) with r, t > 0. Moreover, explicitly,

r = ((bd − ae)/(ce − bf))^{(3p−8)/(4(p−3))} ((af − cd)/(bd − ae))^{(3p−10)/(4(p−3))} A^{(3p−8)/(4(p−3))} B^{(10−3p)/(4(p−3))} C^{−1/(2(p−3))}.

Recall the next two results.

Lemma 3.2 (Catto et al. [7]). For each p ∈ (2, 6) and r > 0, there exist a sequence of functions {u_n} ⊂ S_r and positive constants C_1, C_2, C_3 satisfying

∫|u_n|^p = C_1,  ∫|∇u_n|² = C_2 n^{2/3},  ∫φ_{u_n} u_n² ≤ C_3 n^{−2/3},  for all n ∈ N.

Lemma 3.3 (Catto et al. [7]). Assume that p ∈ [8/3, 3]. Then there exists a constant C > 0 such that

(3.2)  ∫|u|^p ≤ C (∫u²)^{3−p} (∫φ_u u²)^{(p−2)/2} (∫|∇u|²)^{(p−2)/2},  for all u ∈ H¹(R³).

If p ∈ [3, 10/3], then there exists a constant C > 0 such that

(3.3)  ∫|u|^p ≤ C (∫u²)^{2(p−3)} (∫φ_u u²)^{(10−3p)/2} (∫|∇u|²)^{(3p−8)/2},  for all u ∈ H¹(R³).

Let us define, for p ∈ (2, 6) \ {3}, the quotient

(3.4)  R_p(u) = (∫|∇u|²)^{(3p−8)/(4(p−3))} (q ∫φ_u u²)^{(10−3p)/(4(p−3))} / (λ ∫|u|^p)^{1/(2(p−3))},  u ∈ S_1.

Note that, in particular,

(3.5)  R_{8/3}(u) = (λ ∫|u|^{8/3})^{3/2} / (q ∫φ_u u²)^{3/2}.
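Proposition 3.1 can be instanced numerically. In this sketch the t-exponent is taken as t^{(3p−8)/2} (the reading consistent with the (p−3)-denominators in the displayed solution), and all coefficient values are illustrative choices satisfying the sign conditions; the linear part is first solved for the ratios X/Z and Y/Z, and the remaining scalar equation then fixes the solution.

```python
import numpy as np

a, b, c, d, e, f = 1.0, 2.0, -3.0, 2.0, 1.0, -2.0   # sample coefficients
A, B, C, p = 0.8, 1.1, 0.7, 2.5                     # sample data, p in (2,6)\{3}
assert (b*d - a*e)/(c*e - b*f) > 0 and (a*f - c*d)/(c*e - b*f) > 0

# With X = t*A, Y = r*B, Z = r**(p/2-1)*t**((3p-8)/2)*C the system is linear
# in (X, Y, Z), which gives the ratios
alpha = (c*e - b*f)/(b*d - a*e)                     # X/Z
beta = (a*f - c*d)/(b*d - a*e)                      # Y/Z
# and the definition of Z closes the system: Z**(6-2p) = (beta/B)^((p-2)/2)*(alpha/A)^((3p-8)/2)*C
Z = ((beta/B)**((p - 2)/2) * (alpha/A)**((3*p - 8)/2) * C)**(1.0/(6 - 2*p))
t = alpha*Z/A
r = beta*Z/B

res1 = a*t*A + b*r*B + c*r**(p/2 - 1)*t**((3*p - 8)/2)*C
res2 = d*t*A + e*r*B + f*r**(p/2 - 1)*t**((3*p - 8)/2)*C
```

Both residuals vanish to machine precision, and r, t come out positive, as the proposition asserts.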
The next result is just Lemma 3.2 and the inequality (3.3) of Lemma 3.3 rewritten in terms of our functional R p . Proposition 3.4. The functional R p defined is continuous. Moreover: i) if p ∈ (2, 3), then the functional R p is unbounded from above, ii) if p ∈ (3, 10/3], then the functional R p is bounded away from 0. Proof. The continuity is obvious. To prove i), if {u n } is the sequence given in Lemma 3.2, since p < 3, it follows that R p (u n ) ≥ Cn where C is some positive constant. The proof of ii) is a direct consequence of (3.3) of Lemma 3.3. For future reference we consider the system (3.6)        rt 2 |∇u| 2 + r 2 t 4 q φ u u 2 − 3(p − 2) 2p r p/2 t 3(p−2) 2 λ |u| p = 0, q 2 r 2 t φ u u 2 − p − 2 p r p/2 t 3(p−2) 2 λ |u| p = 0, where u ∈ S 1 and r, t > 0. From Proposition 3.1 and recalling R p defined in (3.4) and (3.5), we have that if p ∈ (2, 6) \ {3}, then the system has a unique solution ( r(u), t(u)) which is given by • if p ∈ (2, 3) with p = 8/3: (3.7) r(u) = 1 2 2(p − 2) p 2 3p−8 3p−8 4(3−p) R p (u), t(u) =     p 2(p − 2) r(u) 4−p 2 q λ φ u u 2 |u| p     2 3p−8 ; • if p = 8/3: (3.8) r(u) = 1 2 3 2 R 8/3 (u), t(u) = 1 2 5 2 λ |u| p 3 2 q φ u u 2 1 2 |∇u| 2 . Remark 1. Note that r(u) as a function of p is continuous in p = 8/3 since lim p→8/3 1 2 2(p − 2) p 2 3p−8 3p−8 4(3−p) = 1 2 3 2 . We recall the following Hardy-Littlewood-Sobolev inequality (see [13,Theorem 4.3]): Theorem 3.5. Assume that 1 < a, b < ∞ satisfies 1 a + 1 b = 5 3 . Then there exists a constant H a,b > 0 sucht that R 3 ×R 3 f (x)g(y) |x − y| dxdy ≤ H a,b f a g b , ∀f ∈ L a (R 3 ), g ∈ L b (R 3 ). Then we can prove the following result. Proposition 3.6. For each p ∈ [10/3, 6), there exists a constant C q,λ,p > 0 such that R p (u) ≥ C q,λ,p , ∀u ∈ S 1 , Proof. We have by definition R p (u) = |∇u| 2 3p−8 4(p−3) λ |u| p 1 2(p−3) q φ u u 2 3p−10 4(p−3) , u ∈ S 1 . 
We can assume that ∇u 2 = 1 and hence the conclusion is a simple consequence of Sobolev embbedings and the Hardy-Littlewood-Sobolev inequality. More involved is the proof of the next result. Proposition 3.7. For each p ∈ (2, 3), there exists a constant C q,λ,p > 0 such that R p (w) ≥ C q,λ,p , ∀w ∈ S 1 , Proof. First note that R p (w t ) = R p (w) and |∇w t | 2 = t 2 |∇w| 2 ∀w ∈ S 1 , t > 0. From this it follows that (3.9) inf w∈S 1 R p (w) = inf R p (w) : w ∈ S 1 , |∇w| 2 = 1 . Indeed, for any ε > 0 there exists u ∈ S 1 such that R p (u) ≤ inf S 1 R p + ε. Then, if we consider u t * , where t * |∇u| 2 = 1, we have that u t * ∈ S 1 and |∇u t * | 2 = 1. Consequently inf R p (w) : w ∈ S 1 , |∇w| 2 = 1 ≤ R p (u t * ) = R p (u) ≤ inf S 1 R p + ε and (3.9) follows. The approach to prove the theorem will be different according to the values of p. Case 1: p ∈ (8/3, 3). Assume on the contrary that there exists a sequence {w n } ⊂ S 1 such that R p (w n ) → 0 as n → +∞. Let r(w n ) and t(w n ) the solutions of system (3.6), see (3.7), and set for brevity r n = r(w n ) and u n = r 1/2 n w t(wn) n . It is easy to see that, for all n ∈ N, (3.10)        |∇u n | 2 + q 4 φ un u 2 n − 3(p − 2) 2p λ |u n | p = 0, q 2 φ un u 2 n − p − 2 p λ |u n | p = 0. Now observe that (3.10) is the same as [3, equation (4.9)] and therefore E(u n ) = 3 − p 2 − p |∇u n | 2 < 0, ∀n ∈ N, which implies that I rn ≤ E(u n ) < 0 and hence, since I rn → 0 as n → +∞, we obtain that E(u n ) → 0 as n → ∞. The last convergence implies [3, formula (4.10)]. Therefore, by following the proof of Theorem 4.1., step 5, case (e) of [3], we reach a contradiction and hence R p is bounded from below over the sphere S 1 . To treat the other cases of p, we observe that, since p < 3, the Lemma is proved once we show that R 2(p−3) p is bounded above if w 2 = ∇w 2 = 1. Case 2: p ∈ (12/5, 8/3]. 
By choosing a = p/2 and b = 3p/(5p − 6) from Theorem 3.5 we obtain φ w w 2 ≤ H a,b w 2 p/2 w 2 b = H a,b w 2 p w 2 2b , ∀w ∈ H 1 (R 3 ). Since 2 < 2b < p, from the interpolation inequality we have that w 2b ≤ w 2(3−p) 3(p−2) p w p 3(p−2) 2 , and hence φ w w 2 ≤ H a,b w 2p 3(p−2) p w 2p 3(p−2) 2 . Consequently, for a suitable constant C p,q > 0 depending only on p and q, we get R p (w) 2(p−3) = |∇w| 2 3p−8 2 q φ w w 2 10−3p 2 λ |w| p ≤ C q,p λ |w| p 10−3p 3(p−2) |w| p = C q,p λ |w| p 2(8−3p) 3(p−2) ≤ 2 C q,p λ , ∀w ∈ S 1 , |∇w| 2 = 1. Case 3: p ∈ (2, 12/5]. We choose a = b = 6/5 in Theorem 3.5 and use the interpolation inequality to conclude that φ w w 2 ≤ H 6/5,6/5 w 4 12/5 ≤ H 6/5,6/5 w 6p 6−p p w 2(12−5p) 6−p 6 , ∀w ∈ H 1 (R 3 ). From the Sobolev inequality we obtain that, for a suitable constant S > 0, depending only on p, we have φ w w 2 ≤ H 6/5,6/5 S w 6p 6−p p , ∀w ∈ S 1 , |∇w| 2 = 1. Consequently, for a suitable constant C p,q > 0 depending only on p and q, we have R 2(p−3) p (w) = |∇w| 2 3p−8 2 q φ w w 2 10−3p 2 λ |w| p ≤ C q,p λ |w| p 3(10−3p) 6−p |w| p = C q,p λ |w| p 8(3−p) 6−p ≤ 2 C q,p λ , ∀w ∈ S 1 , |∇w| 2 = 1, and hence the proof is concluded. As a consequence of the previous proposition, we have the new inequalities stated in Theorem 2.1 and Theorem 2.2. 3.1. Proof of Theorem 2.1 and Theorem 2.2. They follows respectively by Proposition 3.6 and Proposition 3.7 with a simple L 2 −normalization. Natural constraints for E In this Section we prove the existence of a natural constraint for the energy functional E restricted to S r . Although such a constraint appeared already in [1,11], the proof that it is a manifold and a natural constraint seems to be new. We start with a well know Pohozaev identity, which will be quite useful: for a, b, c, d ∈ R consider the equation (4.1) −a∆u + bu + cφ u u + d|u| p−2 u = 0, u ∈ H 1 (R 3 ), and define P : H 1 (R 3 ) → R by P (u) = a 2 |∇u| 2 + 3b 2 u 2 + 5c 4 φ u u 2 + 3d p |u| p . 
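Anticipating how this identity is used below (it is the computation behind Proposition 4.2), the Lagrange multiplier can be eliminated symbolically between the equation tested against u and the Pohozaev identity; A, B, C abbreviate ∫|∇u|², q∫φ_u u² and λ∫|u|^p, and the specialisation is a = 1, b = −µ, c = q, d = −λ in (4.1).

```python
from sympy import symbols, Rational, simplify, solve

A, B, C, r, p = symbols('A B C r p', positive=True)
mu = symbols('mu')

# Testing -Delta u + q*phi_u*u - lam*|u|^(p-2)*u = mu*u against u:
lagrange = A + B - C - mu*r                 # A + B - C = mu * int u^2
# Pohozaev identity P(u) = 0 for the same equation:
pohozaev = A/2 - Rational(3, 2)*mu*r + Rational(5, 4)*B - 3*C/p

mu_val = solve(lagrange, mu)[0]
combined = pohozaev.subs(mu, mu_val)
nehari = A + B/4 - 3*(p - 2)/(2*p)*C        # the identity defining N_r
assert simplify(combined + nehari) == 0     # eliminating mu yields exactly -N_r identity
```

So the two identities together force every constrained critical point onto N_r, with no information about µ needed.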
Then we have the following Pohozaev's identity, see [17, Theorem 2.2]: Proposition 4.1. If u satisfies (4.1), then P (u) = 0. As a consequence we get the next result which is already known (see [11, Lemma 2.1]) but that we prove for completeness. Proposition 4.2. Assume that u ∈ S r is a critical point of E restricted to S r , then |∇u| 2 + q 4 φ u u 2 − 3(p − 2) 2p λ |u| p = 0. Proof. Indeed, from the Lagrange multiplier rule there exist µ ∈ R such that E ′ (u) = µu, that is, u is a solution of −∆u + qφ u u − λ|u| p−2 u = µu. In particular u satisfies |∇u| 2 + q φ u u 2 − λ |u| p = µ u 2 and by Proposition 4.1 also 1 2 |∇u| 2 − 3 2 µ u 2 + 5 4 q φ u u 2 − 3λ p |u| p = 0 which together give the desired equality. Proposition 4.2 justifies the introduction of the set (4.2) N r,q,λ := u ∈ S r : |∇u| 2 + q 4 φ u u 2 − 3(p − 2) 2p λ |u| p = 0 , since it contains any solution u of (1.2). In the following we will simply write N r . Define also N + r = u ∈ N r : |∇u| 2 − 3(p − 2)(3p − 8) 4p λ |u| p > 0 , (4.3) N 0 r = u ∈ N r : |∇u| 2 − 3(p − 2)(3p − 8) 4p λ |u| p = 0 , (4.4) N − r = u ∈ N r : |∇u| 2 − 3(p − 2)(3p − 8) 4p λ |u| p < 0 (4.5) Just in Subsection 5.2 it will be more convenient to explicit the dependence on q and λ, instead of r, since they will have an important role. To obtain basic estimates for the elements of N r let us recall the Gagliardo-Nirenberg inequality: (4.6) |u| p ≤ K GN |∇u| 2 3(p−2) 4 u 2 6−p 4 , ∀u ∈ H 1 (R 3 ), where K GN > 0, hereafter, is the Gagliardo-Nirenberg constant which depends only on p. Then we have 3. If p ∈ (3, 10/3), then there exists constants c p , c ′ p > 0 such that |∇u| 2 ≥ c p r and |u| p ≥ c ′ p λr . 4. For p ∈ (10/3, 6), we have |∇u| 2 ≥ K 4 (10−3p) GN 3(p − 2)λ 2p 4 10−3p r 6−p 10−3p . Proof. Preliminarily observe that for any u ∈ N r we have (4.7) |∇u| 2 ≤ 3(p − 2) 2p λ |u| p . Combining (4.7) with the Gagliardo-Nirenberg inequality (4.6) we infer, for any u ∈ N r , |∇u| 2 ≤ K GN 3(p − 2)λ 2p |∇u| 2 3(p−2) 4 r 6−p 4 . 
From this we deduce 1., 2. and 4. 3. Now assume that p ∈ (3, 10/3). From [11,Lemma 2.3], there exist c, c p > 0 positive constants, such that |∇u| 2 + q 4 φ u u 2 − 3(p − 2) 2p λ |u| p ≥ c |∇u| 2 − c p |∇u| 2 3 2 r 1 2 , ∀u ∈ S r . Therefore c |∇u| 2 − c p |∇u| 2 3 2 r 1 2 ≤ 0, ∀u ∈ N r , and hence (4.8) |∇u| 2 ≥ c c p 2 1 r , ∀u ∈ N r . As for the second estimate, it follows by (4.7) and (4.8). For the sake of completeness we observe, by looking at the proof of [11, Lemma 2.2 and Lemma 2.3], that the constants appearing in 3. of Proposition 4.3 are given explicitly by (4.9) c p = p − 3 4 − p K GN 3(p − 2)(4 − p)2 7−p p 1/(p−3) , c ′ p = 2p 3(p − 2) c c p 2 , c = 64π − 1 64π and do not depend on q. As we will see, item 2. in Proposition 4.3 will be improved in Theorem 5.3. 4.1. The fibration for N r . We will use the fibration method of Pohozaev to study N r . Given u ∈ S 1 , define the fiber map ϕ r,q,λ,u : t ∈ (0, ∞) −→ E(r 1/2 u t ) ∈ R where u t (x) = t 3 2 u(tx) ∈ S 1 and then r 1/2 u t ∈ S r . Also in this case, until Subsection 5.2 we will write simply ϕ r,u . Then explicitly we have ϕ r,u (t) = t 2 2 r |∇u| 2 + t 4 r 2 q φ u u 2 − t 3 2 p−3 p r p/2 λ |u| p . A simple computation gives the next Lemma 4.4. The fiber map ϕ r,u is a smooth function and ϕ ′ r,u (t) = tr |∇u| 2 + r 2 4 q φ u u 2 − 3(p − 2) 2p t 3p 2 −4 r p/2 λ |u| p , ϕ ′′ r,u (t) = r |∇u| 2 − 3(p − 2)(3p − 8) 4p t 3p 2 −5 r p/2 λ |u| p . Then we can give a complete description of the fiber ϕ r,u . Proposition 4.5. For each u ∈ S 1 the following statements hold. I) If p ∈ (2, 8/3), then ϕ r,u has only one critical point at t + r (u) which is a global minimum with ϕ ′′ r,u (t + r (u)) > 0. II) If p = 8/3, we have: 1) if r 2 4 φ u u 2 − r p/2 p λ |u| p < 0, then ϕ r,u has only one critical point at t + r (u) which is a global minimum with ϕ ′′ r,u (t + r (u)) > 0; 2) if r 2 4 φ u u 2 − r p/2 p λ |u| p ≥ 0, then ϕ r,u is strictly increasing and has no critical points. 
III) If p ∈ (8/3, 10/3), then there are three possibilities: 1) ϕ r,u has exactly two critical points at t − r (u) < t + r (u). Moreover t + r (u) corresponds to a local minimum while t − r (u) corresponds to a local maximum with ϕ ′′ r,u (t + r (u)) > 0 and ϕ ′′ r,u (t − r (u)) < 0; 2) ϕ r,u is strictly increasing and has exactly one critical point at t 0 r (u). Moreover t 0 r (u) corresponds to an inflection point; 3) ϕ r,u is strictly increasing and has no critical points. IV) If p = 10/3, we have: 1) if r 2 |∇u| 2 − r p/2 p λ |u| p < 0, then ϕ r,u has only one critical point at t − r (u) which is a global maximum with ϕ ′′ r,u (t − r (u)) < 0; 2) if r 2 |∇u| 2 − r p/2 p λ |u| p ≥ 0, then ϕ r,u is strictly increasing and has no critical points. V) If p ∈ (10/3, 6), then ϕ r,u has only one critical point at t − r (u) which is a global maximum with ϕ ′′ r,u (t − r (u)) < 0. Proof. It is straightforward. A direct application of the Implicit Function Theorem shows that Lemma 4.6. Fix u ∈ S 1 and suppose that (a, b) ∋ r → t + r (u) (respectively t − r (u)) is well defined. Then (a, b) ∋ r → t + r (u) (respectively t − r (u)) is C 1 in (a, b) . From Lemma 4.4 it is easy to see that, for each r > 0, N r given in (4.2) can be written also as N r = u ∈ S 1 : ϕ ′ r,u (1) = 0 which, in some sense, justifies the name of Nehari set. Moreover it holds (see (4.3), (4.4) and (4.5)) that N + r = u ∈ N r : ϕ ′′ r,u (1) > 0 , N 0 r = u ∈ N r : ϕ ′′ r,u (1) = 0 , N − r = u ∈ N r : ϕ ′′ r,u (1) < 0 (4.10) and N r = N + r ∪ N 0 r ∪ N − r . Remark 2. Note that, given u ∈ S 1 , t * is a critical point of the fiber map ϕ r,u if and only if r 1/2 u t * ∈ N r . Actually t * is a minimum (respect. maximum or inflection) point of ϕ r,u if and only if r 1/2 u t * ∈ N + r (respect. N − r or N 0 r ). In the following we study deeply the sets N + r and N − r . 4.2. N + r and N − r as natural constraints. Let us start by defining, for r > 0, the functions Proof. 
(4.11) h(u) = (‖u‖₂² − r)/2, for u ∈ H¹(ℝ³), g(u) = ϕ′_{r,u}(1), for u ∈ S_1.

Proposition 4.7. Whenever nonempty, N⁺_r and N⁻_r are C¹ manifolds in H¹(ℝ³) of co-dimension 2.

Proof. Let us show the proof for N⁺_r, since for N⁻_r it is completely analogous. The proof will follow once we prove that h′(u) ≠ 0, g′(u) ≠ 0 and h′(u), g′(u) are linearly independent for each u ∈ N⁺_r. In fact, h′(u) ≠ 0 is straightforward. Suppose on the contrary that there exist u ∈ N⁺_r and c ∈ ℝ such that g′(u) = c h′(u). It follows that

−2∆u − cu + qφ_u u − (3(p−2)/2) λ|u|^{p−2}u = 0.

From Proposition 4.1 we conclude that

∫|∇u|² − (3c/2) r + (5/4) q∫φ_u u² − (9(p−2)/(2p)) λ∫|u|^p = 0,
2∫|∇u|² − cr + q∫φ_u u² − (3(p−2)/2) λ∫|u|^p = 0,
∫|∇u|² + (q/4)∫φ_u u² − (3(p−2)/(2p)) λ∫|u|^p = 0.

Let us set for brevity A = ∫|∇u|², B = q∫φ_u u², C = λ∫|u|^p and solve the system with respect to these variables. A simple calculation shows that it has a unique solution when p ≠ 3, in which case

A = rc(8−3p)/(8(p−3)), B = rc(3p−10)/(2(p−3)), C = rcp/(6(p−2)(p−3)).

We substitute A, C in ϕ″_{r,u}(1) to conclude that ϕ″_{r,u}(1) = 0, and hence a contradiction. If p = 3 we have two cases: when c ≠ 0, the system has no solution, which is a contradiction; however, when c = 0, the system has the following solution:

A = C/4, B = C, C > 0.

We substitute A, C in ϕ″_{r,u}(1) to conclude that ϕ″_{r,u}(1) = 0, again a contradiction. From all these contradictions we conclude that h′(u) and g′(u) are linearly independent for each u ∈ N⁺_r. Moreover a careful look at the previous calculations shows that g′(u) = 0 is impossible, since in that case we would have c = 0, which gives a contradiction in all cases. Therefore g′(u) ≠ 0 and N⁺_r is a C¹ manifold of co-dimension 2 in H¹(ℝ³).

Now we prove that N⁺_r and N⁻_r are natural constraints for the energy functional E.

Lemma 4.8. Assume that there exist u ∈ N⁺_r ∪ N⁻_r and µ, ν ∈ ℝ such that E′(u) = µh′(u) + νg′(u), where h and g are given by (4.11). Then ν = 0.

Proof.
Indeed, applying Proposition 4.1 to the equation E′(u) − µh′(u) − νg′(u) = 0 we conclude that

(3/2)(E′(u)u − µh′(u)u − νg′(u)u) − P(u) = 0.

Simple calculations show that

(3/2)(E′(u)u − µh′(u)u − νg′(u)u) − P(u) = g(u) − νϕ″_{r,u}(1),

which implies that νϕ″_{r,u}(1) = 0; since ϕ″_{r,u}(1) ≠ 0 for u ∈ N⁺_r ∪ N⁻_r, we get ν = 0.

The next step is then to see for which values of q, λ, p, r the sets N_r, N⁺_r, N⁻_r are non-empty. As a consequence of this study, we will be able to recover some results known in the literature by our unified approach.

In the case p ∈ (2, 8/3] ∪ [10/3, 6) we can give a simple description of N_r. We prefer to state separately the limit cases p = 8/3 and p = 10/3.

Theorem 5.1. Let r > 0. Then
i) if p ∈ (2, 8/3), then N_r = N⁺_r ≠ ∅;
ii) if p ∈ (10/3, 6), then N_r = N⁻_r ≠ ∅.

Proof. The proof of i) is a direct consequence of Proposition 4.5 item I), since for each u ∈ S_1 we have that r^{1/2} u_{t⁺_r(u)} ∈ N⁺_r. Similarly, the proof of ii) is a direct consequence of Proposition 4.5 item V), since for each u ∈ S_1 we have that r^{1/2} u_{t⁻_r(u)} ∈ N⁻_r.

Theorem 5.2. Let r > 0. If p = 8/3, then N_r = N⁺_r ≠ ∅.

Proof. In fact, from Proposition 4.5 item II) it is sufficient to prove that there exists u ∈ S_1 such that

(r²/4) q∫φ_u u² − (r^{p/2}/p) λ∫|u|^p < 0.

If {u_n} ⊂ S_1 is the sequence given by Lemma 3.2, then

lim_{n→∞} [(r²/4) q∫φ_{u_n} u_n² − (r^{p/2}/p) λ∫|u_n|^p] ≤ lim_{n→∞} [(C₃/n^{2/3})(r²/4) q − C₁ (r^{p/2}/p) λ] = −C₁ (r^{p/2}/p) λ.

Therefore for n sufficiently large we have that r^{1/2} (u_n)_{t⁺_r(u_n)} ∈ N⁺_r.

Theorem 5.3. Let r > 0 and p = 10/3. Then N_r ≠ ∅ if and only if 5/(3K_{GN} λ) < r^{2/3}, where K_{GN} is the Gagliardo-Nirenberg constant (see (4.6)). Moreover, in this case N_r = N⁻_r.

Proof. By Proposition 4.5 item IV) it is sufficient to estimate, for u ∈ S_1, the quantity (r/2)∫|∇u|² − (r^{p/2}/p) λ∫|u|^p. By the Gagliardo-Nirenberg inequality (4.6) we have that

∫|u|^p ≤ K_{GN} ∫|∇u|², ∀u ∈ S_1, where K_{GN} = sup_{u∈S_1} ∫|u|^p / ∫|∇u|².

It follows that

(r/2)∫|∇u|² − (r^{p/2}/p) λ∫|u|^p ≥ (r/2)∫|∇u|² − K_{GN} (r^{p/2}/p) λ∫|∇u|² = r (1/2 − (3K_{GN}/10) r^{2/3} λ) ∫|∇u|².
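To see where the threshold of Theorem 5.3 comes from, note that for p = 10/3 the exponent 3p/2 − 3 equals 2, so the kinetic and L^p powers of t in ϕ_{r,u} coincide and the sign question becomes t-independent. The estimate above can be summarized as follows (a recap of the computation, with K_GN the Gagliardo-Nirenberg constant of (4.6)):

```latex
\frac{r}{2}\int|\nabla u|^2-\frac{3r^{5/3}}{10}\,\lambda\int|u|^{10/3}
\;\ge\; r\Big(\frac{1}{2}-\frac{3K_{GN}}{10}\,\lambda\,r^{2/3}\Big)\int|\nabla u|^2,
\qquad u\in S_1 .
```

The bracket can be made negative by near-optimizers of K_GN exactly when r^{2/3} > 5/(3K_GN λ), which is the non-emptiness threshold for N_r = N⁻_r.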
By definition of K_{GN}, there exists u ∈ S_1 with (r/2)∫|∇u|² − (r^{p/2}/p) λ∫|u|^p < 0 if, and only if, 5/(3K_{GN} λ) < r^{2/3}, in which case r^{1/2} u_{t⁻_r(u)} ∈ N⁻_r.

5.1.2. The case p ∈ (8/3, 10/3) \ {3}. In this case the description of N_r is more involved. We use the ideas introduced by Il'yasov [10]: for r > 0 and u ∈ S_1, consider the system (recall the definitions in Subsection 4.1)

ϕ_{r,u}(s) = ϕ′_{r,u}(s) = 0.

Since p ∈ (8/3, 10/3) \ {3} we can solve it with respect to the variables s and r to obtain a unique solution, denoted hereafter by (s₀(u), r₀(u)), given by

s₀(u) = [ (p/(3p−8)) (1/r^{(p−2)/2}) ∫|∇u|² / (λ∫|u|^p) ]^{2/(3p−10)},

and

r₀(u) = [2(10−3p)/(3p−8)]^{(3p−10)/(4(p−3))} [p/(3p−8)]^{1/(2(p−3))} R_p(u),

where R_p is defined in Section 3. The following proposition is just a consequence of the definitions and makes clear that p = 3 is a threshold.

Proposition 5.4. Assume that p ∈ (8/3, 10/3) \ {3}. Then for each u ∈ S_1 there exists a unique pair (s₀(u), r₀(u)) such that ϕ_{r₀(u),u}(s₀(u)) = ϕ′_{r₀(u),u}(s₀(u)) = 0. Moreover:
(1) If p ∈ (8/3, 3) and r < r₀(u), then ϕ_{r,u}(s₀(u)) < 0 and ϕ′_{r,u}(s₀(u)) ≠ 0, while if r > r₀(u), then ϕ_{r,u}(s₀(u)) > 0 and ϕ′_{r,u}(s₀(u)) ≠ 0.
(2) If p ∈ (3, 10/3) and r < r₀(u), then ϕ_{r,u}(s₀(u)) > 0 and ϕ′_{r,u}(s₀(u)) ≠ 0, while if r > r₀(u), then ϕ_{r,u}(s₀(u)) < 0 and ϕ′_{r,u}(s₀(u)) ≠ 0.

Similarly, for r > 0 and u ∈ S_1 we consider the system

ϕ′_{r,u}(s) = ϕ″_{r,u}(s) = 0.

Again, since p ≠ 3 (and p ≠ 10/3), we can solve it with respect to the variables s and r to obtain a unique solution, hereafter denoted by (s(u), r(u)), given by

s(u) = [ (4p/(3(p−2)(3p−8))) (1/r^{(p−2)/2}) ∫|∇u|² / (λ∫|u|^p) ]^{2/(3p−10)},

and

r(u) = [4(10−3p)/(3p−8)]^{(3p−10)/(4(p−3))} [4p/(3(p−2)(3p−8))]^{1/(2(p−3))} R_p(u).

Similarly to Proposition 5.4 we have:

Proposition 5.5. Assume that p ∈ (8/3, 10/3) \ {3}.
Then for each u ∈ S_1 there exists a unique pair (s(u), r(u)) such that ϕ′_{r(u),u}(s(u)) = ϕ″_{r(u),u}(s(u)) = 0. Moreover:
(1) If p ∈ (8/3, 3) and r < r(u), then ϕ′_{r,u}(s(u)) < 0 and ϕ″_{r,u}(s(u)) ≠ 0, while if r > r(u), then ϕ′_{r,u}(s(u)) > 0 and ϕ″_{r,u}(s(u)) ≠ 0.
(2) If p ∈ (3, 10/3) and r < r(u), then ϕ′_{r,u}(s(u)) > 0 and ϕ″_{r,u}(s(u)) ≠ 0, while if r > r(u), then ϕ′_{r,u}(s(u)) < 0 and ϕ″_{r,u}(s(u)) ≠ 0.

Furthermore, the comparison between r₀(u) and r(u) is given in Proposition 5.6. Then the description of N_r, N⁺_r, N⁻_r is given.

Theorem 5.7. There hold:
i) Suppose that p ∈ (8/3, 3). Then for each r > 0 there exists u ∈ S_1 such that inf_{t>0} ϕ_{r,u}(t) < 0. Moreover N⁺_r and N⁻_r are non-empty.
ii) Suppose that p ∈ (3, 10/3). If r < r*, then N_r = ∅, while if r > r*, then N⁺_r and N⁻_r are non-empty. Moreover if r < r*₀, then inf_{t>0} ϕ_{r,u}(t) ≥ 0 for each u ∈ S_1, while if r > r*₀, then there exists u ∈ S_1 such that inf_{t>0} ϕ_{r,u}(t) < 0.

Proof. i) Fix r > 0 and assume on the contrary that for each u ∈ S_1 we have inf_{t>0} ϕ_{r,u}(t) ≥ 0. In particular, it follows that ϕ_{r,u}(t⁺_r(u)) ≥ 0 and therefore, from Proposition 5.4, we conclude that r₀(u) ≤ r for all u ∈ S_1. This contradicts Proposition 5.6 (iii), and hence there exists u ∈ S_1 such that inf_{t>0} ϕ_{r,u}(t) < 0. To conclude, note from Proposition 4.5 item III) that if u ∈ S_1 satisfies inf_{t>0} ϕ_{r,u}(t) < 0, then r^{1/2} u_{t⁻_r(u)} ∈ N⁻_r and r^{1/2} u_{t⁺_r(u)} ∈ N⁺_r.

ii) Fix r < r* and suppose on the contrary that N_r ≠ ∅. Take u ∈ N_r and observe from Proposition 4.5 item III) that there exists t > 0 such that ϕ′_{r,u}(t) ≤ 0 and ϕ″_{r,u}(t) = 0. From Proposition 5.5 we conclude that r ≥ r(u) ≥ r*, which is clearly a contradiction, and therefore N_r = ∅. Now fix r > r* and assume on the contrary that N⁺_r = ∅, which implies from Proposition 4.5 item III) that N⁻_r = ∅ (and vice-versa). From the same proposition, we conclude that for each u ∈ S_1, when ϕ″_{r,u}(t) = 0 then ϕ′_{r,u}(t) ≥ 0.
It follows from Proposition 5.5 that r < r(u) for all u ∈ S_1, again a contradiction, and hence N⁺_r and N⁻_r are non-empty. By using the function S_1 ∋ u ↦ r₀(u) instead of S_1 ∋ u ↦ r(u), the rest of the proof is similar.

Now we can give the proof of Theorem 2.3. Indeed, i) follows by Theorem 5.1 and Proposition 4.5 item I); ii) follows by Theorem 5.1 and Proposition 4.5 item V); iii) follows by Theorem 5.2; iv) follows by Theorem 5.3; v) and vi) follow by Theorem 5.7.

5.2. The case p = 3 and Proof of Theorem 2.4. In this case the system ϕ_{r,u}(t) = ϕ′_{r,u}(t) = 0 has no solution with respect to the variables t, r. Therefore, instead of the variable r, we will solve the system with respect to the variable λ and analyze the dependence of the solutions on q. It will be clear from the calculations that, at least topologically speaking, there are no changes in the fibering maps with respect to r; hence, to reflect the dependence on q, λ, we change the notation here: so, for example, we will write N_{q,λ}, ϕ_{q,λ,u}, . . . instead of the N_r, ϕ_{r,u}, . . . used up to now. Consider then the system of equations

ϕ_{q,λ,u}(t) = ϕ′_{q,λ,u}(t) = 0.

We solve this system with respect to the variables t, λ to find a unique solution given by

t_{0,q}(u) = [ 3 (1/r^{1/2}) ∫|∇u|² / (λ∫|u|³) ]^{−2}, and λ_{0,q}(u) = (9/2)^{1/2} q^{1/2} (∫|∇u|² ∫φ_u u²)^{1/2} / ∫|u|³.

Similarly, we consider the system ϕ′_{q,λ,u}(t) = ϕ″_{q,λ,u}(t) = 0 and solve it with respect to the variables t and λ to obtain a unique solution given by

t_q(u) = [ 4 (1/r^{1/2}) ∫|∇u|² / (λ∫|u|³) ]^{−2}, and λ_q(u) = 2 q^{1/2} (∫|∇u|² ∫φ_u u²)^{1/2} / ∫|u|³.

As an application of Lemma 3.3 we have:

Proposition 5.8. For each r, q > 0, the functions S_1 ∋ u ↦ λ_{0,q}(u), λ_q(u) are bounded away from zero. Moreover λ_q(u) < λ_{0,q}(u) for all u ∈ S_1.

For each r, q > 0 define λ*_{0,q} := inf_{u∈S_1} λ_{0,q}(u) and λ*_q := inf_{u∈S_1} λ_q(u). Then the proof of Theorem 2.4 can be finished.
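The formulas for (t_{0,q}(u), λ_{0,q}(u)) can be checked directly. Writing A = ∫|∇u|², B = ∫φ_u u², C = ∫|u|³, for p = 3 the fiber map reads ϕ(t) = (t²/2)rA + (t/4)r²qB − (t^{3/2}/3)r^{3/2}λC, and the system ϕ(t) = ϕ′(t) = 0 reduces to:

```latex
\varphi'(t)-\frac{\varphi(t)}{t}
=\frac{t}{2}\,rA-\frac{1}{6}\,t^{1/2}r^{3/2}\lambda C=0
\;\Longrightarrow\; t^{1/2}=\frac{r^{1/2}\lambda C}{3A},\\
\varphi'(t)=0
\;\Longrightarrow\; \frac{r^{2}\lambda^{2}C^{2}}{9A}+\frac{r^{2}}{4}\,qB
=\frac{r^{2}\lambda^{2}C^{2}}{6A}
\;\Longrightarrow\; \lambda^{2}=\frac{9\,qAB}{2\,C^{2}}.
```

This recovers λ_{0,q}(u) = (9/2)^{1/2} q^{1/2}(AB)^{1/2}/C, independent of r as claimed. The same computation with ϕ = 0 replaced by ϕ″ = 0 gives t^{1/2} = r^{1/2}λC/(4A) and λ_q(u) = 2q^{1/2}(AB)^{1/2}/C, and since 2 < (9/2)^{1/2} one gets λ_q(u) < λ_{0,q}(u), consistent with Proposition 5.8.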
Indeed, it is similar to the proof of Theorem 5.7 (we use Proposition 5.8 instead of Proposition 5.6).

6. On the sub-additive property for p ∈ (2, 10/3)

For each p ∈ (2, 10/3) define

I_r := I_{r,q,λ} = inf{E(u) : u ∈ N⁺_r ∪ N⁰_r}.

Since E is bounded from below on S_r (see e.g. [2, Lemma 3.1]) and N_r ⊂ S_r, we conclude from Theorem 2.3 that I_r is well defined, that is, I_r > −∞. In this section we show how our method can be used to prove the sub-additive condition for I_r, namely

(6.1) I_r < I_s + I_{r−s}, 0 < s < r.

Again it is convenient to study separately the case p = 3.

6.1. The case p ∈ (2, 10/3) \ {3}. We recall that, given u ∈ S_1, by definition (r̃(u), t̃(u)) is the unique solution of

r t² ∫|∇u|² + (r² t/4) q ∫φ_u u² − (3(p−2)/(2p)) r^{p/2} t^{3(p−2)/2} λ ∫|u|^p = 0,
(q/2) r² t ∫φ_u u² − ((p−2)/p) r^{p/2} t^{3(p−2)/2} λ ∫|u|^p = 0.

Proposition 6.1. For each p ∈ (2, 10/3) \ {3}, the functional S_1 ∋ u ↦ r̃(u) is bounded away from 0. Moreover,
i) if p ∈ (3, 10/3), then r̃(u) < r(u) < r₀(u), for all u ∈ S_1;
ii) if p ∈ (8/3, 3), then r̃(u) < r₀(u) < r(u), for all u ∈ S_1.

Proof. That S_1 ∋ u ↦ r̃(u) is bounded away from 0, for all p ∈ (2, 10/3) \ {3}, follows from Proposition 3.4 and Theorem 3.7. The proofs of i) and ii) are straightforward.

As was already observed (see e.g. [3]), in order to prove the strict sub-additive condition (6.1), it is sufficient to show that I_r/r is decreasing in r. Our strategy to prove that I_r/r is decreasing in r will be the following: we will construct paths that cross the Nehari manifolds when r varies and show that the energy restricted to these paths, divided by r, is decreasing. Then we will show that the function I_r/r inherits this property for some specific values of r.
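The reduction of the strict sub-additivity (6.1) to the monotonicity of r ↦ I_r/r is the standard computation: if I_r/r is strictly decreasing, then for 0 < s < r,

```latex
I_s+I_{r-s}
= s\,\frac{I_s}{s}+(r-s)\,\frac{I_{r-s}}{r-s}
\;>\; s\,\frac{I_r}{r}+(r-s)\,\frac{I_r}{r}
= I_r ,
```

which is exactly (6.1).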
Fix u ∈ S_1 and:
i) if p ∈ (2, 8/3), define f(r) := ϕ_{r,u}(t⁺_r(u)) for all r ∈ (0, ∞);
ii) if p = 8/3 and (r²/4)∫φ_u u² − (3/8) r^{4/3} λ∫|u|^{8/3} < 0, define f(r) := ϕ_{r,u}(t⁺_r(u)) for all r ∈ (0, r(u)), where, in this case, r(u) is by definition the unique r > 0 for which (r²/4)∫φ_u u² − (3/8) r^{4/3} λ∫|u|^{8/3} = 0;
iii) if p ∈ (8/3, 3), define f(r) := ϕ_{r,u}(t⁺_r(u)) for all r ∈ (0, r(u));
iv) if p ∈ (3, 10/3), define f(r) := ϕ_{r,u}(t⁺_r(u)) for all r ∈ (r(u), ∞).

Define also g(r) := f(r)/r. Clearly f, and consequently g, depend on u ∈ S_1.

Proposition 6.2. Let u ∈ S_1.
i) If p ∈ (2, 8/3), then the function (0, ∞) ∋ r ↦ g(r) is decreasing for all r ∈ (0, r̃(u)) and increasing for all r ∈ (r̃(u), r(u)).
ii) If p = 8/3, then the function (0, ∞) ∋ r ↦ g(r) is decreasing for all r ∈ (0, r̃(u)) and increasing for r ∈ (r̃(u), r(u)).
iii) If p ∈ (8/3, 3), then the function (0, r(u)) ∋ r ↦ g(r) is decreasing for all r ∈ (0, r(u)).
iv) If p ∈ (3, 10/3), then the function (r(u), ∞) ∋ r ↦ g(r) is decreasing for all r ∈ (r̃(u), ∞).

Proof. Indeed, from the definition of f(r), it follows from Lemma 4.6 that g is C¹ and

g′(r) = r ϕ′_{r,u}(t⁺_r(u)) + (q/2) t⁺_r(u) ∫φ_u u² − ((p−2)/p) r^{p/2−2} (t⁺_r(u))^{3p/2−3} λ ∫|u|^p.

For simplicity denote t_r = t⁺_r(u). It follows that g′(r) = 0 if, and only if,

(6.2) t_r r ∫|∇u|² + (r²/4) q ∫φ_u u² − (3(p−2)/(2p)) r^{p/2} t_r^{3p/2−4} λ ∫|u|^p = 0,
(q/2) t_r ∫φ_u u² − ((p−2)/p) r^{p/2−2} t_r^{3p/2−3} λ ∫|u|^p = 0,

which is equivalent to system (3.6). Fixed r > 0, define

h(t) := (q/2) ∫φ_u u² − ((p−2)/p) r^{p/2−2} t^{3p/2−4} λ ∫|u|^p.

We consider two cases.

Case 1: p = 8/3. Observe that the first equation of (3.6) has a unique solution t̄. By plugging this solution into the left-hand side of the second equation, which is exactly t̄ h(t̄), the proof of ii) is complete.

Case 2: p ∈ (2, 10/3) \ {8/3, 3}. Note that the second equation of (3.6) has a unique solution t̄.
By plugging this solution into the left-hand side of the first equation, which is exactly ϕ′_{r,u}(t̄), we conclude, by using Proposition 4.5, the following:
1) if p ∈ (2, 8/3) and r ∈ (0, r̃(u)), then ϕ′_{r,u}(t̄) > 0, while for r ∈ (r̃(u), r(u)) we have that ϕ′_{r,u}(t̄) < 0;
2) if p ∈ (8/3, 3) and r ∈ (0, r(u)), then ϕ′_{r,u}(t̄) < 0;
3) if p ∈ (3, 10/3) and r ∈ (r̃(u), ∞), then ϕ′_{r,u}(t̄) < 0.

Now we can prove i), iii) and iv).

i) If r ∈ (0, r̃(u)), then, from item 1), we conclude that t̄ > t_r and hence h(t_r) < h(t̄) = 0, while if r ∈ (r̃(u), r(u)), then t̄ < t_r and hence h(t_r) > h(t̄) = 0.

iii) If r ∈ (0, r(u)), then, from item 2), we conclude that t̄ < t_r and hence h(t_r) < h(t̄) = 0, that is, g′(r) < 0.

iv) If r ∈ (r̃(u), ∞), then, from item 3), we conclude that t̄ < t_r and hence h(t_r) < h(t̄) = 0, that is, g′(r) < 0.

Let us now define

M_r = { u/‖u‖₂ : u ∈ N⁺_r and E(u) < 0 }.

Lemma 6.3. There holds:
i) if p ∈ (2, 3) and 0 < r₁ < r₂ < r*, then M_{r₁} = M_{r₂} = S_1;
ii) if p ∈ (3, 10/3) and r* < r₁ < r₂, then M_{r₁} ⊂ M_{r₂}.

Proof. i) Fix 0 < r < r*. Then ϕ_{r,u} satisfies item III-1) of Proposition 4.5 for all u ∈ S_1, and hence M_{r₁} = M_{r₂} = S_1.
ii) Indeed, if u ∈ M_{r₁}, then the fiber map ϕ_{r₁,u} satisfies item III-1) of Proposition 4.5. From Proposition 5.5 it follows that r(u) < r₁ < r₂, and hence ϕ_{r₂,u} also satisfies item III-1) of Proposition 4.5, which implies that M_{r₁} ⊂ M_{r₂}.

Lemma 6.4. Suppose that p ∈ (3, 10/3) and let r ∈ [a, b], where r*₀ < a < b. Then there exists a negative constant c such that g′(r) < c/r for all u ∈ M_r and r ∈ [a, b].

Proof. In order to prove the lemma, it is sufficient to prove that the left-hand side of the second equation of system (6.2) is bounded from above by c for all u ∈ M_r and r ∈ [a, b]. First observe from Proposition 6.1 that r̃(u) < r₀(u) < r for all u ∈ M_r and all r ∈ [a, b], and hence from Proposition 6.2 we conclude that g′(r) < 0 for all u ∈ M_r and r ∈ [a, b]. Now note that g(r) = ϕ_{r,u}(t⁺_r(u))/r = ϕ_{r,su}(t⁺_r(su))/r for all s > 0 and therefore, by choosing s = 1/‖∇u‖₂, we can assume that ‖∇u‖₂ = 1 for all u ∈ M_r.
Suppose on the contrary that there exists a sequence {u_n} ⊂ M_r with ‖∇u_n‖₂ = 1 and corresponding sequences t_n > 0, r_n ∈ [a, b] such that

t_n = [ (p/(2(p−2))) r_n^{(4−p)/2} (q/λ) ∫φ_{u_n} u_n² / ∫|u_n|^p ]^{2/(3p−8)} + o_n(1).

By plugging t_n into the first equation of (6.3) we conclude that r_n = r̃(u_n) + o_n(1), and hence r_n = c r₀(u_n) + o_n(1) < c r_n + o_n(1), where c ∈ (0, 1), which is a contradiction. Then there exists a negative constant c_r such that g′(r) < c_r for all u ∈ M_r.

Since we do not have a priori estimates like those in Proposition 4.3 for the case p ∈ (2, 3), a similar version of Lemma 6.4 for that case is not so immediate; however, if we control the term ∫|u|^p, then we can prove the following:

Lemma 6.5. Suppose that p ∈ (2, 3) and let r ∈ [a, b], where 0 < a < b < inf_{u∈S_1} r̃(u). Fix d > 0; then there exists a negative constant c such that g′(r) < c for all u ∈ M_r satisfying ∫|u_{t⁺_r(u)}|^p ≥ d and all r ∈ [a, b].

where θ ∈ (r₁, r₂). As a consequence

I_{r₂}/r₂ ≤ I_{r₁}/r₁ + c(r₂ − r₁),

and the proof is complete. As an immediate consequence of Theorem 6.6 we have the subadditivity inequality for I_r.

Theorem 6.7. There holds:
i) if p ∈ (2, 3), then for each r₁, r₂ ∈ (0, inf_{u∈S_1} r̃(u)) with r₁ < r₂, we have that I_{r₂} < I_{r₁} + I_{r₂−r₁};
ii) if p ∈ (3, 10/3), then for each r₁, r₂ ∈ (r*₀, ∞) with r₁ < r₂, we have that I_{r₂} < I_{r₁} + I_{r₂−r₁}.

Therefore

I_r/r³ = inf_{u∈M_1} ϕ_{r,u}(t⁺_r(u))/r³ = inf_{u∈M_1} ϕ_{1,u}(t⁺_1(u)) = I_1,

and the proof is completed. Then we have the subadditivity condition also for p = 3.

Theorem 6.11. Suppose that λ > λ*_{0,q}; then for each 0 < r₁ < r₂ we have that I_{r₂} < I_{r₁} + I_{r₂−r₁}.

Proof. From Theorem 2.4 we know that λ*_q < λ*_{0,q} and I_1 < 0; therefore the conclusion is a consequence of Proposition 6.10.
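For p = 3 the homogeneity I_r = I_1 r³ of Proposition 6.10 makes Theorem 6.11 immediate: since I_1 < 0 and r₁³ + (r₂ − r₁)³ < r₂³ for 0 < r₁ < r₂,

```latex
I_{r_1}+I_{r_2-r_1}
= I_1\big(r_1^{3}+(r_2-r_1)^{3}\big)
\;>\; I_1\,r_2^{3}
= I_{r_2},
```

where the inequality reverses because both sides are multiplied by the negative number I_1.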
7. Constrained minimization for p ∈ (2, 10/3) \ {3}

Now we turn our attention to the existence of minimizers: it is convenient to consider two cases according to the values of p:
• p ∈ (2, 3),
• p ∈ (3, 10/3),
although the first case is almost done.

7.1. The case p ∈ (2, 3) and Proof of Theorem 2.5. The proof follows immediately from Theorem 6.7.

7.2. The case p ∈ (3, 10/3) and Proof of Theorem 2.6. By the definitions (see (5.1)):

∀r > r*₀ : I_r = inf_{S_r} E < 0 and I_{r*₀} = inf_{S_{r*₀}} E = 0.

In both cases the existence of minimizers is already known (see [3, 7, 11] and also our Theorem 6.7). However, as we will see, 0 = inf_{S_r} E < I_r if r ∈ (r*, r*₀). Let us start with the following

Theorem 7.1. If (r*, +∞) ∋ r ↦ I_r is decreasing, then for each r ∈ (r*, r*₀) there exists u ∈ N⁺_r ∪ N⁰_r such that I_r = E(u).

Proof. In fact, let {u_n} ⊂ N⁺_r ∪ N⁰_r be a minimizing sequence for I_r. It follows from Proposition 4.3 that there exist positive constants c, C such that

c ≤ ‖u_n‖ ≤ C, ∀n ∈ ℕ,

and we conclude that u_n ↛ 0 in L^p(ℝ³). So {u_n} does not vanish and then, up to translations, there exists a subsequence, still denoted by {u_n}, that converges weakly in H¹(ℝ³), strongly in L²_loc(ℝ³) and almost everywhere in ℝ³ to some non-zero function u ∈ H¹(ℝ³). From [25, Lemma 2.2] we conclude that

(7.1) I_r = lim_{n→∞} E(u_n) = E(u) + lim_{n→∞} E(u_n − u).

Let, as usual,

Q(u) = ∫|∇u|² + (q/4)∫φ_u u² − (3(p−2)/(2p)) λ∫|u|^p,

and note that the analogous splittings (7.2) and (7.3) hold. We claim that Q(u) ≤ 0. On the contrary, we would have from (7.2) that Q(u_n − u) < 0 for sufficiently large n. From Proposition 4.5, there exists t_n > 0 such that (u_n − u)_{t_n} ∈ N⁺_{‖u_n−u‖₂²} for large n. Since E(u_n − u) < I_r from (7.1) and ‖u_n − u‖₂² < r from (7.3) for sufficiently large n, we conclude that

I_{‖u_n−u‖₂²} < E((u_n − u)_{t_n}) < E(u_n − u) < I_r,

which contradicts the hypothesis that (r*, ∞) ∋ r ↦ I_r is decreasing, and therefore Q(u) ≤ 0.
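For the record, the splittings invoked in this argument are the standard Brezis-Lieb-type decompositions along the weakly convergent minimizing sequence (cf. [25, Lemma 2.2]); written out, (7.1)-(7.3) read:

```latex
(7.1)\quad I_r=\lim_{n\to\infty}E(u_n)=E(u)+\lim_{n\to\infty}E(u_n-u),\\
(7.2)\quad \lim_{n\to\infty}Q(u_n)=Q(u)+\lim_{n\to\infty}Q(u_n-u),\\
(7.3)\quad \|u\|_2^2=r-\lim_{n\to\infty}\|u_n-u\|_2^2 .
```

Each functional splits because its quadratic, nonlocal and L^p parts all decompose along u_n ⇀ u.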
From Proposition 4.5 there exists t > 0 such that u_t ∈ N⁺_{‖u‖₂²} ∪ N⁰_{‖u‖₂²}. Thus, since E(u) ≤ I_r from (7.1) and ‖u‖₂² ≤ r from (7.3), we conclude that

I_{‖u‖₂²} ≤ E(u_t) ≤ E(u) ≤ I_r.

Therefore, from the hypothesis that (r*, ∞) ∋ r ↦ I_r is decreasing, we conclude that r = ‖u‖₂², u ∈ N⁺_r ∪ N⁰_r and E(u) = I_r.

In order to make use of Theorem 7.1, we need to show that (r*, +∞) ∋ r ↦ I_r is decreasing. Unfortunately we are able to do so only for some values of p ∈ (3, 10/3), although we conjecture it is true for all p in the range. We note here that, in fact, when I_r < 0 this is a standard result in the literature. However, when I_r > 0, which is the case for r ∈ [r*, r*₀] (see Theorem 2.3), the inequalities go in the opposite direction and thus the proof does not seem to be direct. Our strategy to prove that I_r is decreasing in r will be the following: we will construct paths that cross the Nehari manifolds when r varies and show that the energy restricted to these paths is decreasing. To this end we need to calculate some derivatives.

Proof. For simplicity denote A = ∫|∇u|², B = ∫φ_u u² and C = λ∫|u|^p. Since u ∈ N⁺_r we have that

Proof. As we observed before, E_{r*₀} = I_{r*₀} and there exists w ∈ N⁺_{r*₀} with E(w) = I_{r*₀}. Let u ∈ S_1 be such that w = (r*₀)^{1/2} u_{t⁺_{r*₀}(u)}. Since E(w) = 0 we conclude from the definition of r*₀ and Theorem 5.7 that r₀(u) = r*₀. Moreover r* = r(u). It follows that t⁺_r(u) is well defined for each r ∈ (r*, r*₀). Since I_r ≥ 0 for all r ∈ (r*, r*₀) we obtain from Corollary 3 that

0 = lim_{r↑r*₀} E(r^{1/2} u_{t⁺_r(u)}) ≥ lim_{r↑r*₀} I_r ≥ 0,

and the proof is concluded.

Theorem 7.8. For each p ∈ (p₀, 10/3) there exists ε > 0 such that for each r ∈ (r*₀ − ε, r*₀), I_r is achieved. More specifically, there exists u ∈ N⁺_r satisfying I_r = E(u).

i) This item is direct since, for p = 10/3, the system is linear in t.
ii) Since t_r is continuous (see Lemma 4.6), it is sufficient to show that there exist some 0 < r₁ < r̃(u) < r₂ such that h(r₁) < 0 and h(r₂) > 0. We start with the existence of r₁. We claim that

(8.2) lim_{r→0} h(r) < 0.

Indeed, if t_r is bounded from above as r → 0, then (8.2) is obvious; therefore let us assume that t_r → ∞ as r → 0. Note from the first equation of (8.1) that

∫|∇u|² − (3(p−2)/(2p)) r^{p/2−1} t_r^{(3p−10)/2} λ ∫|u|^p = −(r/(4t_r)) q ∫φ_u u².

Therefore the claim is proved. Now we prove the existence of r₂. We claim that

(8.3) lim_{r→∞} h(r) > 0.

Indeed, if t_r is bounded away from 0 as r → ∞, then (8.3) is obvious; therefore let us assume that t_r → 0 as r → ∞. Note from the first equation of (8.1) that

∫|∇u|² − (3(p−2)/(2p)) r^{p/2−1} t_r^{(3p−10)/2} λ ∫|u|^p = −(r/(4t_r)) q ∫φ_u u².

Since r/(4t_r) → +∞ as r → +∞, we conclude that r^{p/2−1} t_r^{(3p−10)/2} λ ∫|u|^p → +∞ as r → +∞, and the proof is complete.

Let p ∈ (10/3, 6) and for r, c, d > 0 define

M_r = { u/‖u‖₂ : u ∈ N⁻_r, ∫|u|^p ≥ c and ∫|∇u|² ≤ d }.

Proof. In order to prove the lemma, it is sufficient to prove that the left-hand side of the second equation of system (8.1) is bounded from above by c for all u ∈ M_r and r ∈ [a, b]. From Theorem 8.1 we have that f′(r) < 0 for all u ∈ S_1. Now note that f(r) = ϕ_{r,u}(t⁻_r(u)) = ϕ_{r,su}(t⁻_r(su)) for all s > 0 and therefore, by choosing s = 1/‖∇u‖₂, we can assume that ‖∇u‖₂ = 1 for all u ∈ M_r. Suppose on the contrary that there exists a sequence {u_n} ⊂ M_r satisfying ‖∇u_n‖₂ = 1 and corresponding sequences {t_n} ⊂ (0, +∞),

exists u ∈ S_r satisfying E(u) = min_{S_r} E.

Proposition 4.3. Let r, λ > 0 and u ∈ N_r.
1. For p ∈ (2, 10/3), we have
The particular value p = 3 is treated separately.

5.1. The case p ≠ 3 and Proof of Theorem 2.3. It is convenient to consider the cases p ∈ (2, 8/3] ∪ [10/3, 6) and p ∈ (8/3, 10/3) \ {3}.

5.1.1. The case p ∈ (2, 8/3] ∪ [10/3, 6).

Theorem 5.3. Let r > 0 and p = 10/3. Then N_r ≠ ∅ if and only if 5/(3K_{GN} λ) < r^{2/3}, where as usual K_{GN} is the Gagliardo-Nirenberg constant as in (4.6).

Proposition 5.6. For each u ∈ S_1 we have that:
i) if p ∈ (8/3, 3), then r₀(u) < r(u);
ii) if p ∈ (3, 10/3), then r₀(u) > r(u).
Moreover:
iii) if p ∈ (8/3, 3), then the functions S_1 ∋ u ↦ r₀(u), r(u) are unbounded from above;
iv) if p ∈ (3, 10/3), then the functions S_1 ∋ u ↦ r₀(u), r(u) are bounded away from zero.

Proof. The proofs of i) and ii) are straightforward, and the proofs of iii) and iv) are a consequence of Proposition 3.4.

To treat the case p ∈ (3, 10/3) we need also the numbers r* := inf_{u∈S_1} r(u) and r*₀ := inf_{u∈S_1} r₀(u); see (3.7) and (3.8) for the explicit values of the solutions.

Proposition 6.1. For each p ∈ (2, 10/3) \ {3}, the functional S_1 ∋ u ↦ r̃(u) is bounded away from 0. Moreover, i) if p ∈ (3, 10/3), then r̃(u) < r(u) < r₀(u), for all u ∈ S_1;

Lemma 6.4. Suppose that p ∈ (3, 10/3) and let r ∈ [a, b], where r*₀ < a < b. Then there exists a negative constant c such that g′(r) < c/r for all u ∈ M_r and r ∈ [a, b].

λ∫|u_n|^p = o_n(1). From Proposition 4.3 and the Gagliardo-Nirenberg inequality it follows that there exist positive constants c, d such that c ≤ t_n ≤ d and c ≤ ∫|u_n|^p ≤ d for all n. Therefore from the second equation of (6.3) we obtain that

Q(u_n) = Q(u) + lim_{n→∞} Q(u_n − u), and (7.3) ‖u‖₂² = r − lim_{n→∞} ‖u_n − u‖₂².

E(u) = I_r.

Lemma 7.2. If ϕ′_{r,u}(t) = 0 and ϕ″_{r,u}(t) > 0, then 1 λ ∫|u|^p < 0. the conclusion.

Corollary 3. Let I ⊂ ℝ be an open interval and fix u ∈ S_1. If t⁺_r(u) is defined for all r ∈ I, then the function I ∋ r ↦ t⁺_r(u) is C¹.

2 + o_r(1), as r → 0.

Lemma 8.2.
Suppose that p ∈ (10/3, 6) and r ∈ [a, b], where 0 < a < b < inf_{u∈S_1} r̃(u). Then there exists a negative constant c such that f′(r) < c for all u ∈ M_r and all r ∈ [a, b].

Proof. In order to prove the lemma, it is sufficient to prove that the left-hand side of the second equation of system (6.2) is bounded from above by c for all u ∈ M_r satisfying ∫|u_{t⁺_r(u)}|^p ≥ d and all r ∈ [a, b]. From Proposition 6.2 we have that g′(r) < 0 for all u ∈ M_r. Now note that g(r) = ϕ_{r,u}(t⁺_r(u))/r = ϕ_{r,su}(t⁺_r(su))/r for all s > 0 and therefore, by choosing s = 1/‖∇u‖₂, we can assume that ‖∇u‖₂ = 1 for all u ∈ M_r satisfying ∫|u_{t⁺_r(u)}|^p ≥ d. Suppose on the contrary that there exists a sequence {u_n} ⊂ M_r satisfying ‖∇u_n‖₂ = 1 and ∫|u_n^{t⁺_r(u_n)}|^p ≥ d, and corresponding sequences t_n > 0, r_n ∈ [a, b] such that

r_n t_n ∫|∇u_n|² + (r_n²/4) q ∫φ_{u_n} u_n² − (3(p−2)/(2p)) r_n^{p/2} t_n^{3p/2−4} λ ∫|u_n|^p = 0.

Arguing as in the proof of Lemma 6.4 we conclude that

for some ε, which is a contradiction. The proof is complete.

At this point we have the desired result on I_r/r.

Theorem 6.6. There holds:
i) if p ∈ (2, 3), then the function (0, inf_{u∈S_1} r̃(u)) ∋ r ↦ I_r/r is decreasing;
ii) if p ∈ (3, 10/3), then the function (r*₀, ∞) ∋ r ↦ I_r/r is decreasing.

Proof. i) Fix 0 < r₁ < r₂ < inf_{u∈S_1} r̃(u) < r* and let {u_n} ⊂ N⁺_{r₁} be a minimizing sequence for I_{r₁}. Since every such sequence is non-vanishing, we can assume that ∫|u_n|^p ≥ d for some positive constant d and all r ∈ [r₁, r₂]. From Lemma 6.3, Lemma 6.5 and the mean value theorem, we conclude that, for all n ∈ ℕ,

I_{r₂}/r₂ ≤ ϕ_{r₂,u_n}(t⁺_{r₂}(u_n))/r₂ = ϕ_{r₁,u_n}(t⁺_{r₁}(u_n))/r₁ + g′(θ)(r₂ − r₁),

where θ ∈ (r₁, r₂). As a consequence

I_{r₂}/r₂ ≤ I_{r₁}/r₁ + c(r₂ − r₁),

and the proof of i) is complete.

ii) Fix r*₀ < r₁ < r₂ and note from Lemma 6.3, Lemma 6.4 and the mean value theorem that

Remark 3. When p ∈ (2, 8/3] we see from Proposition 6.2 that after r̃(u) the function g is increasing.
This suggests that the same property may hold for I_r/r when r is big, and hence that I_r may not satisfy the strict sub-additive property.

6.2. The case p = 3. We assume that λ > λ*_q, which implies from Theorem 2.4 that N⁺_r ≠ ∅ for all r > 0. We define in this case

Since, as observed in Subsection 5.2, the system ϕ′_{r,u}(t) = ϕ″_{r,u}(t) = 0 does not depend on r > 0, it follows that

Lemma 6.8. There holds:

From Lemma 6.8 we conclude that if u ∈ M_1 ⊂ S_1, then t⁺_r(u) is defined for all r > 0, and thus we can define f(r) = ϕ_{r,u}(t⁺_r(u)).

Lemma 6.9. For each r > 0 and u ∈ M_1, we have that f(r) = f(1) r³.

Proof. Note that

Since r^{1/2} u_{t⁺_r(u)} ∈ N⁺_r, we also have that

which completes the proof.

Proposition 6.10. For each r > 0, we have that I_r = I_1 r³.

Proof. For each u ∈ M_1, we have from Lemma 6.9 that

Proof. Indeed, define F(r, t) = ϕ′_{r,u}(t) for r ∈ I and t > 0. From Lemma 7.2 it follows that F(r, t⁺_r(u)) = 0 and (∂F/∂r)(r, t⁺_r(u)) < 0. The proof is then a consequence of the Implicit Function Theorem.

Consider the equation −27x² + 146x − 192 = 0. It has two real roots, and the biggest one is given by

Proof. For simplicity denote

Since u ∈ N⁺_r (see Lemma 4.4 and (4.10)) we have that

From the equality in (7.5) we conclude that

and then, from the inequality in (7.5), we obtain

Since by assumption u ∈ N⁺_r (see Remark 2), from Proposition 4.3 there exists a constant c′_p > 0 such that

therefore, from the definition of p₀, coming back to (7.6), it follows that

, from which the conclusion easily follows. Observe that by (4.9), c″_p has an explicit expression.

Proposition 7.4. Suppose that p ∈ (p₀, 10/3); then the function (r*, ∞) ∋ r ↦ I_r is decreasing.

Proof. Define f(r) = E(r^{1/2} u_{t⁺_r(u)}) and set for brevity t(r) = t⁺_r(u). Observe from Lemma 4.6 that f is differentiable, and hence

which implies that I_{r₂} < I_{r₁}, and the proof is finished. As a consequence of Theorem 7.1 and Proposition 7.4 we have:

Theorem 7.5.
Fix p ∈ (p₀, 10/3); then for each r ∈ (r*, r*₀) there exists u ∈ N⁺_r ∪ N⁰_r such that I_r = E(u). Now we will show that for r near r*₀ the minimizer found in Theorem 7.5 belongs to N⁺_r. To this end we need to compare the energy of E restricted to N⁰_r with I_r.

Lemma 7.6. For each r > r*, there exists a positive constant c such that

Proof. Indeed, by using the pair of equations that characterize u ∈ N⁰_r, that is,

Proof. From Theorem 7.5 it remains to prove that u ∈ N⁺_r. Let c/r be the constant given by Lemma 7.6. Given 0 < d < c/r, from Lemma 7.7 there exists ε > 0 such that I_r < d for all r ∈ (r*₀ − ε, r*₀). In particular, since I_r < c/r, it follows that u ∉ N⁰_r for all r ∈ (r*₀ − ε, r*₀), and consequently u ∈ N⁺_r.

We can now finish the proof of Theorem 2.6. In fact, i) follows by Proposition 7.4 and Lemma 7.7; ii) follows by Theorem 7.5; and iii) follows by Theorem 7.8.

8. The case p ∈ [10/3, 6)

This case was treated in [1], where existence of global minimizers over the Nehari manifold N⁻_r was proved for small r. Their proof relies on the fact that for small r the function

J_r = inf{E(u) : u ∈ N⁻_r}

is decreasing. Fix u ∈ S_1 and define:
i) if p = 10/3 and (r/2)∫|∇u|² − (r^{p/2}/p) λ∫|u|^p < 0, define f(r) := ϕ_{r,u}(t⁻_r(u));
ii) if p ∈ (10/3, 6), define f(r) := ϕ_{r,u}(t⁻_r(u)) for all r ∈ (0, ∞).

Now observe from Lemma 4.6 that f is C¹ and

For simplicity denote t_r = t⁻_r(u). It follows that f′(r) = 0 if, and only if,

From Proposition 3.1, system (8.1) has a unique solution (r̃(u), t̃(u)), where

Note that r̃(u) = r(u) when p = 10/3.

ii) If p ∈ (10/3, 6), then the function (0, ∞) ∋ r ↦ f(r) is decreasing for all r ∈ (0, r̃(u)) and increasing for r ∈ (r̃(u), ∞).

Proof. Let t_r = t⁻_r(u). It is enough to show that the left-hand side of the second equation of system (8.1) is negative.
To this end, by multiplying the first equation by −4 and substituting, we obtain that

From Proposition 4.3 and the condition ∫|∇u_n^{t_n}|² ≤ d we conclude that {t_n} is bounded away from zero and infinity; therefore ∫|u_n|^p is bounded away from zero and, arguing as in the proof of Lemma 6.4, we conclude that r_n = r̃(u_n) + o_n(1) ≥ inf_{u∈S_1} r̃(u) + o_n(1) > b + ε + o_n(1) for some ε, which is a contradiction. The proof is complete.

Lemma 8.3. Suppose that p ∈ (10/3, 6); then J_r > 0 and every minimizing sequence is bounded and non-vanishing.

Proof. Note that

Therefore from Proposition 4.3 we deduce that J_r > 0. If {u_n} is a minimizing sequence, then from (8.4) we conclude that {‖∇u_n‖₂} is bounded and hence {u_n} is bounded in H¹(ℝ³). Moreover this sequence cannot be vanishing since, on the contrary, we would obtain from the equation

that ∫|∇u_n|² → 0, which contradicts J_r > 0.

Theorem 8.4. The function (0, ∞) ∋ r ↦ J_r is decreasing over the interval (0, inf_{u∈S_1} r̃(u)).

Proof. Fix 0 < r₁ < r₂ < inf_{u∈S_1} r̃(u) < r* and let {u_n} ⊂ N⁻_{r₁} be a minimizing sequence for J_{r₁}. From the mean value theorem we have that

J_{r₂} ≤ ϕ_{r₂,u_n}(t⁻_{r₂}(u_n)) = ϕ_{r₁,u_n}(t⁻_{r₁}(u_n)) + f′(θ_n)(r₂ − r₁), ∀n ∈ ℕ,

where θ_n ∈ (r₁, r₂). Note from Lemma 8.3 that {u_n} ⊂ M_{r₁} and therefore {u_n} ⊂ M_r for all r ∈ [r₁, r₂]. From Lemma 8.2 we conclude that f′(θ_n)(r₂ − r₁) < c(r₂ − r₁), where c < 0. As a consequence

J_{r₂} ≤ J_{r₁} + c(r₂ − r₁),

and the proof is complete. From [1] we conclude:

Theorem 8.5. For each r ∈ (0, inf_{u∈S_1} r̃(u)), there exists u ∈ N⁻_r such that J_r = E(u).

Appendix. New inequalities

We conclude with some estimates; in particular the second one is new in the literature.

ii) for each p ∈ (3, 10/3) and r* < r₁ < r₂, then

where c′_p > 0 is the constant given in Proposition 4.3.

Proof. i) Indeed, fix r₁ < r₂ and take u ∈ M_{r₂} satisfying E(r₂^{1/2} u_{t(r₂)}) < 0. For simplicity we set t_i = t(r_i), i = 1, 2.
From Lemma 6.3 we know that u ∈ M r 1 and E(r 1/2 1 u t 1 ) < 0, which implies that t 1 is a global minimum for the fiber map ϕ r 1 ,u and therefore

Since p ∈ (2, 3), it follows that (r 1 /r 2 ) 2(p−3) − 1 > 0; therefore, if {u n } ⊂ M r 2 is chosen in such a way that {r 1/2 2 u t 2 n } is a minimizing sequence for I r 2 , since it must be non-vanishing we obtain that

ii) Indeed, fix r * < r 1 < r 2 and take u ∈ M r 1 . For simplicity, let again t i = t(r i ) for i = 1, 2 and set

Observe that Q(r

Since Q(r 1/2 1 u t 1 ) = 0, r 1 < r 2 and p > 3, we conclude that Q(r

References

[1] J. Bellazzini, L. Jeanjean, T. Luo, Existence and instability of standing waves with prescribed norm for a class of Schrödinger-Poisson equations, Proc. Lond. Math. Soc. (3) 107 (2013), no. 2, 303-339.
[2] J. Bellazzini, G. Siciliano, Stable standing waves for a class of nonlinear Schrödinger-Poisson equations, Z. Angew. Math. Phys. 62 (2010), 267-280.
[3] J. Bellazzini, G. Siciliano, Scaling properties of functionals and existence of constrained minimizers, J. Funct. Anal. 261 (2011), 2486-2507.
[4] V. Benci, D. Fortunato, An eigenvalue problem for the Schrödinger-Maxwell equations, Topol. Methods Nonlinear Anal. 11 (1998), 283-293.
[5] V. Benci, Hylomorphic solitons, Milan J. Math. 77 (2009), 271-332.
[6] O. Bokanowski, N. Mauser, Local approximation for the Hartree-Fock exchange potential: a deformation approach, Math. Models Methods Appl. Sci. 9 (1999), no. 6, 941-961.
[7] I. Catto, J. Dolbeault, O. Sánchez, J. Soler, Existence of steady states for the Maxwell-Schrödinger-Poisson system: exploring the applicability of the concentration-compactness principle, Math. Models Methods Appl. Sci. 23 (2013), no. 10, 1915-1938.
[8] I. Catto, P.L. Lions, Binding of atoms and stability of molecules in Hartree and Thomas-Fermi type theories. I. A necessary and sufficient condition for the stability of general molecular systems, Comm. Partial Differential Equations 17 (1992), no. 7-8, 1051-1110.
[9] M. Colin, T. Watanabe, Standing waves for the nonlinear Schrödinger equation coupled with the Maxwell equation, Nonlinearity 30 (2017), no. 5, 1920-1947.
[10] Y. Il'yasov, On extreme values of Nehari manifold method via nonlinear Rayleigh's quotient, Topol. Methods Nonlinear Anal. 49 (2017), no. 2, 683-714.
[11] L. Jeanjean, T. Luo, Sharp nonexistence results of prescribed L 2 -norm solutions for some class of Schrödinger-Poisson and quasi-linear equations, Z. Angew. Math. Phys. 64 (2013), no. 4, 937-954.
[12] E.H. Lieb, B. Simon, The Hartree-Fock Theory for Coulomb Systems, Commun. Math. Phys. 53 (1977), 185-194.
[13] E.H. Lieb, M. Loss, Analysis, Graduate Studies in Mathematics, vol. 14, American Mathematical Society, Providence, Rhode Island, 2001.
[14] P.L. Lions, Solutions of Hartree-Fock equations for Coulomb systems, Comm. Math. Phys. 109 (1987), no. 1, 33-97.
[15] P.L. Lions, The concentration-compactness principle in the Calculus of Variations. The locally compact case, part I and II, Ann. Inst. H. Poincaré Anal. Non Linéaire 1 (1984), 109-145 and 223-283.
[16] N.J. Mauser, The Schrödinger-Poisson-Xα equation, Appl. Math. Lett. 14 (2001), 759-763.
[17] D. Ruiz, The Schrödinger-Poisson equation under the effect of a nonlinear local term, J. Funct. Anal. 237 (2006), 655-674.
[18] R.G. Parr, W. Yang, Density Functional Theory of Atoms and Molecules, Oxford Univ. Press, 1989.
[19] S.I. Pohozaev, The fibering method for solving nonlinear boundary value problems (Russian), Trudy Mat. Inst. Steklov 192 (1990), 146-163; translated in Proc. Steklov Inst. Math. 1992, no. 3, 157-163.
[20] O. Sánchez, J. Soler, Long-time dynamics of the Schrödinger-Poisson-Slater system, J. Statist. Phys. 114 (2004), no. 1-2, 179-204.
[21] G. Siciliano, K. Silva, The fibering method approach for a non-linear Schrödinger equation coupled with the electromagnetic field, Publ. Mat. 64 (2020), 373-390.
[22] K. Silva, On an abstract bifurcation result concerning homogeneous potential operators with applications to PDEs, J. Diff. Equations 269 (2020), no. 9, 7643-7675.
[23] J.C. Slater, A simplification of the Hartree-Fock method, Phys. Rev. 81 (1951), 385-390.
[24] Y. Il'yasov, K. Silva, On branches of positive solutions for p-Laplacian problems at the extreme value of the Nehari manifold method, Proc. Amer. Math. Soc. 146 (2018), no. 7, 2925-2935.
[25] L. Zhao, F. Zhao, On the existence of solutions for the Schrödinger-Poisson equations, J. Math. Anal. Appl. 346 (2008), 155-169.

(G. Siciliano) Departamento de Matemática, Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão 1010, 05508-090 São Paulo, SP, Brazil. E-mail address: [email protected]

(K. Silva) Instituto de Matemática e Estatística, Universidade Federal de Goiás, Rua Samambaia, 74001-970, Goiânia, GO, Brazil. E-mail address: [email protected]
[]
[ "DNN-Life: An Energy-Efficient Aging Mitigation Framework for Improving the Lifetime of On-Chip Weight Memories in Deep Neural Network Hardware Architectures", "DNN-Life: An Energy-Efficient Aging Mitigation Framework for Improving the Lifetime of On-Chip Weight Memories in Deep Neural Network Hardware Architectures" ]
[ "Muhammad Abdullah Hanif [email protected] \nFaculty of Informatics\nTechnische Universität Wien (TU Wien)\nViennaAustria\n", "Muhammad Shafique [email protected] \nDivision of Engineering\nNew York University Abu Dhabi (NYUAD)\nAbu DhabiUnited Arab Emirates\n" ]
[ "Faculty of Informatics\nTechnische Universität Wien (TU Wien)\nViennaAustria", "Division of Engineering\nNew York University Abu Dhabi (NYUAD)\nAbu DhabiUnited Arab Emirates" ]
[]
Negative Biased Temperature Instability (NBTI)-induced aging is one of the critical reliability threats in nano-scale devices. This paper makes the first attempt to study the NBTI aging in the on-chip weight memories of deep neural network (DNN) hardware accelerators, subjected to complex DNN workloads. We propose DNN-Life, a specialized aging analysis and mitigation framework for DNNs, which jointly exploits hardware-and software-level knowledge to improve the lifetime of a DNN weight memory with reduced energy overhead. At the softwarelevel, we analyze the effects of different DNN quantization methods on the distribution of the bits of weight values. Based on the insights gained from this analysis, we propose a micro-architecture that employs low-cost memory-write (and read) transducers to achieve an optimal duty-cycle at run time in the weight memory cells, thereby balancing their aging. As a result, our DNN-Life framework enables efficient aging mitigation of weight memory of the given DNN hardware at minimal energy overhead during the inference process.
10.23919/date51398.2021.9473943
[ "https://arxiv.org/pdf/2101.12351v1.pdf" ]
231,728,338
2101.12351
a32ae470939042a75511e417fae23d3baf2c81d7
I. INTRODUCTION

DNN accelerators have already become an essential part of various machine learning systems [1] [2]. DNNs usually require a large number of parameters to offer high accuracy, which comes at the cost of high memory requirements; see Fig. 1a.
Dedicated memory hierarchies are designed to trade off between the low-cost storage offered by the off-chip DRAMs and the energy-/performance-efficient access offered by the on-chip SRAMs [1]; see Fig. 1b for access energy. This has led to an increasing trend towards the use of larger on-chip memory in the state-of-the-art DNN accelerators [3] [4], with the recent wafer-scale chips having up to 18 GBs of on-chip memory [5]. However, due to continuous technology scaling, the on-chip SRAM-based memories are becoming increasingly vulnerable to different reliability threats, for example, soft errors and aging [6] [7] [8]. Studies have shown that even a single fault in the weights of critical neurons can result in significant degradation of application-level accuracy [9]. State-of-the-art works have focused on analyzing and mitigating the effects of faults in DNN accelerators w.r.t. DNN accuracy [10]. However, to the best of our knowledge, no prior works have analyzed and optimized the aging of the on-chip weight memories of DNN accelerators, especially when considering diverse dataflows of different DNNs and the impact of different types of quantizations on the weight distributions.

Aging due to NBTI: In PMOS transistors, when a negative gate-to-source voltage is applied, it can break down the Si-H bonds at the oxide interface, thereby causing a gradual increase in the threshold voltage (V th ) over the device lifetime, which results in poor drive current and a reduction in the noise margin [11] 1 . To overcome this V th shift, the operating frequency of the device has to be reduced by more than 20% over its entire lifetime [12]. However, due to strict performance and energy constraints (specifically for embedded applications), the V th shift cannot be addressed just by design-time delay margins or adaptive operating frequency adjustments [13], as this leads to a significant loss in the system's performance and energy efficiency.
Therefore, in traditional computing systems, alternate opportunities have to be exploited to overcome this challenge [12]. One such opportunity lies in the fact that the NBTI aging phenomenon is partially reversed by removing the stress.

1 A similar phenomenon called PBTI happens in NMOS transistors, though NBTI has been considered relatively more serious compared to PBTI [6].

(Fig. 2 caption excerpt: The NBTI effect is minimum here because the NBTI stress will equally be distributed between the two PMOS transistors existing in the SRAM.)

NBTI Aging of On-chip Memories: On-chip memories are typically built using 6T-SRAM cells to achieve high area and power efficiency. A 6T-cell is composed of two inverters coupled with two access transistors (see Fig. 2a). The inverters store complementary values to store a single bit. Each inverter has a PMOS transistor and an NMOS transistor. Depending on whether the cell is storing '0' or '1', one of the PMOS transistors is always under stress, when the transistor is on. As aging of a cell is defined by its most-aged transistor, the lowest aging is achieved when both the PMOS transistors receive on average the same amount of stress over the entire lifetime of the device, i.e., the percentage of the entire lifetime for which the cell stores a '1' (duty-cycle) is 50%, as shown in Fig. 2b. Note that NBTI aging strongly depends on average long-term stress and weakly on short-term statistics [14]. Therefore, the key challenge in aging mitigation of on-chip memories is to balance their duty-cycle over the entire lifetime without affecting system-level performance.

State-of-the-art techniques and their limitations: Various techniques have been proposed at circuit-level and at architecture-level. At circuit-level, the structure of SRAM-cells is modified to reduce the aging rate [16] [13]. For example, Ricketts et al.
[16] proposed an asymmetric SRAM structure for workloads having biased bit distribution, but due to its high data dependence, it is applicable only in specific scenarios. Recovery boosting through a dedicated recovery-accelerating circuit is another method for enhancing the lifetime of the SRAM cells [17], but it increases power/energy consumption due to additional transistors per cell, and therefore cannot be used in energy-constrained large-sized memories [18]. At architecture-level, periodic inversion of data is used to reduce the aging rate of on-chip caches [19]. However, it cannot guarantee an optimal duty-cycle, specifically in cases where the same data is periodically reused, e.g., in DNN-based systems where the same set of parameters is reused for processing each input sample. Calimera et al. in [20] improved recovery of unutilized portions of memory, but at the high area and energy cost of expensive online monitoring. The technique also suffers from serious performance degradation in dynamic workload scenarios. Another set of techniques uses bit rotations to counter NBTI aging in registers [15], but they work only in cases where the overall distribution of bits is relatively balanced. Moreover, they use barrel shifters that incur high area and power overheads. The work in [21] proposed a configurable micro-architecture for reducing the aging rate of video memories, but it only works for streaming video applications. In summary, the state-of-the-art techniques either incur high overheads in terms of area and power/energy or rely on certain specific workloads, and they cannot be employed in DNN accelerators due to the unique properties of DNN hardware and workloads, as we will illustrate later in this paper.

Additional Challenges from the Deep Learning Perspective: The dataflow (i.e., computation scheduling) for a given DNN on a specific hardware is defined as per the DNN architecture and the hardware implementation to achieve maximum energy-/performance-efficiency.
Altering the dataflow to balance the duty-cycle in on-chip SRAM cells can result in significant degradation of system-level efficiency. Therefore, an aging mitigation technique that does not require any alteration to the dataflow or the mapping of the data in on-chip SRAM is desired.

Our Novel Contributions: Towards this, we propose DNN-Life, an aging analysis and mitigation framework for on-chip memories of DNN hardware (see Fig. 3). Our framework employs two key features:

1) Aging Analysis [Section III]: We analyze the impact of using different data representation formats and quantization methods for the weights of a DNN on the probability distribution of weight-bits, as this can provide useful insights for designing an effective and low-overhead aging mitigation technique.

2) Aging Mitigation [Section IV]: We propose a scheme and supporting micro-architecture for mitigating the NBTI-aging of the 6T-SRAM-based on-chip weight memory of DNN accelerators with minimal energy overhead. Notably, our scheme does not require any alteration to the dataflow of DNN inference or on-chip data mapping, and thereby maintains the energy and performance benefits of the system. The micro-architectural extensions for aging mitigation are integrated in the DNN accelerator before and after the on-chip weight memory in the form of aging-aware write and read transducers, as shown in Fig. 4.

II. OVERVIEW OF OUR DNN-LIFE FRAMEWORK

In this work, we propose DNN-Life, a novel aging analysis and mitigation framework for weight memories of DNN hardware accelerators. It employs a low-cost data encoding scheme that accounts for diverse DNN workloads to adapt over time to balance the duty-cycle in each on-chip weight memory cell to alleviate the NBTI-aging effects.
Towards this, the two key features of our framework are:

1) Analysis: We analyze the probability distribution of weight-bits of different pre-trained DNNs to find key insights that help in developing a low-cost aging-mitigation scheme. To consider the variations in the distribution across number representation formats and the methods used to transform the weights to those formats, we consider different number representation formats and different commonly used conversion methods. The detailed analysis and insights are presented in Section III.

2) Architecture: Based on the gathered insights, we design a data encoding module and an aging controller. The encoder is responsible for encoding the weights before writing the values to the weight memory, and the aging controller is responsible for generating the encoding information required to encode the data such that the duty-cycle is balanced. The encoding information is then stored to be used by the corresponding decoder module. The data encoder is deployed inside the DNN hardware accelerator right before the weight memory, and the corresponding decoder is installed after the memory, to decode the weights before passing them for computations. The integration of the encoder and the decoder modules in a DNN accelerator is illustrated in Fig. 4a. The details of the micro-architecture are presented in Section IV.

A. DNN Hardware Architecture

Our DNN hardware architecture is based on well-established DNN accelerator models, such as [22] for dense DNNs. Our accelerator is composed of an Activation Buffer, a Weight Buffer, a Processing Array, and an Accumulation Unit; see Fig. 4a. Our proposed weight-memory aging mitigation modules integrated in the architecture are also shown in the figure (see details in Section IV). The activation and weight buffers provide intermediate storage for the activations and weights, respectively, to reduce the costly off-chip memory accesses.
The buffers provide data to the processing array for performing the computations. For this work, we assume a memory hierarchy similar to Bit-Tactical [22], DaDianNao [3] and the TPU [4], according to which: 1) the activation buffer is large enough to store the activations of a single layer of a DNN; 2) the activation memory can provide N activation values to the processing array at a time; and 3) the weight memory can provide f × N weights to the processing array simultaneously. The processing array (see Fig. 4b) is composed of f Processing Elements (PEs) that share the activations, and therefore can perform N multiplications for f different filters at the same time. Each PE has an adder tree to compute the sum of the multiplications. The computed sum is passed to the accumulation unit where it is added with the corresponding partial sums to generate the output activation value. Note, as the filters can be significantly large, the computation of each output activation can take several cycles, depending on the filter size.

B. Dataflow in the DNN Accelerator

To perform the computations of a DNN layer using the above accelerator, the weights have to be partitioned into blocks that can be accommodated in the on-chip memory. The goal of partitioning is to maximize the use of the available PEs. The input/output feature maps and the filters/neurons are all divided into so-called tiles, depending on the available on-chip storage for the corresponding data type. Works like SmartShuttle [23] provide methods to find an optimal tiling configuration and computation scheduling policy for a layer of a DNN for a given memory hierarchy. Fig. 5 illustrates the policy that we employ for partitioning the filters of a CONV layer. Note, we support the well-established tiling technique so that we can demonstrate that our technique can benefit a wide range of existing DNN hardware accelerators.
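The filter partitioning described here (and detailed in the following paragraph) can be sketched as a simple loop nest. This is an illustrative Python model, not the accelerator's implementation: the function name and parameter names are ours, and the particular loop order over the chunk grid, which the paper fixes via the steps in Fig. 5, is an assumption.

```python
def schedule_weight_blocks(F, R, C, CH, f, r, c, ch):
    """Enumerate the order in which weight chunks are moved on-chip.

    F filters of shape (R, C, CH) are split into sets of f filters;
    each on-chip chunk covers the same r x c x ch block of every
    filter in the current set.
    """
    for set_start in range(0, F, f):                 # sets of f filters
        filter_set = tuple(range(set_start, min(set_start + f, F)))
        for ch0 in range(0, CH, ch):                 # walk the chunk grid
            for r0 in range(0, R, r):                # (assumed loop order)
                for c0 in range(0, C, c):
                    yield (filter_set, (r0, c0, ch0))

# Example: 4 filters of shape (4, 4, 2), sets of f = 2, chunks of 2 x 2 x 2
order = list(schedule_weight_blocks(F=4, R=4, C=4, CH=2, f=2, r=2, c=2, ch=2))
print(len(order))  # 8: 2 sets x (2 x 2 x 1) chunk positions per set
```

Each yielded item identifies one on-chip mapping: the set of f filters it belongs to and the origin of the r × c × ch block taken from the same location of every filter in that set.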
The figure also illustrates the sequence in which the blocks are moved to the on-chip weight memory and the corresponding computations are scheduled. The filters are first divided into sets, where each set contains f filters. Note, f is mainly defined based on the number of filters that the hardware accelerator can process in parallel. Afterwards, a chunk of data (grey boxes in Fig. 5) from a set is selected to be moved to the on-chip memory. The selected chunk contains a block of data of size r × c × ch from the same location of each filter in the set. The sequence in which the grey boxes are traversed in the filters defines the rest of the dataflow. The used sequence is shown as steps in Fig. 5.

III. ANALYSIS OF THE DISTRIBUTION OF WEIGHT-BITS FOR DIFFERENT DNNS & THEIR IMPACT ON DUTY-CYCLE

Before presenting the design of the proposed aging mitigation modules in Section IV, here we first present an analysis which highlights the rationale behind the proposed design.

A. Analyzing the Distribution of Weight-Bits

For this analysis, we consider the AlexNet and the VGG-16 networks, trained on the ImageNet dataset. As different data representations for weights, we consider 32-bit floating point representation (IEEE 754 standard) and 8-bit integer format achieved using range-linear symmetric and asymmetric quantization techniques [24]. Fig. 6 illustrates the ratio of observing a '1' to the total number of observations (which corresponds to the probability of observing a '1') at each bit-location of a word for all three data representation formats for both the networks.
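The per-bit-location counting behind this kind of analysis can be reproduced in a few lines. The sketch below is a simplified Python illustration, not the paper's tooling: it implements only the symmetric range-linear variant (scaling by the maximum magnitude is an assumed concrete choice) and counts '1's in the two's-complement bit pattern of each quantized weight.

```python
def symmetric_quantize(weights, n_bits=8):
    # Range-linear symmetric quantization: a single scale maps the
    # largest magnitude to the largest representable signed integer.
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights]

def ones_ratio_per_bit(int_weights, n_bits=8):
    # Ratio of '1's observed at each bit-location, reading each
    # quantized weight in its n_bits two's-complement form.
    counts = [0] * n_bits
    for w in int_weights:
        w &= (1 << n_bits) - 1          # two's-complement bit pattern
        for b in range(n_bits):
            counts[b] += (w >> b) & 1
    return [cnt / len(int_weights) for cnt in counts]

ws = [-0.9, -0.2, 0.1, 0.5, 0.9]
q = symmetric_quantize(ws)
print(q)  # [-127, -28, 14, 71, 127]
ratios = ones_ratio_per_bit(q)
print([round(p, 2) for p in ratios])
```

Running the same counting over all weights of a pre-trained network, for each representation format, yields the per-bit-location probabilities of the kind plotted in Fig. 6.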
By analyzing the distributions, the following key observations are made:

1) The probability of getting a '1' value at a particular bit-location of a randomly selected weight depends on the network, the data representation format, and the method used to transform the data to the particular data representation format. For example, the probability of getting a '1' at a particular bit-location in symmetric 8-bit representation is almost the same across bit-locations within a network for both the considered DNNs, however, it varies across networks. Similarly, the probability of getting a '1' at the lower bit-locations in 32-bit floating-point representation is around 0.5, however, the distribution of bits at higher bit-locations varies across bit-locations as well as across DNNs.

2) Representation of weights using a specific format cannot guarantee a distribution that offers 0.5 probability at each bit-location, i.e., a distribution that can potentially lead to a balanced duty-cycle. For example, out of all the studied cases, only the distribution of the AlexNet when represented using 8-bit integer format achieved using symmetric range-linear quantization offers close to 0.5 probability for all the bit-locations.

3) The average probability of getting a '1' across bit-locations in a specific format is also not guaranteed to be equal to 0.5. For example, see the distributions of the 8-bit asymmetrically quantized DNNs. Therefore, barrel shifter-based balancing techniques would not produce desirable results in such cases.

In the following, we develop a probabilistic model to analyze the effectiveness of different aging mitigation techniques.

1) Probabilistic Model: Assume the on-chip memory of a given DNN accelerator is composed of I × J cells. For mapping the weights of a DNN onto the memory, we assume: (a) the same dataflow as presented in Fig.
5; (b) each block of weights is kept in the on-chip memory for an equal amount of time, and it is fetched only once during a single inference (similar to the dataflow for the DNN accelerator proposed in [22]); and (c) each block of data mapped onto the on-chip memory fits perfectly to it. Based on the aforementioned conditions and the given DNN size, we can divide the DNN into K blocks, which translates to K data mappings onto the on-chip weight memory. Now, if the same DNN is used repeatedly for inferencing with the same dataflow, a single on-chip memory cell is mapped with only K different bits. If the probability of getting a '1' for all the bits is given by ρ, the probability of getting a duty-cycle less than or equal to b/K, or greater than or equal to 1 − b/K, can be computed using the following equation, except when b/K = 0.5, where the probability is 1.

P_{b/K} = \sum_{i=0}^{b} \binom{K}{i} \rho^{i} (1-\rho)^{K-i} + \sum_{i=K-b}^{K} \binom{K}{i} \rho^{i} (1-\rho)^{K-i} \quad (1)

Here, b is an arbitrary variable with the range from 0 to K/2. Note that we combine (i) the cases in which the duty-cycle is less than or equal to b/K and (ii) the cases in which the duty-cycle is greater than or equal to 1 − b/K, because in a symmetric 6T-SRAM cell both cases cause the same level of stress in one of the two PMOS transistors. Assuming the above computed probability to be the same for all the cells of the on-chip memory, the probability of at least n cells (out of I × J) experiencing a duty-cycle less than or equal to b/K, or greater than or equal to 1 − b/K, can be computed using the following equation.

P_{n} = \sum_{i=n}^{I \times J} \binom{I \times J}{i} P_{b}^{i} (1-P_{b})^{I \times J - i} \quad (2)

2) An Example Case-Study: Let us consider a scenario where K = 20 and ρ = 0.5 (i.e., the best-case with balanced bit distribution), and I × J = 8192. Fig. 7a shows the probability for each possible value of b computed using Eq. 1.
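Eqs. (1) and (2) translate directly into code. The Python sketch below (function names are ours) transcribes both equations and evaluates Eq. (1) for the case study just introduced; the special case b/K = 0.5 is handled as stated in the text.

```python
from math import comb

def p_duty_tail(K, rho, b):
    """Eq. (1): probability that a cell's duty-cycle is <= b/K or
    >= 1 - b/K after K independent bit writes, each '1' w.p. rho.
    As stated in the text, the probability is 1 when b/K = 0.5."""
    if 2 * b == K:
        return 1.0
    term = lambda i: comb(K, i) * rho**i * (1 - rho)**(K - i)
    return (sum(term(i) for i in range(0, b + 1))
            + sum(term(i) for i in range(K - b, K + 1)))

def p_at_least_n_cells(IJ, n, p_b):
    """Eq. (2): probability that at least n of the I*J cells show
    such a duty-cycle, assuming p_b is identical for every cell."""
    return sum(comb(IJ, i) * p_b**i * (1 - p_b)**(IJ - i)
               for i in range(n, IJ + 1))

# Case study: K = 20, rho = 0.5, b = 6 (i.e., b/K = 0.3)
p = p_duty_tail(20, 0.5, 6)
print(round(p, 4))  # 0.1153, i.e., over 0.1 as stated in the text
```

For ρ = 0.5 the ρ-terms reduce to 2^(-K), so Eq. (1) is just a two-sided binomial tail; this reproduces the "over 0.1" figure quoted for b/K = 0.3 in the discussion that follows.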
Note, even for b/K = 0.3, the probability is over 0.1, i.e., more than 10% of the cells are expected to experience a duty-cycle of less than 0.3, or greater than 0.7. Now, if we employ a given aging mitigation technique that offers up to 7 shifts to increase the number of different bits that are mapped to a single cell, we can theoretically increase the value of K to 160, assuming the bits to be independent from each other and an ideal shifting policy. Putting K = 160 in the above-mentioned example, Fig. 7b shows the probabilities for different b/K values. As can be seen from Fig. 7b, the probabilities at lower b/K values have dropped significantly. The above analysis implies that by significantly increasing K and having ρ = 0.5, we can achieve a close-to-ideal duty-cycle for all the cells. Now, instead of a barrel shifter, if we employ an inversion-based duty-cycle balancing technique where every other write to the same location is inverted, for the given scenario, the value of K remains the same, as it is even. Moreover, as ρ is defined to be 0.5, the inversion-based policy has no impact on ρ either. Therefore, we get the same probabilities as presented in Fig. 7a. However, note that the inversion-based policy is mainly useful for achieving ρ = 0.5 in cases where the distribution of bits is biased either towards '0' or '1'.

C. Challenges in Designing an Efficient Aging Mitigation System

Based on the above analysis, we outline the following key challenges in designing a generic aging mitigation system.

1) The probability of occurrence of a non-ideal duty-cycle is considerable even with the state-of-the-art fixed aging mitigation techniques. Therefore, a more robust method has to be designed by exploiting the fact that NBTI-aging is more dependent on the average duty-cycle over the lifetime of the device [14].

2) The distribution of bits and the duty-cycle is significantly affected by the datatype used for representing the weights.
Therefore, the mitigation technique should be generic and independent of the datatype used so that it is beneficial for various DNN accelerators. Moreover, in practical scenarios, each layer of a DNN can have a different size. Therefore, each layer can take a different amount of time for processing, which can vary significantly across layers. Also, different DNNs can have different numbers of layers. Therefore, a method that keeps track of all these factors at a fine granularity can help in significantly reducing the aging rates. However, such methods are prohibitively costly. This makes it very challenging to develop a generic method that offers effective aging mitigation at reasonable overheads.

IV. A MICRO-ARCHITECTURE FOR MITIGATING AGING OF THE ON-CHIP WEIGHT MEMORY OF DNN ACCELERATORS

To address the above challenges, we propose a Write Data Encoder (WDE) for encoding the weights before writing them to the on-chip weight memory, and a Read Data Decoder (RDD) which performs the inverse function of the WDE while reading the data from the on-chip memory and before passing it to the processing array. The integration of the proposed modules in the DNN accelerator is shown in Fig. 4a. Moreover, we propose an aging mitigation controller which generates the control signals (metadata) for the write (and read) transducer. The proposed micro-architectures of the WDE and the aging mitigation controller are shown in Fig. 8.

Write Data Encoder (WDE): It leverages inversion logic which, besides its low overhead 2 , also makes it possible to perfectly balance out the distribution of bits in the cells of the memory when the distribution is originally biased towards either '0' or '1', as highlighted in Section III. The inversion logic in the proposed micro-architecture is implemented using XOR gates, as they allow the aging mitigation controller to enable or disable it using just a 1-bit enable (E) signal.
Another key advantage of this design is that the micro-architecture of the RDD is the same as that of the WDE: the same E signal (metadata) that is used to encode the weights is used (at a later point in time) to decode them before passing them to the processing array. Moreover, the proposed WDE and RDD modules are highly scalable, as increasing the width of the modules requires only a linear increase in the number of XOR gates. Therefore, the widths of these modules can be defined directly based on the DNN accelerator configuration without affecting the energy efficiency of the system.

Aging Mitigation Controller: The controller is the core part of the proposed micro-architecture, as it is responsible for generating the enable signal (E) that enables/disables the inversion logic in the WDE. The design is based on the observations made in Section III: the higher the number of different bits written to an SRAM cell during its lifetime (i.e., K in Eq. 1) that are generated from a uniform distribution, the lower the chance of observing a deviation of its duty-cycle from 0.5 (see Fig. 7), i.e., from the ideal point shown in Fig. 2b. Therefore, to increase the number of different bits written to an SRAM cell, we employ a True Random Bit Generator (TRBG) to generate the enable signal and decide whether the upcoming data should be written to the memory cell with or without inversion. The TRBG introduces randomness into the bits written to the memory, thereby leading to a larger effective K value and lower aging. Note that in practical scenarios the output of a TRBG can be biased towards either '0' or '1', which can eventually affect the duty-cycle. To mitigate this, we periodically invert the output of the TRBG after a defined number of iterations, with the help of an M-bit register, before using it as the enable signal; this balances out the bias. For aging estimation, we use the Static Noise Margin (SNM) to quantify the NBTI aging of 6T-SRAM cells, similar to [21], [25].
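The controller logic can be modeled as below (a sketch only: the real TRBG is a ring oscillator, which we stand in for with a seeded, deliberately biased pseudo-random source to show the effect of the M-bit bias-balancing register; all names are ours):

```python
import random

class AgingController:
    """Emits the enable bit E: a (possibly biased) random bit whose polarity
    is inverted every 2**M calls, so the long-run mean of E tends to 0.5."""

    def __init__(self, M: int = 4, p_one: float = 0.5, seed: int = 0):
        self.period = 1 << M      # the M-bit register wraps every 2**M iterations
        self.count = 0
        self.flip = 0             # current polarity applied to the TRBG output
        self.p_one = p_one        # TRBG bias: P(raw bit = 1)
        self.rng = random.Random(seed)

    def enable(self) -> int:
        raw = 1 if self.rng.random() < self.p_one else 0   # TRBG stand-in
        self.count += 1
        if self.count == self.period:
            self.count = 0
            self.flip ^= 1        # periodically invert the TRBG output
        return raw ^ self.flip

ctrl = AgingController(M=4, p_one=0.7)                     # biased TRBG
mean_e = sum(ctrl.enable() for _ in range(1 << 16)) / (1 << 16)
print(mean_e)   # close to 0.5 despite the 0.7 bias of the raw source
```

Alternating polarity every 2^M draws averages the biased rates p and 1-p, which is why even a skewed TRBG yields a near-ideal enable stream.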
The SNM defines the tolerance to noise, which directly affects the read stability of a cell [26]; i.e., if the SNM of a cell is low, the cell is highly susceptible to read failures. As per [15], [21], [25], the SNM mainly depends on the duty-cycle over the entire lifetime of the cell, and the least SNM degradation is achieved at a 50% duty-cycle. To obtain the SNM results, we employ a device aging model similar to those used in state-of-the-art studies such as [21], [25]. However, since our proposed technique focuses on duty-cycle optimization, it is orthogonal to the particular device aging model, and other device-level models can easily be integrated into our framework. Based on these models, the SNM degradation of a 6T-SRAM cell can be computed from its duty-cycle. From the analysis, the best SNM degradation of a 6T-SRAM cell after 7 years is 10.82% (at 50% duty-cycle), and the worst is 26.12% (at 0% and 100% duty-cycle). For large-scale simulations, we integrated the output of these models into a memory simulator of the baseline DNN hardware (described in Section II-A). The simulator takes the DNN hardware configuration, the dataflow, a pre-trained DNN architecture and test samples as inputs. We also built a memory simulator for a TPU-like hardware architecture [4] to validate the proposed aging mitigation technique across DNN hardware accelerators. The hardware configurations used for the evaluation are presented in Table I. The DNNs used are the AlexNet and the VGG-16 with the ImageNet dataset, and a custom network with the MNIST dataset. The custom network is composed of two CONV layers and two FC layers, i.e., CONV(16,1,5,5), CONV(50,16,5,5), FC(256,800) and FC(10,256). For each setting, the duty-cycles are estimated based on the values observed over 100 inferences. The bias balancing register is defined to be a 4-bit register (i.e., M=4) in all the corresponding cases. B.
Aging Estimation Results and Comparisons

In this subsection, we analyze the impact of using different aging mitigation policies on the SNM degradation of the 6T-SRAM on-chip weight memory cells after 7 years. We mainly consider four different policies: (1) no aging mitigation, (2) inversion-based, (3) barrel shifter-based, and (4) DNN-Life. For the proposed DNN-Life, we consider three different cases: (i) the TRBG is not biased and generates 0s and 1s with equal probability (referred to in the results as Bias=0.5); (ii) the TRBG is biased and generates 1s with probability 0.7, and the aging controller does not have a bias balancing register (referred to in the results as without bias balancing with Bias=0.7); and (iii) the TRBG is biased and generates 1s with probability 0.7, and the aging controller has a 4-bit bias balancing register (referred to in the results as with bias balancing with Bias=0.7). Moreover, we performed experiments considering three different data representation formats for the weights: (1) 32-bit floating-point format; (2) 8-bit integer format with weights quantized using the symmetric quantization method; and (3) 8-bit integer format with weights quantized using the asymmetric quantization method.

Fig. 9 shows the distributions of SNM degradation in the memory cells obtained using the different aging mitigation policies and a pre-trained AlexNet model. The Y-axis of each bar graph shows the percentage of cells and the X-axis shows the SNM degradation levels. Note that, for these experiments, we assumed the baseline DNN accelerator configuration presented in Table I and the dataflow shown in Fig. 5 with f = 8. We also assumed that only a single DNN (i.e., the AlexNet) is used for data inference throughout the lifetime of the device.
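Policy (2) above — inverting every other write to the same location — drives the stored duty-cycle of a constant bit to exactly 0.5, which a toy trace makes explicit (a sketch; the helper names are ours):

```python
def stored_bits(write_seq, invert_alternate=False):
    """Bits actually held in one SRAM cell across a sequence of writes."""
    return [b ^ (i % 2) if invert_alternate else b
            for i, b in enumerate(write_seq)]

def duty_cycle(bits):
    """Fraction of the time the cell stores '0'."""
    return bits.count(0) / len(bits)

ones = [1] * 100                        # a weight bit fully biased toward '1'
print(duty_cycle(stored_bits(ones)))                         # 0.0 -> worst-case aging
print(duty_cycle(stored_bits(ones, invert_alternate=True)))  # 0.5 -> ideal point
```

The same mechanism explains why inversion alone can still be sub-optimal in the results below: it balances a constant bias perfectly, but offers no extra randomization when the written values themselves alternate.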
As can be seen in the figure, the inversion-based and barrel shifter-based aging balancing reduce the SNM degradation of the SRAM cells; however, they do not offer the minimum SNM degradation (see 2 and 3 in comparison with 1 in Fig. 9). This behavior is consistent across all the data representation formats (see 2 through 7 in comparison with their respective without-aging-mitigation graphs in Fig. 9). Specifically, the inversion-based aging balancing offers sub-optimal aging mitigation in the case of the 32-bit floating-point format (see 2 in Fig. 9), where most of the cells experience around 10.8% SNM degradation (see a in Fig. 9). However, this is not the ideal scenario, as 4% of the cells experience the highest level of SNM degradation (see b in Fig. 9) and a few experience a moderate level of SNM degradation (see c in Fig. 9). If we now analyze the results of the proposed DNN-Life with bias balancing, it offers maximum aging mitigation (i.e., all the cells experience around 10.8% SNM degradation) in all the cases (see 8, 9 and 10 in Fig. 9).

Impact of a biased TRBG on aging balancing of the 6T-SRAM on-chip weight memory: Fig. 9 also illustrates the impact of using the proposed design without bias correction when the duty-cycle of the TRBG is 0.7. As can be seen in the figure, for all the data representation formats, a biased TRBG without bias correction leads to a smaller reduction in the SNM degradation of the 6T-SRAM cells (e.g., see 11 in comparison with 8 in Fig. 9). This behavior is consistent across all the data representation formats.

Impact across different hardware accelerators: Fig. 11 shows the impact of using the proposed aging mitigation technique for a TPU-like [4] Neural Processing Unit (NPU) architecture that has an on-chip weight FIFO which is four tiles deep, where one tile is equivalent to the weights for 256 × 256 PEs. Each PE has a single MAC unit that can perform 8-bit multiplication.
For our implementation, we assumed the weight FIFO to be a circular-buffer-based design. We performed the analysis using the three networks mentioned earlier. All the DNNs are quantized to 8 bits using post-training symmetric quantization. Considering the dataflow of the NPU, the parameter f was set to 256. As can be seen in Fig. 11, the inversion-based aging mitigation policy offers optimal results for the AlexNet and the VGG-16 networks (see 1 and 2 in Fig. 11). However, when used for the custom DNN, almost all the memory cells experience significant SNM degradation (see 3 in Fig. 11). The barrel shifter-based approach also offers sub-optimal results (see 4 through 6 in Fig. 11). In contrast, the proposed DNN-Life with bias balancing offers maximum aging mitigation (see 7 through 9 in Fig. 11). This shows that DNN-Life can be used for a wide range of DNN accelerators.

C. Area and Power Results

The area, power and delay characteristics of three different WDEs composed of different aging balancing units are shown in Table II. All three WDEs are designed for a 64-bit width. The barrel shifter-based WDE consumes the largest amount of area and power. The proposed design consumes slightly more power and area than the inversion-based WDE; however, as shown in the previous subsection, it offers the best aging mitigation in all possible scenarios, regardless of the size of the given DNN, the data representation format and the on-chip weight memory size. Note that, at the hardware level, we realized the TRBG using a 5-stage ring oscillator.

VI. CONCLUSION

In this paper, we proposed DNN-Life, an aging mitigation framework that employs read and write transducers to reduce NBTI-induced aging of the 6T-SRAM on-chip weight memory in DNN hardware accelerators. We analyzed different DNN data representation formats at the software level and their potential for balancing the duty-cycle in SRAM cells.
Based on this analysis, we proposed a micro-architecture that makes use of a True Random Bit Generator (TRBG) to ensure a near-ideal duty-cycle at run time, thereby balancing the aging of the complementary parts of the 6T-SRAM cells of the weight memory. As a result, DNN-Life enables efficient aging mitigation of the weight memory of a given DNN hardware accelerator with minimal energy overhead.

Fig. 1: (a) Accuracy and size comparison of a few state-of-the-art DNNs; (b) access energy comparison of SRAM with DRAM (data source: [1]).
Fig. 2: (a) A 6T-SRAM cell; and (b) its SNM degradation after 7 years as a function of the percentage of time the cell stores zero [15].
Fig. 3: Overview of the design-time and run-time steps involved in our DNN-Life framework. Our novel contributions are highlighted in colored boxes.
Fig. 4: (a) Architecture of the baseline DNN accelerator. The highlighted boxes, i.e., Write Data Encoder (WDE), Read Data Decoder (RDD) and Aging Controller, are the proposed modules for mitigating NBTI aging of the weight memory. (b) A detailed view of the processing array and the accumulation unit.
Fig. 5: Division of the filters of a CONV layer of a DNN into smaller blocks that can be accommodated in the on-chip weight memory. Different colors correspond to different sets of filters/blocks. The gray boxes define one block of r × c × ch × f size. The steps show the sequence in which the blocks are moved to the on-chip fabric for scheduling their computations.
Fig. 6: Probability of getting a '1' at a specific bit position of a weight in VGG-16: distribution of the bits of the weights of different DNNs when represented in different data representation formats. Symmetric and asymmetric indicate which post-training quantization method is used to transform the data for the corresponding distribution.

B. A Probabilistic Model-based Analysis for Aging of the 6T-SRAM On-chip Weight Memory of a DNN Accelerator

Fig.
7: Probability of occurrence of duty-cycle ≤ b/K or duty-cycle ≥ 1 − b/K when (a) K = 20, and (b) K = 160.
Fig. 8: Proposed micro-architecture for effective aging mitigation of the 6T-SRAM weight memory of DNN accelerators.

V. RESULTS AND DISCUSSION

A. Experimental Setup

Fig. 10 illustrates the overall experimental setup used for evaluation. The setup consists of hardware synthesis for estimating the power, area and delay characteristics of the proposed modules, and simulations for aging estimation of the 6T-SRAM on-chip weight memory of different DNN hardware accelerators. For hardware synthesis, we implemented different aging balancing circuits and our DNN-Life architecture in Verilog. The circuits are synthesized for the TSMC 65nm technology using Cadence Genus.

Fig. 9: SNM degradation of the 6T-SRAM on-chip weight memory cells of the baseline DNN accelerator when used for performing inferences using only the AlexNet network. Each bar graph shows the percentage of cells (Y-axis) experiencing different levels of SNM degradation (X-axis).
Fig. 10: Overall experimental setup used for evaluation.
Fig. 11: SNM degradation of the 6T-SRAM on-chip weight memory cells of a TPU-like NPU when used for performing inferences using the AlexNet, the VGG-16 and the custom DNN, individually. The networks are quantized to 8-bit format using the symmetric range-linear quantization method.
TABLE I: Hardware configurations and settings used in the evaluation.

                         Baseline Accelerator (Section II-A)   TPU-like NPU [4]
  Weight memory size     512KB                                 256KB
  Activation memory size 4MB                                   24MB
  PE array size          8 PEs (1 PE = 8 Multipliers)          256 x 256 PEs (1 PE = 1 MAC)
  Networks               AlexNet                               AlexNet, VGG-16 and Custom

TABLE II: Hardware results of different Write Data Encoders (WDEs).

                                                 Delay [ps]   Power [nW]   Area [cell area]
  Barrel shifter-based WDE                       977.7        345190       9035
  Inversion-based WDE                            811.6        10716        195
  Proposed WDE with Aging Mitigation Controller  581.8        13747        295

2 Low overhead compared to other techniques such as shifting, which requires costly barrel shifters (as shown later in Section V).

ACKNOWLEDGMENT
This work is partially supported by Intel Corporation through Gift funding for the project "Cost-Effective Dependability for Deep Neural Networks and Spiking Neural Networks."

REFERENCES
[1] V. Sze et al., "Efficient processing of deep neural networks: A tutorial and survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, 2017.
[2] M. Capra et al., "Hardware and software optimizations for accelerating deep neural networks: Survey of current trends, challenges, and the road ahead," IEEE Access, 2020.
[3] Y. Chen et al., "Dadiannao: A machine-learning supercomputer," in IEEE/ACM MICRO Symposium, 2014, pp. 609-622.
[4] N. P. Jouppi et al., "In-datacenter performance analysis of a tensor processing unit," in ACM/IEEE ISCA, 2017, pp. 1-12.
[5] P. McLellan, "Hot chips: The biggest chip in the world," 2019. [Online]. Available: https://community.cadence.com/cadence blogs 8/b/breakfast-bytes/posts/the-biggest-chip-in-the-world (accessed: 2019-09-10).
[6] J. Henkel et al., "Reliable on-chip systems in the nano-era: Lessons learnt and future trends," in ACM/ESDA/IEEE DAC, 2013, p. 99.
[7] M. Shafique et al., "Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead," IEEE Design & Test, vol. 37, no. 2, pp. 30-57, 2020.
[8] J. Henkel et al., "Thermal management for dependable on-chip systems," in IEEE ASP-DAC, 2013, pp. 113-118.
[9] M. A. Hanif et al., "Robust machine learning systems: Reliability and security for deep neural networks," in IEEE IOLTS, 2018, pp. 257-260.
[10] S. Kim et al., "Matic: Learning around errors for efficient low-voltage neural network accelerators," in IEEE DATE, 2018, pp. 1-6.
[11] K. Kang et al., "Nbti induced performance degradation in logic and memory circuits: How effectively can we approach a reliability solution?" in IEEE ASP-DAC, 2008, pp. 726-731.
[12] D. Gnad et al., "Hayat: Harnessing dark silicon and variability for aging deceleration and balancing," in 52nd ACM/EDAC/IEEE DAC, 2015.
[13] J. Shin et al., "A proactive wearout recovery approach for exploiting microarchitectural redundancy to extend cache sram lifetime," ACM SIGARCH Computer Architecture News, vol. 36, no. 3, 2008, pp. 353-362.
[14] J. Abella et al., "Penelope: The nbti-aware processor," in IEEE/ACM MICRO Symposium, 2007, pp. 85-96.
[15] S. Kothawade et al., "Analysis and mitigation of nbti aging in register file: An end-to-end approach," in IEEE ISQED, 2011, pp. 1-7.
[16] A. Ricketts et al., "Investigating the impact of nbti on different power saving cache strategies," in IEEE DATE, 2010, pp. 592-597.
[17] T. Siddiqua et al., "Enhancing nbti recovery in sram arrays through recovery boosting," IEEE TVLSI, vol. 20, no. 4, pp. 616-629, 2011.
[18] B. Zatt et al., "A low-power memory architecture with application-aware power management for motion disparity estimation in multiview video coding," in IEEE/ACM ICCAD, 2011, pp. 40-47.
[19] T. Jin et al., "Aging-aware instruction cache design by duty cycle balancing," in IEEE IVLSI, 2012, pp. 195-200.
[20] A. Calimera et al., "Partitioned cache architectures for reduced nbti-induced aging," in IEEE DATE, 2011, pp. 1-6.
[21] M. Shafique et al., "Enaam: Energy-efficient anti-aging for on-chip video memories," in ACM/IEEE DAC, 2015, pp. 101:1-101:6.
[22] A. Delmas et al., "Bit-tactical: Exploiting ineffectual computations in convolutional neural networks: Which, why, and how," preprint arXiv:1803.03688, 2018.
[23] J. Li et al., "Smartshuttle: Optimizing off-chip memory accesses for deep learning accelerators," in IEEE DATE, 2018, pp. 343-348.
[24] D. Lin et al., "Fixed point quantization of deep convolutional networks," in ICML, 2016, pp. 2849-2858.
[25] M. Shafique et al., "Content-aware low-power configurable aging mitigation for sram memories," IEEE Transactions on Computers, vol. 65, no. 12, pp. 3617-3630, 2016.
[26] K. Agarwal et al., "Statistical analysis of sram cell stability," in ACM/IEEE DAC, 2006, pp. 57-62.
The separation of variables and bifurcations of first integrals in one problem of D.N. Goryachev
P.E. Ryabov ([email protected]), Financial University under the Government of the Russian Federation, Moscow, Russia
Received ZZZZZZZZZZZZZZZZZZZZZZZZZZ

A real separation of variables is constructed for one partial case of integrability found by D.N. Goryachev in the problem of the motion of a rigid body in a fluid. The Abel-Jacobi equations with a polynomial of degree six under the radical are obtained. All phase variables are expressed algebraically through the separated variables. Based on the explicit solutions, the phase topology of the problem is investigated.

Keywords: Kirchhoff equations, integrable system, algebraic separation of variables

P.E. Ryabov. The separation of variables and bifurcations of first integrals in one problem of D.N. Goryachev. The equations of motion in one partial integrable case of D.N. Goryachev in rigid body dynamics are separated by a real change of variables. We obtain the Abel-Jacobi equations with a polynomial of degree 6 under the radical. The natural phase variables (components of the momentum and the momentum force) are expressed via the separated variables explicitly, in elementary algebraic functions. Based on these expressions, the phase topology of the system is described.
[ "https://export.arxiv.org/pdf/1102.2588v1.pdf" ]
117,030,375
1102.2588
e4eec13eb1ae2e4079c7817a8c3859b4d8e51374
The separation of variables and bifurcations of first integrals in one problem of D.N. Goryachev

13 Feb 2011

P.E. Ryabov ([email protected])
Financial University under the Government of the Russian Federation, Moscow, Russia

Keywords: Kirchhoff equations, integrable system, algebraic separation of variables
Mathematical Subject Classification 2000: 70E17, 70G40

Introduction

The Kirchhoff equations of motion of a rigid body in a fluid have, in the general case, the form

\dot M = M \times \frac{\partial H}{\partial M} + \alpha \times \frac{\partial H}{\partial \alpha}, \qquad \dot\alpha = \alpha \times \frac{\partial H}{\partial M}.   (1.1)

The system (1.1) is Hamiltonian with two degrees of freedom, and hence for its integrability it suffices to find, in addition to the integral H, one more integral independent of H almost everywhere. In [1] a case of integrability was found in which

H = \frac{1}{2}(M_1^2 + M_2^2) + M_3^2 + \frac{1}{2}\bigl[c(\alpha_1^2 - \alpha_2^2) + b\,\alpha_3^2\bigr].
(1.3)

Under the assumption

L = 0,   (1.4)

the system (1.1) on P_{a,0} has the first integral

F = \Bigl[M_1^2 - M_2^2 - \frac{b(\alpha_1^2 - \alpha_2^2)}{a^2\alpha_3^2} + c\,\alpha_3^2\Bigr]^2 + 4\Bigl[M_1 M_2 - \frac{b\,\alpha_1\alpha_2}{a^2\alpha_3^2}\Bigr]^2.   (1.5)

In the particular case b = 0 this integral was found by S.A. Chaplygin [2], where the reduction of the problem at b = 0 to elliptic quadratures was also carried out. Further generalizations of the integrable system under consideration were obtained in the work of H.M. Yehia [3]. An explicit reduction of the Goryachev case (1.3)-(1.5) to quadratures has not been found until now. In [4], based on ideas of the bi-Hamiltonian approach, separating variables were proposed; however, the proposed equations of Abel-Jacobi type were not written out completely in explicit form. The dependences of the phase variables on the separating variables were not found either, so the results of [4] are still far from complete. In the present paper an explicit separation of variables for the Goryachev case is presented. This solution does not require any advanced mathematical theory: simple separated differential equations are obtained, and all the original phase variables are expressed algebraically through the separating variables.

2 Parametrization of the integral manifolds

In this section we generalize the substitutions of S.A. Chaplygin [2], following the geometric approach to separation of variables proposed in [5, 6, 7]. Instead of one of the integrals H, F one can consider the integral in the form proposed in [4]:

K = \Bigl[M_1^2 + M_2^2 + \frac{b}{\alpha_3^2}\Bigr]^2 + 2c\,\alpha_3^2(M_1^2 - M_2^2) + c^2\alpha_3^4.

Under condition (1.4) it is expressed through H and F as follows:

K = F + \frac{4b}{a^2}H - \frac{b^2}{a^4}.   (2.1)

By a suitable choice of the units of measurement and of the direction of the moving axes one can achieve c = 1 and a = 1. Introduce the variables x, z, \xi, setting

x^2 = \alpha_1^2 + \alpha_2^2, \quad z = \alpha_3^2, \quad \xi = M_1^2 + M_2^2 + \frac{b}{\alpha_3^2}.   (2.2)

These variables are suggested by the work [2], in which, for b = 0, they led to separation. Following S.V.
Kovalevskaya, we also introduce the complex change of variables (i^2 = -1)

w_1 = M_1 + i M_2, \quad w_2 = M_1 - i M_2, \quad x_1 = \alpha_1 + i\alpha_2, \quad x_2 = \alpha_1 - i\alpha_2.   (2.3)

The geometric integral takes the form

x^2 + z = 1,   (2.4)

and, in addition,

M_3 = -\frac{x_2 w_1 + x_1 w_2}{2\sqrt{z}}.   (2.5)

Substituting (2.2), (2.5) into the equations

H = h, \quad K = k   (2.6)

of the integral manifold J_{h,k} \subset P = P_{1,0} brings them to the form

w_1^2 + w_2^2 = \frac{k - \xi^2}{z} - z,   (2.7)

x_1^2(w_2^2 + z) + x_2^2(w_1^2 + z) = 4hz - 2\xi - 2b\Bigl(1 - \frac{1}{z}\Bigr).   (2.8)

From the definition (2.2), using (2.4), (2.7), we also have

w_1 w_2 = \xi - \frac{b}{z},   (2.9)

z^2(w_1^2 + z)(w_2^2 + z) = kz^2 - 2b\xi z + b^2,   (2.10)

x_1 x_2 = 1 - z.   (2.11)

From (2.7), (2.9) we find

w_1 = \frac{1}{2\sqrt{z}}(\sqrt{\varphi_-} + \sqrt{\varphi_+}), \quad w_2 = \frac{1}{2\sqrt{z}}(\sqrt{\varphi_-} - \sqrt{\varphi_+}),

where \varphi_\pm(z, \xi) = k \pm 2b - (\xi \pm z)^2. Introduce the complex conjugate variables

\mu_1 = z(w_1^2 + z), \quad \mu_2 = z(w_2^2 + z),   (2.12)

and let \mu^2 = \mu_1\mu_2. Then equation (2.10) takes the form

\mu^2 = kz^2 - 2b\xi z + b^2.   (2.13)

Hence, in particular, any trajectory of the system is represented by a curve on the second-order surface (2.13) in the real three-dimensional space R^3(z, \xi, \mu). From (2.12), (2.7) we find \mu_1 + \mu_2 = z^2 - \xi^2 + k. Denoting \psi = z^2 - \xi^2 + k and \psi_\pm = \psi \pm 2\mu, we can write

\sqrt{\mu_1} = \frac{1}{2}\bigl(\sqrt{\psi_+} + \sqrt{\psi_-}\bigr), \quad \sqrt{\mu_2} = \frac{1}{2}\bigl(\sqrt{\psi_+} - \sqrt{\psi_-}\bigr).   (2.14)

Denote \theta = 2hz^2 - (\xi + b)z + b and \theta_\pm = \theta \pm (1 - z)\mu. Then from (2.8), (2.11), (2.12) we obtain

x_1 = \frac{1}{\sqrt{2\mu_2}}\bigl(\sqrt{\theta_+} + \sqrt{\theta_-}\bigr), \quad x_2 = \frac{1}{\sqrt{2\mu_1}}\bigl(\sqrt{\theta_+} - \sqrt{\theta_-}\bigr),

or, taking (2.14) into account,

x_1 = \sqrt{2}\,\frac{\sqrt{\theta_+} + \sqrt{\theta_-}}{\sqrt{\psi_+} - \sqrt{\psi_-}}, \quad x_2 = \sqrt{2}\,\frac{\sqrt{\theta_+} - \sqrt{\theta_-}}{\sqrt{\psi_+} + \sqrt{\psi_-}}.

The expressions for M_3, \alpha_3 are now found from (2.5) and the second relation in (2.2). Thus, all algebraic expressions of the phase variables through the two auxiliary variables z, \xi have been found. Returning to the real components of the phase vector, we have

M_1 = \frac{1}{2}\sqrt{\frac{\varphi_-}{z}}, \quad M_2 = -\frac{i}{2}\sqrt{\frac{\varphi_+}{z}},

M_3 = \frac{1}{4\sqrt{2}\,\mu z}\Bigl[\sqrt{\varphi_+}\bigl(\sqrt{\psi_-\theta_+} + \sqrt{\psi_+\theta_-}\bigr) - \sqrt{\varphi_-}\bigl(\sqrt{\psi_-\theta_-} + \sqrt{\psi_+\theta_+}\bigr)\Bigr],

\alpha_1 = \frac{1}{2\sqrt{2}\,\mu}\bigl(\sqrt{\psi_+\theta_+} + \sqrt{\psi_-\theta_-}\bigr), \quad \alpha_2 = -\frac{i}{2\sqrt{2}\,\mu}\bigl(\sqrt{\psi_+\theta_-} + \sqrt{\psi_-\theta_+}\bigr), \quad \alpha_3 = \sqrt{z}.
(2.15)

Assuming that the sign of \mu is fixed in the formulas above, the condition for the existence of real solutions (2.15) is written as a system of inequalities fixing the signs of the six quantities

\varphi_+(z,\xi), \quad \varphi_-(z,\xi), \quad \psi_+(z,\xi), \quad \psi_-(z,\xi), \quad \theta_+(z,\xi), \quad \theta_-(z,\xi).

Note that substituting for k the constant of the integral F, which for obvious reasons we denote by f^2, gives

\theta_+\theta_- = [\xi - b - f + (b + f - 2h)z]\,[\xi - b + f + (b - f - 2h)z]\,z^2.   (2.16)

Moreover,

\psi_+\psi_- = \varphi_+\varphi_- = [(z - \xi)^2 + 2b - k]\,[(z + \xi)^2 - 2b - k].

It follows that the projection of the integral manifold J_{h,k} onto the (z, \xi)-plane is bounded by segments (possibly unbounded) of the straight lines

\xi - b \pm f + (b \mp f - 2h)z = 0, \quad z - \xi \pm \sqrt{k - 2b} = 0, \quad z + \xi \pm \sqrt{k + 2b} = 0,

each of which, when correctly defined, is tangent to the second-order curve

G: \; kz^2 - 2b\xi z + b^2 = 0.   (2.17)

Choosing a family of such tangents as the coordinate net in the corresponding component of the plane, we find that in this net, for all admissible pairs of the integral constants, the region of possible motion is rectangular, which, as a rule, leads to separation of variables [6, 7].

Real separation of variables

Each tangent line to the curve (2.17) in the region kz^2 - 2b\xi z + b^2 \ge 0 is, in turn, the projection onto the (z, \xi)-plane of a pair of rectilinear generators of the one-sheeted hyperboloid (2.13) in the space R^3(z, \xi, \mu). As the parameters of the two families of such generators we choose the roots u_1, u_2 of the quadratic equation

z u^2 - 2b u + (2b\xi - kz) = 0.   (3.1)

Its discriminant, equal to \mu^2, is non-negative on all trajectories by virtue of (2.10), so u_1, u_2 are real. Then from (3.1), (2.13) we find

z = \frac{2b}{u_1 + u_2}, \quad \xi = \frac{u_1 u_2 + k}{u_1 + u_2}, \quad \mu = b\,\frac{u_1 - u_2}{u_1 + u_2}.   (3.2)

Denote

p_1(u) = 2b + k - u^2, \quad p_2(u) = 2b - k + u^2, \quad p_3(u) = (u - b)^2 - f^2,

and let p_{ij} = p_i(u_j), r_{ij} = \sqrt{p_{ij}} \; (i = 1, 2, 3; \; j = 1, 2).
(3.3)

The sign of each of the quantities r_{ij} is arbitrary, but in all formulas used simultaneously it must be chosen the same. We have

\varphi_+ = -\frac{p_{11} p_{12}}{(u_1 + u_2)^2}, \quad \varphi_- = -\frac{p_{21} p_{22}}{(u_1 + u_2)^2}, \quad \psi_+ = \frac{p_{12} p_{21}}{(u_1 + u_2)^2}, \quad \psi_- = \frac{p_{11} p_{22}}{(u_1 + u_2)^2}, \quad \theta_+ = \frac{2b\,p_{31}}{(u_1 + u_2)^2}, \quad \theta_- = \frac{2b\,p_{32}}{(u_1 + u_2)^2}.

Substituting these expressions together with (3.2) into (2.15), we choose the signs of the radicals consistently so that the integral relations (2.6) hold. For this it suffices that the following equality, consistent in meaning with (2.16), be satisfied:

\sqrt{\varphi_+}\,\sqrt{\varphi_-} = \sqrt{\psi_+}\,\sqrt{\psi_-}.

Therefore we set

\sqrt{\varphi_+} = -i\,\frac{r_{11} r_{12}}{u_1 + u_2}, \quad \sqrt{\varphi_-} = i\,\frac{r_{21} r_{22}}{u_1 + u_2}, \quad \sqrt{\psi_+} = \frac{r_{12} r_{21}}{u_1 + u_2}, \quad \sqrt{\psi_-} = \frac{r_{11} r_{22}}{u_1 + u_2}, \quad \sqrt{\theta_+} = \frac{\sqrt{2b}\,r_{31}}{u_1 + u_2}, \quad \sqrt{\theta_-} = \frac{\sqrt{2b}\,r_{32}}{u_1 + u_2}.

Then from (2.15) we obtain

M_1 = \frac{i\,r_{21} r_{22}}{2\sqrt{2b(u_1 + u_2)}}, \quad M_2 = -\frac{r_{11} r_{12}}{2\sqrt{2b(u_1 + u_2)}},

M_3 = -\frac{i}{2\sqrt{b}\,(u_1^2 - u_2^2)}\,(r_{12} r_{22} r_{31} + r_{11} r_{21} r_{32}),

\alpha_1 = \frac{1}{2\sqrt{b}\,(u_1^2 - u_2^2)}\,(r_{12} r_{21} r_{31} + r_{11} r_{22} r_{32}), \quad \alpha_2 = -\frac{i}{2\sqrt{b}\,(u_1^2 - u_2^2)}\,(r_{11} r_{22} r_{31} + r_{12} r_{21} r_{32}), \quad \alpha_3 = \frac{\sqrt{2b}}{\sqrt{u_1 + u_2}}.   (3.4)

To derive the differential equations for the new variables u_1, u_2, from the definition (2.2), by virtue of the system (1.1), we find

\dot\xi = 2\alpha_3(\alpha_2 M_1 + \alpha_1 M_2), \quad \dot z = 2\alpha_3(\alpha_1 M_2 - \alpha_2 M_1),

which under the substitution (3.4) gives

\dot\xi = \frac{(u_1^2 - k)\,r_{12} r_{22} r_{32} + (u_2^2 - k)\,r_{11} r_{21} r_{31}}{\sqrt{b}\,(u_1^2 - u_2^2)(u_1 + u_2)}, \quad \dot z = -\frac{2\sqrt{b}\,(r_{11} r_{21} r_{31} + r_{12} r_{22} r_{32})}{(u_1^2 - u_2^2)(u_1 + u_2)}.

On the other hand, from (3.2) we have

\dot\xi = \frac{1}{(u_1 + u_2)^2}\bigl[(u_2^2 - k)\,\dot u_1 + (u_1^2 - k)\,\dot u_2\bigr], \quad \dot z = -\frac{2b}{(u_1 + u_2)^2}\,(\dot u_1 + \dot u_2).

From the last two systems we obtain the separated system. The analytic results on separation of variables obtained above remain valid for b < 0 as well, but the qualitative behavior of the system can differ significantly; in particular, the total energy turns out to be unbounded from below in a neighborhood of the singularity \alpha_3 = 0. As the pair of (almost everywhere) independent integrals on P it is convenient to choose K, F.
The separated system has the form

(u_1 - u_2)\,\dot u_1 = W(u_1), \quad (u_1 - u_2)\,\dot u_2 = W(u_2),   (3.5)

where

W(u) = \sqrt{\frac{1}{b}\,p_1(u)\,p_2(u)\,p_3(u)} = \sqrt{\frac{1}{b}\,(2b + k - u^2)(2b - k + u^2)\,[(u - b)^2 - f^2]}.   (3.6)

Introduce the integral map

J = K \times \sqrt{F}: \; P \to R^2   (4.2)

and consider the integral manifolds

J_{k,f} = J^{-1}(k, f) = \{\zeta \in P : K(\zeta) = k, \; F(\zeta) = f^2\} \quad (f \ge 0).

By virtue of relation (2.1) they coincide with the corresponding manifolds (2.6). In what follows we use the methods of [8]; let us recall some terminology. The admissible set is the subset D of the (k, f)-plane consisting of all points for which J_{k,f} \ne \varnothing. Fix (k, f) \in D. The accessible region Acc(k, f) is the projection of the integral manifold J_{k,f} onto the (u_1, u_2)-plane. From (3.4), (3.5), under assumption (4.1), it follows that every connected component of the accessible region is a rectangle bounded by segments of straight lines parallel to the coordinate axes; on these lines the value of the constant coordinate is a root of the polynomial W(u), called the maximal polynomial. Recall that the bifurcation diagram is the subset \Sigma \subset R^2 over which the map (4.2) is not locally trivial; in the conditions of the problem under consideration it coincides with the set of critical values of J. The existence of the dependencies (3.4) guarantees that \Sigma is the part of the discriminant set \Delta of the polynomial under the radical in (3.6) contained in D:

\Sigma = \Delta \cap D.   (4.3)

From (3.6) we find that \Delta consists of the three straight lines k = 2b, k = -2b, f = 0 and the four parabolas

k = \pm 2b + (f \pm b)^2.   (4.4)

To determine the admissible set, we need necessary and sufficient conditions for the realness of the values (3.4) in terms of the signs of the subradical expressions, in the most compact form of independent inequalities that are easily verified in the regions into which the plane of the integral constants is divided by the discriminant set. Let B = \{0, 1\} be the Boolean pair. Following [8], we introduce the function (Boolean sign) bsgn: R \to B such that bsgn(\theta) = 0 for \theta \ge 0 and bsgn(\theta) = 1 for \theta < 0.
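Several of the algebraic identities above are easy to confirm numerically. The sketch below checks (2.7) and (2.9) for the Kovalevskaya variables (2.3), and verifies that the parametrization (3.2) satisfies the quadric (2.13) and equation (3.1) identically (random sampling, stdlib only; we take c = 1 and write k through (z, ξ) as in the text):

```python
import random

random.seed(1)
for _ in range(100):
    # --- check (2.7), (2.9): w1 = M1 + i M2, w2 = M1 - i M2 ---
    M1, M2 = random.uniform(-2, 2), random.uniform(-2, 2)
    z, b = random.uniform(0.1, 1.0), random.uniform(0.1, 2.0)
    xi = M1**2 + M2**2 + b / z
    k = xi**2 + 2 * z * (M1**2 - M2**2) + z**2   # K written in the (z, xi)-variables
    w1, w2 = complex(M1, M2), complex(M1, -M2)
    assert abs(w1**2 + w2**2 - ((k - xi**2) / z - z)) < 1e-6   # (2.7)
    assert abs(w1 * w2 - (xi - b / z)) < 1e-6                  # (2.9)

    # --- check (3.2) against the quadric (2.13) and equation (3.1) ---
    u1, u2 = random.uniform(0.1, 3.0), random.uniform(0.1, 3.0)
    s = u1 + u2
    z, xi, mu = 2 * b / s, (u1 * u2 + k) / s, b * (u1 - u2) / s
    assert abs(mu**2 - (k * z**2 - 2 * b * xi * z + b**2)) < 1e-6   # (2.13)
    for u in (u1, u2):   # u1, u2 are the roots of (3.1)
        assert abs(z * u**2 - 2 * b * u + (2 * b * xi - k * z)) < 1e-6
print("all identities hold")
```

Both families of checks reduce to Vieta's formulas for (3.1), which is why they hold identically rather than only on trajectories.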
Denoting by $\oplus$ the sum modulo 2, we have the property $\mathrm{bsgn}(\theta_1\theta_2) = \mathrm{bsgn}(\theta_1) \oplus \mathrm{bsgn}(\theta_2)$. Introduce the Boolean variables
$$z_i = \mathrm{bsgn}\,p_{i1}, \qquad z_{3+i} = \mathrm{bsgn}\,p_{i2} \qquad (i = 1, 2, 3),$$
where $z = (z_1, \dots, z_6)^T \in \mathbb{B}^6$. In the system (4.6) below, the Boolean matrix $A$ has the form (for clarity, zeros are shown as dots)

        z1  z2  z3  z4  z5  z6
  ζ1     1   .   .   .   1   1
  ζ2     .   1   .   .   1   .
  ζ3     .   .   1   .   .   1
  ζ4     .   .   .   1   1   1          (4.7)

and $\zeta_0 = (0, 1, 1, 0)^T$. Note that $p_1(u) > -p_2(u)$; therefore the implication conditions
$$(z_1 \to \neg z_2) = 1, \quad (z_4 \to \neg z_5) = 1 \quad \Longleftrightarrow \quad (z_1, z_2) \neq (1, 1), \quad (z_4, z_5) \neq (1, 1)$$
can be added to the system (4.6). Moreover, the transposition $u_1 \leftrightarrow u_2$ in the formulas (3.4) is equivalent to a simultaneous change of sign of all the $r_{ij}$, so we may agree, for example, that the inequality
$$u_1^2 < u_2^2 \eqno(4.8)$$
holds, which gives $p_{21} < p_{22}$, whence $(z_5 \to z_2) = 1 \Leftrightarrow (z_2, z_5) \neq (0, 1)$. Only one of the four solutions of the system (4.6) satisfies all of the listed conditions:
$$z = (0, 1, 1, 0, 0, 0)^T.$$
Consequently, the unique system of inequalities ensuring that the solution (3.4) is real under the convention (4.8) has the form
$$(k + 2b) - u_1^2 \geqslant 0, \quad u_1^2 - (k - 2b) \leqslant 0, \quad (u_1 - b)^2 - f^2 \leqslant 0,$$
$$(k + 2b) - u_2^2 \geqslant 0, \quad u_2^2 - (k - 2b) \geqslant 0, \quad (u_2 - b)^2 - f^2 \geqslant 0. \eqno(4.9)$$
Since we agreed to take $f$ nonnegative, it follows immediately that solutions are possible only in the quadrant $k \geqslant 2b$, $f \geqslant 0$.

Fig. 1: The discriminant set and the coding of the regions: a) $b < 1$; b) $b > 1$.
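The Boolean sign and its multiplicativity with respect to $\oplus$ can be sketched directly in Python (a small illustration; the property as stated presumes nonzero arguments, since $\mathrm{bsgn}(0) = 0$):

```python
import itertools

def bsgn(theta: float) -> int:
    """Boolean sign: 0 for theta >= 0, 1 for theta < 0."""
    return 1 if theta < 0 else 0

# bsgn(t1 * t2) = bsgn(t1) XOR bsgn(t2) for nonzero arguments:
for t1, t2 in itertools.product([-2.0, -0.5, 0.5, 3.0], repeat=2):
    assert bsgn(t1 * t2) == bsgn(t1) ^ bsgn(t2)
```

Encoding the sign of every radicand $p_{ij}$ this way turns the reality conditions for (3.4) into linear equations over $\mathbb{B}$.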
$$m_1 = \sqrt{k - 2b}, \quad m_2 = \sqrt{k + 2b} \quad (0 < m_1 < m_2), \qquad n_1 = b - f, \quad n_2 = b + f \quad (n_1 \leqslant n_2).$$

Table 1

  Region   Ordering of the roots of W              Range of u1      Range of u2
  I        -m2 < -m1 < n1 < m1 < n2 < m2           [n1, m1]         [n2, m2]
  II       -m2 < -m1 < n1 < n2 < m1 < m2           [n1, n2]         [m1, m2]
  III      -m2 < n1 < -m1 < m1 < n2 < m2           [-m1, m1]        [n2, m2]
  III*     -m2 < -m1 < m1 < n1 < m2 < n2           (empty)          [m1, n1]
  IV       -m2 < -m1 < n1 < m1 < m2 < n2           [n1, m1]         (empty)
  V        -m2 < n1 < -m1 < m1 < m2 < n2           [-m1, m1]        (empty)
  VI       n1 < -m2 < -m1 < m1 < m2 < n2           [-m1, m1]        (empty)
  VII      -m2 < -m1 < m1 < n1 < n2 < m2           (empty)          [m1, n1] ∪ [n2, m2]

On the parabolas (4.4), in accordance with their numbering in Fig. 1, we have: 1) $m_1 = |n_1|$; 2) $m_1 = n_2$; 3) $m_2 = n_2$; 4) $m_2 = |n_1|$. A summary of the distribution of the values (4.11) is given in the first two columns of Table 1. The remaining two columns contain the oscillation intervals of the variables $u_1$, $u_2$, determined according to (4.12). Thus, motions are possible only in the regions I-III, and there the projection of the integral manifold onto the plane of the separated variables consists of a single rectangle. Let us summarize the information obtained about the admissible set.

Theorem 1. The admissible set in the space of the constants of the first integrals $K$, $\sqrt{F}$ has the form:
1) for $b < 1$,
$$D = \{0 \leqslant f \leqslant b,\ k \geqslant 2b + (f - b)^2\} \cup \{b \leqslant f \leqslant 2\sqrt{b} - b,\ k \geqslant 2b\} \cup \{f \geqslant 2\sqrt{b} - b,\ k \geqslant -2b + (f + b)^2\};$$
2) for $b \geqslant 1$,
$$D = \{0 \leqslant f \leqslant 1,\ k \geqslant 2b + (f - b)^2\} \cup \{f \geqslant 1,\ k \geqslant -2b + (f + b)^2\}.$$

By virtue of these inequalities and (4.3) we obtain the statement on the bifurcation diagram.

Theorem 2. The bifurcation diagram of the map $K \times \sqrt{F}$ consists of the following sets: for $b < 1$,
a) $k = 2b$, $b \leqslant f \leqslant 2\sqrt{b} - b$;
b) $f = 0$, $k \geqslant 2b + b^2$;
c) $k = 2b + (f - b)^2$, $0 \leqslant f \leqslant 1$;
d) $k = 2b + (f + b)^2$, $f \geqslant 0$;
e) $k = -2b + (f + b)^2$, $f \geqslant 2\sqrt{b} - b$;
and for $b \geqslant 1$,
a) $f = 0$, $k \geqslant 2b + b^2$;
b) $k = 2b + (f - b)^2$, $0 \leqslant f \leqslant 1$;
c) $k = 2b + (f + b)^2$, $f \geqslant 0$;
d) $k = -2b + (f + b)^2$, $f \geqslant 1$.
The segment of the axis $f = 0$ is included in the bifurcation diagram because zero is obviously a critical value of the function $F$, and smoothness of the arithmetic root $\sqrt{F}$ fails at the corresponding points. The admissible set and the bifurcation diagram are shown in Fig. 2. The numbers 1-7 mark the various segments of the sets (a)-(e) into which they are divided by the nodal points of the diagram. Clearly, on passing to values $b > 1$ the picture only becomes simpler, so it suffices to carry out the topological analysis for the case $b < 1$. In Fig. 2a the paths $\lambda_1$, $\lambda_2$, $\lambda_3$ are marked; by specifying the bifurcations along them we obtain complete information for constructing any rough topological invariant.

Fig. 2: The admissible set and the bifurcation diagram: a) $b < 1$; b) $b > 1$.

Phase topology

Let us consider the richest case, $b < 1$. To compute the number of connected components of the regular integral manifolds (the number of Liouville tori) and of the critical integral surfaces, we use the method of Boolean vector-functions [8]. Assign to each radical $r_{ij}$ in (3.3) its Boolean sign
$$z_i = \mathrm{bsgn}\,r_{i1}, \qquad z_{3+i} = \mathrm{bsgn}\,r_{i2} \qquad (i = 1, 2, 3).$$
Recall that the following preliminary steps, which do not affect the formulas used, are implied here. A connected component $\Pi$ of the accessible region is fixed (the rectangle of oscillation of the pair $(u_1, u_2)$). Then the signs of all the radicands $p_{ij}$ in the dependencies (3.4) are completely determined. For negative $p_{ij}$ we change the sign, move the factor $i$ into the coefficient in front of the product of radicals, and redefine $r_{ij}$ accordingly, so that its value becomes real. Assigning to each monomial in the radicals occurring in (3.4) a component of a Boolean vector-function, we obtain exactly the expressions for the components that appear in the left-hand sides of the equations (4.5). This yields a $\mathbb{B}$-linear function $\tilde{A}\colon \mathbb{B}^6 \to \mathbb{B}^8$. Elimination of the dependent components (elementary row transformations of the corresponding matrix) reduces it to a function $A\colon \mathbb{B}^6 \to \mathbb{B}^4$ with the matrix (4.7), which we also denote by $A$.
The subsequent algorithm is as follows. On a given rectangle $\Pi$ we split the arguments of the function $A$ into two groups, placing into the first group the Boolean signs of the radicals that do not change sign on the trajectories in $\Pi$, and into the second group the Boolean signs of the radicals that periodically change sign on these trajectories. For each non-fictitious argument of the second group, by elementary transformations of the matrix $A$ we make the corresponding column a unit column, and then delete this column from $A$ together with the row containing the chosen argument. As a result we obtain a $\mathbb{B}$-linear map depending only on the arguments of the first group. If $p$ is its rank, then the number of connected components of the integral manifold covering the rectangle $\Pi$ equals $2^p$ [8]. In the case when no components depending only on the arguments of the first group remain, we set $p = 0$, and the corresponding rectangle is covered by a single component. In the critical case (the maximal polynomial has a multiple root) we do the same, with the additional proviso that the radicals containing the multiple root are always assigned to the second group, even if this multiple root is an interior point of the oscillation interval and in fact is not reached by the corresponding separated variable in finite time. The above reasoning is applied without taking into account the possible choice of different signs of the radical $\sqrt{u_1 + u_2}$. This choice selects one of the two invariant submanifolds of $\mathcal{P}$ into which the phase space of the system is split by the excluded subset $\alpha_3 = 0$ of singularities of the potential. Therefore, in the full phase space all subsequent results on the number of connected components must be multiplied by two. For definiteness we restrict ourselves to the submanifold of $\mathcal{P}$ given by the inequality $\alpha_3 > 0$ (obviously, the system under consideration possesses the symmetry of reversing the signs of the quantities $M_1$, $M_2$, $\alpha_3$).
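The counting rule $2^p$ can be made concrete with elimination over GF(2). In the sketch below, the matrix rows are a reconstruction of (4.7) from the system (4.5), and the second-group choices are illustrative assumptions, not data quoted from the text:

```python
def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = [row[:] for row in rows]
    rank = 0
    ncols = len(A[0]) if A else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(A)) if A[i][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for i in range(len(A)):
            if i != rank and A[i][col]:
                A[i] = [a ^ b for a, b in zip(A[i], A[rank])]
        rank += 1
    return rank

def n_components(A, second_group):
    """2**p components over a rectangle: eliminate each second-group column
    together with a row pivoting on it, then take the GF(2) rank of what
    remains (p = 0 if nothing remains)."""
    A = [row[:] for row in A]
    for c in sorted(second_group, reverse=True):
        piv = next((i for i, r in enumerate(A) if r[c]), None)
        if piv is not None:
            A = [r if i == piv or not r[c] else [a ^ b for a, b in zip(r, A[piv])]
                 for i, r in enumerate(A)]
            A.pop(piv)
        for r in A:
            del r[c]
    return 2 ** gf2_rank(A)

# Rows zeta_1..zeta_4 (an assumed reconstruction of the matrix (4.7)):
A = [[1, 0, 0, 0, 1, 1],   # z1 + z5 + z6
     [0, 1, 0, 0, 1, 0],   # z2 + z5
     [0, 0, 1, 0, 0, 1],   # z3 + z6
     [0, 0, 0, 1, 1, 1]]   # z4 + z5 + z6
assert n_components(A, {0, 2, 4, 5}) == 1   # everything reduces away: p = 0
assert n_components(A, {2, 3, 4}) == 2      # one row (z1+z2+z6) survives: p = 1
```

The two calls mimic the qualitative outcomes described for the reductions: a rectangle covered by a single component when the residual map is empty, and a pair of Liouville tori when one first-group row survives.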
Information on the accessible regions, the second-group arguments, and the components of the Boolean vector-function remaining after the reduction is collected in Table 2. Here $M \in \mathbb{R}^3$ is the impulsive moment, $\alpha \in \mathbb{R}^3$ the impulsive force, $H = H(M, \alpha)$ the total energy, and
$$L = M_1\alpha_1 + M_2\alpha_2 + M_3\alpha_3.$$
On the joint level
$$\mathcal{P}_{a,\ell} = \{\Gamma = a^2,\ L = \ell\}, \qquad \dim \mathcal{P}_{a,\ell} = 4,$$
the results obtained give the complete solution of the problem of reducing the case of D.N. Goryachev to quadratures.

4. Admissible region and bifurcation diagram

From this point on we assume that the remaining free physical parameter is positive, $b > 0$. Hence, with the choice (4.8), we find that $u_2 \geqslant 0$, and then the system (4.9) yields the following conditions on the accessible region:
$$u_1 \in [-m_1, m_1] \cap [n_1, n_2], \qquad u_2 \in [m_1, m_2] \setminus [n_1, n_2]. \eqno(4.12)$$
The intersection of the set $\Delta$ with the quadrant (4.10) is shown in Fig. 1. On passing through the value $b = 1$ the region III disappears and the region denoted III* appears. In the absence of multiple roots of the polynomial $W$, any component of the accessible region $\mathrm{Acc}(k, f)$ has an interior point. At such a point the reality conditions for (3.4) are written as the system of $\mathbb{B}$-linear equations
$$z_2 \oplus z_5 = 1, \quad z_1 \oplus z_4 = 0, \quad z_3 \oplus z_4 \oplus z_5 = 1, \quad z_1 \oplus z_2 \oplus z_6 = 1,$$
$$z_2 \oplus z_3 \oplus z_4 = 0, \quad z_1 \oplus z_5 \oplus z_6 = 0, \quad z_1 \oplus z_3 \oplus z_5 = 1, \quad z_2 \oplus z_4 \oplus z_6 = 1. \eqno(4.5)$$
Obviously, the rank of this system over $\mathbb{B}$ equals 4, and it is equivalent to the system
$$Az = \zeta_0. \eqno(4.6)$$

Table 2. Region (segment) number | Accessible region | Second group

Let us explain the transformations of the matrix (4.7) in the three main cases. For region I and segments 2, 5, 6, 7 we successively eliminate the column-row pairs $(z_1, \zeta_1)$, $(z_3, \zeta_3)$, after which the last column becomes a unit column and the pair $(z_6, \zeta_4)$ is eliminated. One row remains, in which the argument $z_5$ belongs to the second group; we eliminate it as well. A vector-function with no components remains (marked in Table 2 as the identical zero), that is, $p = 0$.
For segment 7 the fact that the argument $z_4$ also falls into the second group is no longer needed. For region II and segment 1 we successively eliminate the column-row pairs $(z_3, \zeta_3)$, $(z_4, \zeta_4)$, after which we add the row $\zeta_2$ to the row $\zeta_1$. The column $z_5$ becomes a unit column and the pair $(z_5, \zeta_2)$ is eliminated. One row remains, corresponding to the function $z_1 \oplus z_2 \oplus z_6$, in which all arguments are from the first group. Hence $p = 1$. For region III and segments 3, 4 we eliminate the pair $(z_1, \zeta_1)$, after which we add the row $\zeta_1$ to the row $\zeta_4$. The column $z_5$ becomes a unit column and the pair $(z_5, \zeta_2)$ is eliminated. We then add the row $\zeta_3$ to the row $\zeta_4$; the column $z_6$ becomes a unit column and the pair $(z_6, \zeta_3)$ is eliminated. One row remains, corresponding to the function $z_2 \oplus z_3$, with arguments from the first group. Hence $p = 1$.

Fig. 3: The circular molecule for the nodal points.

Now, from the number of connected components, the integral manifolds lying in the chosen half of the system (with a fixed sign of $\alpha_3$) are determined uniquely as well. They are indicated in the last column of the table.
Accordingly, along all the paths $\lambda_1$, $\lambda_2$, $\lambda_3$, taking the indicated direction into account, we have the same sequence of bifurcations
$$\varnothing \to 2A \to 2T^2 \to B \to T^2 \to A \to \varnothing.$$
The molecule is shown in Fig. 3. The notation for the atoms follows the modern classification (see, e.g., [9]). On passing to values $b > 1$ obvious simplifications occur: the regular region III with its two Liouville tori and the adjoining segments 3, 4, 6 disappear. We note that the availability of an explicit algebraic solution with separated variables makes it easy to compute invariants of orbital equivalence based on rotation numbers.

The author expresses sincere gratitude to Professor M.P. Kharlamov for useful advice.

References

1. Goryachev D.N. New cases of integrability of Euler's dynamical equations // Izvestiya Varshavskogo Universiteta. 1916. No. 3. P. 1-13.
2. Chaplygin S.A. A new particular solution of the problem of the motion of a rigid body in a fluid // Trudy Otdeleniya Fizicheskikh Nauk Obshchestva Lyubitelei Estestvoznaniya. 1903. Vol. 11, No. 2. P. 7-10.
3. Yehia H.M. New integrable problems in the dynamics of rigid bodies with the Kovalevskaya configuration. I. The case of axisymmetric forces // Mech. Res.
Commun. 1996. Vol. 23, No. 5. P. 423-427.
4. Tsiganov A.V. On the generalized Chaplygin system // Journal of Mathematical Sciences. 2010. Vol. 168, No. 8. P. 901-911.
5. Kharlamov M.P., Savushkin A.Y. Explicit integration of one problem of motion of the generalized Kowalevski top // Mech. Res. Commun. 2005. No. 32. P. 547-552.
6. Kharlamov M.P. Generalization of the 4th Appelrot class: the region of existence of motions and separation of variables // Nelineinaya Dinamika. 2006. Vol. 2, No. 4. P. 453-472.
7. Kharlamov M.P. Separation of variables in the generalized 4th Appelrot class // Regular and Chaotic Dynamics. 2007. Vol. 12, No. 3. P. 267-280.
8. Kharlamov M.P. Topological analysis and Boolean functions. I. Methods and applications to classical systems // Nelineinaya Dinamika. 2010. Vol. 6, No. 4. P. 769-805.
9. Bolsinov A.V., Fomenko A.T. Integrable Hamiltonian Systems: Geometry, Topology, Classification. Izhevsk: RCD, 1999. Vols. 1-2.
Evidence of Long-Term Period Variations in the Exoplanet Transit Database (ETD)
Simone R. Hagey (The University of British Columbia, 6224 Agricultural Road, Vancouver, V6T 1Z1, BC, Canada), Billy Edwards (AIM, CEA, CNRS, Université Paris-Saclay, Université de Paris, F-91191 Gif-sur-Yvette, France; Blue Skies Space Ltd., 69 Wilson Street, EC2A 2BB, London, UK; Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT, London, UK), and Aaron C. Boley (The University of British Columbia, 6224 Agricultural Road, Vancouver, V6T 1Z1, BC, Canada)
We analyze a large number of citizen science data and identify eight Hot Jupiter systems that show evidence for deviations from a constant orbital period: HAT-P-19 b, HAT-P-32 b, TrES-1 b, TrES-2 b, TrES-5 b, WASP-4 b, WASP-10 b, and WASP-12 b. The latter system is already well known to exhibit strong evidence for tidal orbital decay and serves as an important control for this study. Several other systems we identify have disputed period drifts in the literature, allowing the results here to serve as an independent analysis. The citizen science data are from the Exoplanet Transit Database (ETD), which is a global project established in 2008 by the Variable Star and Exoplanet Section of the Czech Astronomical Society. With over 400 planets and 12,000 contributed observations spanning 15 years, the ETD is brimming with potential for studying the long-term orbital evolution of close-in Hot Jupiters. We use our results to discuss prioritization of targets for follow up investigations, which will be necessary to confirm the period drifts and their causes.
doi: 10.3847/1538-3881/ac959a
https://export.arxiv.org/pdf/2209.10752v1.pdf
arXiv:2209.10752
Evidence of Long-Term Period Variations in the Exoplanet Transit Database (ETD)

Simone R. Hagey, Billy Edwards (Paris Region Fellow), and Aaron C. Boley (Canada Research Chair in Planetary Astronomy)

Draft version, September 23, 2022; typeset using the LaTeX twocolumn style in AASTeX631. Submitted to The Astronomical Journal; accepted September 9, 2022.

Keywords: Exoplanets (498), Exoplanet dynamics (490), Transit timing variation method (1710)
INTRODUCTION

Hot Jupiters (HJs) are gas giant planets that orbit their host stars with periods less than about 12 days. Their provenance remains debated, although several possible formation pathways exist, such as dynamical excitation followed by tidal circularization (Fabrycky & Tremaine 2007), large-scale disc migration (Lin et al. 1996), and in situ formation (Boley et al. 2016; Batygin et al. 2016). Regardless of how they formed, HJs' proximity to their host stars leads to tidal interactions, which in turn should affect their orbital evolution. Specifically, many HJ orbits should be shrinking gradually over time due to a transfer of angular momentum from the planet to the host star, eventually leading to planetary engulfment (Levrard et al. 2009; Matsumura et al. 2010). This situation occurs when the orbital period of a HJ is shorter than the rotational period of its host star (Penev et al. 2018) or if the angular momentum vectors are antialigned. For a given arrangement, the decay rate can vary depending on the magnitude of stellar tidal dissipation (Levrard et al. 2009; Matsumura et al. 2010). Directly measuring the orbital decay rate provides an estimate of the stellar tidal quality factor Q*. While theory predicts many HJs should be decaying, direct detection has remained difficult. Nonetheless, Patra et al. (2020) suggest that certain ensemble properties of HJs indirectly reveal evidence of orbital decay: properties such as the infrequency of planets with periods less than one day, the rapid rotation of some host stars, and the lack of short-period planets around such stars. In principle, careful observations of planetary transit centres over decade timescales should reveal direct evidence for orbital decay (Birkby et al. 2014), appearing as a quadratic timing variation. However, orbital decay is not the only possible scenario for HJ orbital evolution, and some processes cause transit variations that mimic decay.
For instance, apsidal orbital precession causes a sinusoidal variation in transit times. Even for fast precession rates, predicted precession periods extend over many decades (Ragozzine & Wolf 2009), making the sinusoidal behaviour very difficult to detect. Because of this, the curvature of the transit timing variations from apsidal precession can be consistent with the signal of orbital decay over (relatively) short observing windows. It must be stressed that measuring the apsidal precession rate of a HJ is valuable on its own, as it provides an estimate of the planetary Love number which enables probing the interior density distribution (Ragozzine & Wolf 2009). Yet other effects may also cause observable transit time variations that do not reflect an inherent change in an HJ's orbit. A line-of-sight acceleration is one such example, in which a wide-orbit companion causes the host star and its HJ to accelerate toward the observer, leading to an apparent decay of the orbital period (Bouma et al. 2020). Stellar activity may also lead to a perceived change in a planet's orbital period on timescales comparable to current HJ observational baselines. A transit lightcurve is deformed if the planet passes over a starspot, which can affect the fitted transit centre time. In the case of persistent starspots, there could be a continuous perturbation of the fitted transit center times. Ioannidis et al. (2016) found that in rare cases this periodic nature could mimic TTVs induced by planetary companions, and Patra et al. (2020) suggest that stellar activity cycles with very long periods could mimic orbital decay. Investigating the effects of stellar activity and line-of-sight acceleration on the transit timing of the ETD targets is beyond the scope of this work, but must be considered in follow-up studies. 
Whatever the cause, the earliest known HJs have transit observational baselines that span over a decade, making it possible to detect orbital decay or other long-term transit timing variations. Though many systems of interest have been suggested (see Patra et al. 2020), only WASP-12 b has a clear signature of spiraling into its host star. Indeed, since Maciejewski et al. (2016) first suggested that WASP-12 b does not follow a constant period ephemeris, the accumulation of more transit and secondary eclipse data has continuously supported an orbital decay model (Patra et al. 2017; Yee et al. 2019; Turner et al. 2020). With the inclusion of observations from the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015), the apsidal precession model has largely been rejected as well (Turner et al. 2020). This work has sparked increased interest within the exoplanet community in searching for decaying HJs, but as of now no other systems have shown such compelling results. Given the large number of transiting planets that have been discovered, regularly obtaining follow-up observations for each of them has become a difficult task given the limited resources of professional facilities and the perceived low science yield of such observations. However, given their large transit depth, high quality observations of HJs around bright stars can be achieved with relatively modest equipment. As such, citizen astronomers have for decades been able to observe exoplanetary transits; and importantly, projects such as the Exoplanet Transit Database (ETD), ExoClock (Kokori et al. 2021, 2022) and Exoplanet Watch (Zellem et al. 2020) provide platforms that allow these observers to collectively contribute to cataloguing light curves. Amateur exoplanet transit observations further contribute to updating ephemerides and maximizing observing efficiency at major facilities by ensuring reliable timing.
The TESS mission, in particular, has developed the TESS Follow-Up Observing Program (TFOP) to enlist amateur observers to aid in eliminating false-positives from TESS exoplanet candidates (Collins 2019). Furthermore, via the ORBYTS programme (Sousa-Silva et al. 2018) high-school students have contributed to refining the ephemerides of potential targets for the ESA Ariel mission (Tinetti et al. 2018, 2021) using the Las Cumbres Observatory Global Telescope (LCOGT) network of robotic telescopes (e.g. Edwards et al. 2020, 2021). There are also studies in the literature that use select citizen science transits for transit timing variation (TTV) studies (see e.g. Baluev et al. 2015; Petrucci et al. 2019; Sonbas et al. 2021). Nonetheless, the exclusive use of such amateur observations is rare, despite their potential for examining long-term orbital changes such as period decay. In this work, we analyze the wealth of citizen scientist observations spanning over a decade in the ETD to demonstrate these data sets can be used to identify HJ systems that are strong candidates for exhibiting observable orbital evolution. Such selections can then be followed up using large facilities, making efficient use of resources. In Section 2, we describe the ETD, and in Section 3 outline our target selection process. Section 4 introduces the transit timing models and model fitting processes. Our results are presented in Section 5 and discussed in detail, with an emphasis on prospects for future observations, in Section 6. This work would not be possible without the hundreds of citizen scientists who submitted transit observations to the ETD and we ask the reader to take note of the acknowledgements at the end of this paper.

DATA

The data used in this study are from the Exoplanet Transit Database (ETD).
The ETD was established in 2008 by the Variable Star and Exoplanet Section of the Czech Astronomical Society, and allows any observer to register and upload transit observations (Poddaný et al. 2010, 2011). As of early 2022, there are 399 planets listed in the database and over 12,000 contributed observations.¹ The website also provides transit predictions and a lightcurve fitting tool, requiring users to upload their target flux, error, and timestamps. The user is able to upload data with either a geocentric or heliocentric JD format based on the UTC time standard. The fitted transit times are then provided by the database in HJD_UTC, which we convert to BJD_TDB for this study. For many systems, there are literature data mixed in with the citizen science observations; one of the initial goals of the ETD project was to have a single space for all amateur and professional transit observations (Poddaný et al. 2010). The inclusion of literature results for any given planetary system in the database is nonetheless incomplete, with the majority posted in the earlier years of the project. While the database is vast and thus brimming with potential, there are two reasons the ETD is limited in its immediate use: 1) the simplicity of its automatic transit fitting routine and 2) a submission process that allows for inconsistencies in the format of submitted data. The light curve fitting is done via a Levenberg-Marquardt (L-M) nonlinear least squares algorithm; namely,

m(t_i) = A - 2.5 log F(z[t_i, t_0, D, b], p, c_1) + B(t_i - t_mean) + C(t_i - t_mean)^2,   (1)

where m(t_i) is the relative magnitude taken at time t_i, and F(z, p, c_1) is the relative flux, which depends on the radius ratio p, projected planet-star separation z, and limb-darkening coefficient c_1. Stellar limb-darkening is modeled linearly. The projected separation z depends on the mid-transit time t_0, transit duration D, and impact parameter b.
Systematic trends in the data are described by B and C, and the zero-point shift of the magnitudes by A. In the fitting algorithm, A, t_0, D, p, B, and C are free parameters. The linear limb-darkening coefficient c_1 is fixed at a value of 0.5 (Poddaný et al. 2010).

¹ http://var2.astro.cz/ETD/index.php

Many studies using ETD observations re-fit the light curves (e.g. Baluev et al. 2015; Mallonn et al. 2019; Davoudi et al. 2021; Edwards et al. 2021). However, the different formats of the observations, both in terms of the machine-readable table format (comma-separated, tab-separated, etc.) and the timestamps provided, make reformatting and refitting the data a very time consuming process, particularly when assessing a large number of observations. This work involves a total of 4792 transit observations, making refitting the lightcurves an impracticable course of action in many instances. Hence, instead of refitting the data, we take the fitted transit centres at face value, as the intent is to show that the ETD, and citizen science more generally, can be an efficient and convenient way to flag planetary systems with nonlinear long-term timing trends. The only data categories used in this study are the ETD times, the associated uncertainties on those times, and the epoch numbers. In doing this, there is an implicit assumption that the fitting routine used by ETD does not have a systematic bias and that the timestamps given are in the expected format of HJD_UTC. If this is the case, we argue that the main effect of taking the data at face value is to increase the noise. This approach must be used with caution, as the ETD lightcurve fitting routine is not immune to systematic bias. The Levenberg-Marquardt algorithm is at higher risk of becoming trapped in a local minimum than a more rigorous MCMC routine, and there is no correction for red noise. Systematic noise in the lightcurves is modeled quadratically (Eq.
1), though visual inspection of a subset of ETD transits implies that a higher-order approximation may improve the fitting. Additionally, limb-darkening is modeled linearly with a coefficient fixed at a value of 0.5 (Poddaný et al. 2010), which Petrucci et al. (2013) found to be insufficient for the study of short-term TTVs. Despite these potential sources of systematic bias, the agreement of our WASP-12 b analysis with the literature (Section 5) shows that our assumption holds for the purpose of an efficient analysis of potential targets. The overall quality of observations within the ETD is highly variable. Despite this, when analyzed carefully and consistently, they can be used in independent work and to supplement observations from professional observatories. A good example of the latter is in Kipping & Bakos (2011), who directly compare TrES-2 b transits from the ETD to high-precision Kepler observations of the same epoch number. The ETD observations were found to be in excellent agreement with the Kepler transits, with most of the transit centers matching to within 1σ. To aid in analyzing the data, transit observations in the ETD are ranked by a "data quality index" (DQ) from 1-5, with 1 being those of the highest quality. The DQ index α is calculated as

α = (δ / S) √(N / l),   (2)

where δ is the minimum flux, S the mean absolute deviation from the best-fit transit model, N the number of data points, and l the length of the observing run. The greater the value of α, the better quality data. Poddaný et al. (2010) provides the thresholds used for ranking any given observation. For this study, only observations with data quality 1-3 were considered.

TARGET SELECTION

There are over 400 systems in the ETD,² each with varying numbers of observations, data quality, and observational baselines. We are interested in data that might reveal the long-term evolution of planets and, given the range in data quality, focus only on systems that have a large number of observations.
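One plausible reading of the DQ index in Eq. (2) is α = (δ/S)√(N/l); with δ interpreted as the transit depth relative to the normalized out-of-transit flux, it can be sketched as follows (the synthetic light curves are illustrative, not ETD data, and this is an assumed reading of the formula rather than the exact ETD implementation):

```python
def dq_index(flux, model, length_hours):
    """Plausible reading of the ETD data-quality index:
    alpha = (delta / S) * sqrt(N / l), with delta the transit depth,
    S the mean absolute deviation from the model, N the number of
    points, and l the run length (an assumption, not the exact ETD code)."""
    N = len(flux)
    delta = 1.0 - min(model)                               # depth of the fitted transit
    S = sum(abs(f - m) for f, m in zip(flux, model)) / N   # mean absolute deviation
    return (delta / S) * (N / length_hours) ** 0.5

# Two synthetic box transits with identical noise but different depths:
model_a = [1.0] * 10 + [0.98] * 10 + [1.0] * 10
model_b = [1.0] * 10 + [0.96] * 10 + [1.0] * 10
resid = [0.001 * (-1) ** i for i in range(30)]
alpha_a = dq_index([m + r for m, r in zip(model_a, resid)], model_a, 3.0)
alpha_b = dq_index([m + r for m, r in zip(model_b, resid)], model_b, 3.0)
assert alpha_b > alpha_a   # deeper transit at fixed scatter ranks higher
```

Whatever the exact normalization, the qualitative behaviour matches the text: deeper transits, lower scatter, and more densely sampled runs all raise α.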
The analysis is restricted to planetary systems that have more than 80 contributed observations with a data quality index of 3 or better. We found that model fits for systems with fewer observations become too dependent on local variations in the data. While there is no sharp transition, and no unique threshold exists, a cut at 80 observations removes the systems with this strong dependency. Based on this criterion, 30 systems were chosen for further analysis, described further in Section 4. These 30 systems are listed in Table 1.

METHODS

A multi-step process was used to search for trends in the transit centre times. The transit centres are taken directly from the Exoplanet Transit Database (ETD) without performing new fits on the individual transit observations. The timestamps, provided by the database in HJD_UTC, are converted to BJD_TDB for this study following the standard set by Eastman et al. (2010). This conversion is done because, for example, the UTC (Coordinated Universal Time) standard drifts with the addition of leap seconds. Thus, comparing only UTC times that span multiple years may introduce an artificial drift in the transit times that could mimic a true variation in the HJ's orbit. The ETD includes long-term observations of WASP-12 b, which serves as a critical reference due to its comprehensive study in the literature (Maciejewski et al. 2016; Patra et al. 2017; Yee et al. 2019; Turner et al. 2020; Wong et al. 2022). It is thus reassuring that the WASP-12 b results from this study are in agreement with literature values, showing a highly significant decay (Table A9). This alone highlights the value of the ETD observations and suggests that some of the other compelling targets found in the ETD should be investigated further. Observations of select systems from the database are run through a Python pipeline developed specifically for this project.

2: 404 planets and 12,291 observations as of May 16, 2022
The data, results, and code used for this project are publicly available on Zenodo.3

Transit Timing Models

Three different transit timing models are tested against the ETD transit centres using Markov Chain Monte Carlo (MCMC) methods. Initial tests were conducted using the emcee package (Foreman-Mackey et al. 2013). However, as discussed below, one of the models developed unstable behaviour and did not converge in general. To have more direct control over the MCMC algorithm, we implemented a custom Metropolis-Hastings MCMC with a Gibbs sampler (Ford 2006). This latter approach is used for all of the results presented here. For the models, we consider the cases of (1) a planet on a constant orbital period, (2) a planet on a decaying, circular orbit, and (3) a planet that has a constant orbital period, but the orbit is precessing. In the constant orbital period case, the transit center times increase linearly with the transit epoch E such that

t_tra = t_0 + P E, and (3)
t_occ = t_0 + P/2 + P E, (4)

where t_0 is the reference epoch, P is the period, and t_tra and t_occ are the expected transit and occultation times, respectively. The second model assumes a steady change in the orbital period with epoch. The simplest expression of this behaviour is to include a quadratic term in the expected transit centre times. If the fitted period derivative dP/dE is negative, then the planet is decaying. The model is given by

t_tra = t_0 + P E + (1/2)(dP/dE) E^2, and (5)
t_occ = t_0 + P/2 + P E + (1/2)(dP/dE) E^2. (6)

The third timing model attempts to use apsidal precession to explain timing variations, which requires the HJ orbit to have a non-zero eccentricity; as we will see, even small eccentricities can give rise to measurable effects. To capture the system's behaviour in this model, the argument of pericentre ω is assumed to vary at a constant rate, leading to sinusoidal trends in the timing data. Following Patra et al.
(2017), the transit and occultation times can be expressed as

t_tra = t_0 + P_s E − (e P_a / π) cos ω, (7)
t_occ = t_0 + P_a/2 + P_s E + (e P_a / π) cos ω, (8)
ω(E) = ω_0 + (dω/dE) E, and (9)
P_s = P_a [1 − (dω/dE)/(2π)], (10)

for argument of pericentre ω, phase ω_0, and precession rate dω/dE. In these equations, P_s represents the planet's sidereal period, which is assumed to be fixed, while P_a is the "anomalistic" period. This latter period is also fixed, but accounts for the additional period signal due to precession. The MCMC is used to determine the posterior distributions for the model parameters. The constant period model only has two free parameters: the reference epoch and the period. The decay model has three parameters, adding the dP/dE term. Finally, the apsidal precession model has five parameters: the reference epoch, sidereal period, eccentricity, phase constant, and the precession rate. Due to the different number of free parameters between the models, the Bayesian Information Criterion (BIC) is used to compare the relative accuracy and necessity of each model to describe the transit timing data. The BIC is defined as

BIC = χ² + k log n, (11)

where n is the number of data points and k is the number of free parameters. Thus, the BIC accounts for the accuracy of the model through the χ² statistic, while penalizing the model for the number of free parameters used to describe the data (i.e., the model's complexity). The difference in BIC relates to the Bayesian posterior odds ratio such that a ∆BIC = 10 corresponds to a 150:1 ratio for the stronger model, and a ∆BIC = 5 to 13:1 (Liddle 2007).4 With this in mind, the evidence strongly supports the model with the lower BIC value over the one with the larger value when ∆BIC > 10. For 5 < ∆BIC < 10, the evidence supports the model with the lower BIC value, and if ∆BIC < 5, while the lower BIC model is favoured, the evidence in support of that distinction is weak.
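Equations (3)-(11) translate directly into code. The sketch below is our own illustrative implementation (the function names are not from the paper); it evaluates each timing model and the BIC used to compare them.

```python
import numpy as np

def t_constant(E, t0, P):
    """Constant-period ephemeris, Eq. (3)."""
    return t0 + P * E

def t_decay(E, t0, P, dPdE):
    """Orbital decay (quadratic) ephemeris, Eq. (5)."""
    return t0 + P * E + 0.5 * dPdE * E ** 2

def t_precession(E, t0, Ps, e, w0, dwdE):
    """Apsidal-precession transit times, Eqs. (7), (9) and (10)."""
    Pa = Ps / (1.0 - dwdE / (2.0 * np.pi))  # anomalistic period from Eq. (10)
    w = w0 + dwdE * E                       # Eq. (9)
    return t0 + Ps * E - (e * Pa / np.pi) * np.cos(w)

def bic(t_obs, sigma, t_model, k):
    """Bayesian Information Criterion, Eq. (11)."""
    chi2 = np.sum(((t_obs - t_model) / sigma) ** 2)
    return chi2 + k * np.log(t_obs.size)
```

For two competing models fit to the same timing data, the difference of the two `bic` values gives the ∆BIC statistic used throughout the results.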
Data Cleaning

The transit observations in the ETD are of inconsistent quality and require at least some degree of cleaning. We note above that there is already a numerical data quality factor for each entry in the ETD, which can be used for making an initial selection of data.5 However, this alone is insufficient due to spurious data points, i.e., points that are well-removed from emergent trends, but with a wide range of reported uncertainties. To address this, an iterative sigma-clipping algorithm is integrated into the MCMC analysis. The process is monitored to avoid introducing an artificial trend in the transit timing curves. The pipeline begins by finding a preliminary best-fit orbital decay model using 100,000 iterations of the MCMC code. The first ten percent of the resulting chain is discarded as burn-in, which is found to be sufficient for the given chain lengths. The resulting preliminary model is subtracted from the timing data. The variance of the residuals is then calculated and used to flag any datum whose nominal value lies outside a 3-σ deviation from the residual mean. Another round of fitting is then run, including the MCMC fitting, but with the flagged data excluded. This process is repeated until no points fall outside of the 3-σ range. Convergence is achieved on average after 3.6 iterations, with an average of 9 observations removed. After this is complete, a final, longer run of the MCMC is performed for the orbital decay model (5,000,000 iterations with 10% discarded as burn-in) and, if desired, the precession model. A worry with the sigma-clipping method is that it may introduce a false signal, a potential issue that was recognized early on. To address this, during development of the fitting routines, the same overall method was tried but with a best-fit linear model subtracted from the data instead of the best-fit orbital decay model.
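The iterative 3-σ clipping loop can be sketched as follows. This is an illustrative version (names are ours) that accepts any fitting callback, so a simple least-squares fit stands in here for the paper's MCMC step.

```python
import numpy as np

def sigma_clip_fit(E, t_obs, fit_func, nsig=3.0, max_iter=20):
    """Iteratively fit a timing model and drop points lying more than
    nsig standard deviations from the residual mean, until none remain.
    fit_func(E, t) must return a callable model t(E)."""
    keep = np.ones(E.size, dtype=bool)
    for _ in range(max_iter):
        model = fit_func(E[keep], t_obs[keep])   # refit on surviving points
        resid = t_obs[keep] - model(E[keep])
        bad = np.abs(resid - resid.mean()) > nsig * resid.std()
        if not bad.any():
            break
        idx = np.flatnonzero(keep)
        keep[idx[bad]] = False                   # flag the outliers
    return keep
```

Passing, e.g., `lambda e, t: np.poly1d(np.polyfit(e, t, 2))` clips against a quadratic (decay-like) trend, while a degree-1 fit reproduces the linear-model cross-check described in the text.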
It was found that the fitted parameters from both methods agreed within respective uncertainties and that the most likely timing model for any given system remained the same. Essentially, data that are better described by a linear trend just resulted in a quadratic best-fit model with very low curvature, and we did not "clip" the data into a quadratic trend. Given comparable results, we decided to focus on using the decay model for the sigma-clipping because it would be, in principle, more sensitive to trend detection. However, the linear model subtraction approach is still run to confirm strong timing trends and discussed as appropriate.

5: As a reminder, we automatically remove transit centres that have an associated data quality factor of 4 or 5 (1 is the best and 5 is the worst).

Model Fitting Approach

The data analysis pipeline described above is applied twice in total. For the first application, each of the 30 star-planet systems selected from the ETD (see Section 3) is processed in the pipeline. During this run, all transit centres that have a DQ of 1, 2, or 3 are included in model fitting. Eight systems show evidence of a nonlinear trend (Table 1) and are investigated further. Two additional targets are included due to potential nonlinear trends being previously reported in the literature (and with different studies reporting inconsistent results): WASP-4 b (see Bouma et al. 2019; Southworth et al. 2019; Baluev et al. 2019, 2020; Bouma et al. 2020; Ivshina & Winn 2022; Turner et al. 2022) and WASP-43 b (see Blecic et al. 2014; Murgas et al. 2014; Chen et al. 2014; Ricci et al. 2014; Jiang et al. 2016; Hoyer et al. 2016; Zhao et al. 2018). The resulting ten systems are then manually cleaned before being run through the data pipeline a second time. This is a time-consuming process, which is why it is only done at this stage.
Specifically, any observation in the database without a clear transit or a clear ingress and egress is manually flagged and excluded from further analysis. Entries without a reported uncertainty are also excluded automatically. Because this project purposefully does not involve re-fitting light curves, the manual data inspection is necessary to filter out transits without clearly defined edges. It also demonstrates that relying on the DQs alone is insufficient, despite the intended use of DQs. For example, Poddaný et al. (2010) state that partial transits uploaded to the ETD would automatically be given a data quality index of 5, but this automated flagging appears to have stopped after some time. The transit observations that are included in these cleaned datasets are listed in Appendix B for reproducibility. With these ten systems now having been manually cleaned, the data are run through the pipeline a second time. At this point, the apsidal precession model is also included. The orbital decay model fitting is well-behaved and consistently converges to a clear solution. Uniform prior distributions are placed on the 3 free parameters (t_0, P, and dP/dE) and a wide parameter space can be explored. Fitting the precession model, however, proved to be non-trivial due to degeneracies between viable models, which we attribute, in part, to the high variance of the data. In particular, the anti-correlations between the eccentricity, e, and the precession rate, dω/dE, as well as between e and the reference transit time, t_0, lead to posterior solutions that show two or more strong peaks. After exploring many options, a selection of priors and bounds on parameters was determined to be necessary to derive converged results. Indeed, it is this issue that ultimately led to writing a custom MCMC routine.
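A Metropolis-Hastings sampler with Gibbs-style one-parameter-at-a-time updates (Ford 2006) can be sketched as below. This is a minimal illustration, not the pipeline's actual routine; the function name and default settings are our own.

```python
import numpy as np

def mh_gibbs(log_post, theta0, step_sizes, n_iter=20000, seed=0):
    """Metropolis-Hastings with Gibbs-style single-parameter updates
    (after Ford 2006): each iteration perturbs one parameter at a time
    and accepts or rejects against the full log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        for j in range(theta.size):      # loop over parameters, one at a time
            prop = theta.copy()
            prop[j] += rng.normal(0.0, step_sizes[j])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis rule
                theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

Updating one parameter per proposal gives direct control over the per-parameter step sizes, which is useful when parameters such as e and dω/dE are strongly anti-correlated.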
To ensure convergence of our posterior chains, we examined a number of outputs such as trace plots, corner plots, and the autocorrelation coefficient as a function of lag, all of which are consistent with convergence. Table 2 summarizes the prior distributions and bounds placed on the parameters for all three transit timing models. A normal distribution prior is placed on the period, centered on the best-fit result from the constant period model fit. This is justified by the observation that this constant period can be viewed as an analog for the sidereal period in the apsidal precession model (Equation 10). A uniform prior distribution is appropriate for ω 0 , and log-uniform priors are used for e and dω/dE to account for the wide range of possible parameter space. Without placing a formal bound on the precession rate, the model tends to go to high frequencies in an attempt to fit the datum-to-datum variation, an unphysical situation. To address this, we use the results of Ragozzine & Wolf (2009), who estimate that WASP-12 b should have a precession rate of 19 deg/yr (roughly 0.001 radian/epoch). Out of all of the systems they studied, WASP-12 b has by far the highest predicted rate. With this in mind, an upper bound of 0.001 radian/epoch is placed on dω/dE for all systems, even for WASP-12 b, as a higher bound on the precession rate does not result in a better fit to the data in this case (tests using a higher bound for WASP-12 b were conducted to confirm this result). A lower bound of 1 × 10 −6 rad/epoch is also chosen to avoid the M-H algorithm becoming trapped in a local minimum, as very low values of dω/dE and e cause the fitted precession model to approach a line even when the results of the orbital decay model indicate the presence of curvature in the data. The eccentricity is restricted to be less than 0.1, which is appropriate for the HJ population, and greater than 1 × 10 −5 . 
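The bounded prior scheme summarized in Table 2 can be sketched as a log-prior function: log-uniform priors on e and dω/dE within hard limits, a normal prior on the period centred on the linear-ephemeris fit, and a hard window on t_0. The helper's name and signature are our own, and the numerical bounds below simply restate those given in the text.

```python
import numpy as np

# Hard bounds restated from the text (generic system).
E_BOUNDS = (1e-5, 0.1)     # eccentricity
W_BOUNDS = (1e-6, 1e-3)    # precession rate dw/dE [rad/epoch]

def log_prior(t0, Ps, e, w0, dwdE, t0_lin, P_lin, sig_P, t0_win=0.01):
    """Log-prior for the apsidal-precession model (illustrative sketch)."""
    if not (E_BOUNDS[0] < e < E_BOUNDS[1]):
        return -np.inf
    if not (W_BOUNDS[0] < dwdE < W_BOUNDS[1]):
        return -np.inf
    if not (0.0 <= w0 < 2.0 * np.pi):            # uniform prior on the phase
        return -np.inf
    if abs(t0 - t0_lin) > t0_win:                # hard window around linear fit
        return -np.inf
    lp = -0.5 * ((Ps - P_lin) / sig_P) ** 2      # normal prior on the period
    lp += -np.log(e) - np.log(dwdE)              # log-uniform priors
    return lp
```

Returning negative infinity outside the bounds makes the Metropolis acceptance step reject such proposals automatically, which is what prevents the sampler from chasing unphysically high precession frequencies.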
Ragozzine & Wolf (2009) found that even eccentricities on the order of 10^−5 can result in detectable transit timing variations, though measurements of eccentricities are essentially unconstrained below 10^−3. The strictest requirement for convergence of the apsidal precession model is on the parameter bounds of the reference transit center time t_0. The parameter t_0 is allowed to be within a range of ±0.01 days (a window of just under 30 minutes) or ±0.1 days around the best-fit value from the constant period model, depending on the system. This final adjustment allows convergence on the solution, but only as a perturbation of the constant period case. To aid convergence, a non-linear least squares fit (scipy implementation) of the precession model was done within these bounds to select the best initial value of the model parameters. Altogether, this procedure facilitates convergence of the precession models with the given data sets, and with all of this in mind, we next discuss the results.

RESULTS

As noted above, the first run of the analysis pipeline includes all DQ 1-3 observations from all 30 of the initial targets (see Section 3). Table 1 ranks the star-planet systems by the difference in their Bayesian Information Criterion (∆BIC), with a negative value favouring the decay model. The best-fit period decay rates, along with the corresponding 1-σ uncertainties, are given in ms yr^−1. For some systems, such as GJ 436 b and XO-2 b, the period derivative is positive. In such cases, we found the scatter in the data to be very high, and did not investigate such systems further in this study. They will nonetheless need to be examined in future studies as better data are acquired. From Table 1, the top eight candidates from this first analysis are selected for the second, deeper analysis described in Section 4.0.3. Those top eight candidates, in order of evidence for decay, are WASP-12 b, TrES-2 b, WASP-10 b, HAT-P-32 b, TrES-5 b, TrES-3 b, HAT-P-19 b, and TrES-1 b.
With the exception of TrES-1 b, all of these systems yield a ∆BIC > 10 (Section 4.0.1). The evidence for TrES-1 b is still compelling, with ∆BIC = 9.9. In addition to these eight targets, WASP-4 b and WASP-43 b are selected due to their relevance in the literature, for a total of ten targets. The results of the second round of analysis are summarized in Table 3. In the following sub-sections, each of these star-planet systems is presented individually, with an expanded discussion of that system's model fitting. The cleaned data sets for the top ten systems are further provided in the tables listed in Appendix B, including the epoch number, transit centres and respective uncertainty, data quality index, and observer name(s). The full MCMC outputs from the analyses can be found in the tables in Appendix A, which include the best-fit model parameters and their 1-σ uncertainties, as well as metrics such as the χ², number of data, etc. Out of the eight highest-ranked systems from Table 1, all but TrES-3 b maintain statistically significant evidence of orbital decay in the ETD transit centres. WASP-4 b and WASP-43 b, both of which favoured a constant period transit timing model in the first analysis (Table 1), exhibit varying results. For WASP-43 b, the second analysis maintains that the planet's transits follow a linear ephemeris (see Section 5.0.10), whereas for WASP-4 b the favoured model changes to orbital decay with a decay rate of −6.21 ± 0.70 ms/yr (see Section 5.0.2). As a cautionary approach, the secondary analysis was repeated under the assumption that the transit centre uncertainties in the ETD are underestimated, thus replacing them with the standard deviation of the spread of the timing residuals, i.e., the observed transit centres minus the times calculated from the best-fit linear ephemeris. In general, using a nonlinear least squares algorithm can lead to optimistic parameter uncertainties.
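This cautionary substitution, replacing the reported errors with the scatter about the best-fit linear ephemeris, can be sketched as follows (an illustrative helper; the function name is ours).

```python
import numpy as np

def inflate_uncertainties(E, t_obs):
    """Replace reported timing errors with the standard deviation of the
    residuals about the best-fit linear ephemeris (cautionary re-run)."""
    P, t0 = np.polyfit(E, t_obs, 1)      # slope (period) and intercept
    resid = t_obs - (t0 + P * E)
    return np.full(t_obs.size, resid.std(ddof=2))  # ddof=2: two fitted params
```

Because the substituted uncertainty reflects the full scatter of the residuals, any curvature that survives this re-weighting is supported by the spread of the data themselves rather than by the ETD-reported error bars.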
In addition, when describing the ETD lightcurve fitting routine, Poddaný et al. (2010) acknowledge that the errors of their L-M fit may be underestimated due to the lack of red noise correction and a fixed impact parameter. Substituting the ETD uncertainties in this way allows for an examination of the curvature in the data unbiased by the uncertainties given by the ETD lightcurve fitting tool (Table 4). It is notable that, with the exception of HAT-P-32 b and WASP-10 b, all of the resulting decay rates agree within the respective uncertainties with those in Table 3. However, as to be expected, the uncertainties are much larger, and only three systems (WASP-12 b, HAT-P-19 b, and TrES-1 b) still exhibit statistical evidence for departure from a linear ephemeris, using a fixed variance of the data (Table 4).

Table 3 note: Transit timing model comparison for the second run of the analysis pipeline on the reduced data of the top 10 systems of interest. The ∆BIC is calculated for the comparison of the constant-period and orbital decay transit timing models, where a negative value favours the latter. The BIC values for the apsidal precession model fits are provided for further comparison.

Table 4 note: Model comparison of the results from the reduced data of the top 10 systems after replacing the ETD transit centre uncertainties with the standard deviation of the nominal timing residuals. A negative ∆BIC value favours the orbital decay model.

Figure note: For aesthetic reasons, the scaling of the y-axis varies for each plot. Each datum is the difference between an observed time and the time predicted by the best-fit constant period model. The size of the data points corresponds to the data quality index from 1-3, with the higher-quality transits being larger. The red crosses represent observations removed during the sigma-clipping process.

5.0.1. WASP-12 b

The curvature in the timing residuals is clear when looking at the ETD-generated O-C plot.
Prior to any data cleaning, the ETD transit times fit an orbital decay model with a period derivative of −29.1 ± 1.0 ms/yr, preferred over a linear ephemeris by a ∆BIC of 408.7 (Table 1). This is by far the greatest ∆BIC of all of the targets, with the next best being 46.8 for TrES-2 b. After removing partial and duplicate transits, 218 WASP-12 b transit times remain (Table B9), five of which are from published literature studies. The final analysis of the cleaned WASP-12 b data increases the ∆BIC to 441.3, favouring orbital decay at a rate of −31.6 ± 1.0 ms/yr (see Figure 1). The quadratic trend is also favoured over apsidal precession by a ∆BIC of 17.6. The best-fit precession rate is 0.000336 +0.000026 −0.000013 rad per epoch with an eccentricity of 0.0287 +0.0022 −0.0039 (Table A9).

5.0.2. WASP-4 b

The Hot Jupiter WASP-4 b has an orbital period of 1.338231624 ± 0.000000068 days and a radius and mass of 1.42 R_J and 1.22 M_J, respectively (Wilson et al. 2008; Bonomo et al. 2017). The first suggestion of a period variation in WASP-4 b in the literature was from Bouma et al. (2019), who added 18 TESS transits to a collection of observations going back to 2007 and found the period to be changing at a rate of −12.6 ± 1.2 ms/yr. Later in the same year, Southworth et al. (2019) included their own data from the years 2009-2019 and found that WASP-4 b is decaying at a rate of −9.2 ± 1.1 ms/yr, while Baluev et al. (2019) conducted a homogeneous analysis and found no statistically significant evidence of a nonlinear timing trend. In 2020, Baluev et al. (2020) detected a nonlinear trend with 3.4-σ significance and Bouma et al. (2020) reported a period change of −8.64 ± 1.26 ms/yr for WASP-4 b from transit observations. However, after an analysis of new radial velocity data, they found that the Doppler effect from WASP-4 b accelerating toward the Earth has the effect of decreasing the period by −5.94 ± 0.39 ms/yr.
Thus, they conclude the period change is 'mostly or entirely' due to acceleration of the system along the line of sight (Bouma et al. 2020). However, a recent comprehensive analysis from Turner et al. (2022) did not detect this acceleration in the RV data, instead finding evidence of an additional planet in the system with a period of approximately 7000 days, in addition to TTVs that are consistent with orbital decay at a rate of −7.33 ± 0.71 ms/yr. Another recent study from Ivshina & Winn (2022) measured the period of WASP-4 b to be changing at a compatible rate of −5.81 ± 1.58 ms/yr. Like WASP-43 b, WASP-4 b was selected for further investigation in this study because of this interest in the literature. The ETD lists 68 transit observations of WASP-4 b from the years 2009-2021, 63 of which are DQ 1-3. As many of the transits are high quality, this is a valuable resource for contributing to our understanding of the WASP-4 star-planet system. The initial analysis of the ETD transit centres does not support a departure from a linear ephemeris, with the ∆BIC favouring the constant period model by 2.7 and the best-fit decay rate being −1.09 ± 0.66 ms/yr (Table 1). After the exclusion of partial and duplicate transits for the second analysis, 55 observations of WASP-4 b remain (Table B7), ten of which are from published literature studies. The analysis of the cleaned WASP-4 b data changes the favoured transit timing model (Figure 1). In the second analysis, orbital decay at a rate of −6.21 ± 0.70 ms/yr is strongly favoured for WASP-4 b over the constant period model by a ∆BIC of 35.4 (Table 3). The apsidal precession model is also a clear contender, favoured by a ∆BIC of 26.1 when compared to a constant period, but orbital decay is still favoured over precession by ∆BIC = 9.3. The best-fit precession rate is 0.00048 +0.00040 −0.00024 rad per epoch with an eccentricity of 0.0027 +0.0079 −0.0017 (Table A7).
This study does not evaluate the likelihood of, or directly model, a line-of-sight acceleration of the WASP-4 b system, but in this context the orbital decay rate can simply be treated as a constant period derivative arising from a different physical mechanism. Whatever the underlying reason, the evidence in the data supports a timing variation.

5.0.3. WASP-10 b

WASP-10 b is a 3.2 M_J, 1.1 R_J Hot Jupiter discovered on a 3.09272932 ± 0.00000032 day orbit in 2009 (Christian et al. 2009; Bonomo et al. 2017). It has been the subject of multiple timing studies in the past due to a measured periodic TTV, with proposed explanations including starspot occultations (Barros et al. 2013) and a 0.1 M_J companion with a period of about 5.23 days (Maciejewski et al. 2011b,a). A second planet in the system has never been confirmed, and we have not found further discussion in the literature of other studies looking for long-term trends in the transit times of WASP-10 b, such as orbital decay or apsidal precession. The ETD hosts 202 observations of WASP-10 b spanning the years 2007-2021, 174 of which are DQ 1-3. An initial examination of the WASP-10 b data gave a best-fit orbital decay rate near −130 ms/yr, largely driven by a single data point from Johnson et al. (2009). Removing this observation from the data set gave a more conservative period derivative of −26.4 ± 2.7 ms/yr, which is the value reported in the initial round of analyses (Table 1). WASP-10 b was selected for further analysis, with the orbital decay model being strongly favoured over a constant period by a ∆BIC of 43.0. During the data reduction process it was revealed that the observation in Johnson et al. (2009) was corrected in an erratum (Johnson et al. 2010). In addition to this, the transit centres published in Christian et al. (2009) and Maciejewski et al. (2011b) had to be converted to BJD_TDB from BJD_UTC and BJD_TT, respectively.
After removing all partial and duplicate transits, there are 129 observations of WASP-10 b remaining, including 19 measurements from the literature (Table B8). The final analysis of the cleaned WASP-10 b data retained the strong preference for the orbital decay model (Figure 2) by a ∆BIC of 35.3, with a best-fit decay rate of −21.9 ± 2.4 ms/yr (Table 3). The apsidal precession model is also a better fit to the data than a constant period, but orbital decay is favoured by a ∆BIC of 8.6. The best-fit apsidal precession model gives an eccentricity of 0.0037 +0.0027 −0.0011 and precession rate of 0.00078 +0.00015 −0.00020 rad per epoch (Table A8).

5.0.4. HAT-P-19 b

HAT-P-19 b is a Saturn-mass (0.29 M_J) planet with a 1.1 R_J radius on a 4.008778 ± 0.000006 day orbit (Hartman et al. 2011a; Bonomo et al. 2017). In the discovery paper, Hartman et al. (2011a) detected a linear trend in the radial velocity residuals, indicating the presence of another body in the planetary system. This observation led to multiple studies seeking periodic TTVs (Seeliger et al. 2015; Maciejewski et al. 2018; Baştürk et al. 2020), but we have not found further discussion in the literature on testing for long-term trends in the timing residuals. HAT-P-19 b was selected as a target of interest after the first round of analyses showed a preference for orbital decay (∆BIC = 26.3) at a rate of −55.2 ± 7.2 ms/yr (Table 1). The cleaned set of HAT-P-19 b transit times is listed in Table B1. The second analysis of HAT-P-19 b on this cleaned data set (Figure 2) yields a period derivative of −57.7 ± 7.3 ms/yr, this time favouring the decay model by a ∆BIC of 27.3 (Table 3). The orbital decay model is also favoured over apsidal precession by a ∆BIC of 8.4. The best-fit precession model yields an eccentricity of 0.0068 +0.0009 −0.0010 and precession rate of 0.000905 +0.000059 −0.000068 rad per epoch (Table A1).

5.0.5. TrES-5 b

TrES-5 b is a 1.8 M_J, 1.2 R_J Hot Jupiter discovered in 2011 on a 1.48224460 ± 0.00000070 day orbit (Mandushev et al. 2011; Bonomo et al. 2017).
The first suggestion of transit timing variations in the literature was by Sokov et al. (2018), who detected periodic timing variations indicative of a second planet in the system with a mass of 0.24 M_J in the 1:2 resonance orbit. Maciejewski et al. (2021) were not able to confirm or reject the existence of an additional body in the system via short-term TTVs. However, they found a long-term variation of the orbital period of −20.4 ± 4.7 ms/yr and conclude it is most likely a line-of-sight acceleration of the system induced by a massive, wide-orbiting companion. At the time of writing, the most recent timing study of TrES-5 b is from Ivshina & Winn (2022). Their analysis, which includes TESS data, found the period to be changing at a rate of −17.47 ± 3.79 ms/yr. The first analysis of the TrES-5 b ETD transit centres indicates the period could be changing at a rate of −29.7 ± 3.6 ms/yr, with the quadratic orbital decay model favoured over a constant period by a ∆BIC of 26.8 (Table 1). The ETD hosts 268 transits of TrES-5 b from the years 2012-2021, 233 of which are DQ 1-3. There are no literature observations of TrES-5 b on the database. After removal of partial and duplicate transits there are 110 remaining (Table B6). The analysis of these cleaned data showed an increase in the best-fit period derivative to −34.5 ± 4.6 ms/yr in the orbital decay model, favoured over a constant period by a ∆BIC of 22.2 (Table 3). The quadratic model is also a better fit to the data than the apsidal precession model by a difference of ∆BIC = 10.1 (see Figure 3). The best-fit precession rate is 0.00058 +0.00021 −0.00014 rad per epoch with an eccentricity of 0.0098 +0.0068 −0.0045 (Table A6).

5.0.6. TrES-1 b

TrES-1 b is a 0.7 M_J, 1.1 R_J Hot Jupiter on a 3.03006973 ± 0.00000018 day orbit (Alonso et al. 2004; Bonomo et al. 2017).
TrES-1 b was discovered in 2004, the first of the Trans-Atlantic Exoplanet Survey (TrES), and thus has one of the longest potential observational baselines in this study. It has been the subject of various short-term TTV studies in the past (Rabus et al. 2008, 2009; Baluev et al. 2015), but no significant variations were detected. Most recently, Ivshina & Winn (2022) have included TESS data and found the TrES-1 b period to be changing at a rate of −18.36 ± 3.73 ms/yr. As this was part of a much larger study they did not go into further detail, but suggest the system is worth monitoring. The ETD hosts 223 observations of TrES-1 b spanning the years 2004-2021, 169 of which are DQ 1-3. Prior to any data cleaning, the ETD transit times fit an orbital decay model with a period derivative of −7.9 ± 1.4 ms/yr, preferred over a linear ephemeris by a ∆BIC of 9.9 (Table 1). The TrES-1 b system has the weakest evidence of a nonlinear timing trend out of the top eight systems picked from Table 1, but the results still fit the criteria for strong evidence with a ∆BIC > 5. Nineteen of the TrES-1 b transit times on the database are from published literature studies, though it was discovered that five of them are duplicates and were excluded. In addition, the observations from Hrudková et al. (2008) had to be converted from BJD_UTC to BJD_TDB. After these considerations, there are 68 remaining observations (Table B3). The final analysis of the cleaned TrES-1 b data (see Figure 3) yields a best-fit period derivative of −10.9 ± 2.1 ms/yr, with the orbital decay model being favoured over a constant period by a ∆BIC of 9.7 (Table 3). Thus, the results of the second analysis are consistent with the first, which contained data duplicates and observations with inconsistent timestamps, suggesting the fit results are robust. The BIC values for the constant period and apsidal precession models are almost indistinguishable (Table A3).
The best-fit apsidal precession model gives an eccentricity of 0.0030 +0.0037 −0.0015 and precession rate of 0.00057 +0.00024 −0.00019 rad per epoch.

5.0.7. TrES-2 b

TrES-2 b is a 1.2 M_J, 1.2 R_J Hot Jupiter discovered in 2006 on a 2.470613374 ± 0.000000019 day, near-grazing orbit (O'Donovan et al. 2006; Bonomo et al. 2017). As TrES-2 b lies within the Kepler field, it was extensively studied in the years following its discovery. Early publications suggest evidence of short-term transit timing and duration variations, with explanations including a third body in the system or the existence of a moon (Rabus et al. 2009; Mislis & Schmitt 2009). Later studies, however, were not able to support these claims, thus citing no evidence of short- or long-term transit timing variations for TrES-2 b (Kipping & Bakos 2011; Schröter et al. 2012; Raetz et al. 2014). We have not found any TTV studies of TrES-2 b in the years since. The initial analysis of the TrES-2 b ETD data showed a strong preference for the orbital decay model with a period derivative of −20.7 ± 2.1 ms/yr (Table 1). The ∆BIC favours the decay model over a constant period by a significant value of 46.8. Because of this, TrES-2 b was selected as a target of interest for further analysis. The ETD hosts 331 transits of TrES-2 b spanning the years 2006-2021, 270 of which are DQ 1-3. After removal of partial and duplicate transits, 152 remain (Table B4), as many were partial transits. The analysis of these cleaned data yields a less convincing, but still notable, argument for the decay scenario. It is worth highlighting that the first few epochs are mostly literature observations, which may be driving the model due to their generally higher precision and having been measured several years before the ETD observations begin (Figure 4). The ∆BIC favours the decay model over a linear ephemeris by 8.3, with the best-fit decay rate being −12.6 ± 2.4 ms/yr (Table 3).
Though the ∆BIC metric favours the orbital decay model over a constant period, it does not favour the apsidal precession model over a constant period. If there is statistically significant curvature in the timing residuals, it is reasonable to expect the apsidal precession model to be a better fit than a constant period. Looking only at the χ2 values, the constant period model (χ2 = 868.8) is a worse fit than both the apsidal precession (χ2 = 857.4) and orbital decay (χ2 = 855.6) models. The distinction between the two timing models is less clear in the case of TrES-2 b than for the other systems explored here due to the large spread in the ETD data and the high number of DQ 3 observations (Figure 4). We found that TrES-2 b was the only target that could not pass the MCMC convergence criteria for the apsidal precession model. The resulting best-fit apsidal precession rate and eccentricity are 0.00061 +0.00023 −0.00022 rad per epoch and 0.0030 +0.0039 −0.0014, respectively (Table A4).

HAT-P-32 b

HAT-P-32 b is a 0.8 M J, 1.8 R J Hot Jupiter discovered in 2011 on a 2.15000825 ± 0.00000012 day orbit (Hartman et al. 2011b; Bonomo et al. 2017). In 2014 it was the subject of a TTV study seeking evidence of an additional body in the system, but TTV amplitudes greater than 1.5 minutes were ruled out (Seeliger et al. 2014). The contributed observations on the ETD span the years 2007-2021, though there is a 633-orbit gap after the discovery transit. During a preliminary examination of the HAT-P-32 b transit centres it was noted that, without the discovery epoch, there is visual curvature in the timing residuals. This point was removed for the initial round of analysis so that the significance of the curvature could be examined and compared with other systems. HAT-P-32 b was selected for further analysis, as orbital decay at a rate of −30.2 ± 3.3 ms/yr is favoured over a constant period ephemeris by ∆BIC = 39.7 (Table 1).
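The model comparisons throughout rest on the Bayesian Information Criterion; for Gaussian errors, BIC = χ² + k ln n, with k free parameters and n transit times. A minimal sketch (function names are ours), which reproduces the TrES-2 b numbers quoted above:

```python
import math


def bic(chi2, k, n):
    """Bayesian Information Criterion for a chi-square fit:
    k free parameters, n data points."""
    return chi2 + k * math.log(n)


def delta_bic(chi2_simple, k_simple, chi2_complex, k_complex, n):
    """Positive result => the more complex model is preferred."""
    return bic(chi2_simple, k_simple, n) - bic(chi2_complex, k_complex, n)
```

With the n = 149 TrES-2 b times, bic(868.8, 2, 149) ≈ 878.8 for the constant period model and bic(855.6, 3, 149) ≈ 870.6 for orbital decay, giving a ∆BIC of about 8 in favour of decay, matching Table 3 to rounding. The extra parameter of the decay model is thus penalized by ln n ≈ 5, which is also where the ∆BIC > 5 "strong evidence" threshold comes from.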
The ETD hosts 167 transits of HAT-P-32 b, 142 of which are DQ 1-3. After the exclusion of partial and duplicate transits, 88 observations remain (Table B2). This includes the discovery epoch, which was added back to the data set as it was found that the transit time was not converted from BJD UTC to HJD UTC when listed on the database (Hartman et al. 2011b). Results from the final analysis of the cleaned HAT-P-32 b data maintain a preference for the orbital decay model over a constant period by a ∆BIC of 7.4, as well as over the apsidal precession model by a ∆BIC of 9.4 (Figure 4). However, the χ2 values of the orbital decay and apsidal precession models are indistinguishable. The best-fit period derivative for the decay model is −7.3 ± 1.5 ms/yr, and the apsidal precession model yields an eccentricity of 0.0022 +0.0049 −0.0012 and a precession rate of 0.00054 +0.00028 −0.00024 rad/epoch (Table A2).

The ETD hosts 556 observations of TrES-3 b from the years 2007-2021, 465 of which are DQ 1-3. This is one of the longest and most complete observational baselines in this study. In the initial round of analysis the orbital decay model was favoured for TrES-3 b by a ∆BIC of 26.7 when compared to the linear model (Table 1). The best-fit decay rate is small at −5.08 ± 0.63 ms/yr, but it was selected for further analysis. Many of the observations of TrES-3 b on the ETD do not have a clear ingress or egress, and many of the early observations do not have a recorded timing uncertainty, resulting in their immediate exclusion. After the removal of partial transits and duplicates (including the first literature observation) there are 231 observations remaining (Table B5). The second, follow-up analysis on this cleaned data set is more favourable toward a constant period model (Figure 5). The best-fit decay rate is −2.75 ± 0.78 ms/yr, consistent with a constant period when considering the variance in the data.
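The two ephemeris models being compared for each target are a linear (constant period) ephemeris and orbital decay, modelled as a quadratic ephemeris. A minimal sketch of both, with t0 the reference transit centre, P the period in days, E the epoch number, and dP/dE the per-epoch period change:

```python
def t_linear(E, t0, P):
    """Constant period model: t(E) = t0 + P*E."""
    return t0 + P * E


def t_decay(E, t0, P, dPdE):
    """Orbital decay (quadratic) model:
    t(E) = t0 + P*E + 0.5*(dP/dE)*E**2."""
    return t0 + P * E + 0.5 * dPdE * E ** 2
```

The quadratic term is tiny per orbit but accumulates: with the TrES-1 b best-fit dP/dE = −1.05 × 10^−9 days/epoch, after 1000 epochs the decay model predicts transits about 45 s earlier than the linear ephemeris, comparable to the scatter of a single amateur timing but decisive over a long baseline.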
The BIC values for the decay and constant period models are indistinguishable (Table 3). The precession model is strongly disfavoured by its BIC, which can be explained by the apparent lack of curvature in the timing residuals. Indeed, the precession model is driven to a very low eccentricity of 0.0005 and a slow precession rate of 0.00013 rad per epoch, and is ultimately a poor fit (Table A5). Zhao et al. (2018) found that orbital decay is slightly preferred over a constant period, but at a rate of −0.005248 ± 0.001714 s/yr. In this study, the initial analysis of the ETD transit centres strongly favours a constant period (Table 1), but the system was selected for further analysis due to the interest in the literature. The ETD hosts 186 transits of WASP-43 b spanning the years 2010-2021, with 162 being DQ 1-3. After the exclusion of partial and duplicate transits there are 126 remaining observations (Table B10). Analysis of the cleaned data supports the constant period model for WASP-43 b over orbital decay by a ∆BIC of 4.4 (Figure 5). In addition, the best-fit orbital decay rate is −1.0 ± 1.2 ms/yr, which is consistent with a constant orbital period (Table 3). Like TrES-3 b, apsidal precession results in a poor fit due to the sinusoidal nature of the model and the strong linearity of the data (Table A10).

DISCUSSION AND CONCLUSIONS

This study has utilized the wealth of citizen science observations on the Exoplanet Transit Database (ETD) to identify Hot Jupiter systems that are candidates for the detection of orbital decay or, more precisely, detectable period drifts. In the process, we have demonstrated the potential of citizen scientists to contribute to the detection of long-term orbital evolution signatures of close-in giant planets. We highlight the systems in the ETD that should be prioritized for further study in Table 3.
We find that eight star-planet systems in the ETD show statistically compelling quadratic trends in the transit times, and we particularly recommend future follow-up of HAT-P-19 b, HAT-P-32 b, TrES-1 b, and WASP-10 b. As an additional approach, we purposefully take a pessimistic view of the data's predictive power and assume that the uncertainties of the ETD transit centres are generally unreliable. In this case, we use the variance of the timing residuals during fitting and find that three of the targets (WASP-12 b, HAT-P-19 b, and TrES-1 b) still show evidence of a negative period derivative. In addition to orbital decay, we consider apsidal precession as a source of timing variations; in no case is this model preferred over orbital decay. In many cases, however, the χ2 values of the orbital decay and apsidal precession models are similar. We further note that only the low-eccentricity (e << 0.1) expansion for precession is used in this work (as is typical elsewhere). A preliminary analysis of WASP-12 b using the higher-order eccentricity expansion for precession, up to e^5 as presented in Ragozzine & Wolf (2009), does show promise in fitting the data, but only if a high eccentricity (e ≈ 0.1) is used. The overall results are also not clearly better than those of the orbital decay model, and so this is not pursued further here. Confirmation of period drifts, regardless of the cause, would provide additional constraints on the system. For orbital decay, the period derivative can be used to infer properties such as the planet's remaining lifetime and the stellar tidal dissipation rate. For precession, the model could constrain the interior density distribution of Hot Jupiters and potentially yield an upper limit on orbital eccentricities (of course, radial velocity data, if available, provide independent eccentricity constraints).
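The low-eccentricity precession model discussed above can be sketched as follows. This is the standard first-order form (cf. Patra et al. 2017), assumed here to match the parameterization in the appendix tables (t0, sidereal period Ps, eccentricity e, argument of periastron ω0, and precession rate dω/dE): the transit times oscillate sinusoidally with half-amplitude e·Pa/π, where Pa is the anomalistic period.

```python
import math


def t_precession(E, t0, Ps, e, omega0, domega_dE):
    """Low-eccentricity apsidal precession transit-timing model:
    t_tra(E) = t0 + Ps*E - (e*Pa/pi) * cos(omega(E)),
    with omega(E) = omega0 + (domega/dE)*E and
    Pa = Ps / (1 - (domega/dE)/(2*pi))."""
    Pa = Ps / (1.0 - domega_dE / (2.0 * math.pi))
    omega = omega0 + domega_dE * E
    return t0 + Ps * E - (e * Pa / math.pi) * math.cos(omega)
```

For the TrES-2 b-like values e ≈ 0.003 and Ps ≈ 2.47 d, the half-amplitude e·Pa/π is about 2.4 × 10^−3 days (~3.4 minutes), and for e = 0 the model reduces exactly to a linear ephemeris, which is why a strongly linear data set drives the fit toward very small eccentricities.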
Other possible effects, such as line-of-sight acceleration and stellar activity, will also need to be considered in understanding the underlying cause of observed timing deviations. As our study aims to aid in selecting systems for long-term observing campaigns, in addition to extending the observed baselines of those systems, a next step is to conduct in-depth analyses of the identified candidates. Such studies should combine all available data sets for the system of interest, homogeneously refitting them where possible. We will aim to conduct such studies, also seeking to obtain further observations of these systems, particularly by collaborating with citizen scientists. The exoplanet WASP-12 b has proved to be critical as a control for this study and serves as an important test case for orbital decay searches in general. The consistency between the results using citizen science data and those in the literature demonstrates the feasibility of such work. The ETD offers a rich history of transit times for hundreds of star-planet systems that can make the search for evidence of long-term dynamical evolution, orbital decay in this case, more efficient. While this work does not suggest that any of the identified systems, apart from WASP-12 b, are indeed undergoing orbital decay or other period deviations, it does suggest that there is sufficient evidence to keep looking.

ACKNOWLEDGEMENTS

This work would not be possible without the dedication and skill of the Exoplanet Transit Database (ETD) coordinators and observers, and we sincerely thank them all for their ongoing contributions. The names of the observers of the data used for this study, as given on the ETD with minor formatting changes, can be found in the supplementary data tables on Zenodo at https://doi.org/10.5281/zenodo.7098460 and in Appendix B. As of May 13, 2022 there are 625 ETD contributors consisting of universities, teams, and individuals from around the world.
Additionally, this network of observers expands with the inclusion of the ExoClock and Exoplanet Watch programmes. We emphasize the science potential of such a large community of ground-based observers, and encourage continued collaboration and the inclusion of new observers. Having identified key systems to follow up in further depth, we look forward to collaborating with the participants of these programmes as we continue to investigate the potential for non-linear ephemerides. We thank Darin Ragozzine for helpful comments on the draft of this paper. This work is supported, in part, by an NSERC Discovery Grant, an NSERC CREATE grant, and the Canada Research Chairs program. BE is a Laureate of the Paris Region fellowship programme, which is supported by the Ile-de-France Region and has received funding under the Horizon 2020 innovation framework programme and the Marie Sklodowska-Curie grant agreement no. 945298.

APPENDIX

A. FULL MCMC RESULTS

The results of the Metropolis-Hastings MCMC sampling of the constant period, orbital decay, and apsidal precession transit timing models are presented below. The top ten targets of interest, as discussed in section 5, are listed alphabetically.

B. EXOPLANET TRANSIT DATABASE (ETD) OBSERVATIONS

The final cleaned data sets for the ten targets of interest (section 5) are presented in the following tables to ensure reproducibility of these results. Observer names are also listed, as given in the ETD with minor formatting changes. For entries where there are more than two observers, the full list of names is given in the table notes.

Note-* and Romina P. Di Sisto ** Hoňková K., Juryšek J.
Note-* Jacques Clement, Jean-Philippe Nougayrede ** Danilo Zardin, Marco Fiaschi *** Marana Space Explorer Center

5.0.1. WASP-12 b

WASP-12 b is a 1.4 M J, 1.8 R J Hot Jupiter discovered in 2008 on a 1.09142090 ± 0.0000002 day orbit (Hebb et al. 2008; Bonomo et al. 2017).
The transit timing variations of WASP-12 b have been extensively studied, as it is the only exoplanet for which orbital decay has been unambiguously detected. The first suggestion of a period drift was from Maciejewski et al. (2016), who found that the WASP-12 b transit times diverged from a linear ephemeris. Various studies have since confirmed this finding, all with compatible decay rates (see e.g. Patra et al. 2017; Yee et al. 2019; Turner et al. 2020), the most recent result being −30.27 ± 1.11 ms/yr from Ivshina & Winn (2022). At the time of writing, the ETD hosts 295 observations of WASP-12 b spanning the years 2008 to 2021, 257 of which have DQ 1-3. The curvature of the transit

Figure 1. Timing residuals of WASP-12 b (top) and WASP-4 b (bottom) with future projections of the three transit timing models shown with 150 random draws from the MCMC posterior chains.

Figure 2. Similar to Figure 1, but for WASP-10 b (top) and HAT-P-19 b (bottom).

The ETD has 98 observations of HAT-P-19 b spanning the years 2009-2021, 88 of which are DQ 1-3. The only literature observation is the discovery epoch, which had to be converted from BJD UTC to BJD TDB (Hartman et al. 2011a). After the exclusion of partial and duplicate transits, 75 observations of HAT-P-19 b remain.

Figure 3. Similar to Figure 1, but for TrES-5 b (top) and TrES-1 b (bottom).

Figure 4. Similar to Figure 1, but for TrES-2 b (top) and HAT-P-32 b (bottom).

5.0.9. TrES-3 b

TrES-3 b is a 1.8 M J, 1.3 R J Hot Jupiter with an orbital period of 1.306186483 ± 0.00000007 days that was discovered in 2007 (O'Donovan et al. 2007; Bonomo et al. 2017). It has been extensively studied in the search for periodic TTVs indicating the presence of another planet, but no conclusive evidence of such variations has been found (Sozzetti et al. 2009; Gibson et al. 2009; Jiang et al. 2013; Vanko et al. 2013; Püsküllü et al. 2017). Zhao et al. (2018) and Mannaday et al.
(2020) have investigated the possibility of orbital decay, both concluding that the TrES-3 b transit times are consistent with a constant period. However, Mannaday et al. (2020) found the BIC values of the orbital decay and constant period models to be very similar and recommend further observations.

WASP-43 b

WASP-43 b is a 1.0 R J Hot Jupiter on a 0.81347437 ± 0.00000013 day orbit (Hellier et al. 2011; Bonomo et al. 2017). The WASP-43 b system has been monitored since 2014 for signs of long-term transit timing variations because its ultra-short orbital period is thought to make it a good candidate for exhibiting orbital decay (Blecic et al. 2014; Murgas et al. 2014; Chen et al. 2014; Ricci et al. 2014). In 2016, Jiang et al. (2016) detected evidence of orbital decay at a rate of −0.02890795 ± 0.00772547 s/year. Later that same year, Hoyer et al. (2016) published a homogeneous analysis including new data that showed no indication of a period variation. Two years later,

Figure 5. Similar to Figure 1, but for TrES-3 b (top) and WASP-43 b (bottom).

Table 1. Model comparison for initial analysis of 30 targets from the ETD

Target  Decay Rate (ms/yr)  1-σ Unc. (ms/yr)  BIC (linear)  BIC (decay)  ∆BIC

Note-Results from the first run of the analysis pipeline on the raw transit centre data of 30 systems from the ETD. Targets are ranked by the likelihood of the orbital decay model over a constant period model.
The ∆BIC values are such that a negative value favours the decay model. Decay rates and uncertainties are in ms/yr.

WASP-12 b -29.1 1.0 2693.4 2284.7 -408.7
TrES-2 b -20.7 2.1 1334.1 1287.3 -46.8
WASP-10 b -26.4 2.7 1890.9 1847.9 -43.0
HAT-P-32 b -30.2 3.3 1084.2 1044.5 -39.7
TrES-5 b -29.7 3.6 831.6 804.8 -26.8
TrES-3 b -5.08 0.63 2459.9 2433.2 -26.7
HAT-P-19 b -55.2 7.2 334.7 308.4 -26.3
TrES-1 b -7.9 1.4 656.5 646.6 -9.9
XO-2 b 13.0 3.1 490.5 487.1 -3.4
Qatar-1 b -6.1 1.4 1858.6 1855.3 -3.3
HAT-P-12 b -13.2 3.6 395.6 393.6 -2.0
GJ-436 b 11.6 3.3 466.5 465.1 -1.4
XO-1 b -13.3 3.9 265.6 264.5 -1.1
WASP-48 b -24.6 8.0 426.6 426.4 -0.2
WASP-52 b -11.8 3.7 513.6 513.4 -0.2
WASP-3 b -9.0 2.8 503.3 503.2 -0.1
HAT-P-10 b -15.3 1.8 636.0 636.9 0.9
HAT-P-23 b -5.2 5.8 435.6 436.5 0.9
HD189733 b 2.5 1.1 2452.3 2454.7 2.4
WASP-2 b -4.2 1.9 641.7 644.3 2.6
HAT-P-20 b -9.3 4.8 579.4 582.1 2.7
WASP-4 b -1.09 0.66 356.4 359.1 2.7
HAT-P-36 b 5.0 2.7 1099.5 1103.0 3.5
CoRoT-2 b 2.2 1.7 649.9 653.5 3.6
HAT-P-3 b -5.1 4.0 755.0 759.0 4.0
GJ-1214 b -0.9 1.8 563.1 567.3 4.2
HAT-P-37 b 6.6 6.8 431.6 435.9 4.2
Qatar-2 b 1.6 2.2 681.4 685.6 4.2
WASP-33 b -1.4 1.5 2926.9 2931.5 4.6
WASP-43 b -0.7 1.2 744.7 749.5 4.8

Table 2. MCMC Priors

Note-The parameters t_ref and P_ref are the reference ephemeris from the literature, whereas t_best and P_best are the best-fit values from sampling the constant period model. For the constant period and orbital decay models the bounds on t0 and P are generous to allow for exploration of a large parameter space, as there were no issues with converging on multiple solutions.
The restrictions on the apsidal precession priors are discussed in section 4.0.3.

Parameter  Symbol  Unit  Prior  Bounds

Constant Period Model
Transit Center  t0  BJDTDB  Uniform  (t_ref − 0.5, t_ref + 0.5)
Period  P  days  Uniform  (P_ref − 0.5, P_ref + 0.5)

Orbital Decay Model
Transit Center  t0  BJDTDB  Uniform  (t_best − 0.5, t_best + 0.5)
Period  P  days  Uniform  (P_best − 0.5, P_best + 0.5)
Decay Rate  dP/dE  days/epoch  Uniform  (−1 × 10^−7, 1 × 10^−7)

Apsidal Precession Model
Transit Center  t0  BJDTDB  Uniform  (t_best − a, t_best + a), a ∈ {0.01, 0.1}
Sidereal Period  Ps  days  Normal  (P_best − 0.1, P_best + 0.1)
Argument of Periastron  ω0  rad  Uniform  (0, 2π)
Precession Rate  dω/dE  rad/epoch  Log-Uniform  (1 × 10^−6, 1 × 10^−3)
Eccentricity  e  Log-Uniform  (1 × 10^−5, 1 × 10^−1)

Table 3. Model comparison for secondary analysis of top 10 targets

Target  Decay Rate (ms/yr)  1-σ Unc. (ms/yr)  BIC (linear)  BIC (decay)  ∆BIC  BIC (precession)
WASP-12 b -31.6 1.0 2353.1 1911.8 -441.3 1929.4
WASP-4 b -6.21 0.70 404.9 369.5 -35.4 378.8
WASP-10 b -21.9 2.4 1011.3 976.0 -35.3 984.6
HAT-P-19 b -57.7 7.3 270.4 243.1 -27.3 251.5
TrES-5 b -34.5 4.6 541.1 518.9 -22.2 529.0
TrES-1 b -10.9 2.0 190.9 181.2 -9.7 190.2
TrES-2 b -12.6 2.4 878.9 870.6 -8.3 882.4
HAT-P-32 b -7.3 1.5 677.2 669.8 -7.4 679.2
TrES-3 b -2.75 0.78 939.8 938.9 -0.9 982.7
WASP-43 b -1.0 1.2 576.2 580.6 4.4 593.1

Table 4. Model comparison for secondary analysis, using the data variance for ETD transit times

Target  Decay Rate (ms/yr)  1-σ Unc. (ms/yr)  BIC (linear)  BIC (decay)  ∆BIC
WASP-12 b -34.8 4.9 223.7 202.7 -21.0
HAT-P-19 b -64 17 80.6 78.2 -2.4
TrES-1 b -16.0 3.7 73.3 68.3 -5.0
WASP-4 b -6.7 2.4 62.6 62.7 0.1
TrES-2 b -22.0 8.0 159.0 160.3 1.3
TrES-5 b -25 11 118.3 120.5 2.2
HAT-P-32 b -32 12 95.5 96.7 1.2
WASP-10 b -10.1 7.6 131.6 135.5 3.9
WASP-43 b 3.5 4.0 126.5 130.9 4.4
TrES-3 b 0.01 1.9 227.8 233.1 5.3

Table A1.
HAT-P-19 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2455091.53464  +0.00010 −0.00010
Period  P  days  4.00878388  +0.00000015 −0.00000014
Number of Data  n  72
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  261.8
Bayesian Information Criterion  BIC  270.4

Orbital Decay Model
Transit Center  t0  BJDTDB  2455091.53382  +0.00014 −0.00015
Period  P  days  4.00878805  +0.00000055 −0.00000054
Decay Rate  dP/dE  days/epoch  −7.33 × 10^−9  +0.92 × 10^−9 −0.93 × 10^−9
Decay Rate  dP/dt  ms/yr  −57.7  +7.2 −7.3
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  230.3
Bayesian Information Criterion  BIC  243.1

Apsidal Precession Model
Transit Center  t0  BJDTDB  2455091.5263  +0.0013 −0.0011
Sidereal Period  Ps  days  4.00878386  +0.00000014 −0.00000014
Argument of Periastron  ω0  rad  2.623  +0.046 −0.043
Precession Rate  dω/dE  rad/epoch  0.000905  +0.000059 −0.000068
Eccentricity  e  0.0068  +0.0009 −0.0010
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  230.1
Bayesian Information Criterion  BIC  251.5

Table A2.
HAT-P-32 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2454420.447196  +0.000052 −0.000051
Period  P  days  2.150008203  +0.000000038 −0.000000038
Number of Data  n  87
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  668.3
Bayesian Information Criterion  BIC  677.2

Orbital Decay Model
Transit Center  t0  BJDTDB  2454420.447035  +0.000061 −0.000061
Period  P  days  2.15000874  +0.00000012 −0.00000012
Decay Rate  dP/dE  days/epoch  −0.50 × 10^−9  +0.10 × 10^−9 −0.10 × 10^−9
Decay Rate  dP/dt  ms/yr  −7.3  +1.5 −1.5
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  656.4
Bayesian Information Criterion  BIC  669.8

Apsidal Precession Model
Transit Center  t0  BJDTDB  2454420.4458  +0.0008 −0.0034
Sidereal Period  Ps  days  2.150008198  +0.000000037 −0.000000037
Argument of Periastron  ω0  rad  2.55  +0.27 −0.34
Precession Rate  dω/dE  rad/epoch  0.00054  +0.00028 −0.00024
Eccentricity  e  0.0022  +0.0049 −0.0012
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  656.8
Bayesian Information Criterion  BIC  679.2

Table A3.
TrES-1 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2453898.874169  +0.000038 −0.000038
Period  P  days  3.030069676  +0.000000042 −0.000000042
Number of Data  n  66
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  182.6
Bayesian Information Criterion  BIC  190.9

Orbital Decay Model
Transit Center  t0  BJDTDB  2453898.874084  +0.000041 −0.000041
Period  P  days  3.03007055  +0.00000017 −0.00000017
Decay Rate  dP/dE  days/epoch  −1.05 × 10^−9  +0.20 × 10^−9 −0.20 × 10^−9
Decay Rate  dP/dt  ms/yr  −10.9  +2.1 −2.0
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  168.6
Bayesian Information Criterion  BIC  181.2

Apsidal Precession Model
Transit Center  t0  BJDTDB  2453898.8715  +0.0015 −0.0035
Sidereal Period  Ps  days  3.030069678  +0.000000042 −0.000000042
Argument of Periastron  ω0  rad  2.67  +0.15 −0.20
Precession Rate  dω/dE  rad/epoch  0.00057  +0.00024 −0.00019
Eccentricity  e  0.0030  +0.0037 −0.0015
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  169.2
Bayesian Information Criterion  BIC  190.2

Table A4.
TrES-2 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2453957.635474  +0.000020 −0.000020
Period  P  days  2.47061340  +0.000000042 −0.000000042
Number of Data  n  149
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  868.8
Bayesian Information Criterion  BIC  878.9

Orbital Decay Model
Transit Center  t0  BJDTDB  2453957.635119  +0.000069 −0.000070
Period  P  days  2.47061447  +0.00000020 −0.00000020
Decay Rate  dP/dE  days/epoch  −0.99 × 10^−9  +0.18 × 10^−9 −0.19 × 10^−9
Decay Rate  dP/dt  ms/yr  −12.6  +2.4 −2.4
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  855.6
Bayesian Information Criterion  BIC  870.6

Apsidal Precession Model
Transit Center  t0  BJDTDB  2453957.6333  +0.0011 −0.0031
Sidereal Period  Ps  days  2.470613392  +0.000000042 −0.000000042
Argument of Periastron  ω0  rad  2.47  +0.24 −0.26
Precession Rate  dω/dE  rad/epoch  0.00061  +0.00023 −0.00022
Eccentricity  e  0.0030  +0.0039 −0.0014
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  857.4
Bayesian Information Criterion  BIC  882.4

Table A5.
TrES-3 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2454538.581479  +0.000027 −0.000027
Period  P  days  1.306186320  +0.000000014 −0.000000014
Number of Data  n  218
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  929.0
Bayesian Information Criterion  BIC  939.8

Orbital Decay Model
Transit Center  t0  BJDTDB  2454538.581437  +0.000030 −0.000030
Period  P  days  1.306186497  +0.000000050 −0.000000052
Decay Rate  dP/dE  days/epoch  −0.11 × 10^−9  +0.032 × 10^−9 −0.031 × 10^−9
Decay Rate  dP/dt  ms/yr  −2.75  +0.78 −0.76
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  922.8
Bayesian Information Criterion  BIC  938.9

Apsidal Precession Model
Transit Center  t0  BJDTDB  2454538.58143  +0.00010 −0.00055
Sidereal Period  Ps  days  1.306186320  +0.000000011 −0.000000011
Argument of Periastron  ω0  rad  2.5  +2.0 −1.0
Precession Rate  dω/dE  rad/epoch  0.00013  +0.00052 −0.00012
Eccentricity  e  0.0005  +0.0027 −0.0005
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  955.8
Bayesian Information Criterion  BIC  982.7

Table A6.
TrES-5 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2455443.25340  +0.00012 −0.00012
Period  P  days  1.482246663  +0.000000064 −0.000000064
Number of Data  n  109
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  531.8
Bayesian Information Criterion  BIC  541.1

Orbital Decay Model
Transit Center  t0  BJDTDB  2455443.25109  +0.00033 −0.00032
Period  P  days  1.48224954  +0.00000038 −0.00000039
Decay Rate  dP/dE  days/epoch  −1.6 × 10^−9  +0.22 × 10^−9 −0.21 × 10^−9
Decay Rate  dP/dt  ms/yr  −34.5  +4.6 −4.5
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  504.9
Bayesian Information Criterion  BIC  518.9

Apsidal Precession Model
Transit Center  t0  BJDTDB  2455443.2490  +0.0021 −0.0032
Sidereal Period  Ps  days  1.482246666  +0.000000064 −0.000000064
Argument of Periastron  ω0  rad  2.11  +0.24 −0.37
Precession Rate  dω/dE  rad/epoch  0.00058  +0.00021 −0.00014
Eccentricity  e  0.0098  +0.0068 −0.0045
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  505.6
Bayesian Information Criterion  BIC  529.0

Table A7.
WASP-4 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2455880.794607  +0.000018 −0.000018
Period  P  days  1.338231560  +0.000000016 −0.000000015
Number of Data  n  55
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  396.9
Bayesian Information Criterion  BIC  404.9

Orbital Decay Model
Transit Center  t0  BJDTDB  2455880.794814  +0.000029 −0.000029
Period  P  days  1.338231695  +0.000000022 −0.000000022
Decay Rate  dP/dE  days/epoch  −0.263 × 10^−9  +0.029 × 10^−9 −0.029 × 10^−9
Decay Rate  dP/dt  ms/yr  −6.21  +0.70 −0.70
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  357.4
Bayesian Information Criterion  BIC  369.5

Apsidal Precession Model
Transit Center  t0  BJDTDB  2455880.7937  +0.0007 −0.0033
Sidereal Period  Ps  days  1.338231569  +0.000000018 −0.000000017
Argument of Periastron  ω0  rad  2.95  +0.074 −0.089
Precession Rate  dω/dE  rad/epoch  0.00048  +0.00040 −0.00024
Eccentricity  e  0.0027  +0.0079 −0.0017
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  358.8
Bayesian Information Criterion  BIC  378.8

Table A8.
WASP-10 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2454357.857968  +0.000031 −0.000031
Period  P  days  3.092728113  +0.000000044 −0.000000044
Number of Data  n  122
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  1001.7
Bayesian Information Criterion  BIC  1011.3

Orbital Decay Model
Transit Center  t0  BJDTDB  2454357.857571  +0.000054 −0.000054
Period  P  days  3.09272986  +0.00000020 −0.00000020
Decay Rate  dP/dE  days/epoch  −2.15 × 10^−9  +0.24 × 10^−9 −0.24 × 10^−9
Decay Rate  dP/dt  ms/yr  −21.9  +2.4 −2.4
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  961.6
Bayesian Information Criterion  BIC  976.0

Apsidal Precession Model
Transit Center  t0  BJDTDB  2454357.8547  +0.0010 −0.0026
Sidereal Period  Ps  days  3.092728117  +0.000000044 −0.000000044
Argument of Periastron  ω0  rad  2.51  +0.16 −0.12
Precession Rate  dω/dE  rad/epoch  0.00078  +0.00015 −0.00020
Eccentricity  e  0.0037  +0.0027 −0.0011
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  960.6
Bayesian Information Criterion  BIC  984.6

Table A9.
WASP-12 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2454508.978942  +0.000046 −0.000047
Period  P  days  1.091419487  +0.000000020 −0.000000020
Number of Data  n  213
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  2342.4
Bayesian Information Criterion  BIC  2353.1

Orbital Decay Model
Transit Center  t0  BJDTDB  2454508.977198  +0.000074 −0.000074
Period  P  days  1.091421946  +0.000000084 −0.000000084
Decay Rate  dP/dE  days/epoch  −1.093 × 10^−9  +0.036 × 10^−9 −0.036 × 10^−9
Decay Rate  dP/dt  ms/yr  −31.6  +1.0 −1.0
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  1895.7
Bayesian Information Criterion  BIC  1911.8

Apsidal Precession Model
Transit Center  t0  BJDTDB  2454508.9700  +0.0014 −0.0008
Sidereal Period  Ps  days  1.091419481  +0.000000020 −0.000000020
Argument of Periastron  ω0  rad  2.382  +0.031 −0.059
Precession Rate  dω/dE  rad/epoch  0.000336  +0.000026 −0.000013
Eccentricity  e  0.0287  +0.0022 −0.0039
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  1902.6
Bayesian Information Criterion  BIC  1929.4

Table A10.
WASP-43 b Results

Parameter  Symbol  Unit  Value  Uncertainty (1-σ)

Constant Period Model
Transit Center  t0  BJDTDB  2455528.868421  +0.000054 −0.000054
Period  P  days  0.813474243  +0.000000017 −0.000000017
Number of Data  n  116
Degrees of Freedom  k  2
Chi-Square Statistic  χ2  566.6
Bayesian Information Criterion  BIC  576.2

Orbital Decay Model
Transit Center  t0  BJDTDB  2455528.868381  +0.000072 −0.000072
Period  P  days  0.813474304  +0.000000077 −0.000000072
Decay Rate  dP/dE  days/epoch  −0.026 × 10^−9  +0.031 × 10^−9 −0.031 × 10^−9
Decay Rate  dP/dt  ms/yr  −1.0  +1.2 −1.2
Degrees of Freedom  k  3
Chi-Square Statistic  χ2  566.3
Bayesian Information Criterion  BIC  580.6

Apsidal Precession Model
Transit Center  t0  BJDTDB  2455528.86842  +0.00015 −0.00013
Sidereal Period  Ps  days  0.813474243  +0.000000013 −0.000000013
Argument of Periastron  ω0  rad  3.1  +2.2 −2.1
Precession Rate  dω/dE  rad/epoch  0.000015  +0.00015 −0.00001
Eccentricity  e  0.00024  +0.0037 −0.0002
Degrees of Freedom  k  5
Chi-Square Statistic  χ2  569.3
Bayesian Information Criterion  BIC  593.1

Table B1. HAT-P-19 b ETD Transit Centers

Epoch  Transit Center (BJDTDB)  Error  Data Quality  Observer
0 2455091.534936 0.00034 1 Hartman et al. (2010)
97 2455480.388138842 0.00082 3 Ioannidis P., Avdellidou
97 2455480.388168842 0.00077 3 Lomoz F.
101 2455496.4200886358 0.00089 3 Naves R.
101 2455496.420158636 0.00072 3 Vanhuysse M.
108 2455524.482118315 0.00036 1 Muler G.
108 2455524.482168315 0.00103 3 Husar D.
109 2455528.492328273 0.00043 2 Ruiz J.
198 2455885.274367737 0.00056 2 Ayiomamitis A.
202 2455901.312667875 0.0013 3 Vyvlečka M. *
207 2455921.35074807 0.00087 3 Naves R.
207 2455921.3524580705 0.00075 2 Corfini G.
211 2455937.384608243 0.00146 3 Naves R.
212 2455941.395928289 0.00039 3 Emering F.
263 2456145.8438230567 0.00056 2 Shadic S.
264 2456149.8527831356 0.00073 3 Shadic S.
270 2456173.9060636275 0.00052 2 Garlitz J.
294 2456270.1159558822 0.0006 2 Liyun Zhang **
448 2456887.472524667 0.00116 3 Horta F. G.
451 2456899.4953250396 0.00103 3 Barbieri L. 458 2456927.5578159047 0.0004 1 Gillier C. 462 2456943.5935563957 0.00049 2 Benni P. 463 2456947.600836518 0.00061 2 Ian 463 2456947.602516518 0.00072 2 Horta F. G. 464 2456951.6101066396 0.00097 3 Samantha Segall 470 2456975.6658073682 0.0016 3 Stan Shadick *** 547 2457284.3391969916 0.00047 2 Yenal Ogmen 549 2457292.355997161 0.00112 3 Veli-Pekka Hentunen 555 2457316.409147663 0.0005 1 Mark Salisbury 558 2457328.4357879097 0.00029 1 Marc Bretton 558 2457328.4374679103 0.00096 3 Enrique Díez Alonso 560 2457336.416918072 0.00155 3 F. L. Juan, V.S. Antoni 560 2457336.454158073 0.00057 2 Marc Bretton 560 2457336.454948073 0.00055 2 David Molina Table B1 continued Table B1 (continued) B1Epoch Transit Center (BJDTDB) Error Data Quality Observer 561 2457340.462168153 0.00067 2 David Molina 657 2457725.3067425108 0.00022 1 Marc Bretton 657 2457725.307742511 0.00045 1 Marc Bretton 658 2457729.3135025227 0.00049 2 Marc Bretton 658 2457729.314712522 0.00067 2 Marc Bretton 659 2457733.324552533 0.00056 2 Marc Bretton 659 2457733.324672533 0.00076 3 Eric Girardin 662 2457745.350042562 0.00026 1 Marc Bretton 664 2457753.366662579 0.00113 3 Marc Bretton 664 2457753.367362579 0.00056 2 Matthieu Bachschmidt 664 2457753.368102579 0.00104 3 Manfred Raetz 667 2457765.3924741726 0.00106 3 Alfonso Carreño 903 2458711.4655538127 0.00155 3 Didier Laloum 903 2458711.467523813 0.00055 2 Yves Jongen 905 2458719.481983491 0.00089 3 Pavel Pintr 906 2458723.4938333295 0.00054 2 Manfred Raetz 907 2458727.5021631676 0.00072 2 Pavel Pintr 908 2458731.5068830056 0.00058 2 Francesco Scaggiante ** † 910 2458739.5285526793 0.00066 2 Marc Deldem 910 2458739.5293526794 0.00123 3 Bruno Christmann 911 2458743.5357525167 0.0008 3 Bruno Christmann 911 2458743.5379125164 0.00054 2 Yves Jongen 912 2458747.5450223526 0.0006 2 Thomas Grunge 912 2458747.5460923524 0.00068 2 Francesco Scaggiante ** † 914 2458755.562742024 0.00038 1 Yves Jongen 917 2458767.589451529 
0.00043 1 Yves Jongen 1004 2459116.353526361 0.00061 2 Gabriel Murawski 1005 2459120.3633561833 0.00093 3 F. Lomoz 1009 2459136.397005479 0.00035 1 L.Betti F.Mortari 1010 2459140.405125303 0.00113 3 Francois Regembal 1010 2459140.405615303 0.0006 2 Vicenç Ferrando (AAS) 1010 2459140.4060153035 0.00051 1 Pedro Martorell 1010 2459140.411775303 0.00047 1 Yves Jongen 1015 2459160.447924431 0.00053 2 Manfred Raetz 1015 2459160.4485744312 0.00144 3 Francesco Scaggiante ** † 1015 2459160.451024431 0.00054 2 Snaevarr Gudmundsson 1016 2459164.460244257 0.00048 1 Yves Jongen 1113 2459553.3095287136 0.00045 1 Snaevarr Gudmundsson 1117 2459569.344788184 0.00061 2 Snaevarr Gudmundsson 1117 2459569.3456081836 0.00122 3 Rudi Bjorn Rasmussen 1117 2459569.3461581836 0.00074 2 Jens Jacobsen Table B1 continued Table B1 (continued) B1Epoch Transit Center (BJDTDB) Error Data Quality Observer Note-* Mikulecká B., Henych T. ** Qingfeng Pi, Aiying Zhou *** Ryan Cooney, Taylor Bell ** † Danilo Zardin, Marco Fiaschi Table B2. HAT-P-32 b ETD Transit Centers Epoch Transit Center (BJDTDB) Error Data Quality Observer 0 2454420.44712443 9e-05 1 Hartman et al. (2011) 640 2455796.449701029 0.001 3 Moudrá M., Sobotka P. 640 2455796.454151029 0.00085 3 Zibar M. 640 2455796.454951029 0.00058 2 Trnka J. 666 2455852.351390598 0.00072 2 Brosio A. 668 2455856.651900572 0.00026 1 Dax T. 668 2455856.6520805717 0.00029 1 Shadic S. 668 2455856.6526905717 0.00058 2 Yafunyaev M. 669 2455858.802490559 0.00027 1 Shadic S. 670 2455860.952690547 0.00035 1 Shadick S. 679 2455880.301930445 0.0004 1 Corfini G. 689 2455901.8030603575 0.0007 3 Parijat Singh 694 2455912.549210324 0.00125 3 Naves R. 708 2455942.6541902665 0.00059 2 Zaharevitz D. 820 2456183.452702922 0.00063 2 Arena C. 820 2456183.4537929217 0.00044 1 Ayiomamitis A. 821 2456185.606372944 0.00106 3 Naves R. 833 2456211.408503233 0.00087 3 Emering F. 841 2456228.6035034466 0.0006 2 Carreño A. 843 2456232.9047635035 0.00031 1 Kehusmaa P., Harlingten C. 
846 2456239.3550435896 0.00054 1 Horta F. G. 853 2456254.4021838 0.00058 2 Naves R. 873 2456297.399644462 0.00067 2 Ramon Naves 873 2456297.4045044617 0.00094 3 Ferran Grau Horta 974 2456514.556068827 0.00065 2 Horta F. G. 980 2456527.456019134 0.0003 1 Hentunen V. P. 994 2456557.555119869 0.00055 2 Salto J. L. 994 2456557.555199869 0.00053 2 E. Herrero, G. Pascual 1007 2456585.5048705796 0.00028 1 Benishek V. 1014 2456600.555720974 0.0004 1 Benni P. 1021 2456615.6078013796 0.00034 1 Perchak M. 1022 2456617.755701438 0.00047 2 Shadic S. 1033 2456641.4053020943 0.00055 2 Naves R. 1040 2456656.455432522 0.00042 1 Zibar M. Table B2 continued B2 Table B2 (continued) B2Epoch Transit Center (BJDTDB) Error Data Quality Observer 1040 2456656.457572522 0.0006 2 Naves R. 1066 2456712.356464164 0.00052 2 Zibar M. 1168 2456931.6559308483 0.00064 2 Andrew Wilson 1180 2456957.455951669 0.00063 2 Audejean M. 1181 2456959.606411738 0.00025 1 Coline Guyot, Julien Dibon 1187 2456972.50589215 0.00068 2 Ferran Grau Horta 1199 2456998.3068829775 0.00031 1 Mark Salisbury 1206 2457013.3573834603 0.00039 1 Fran Campos 1314 2457245.55843226 0.00034 1 Marc Bretton 1341 2457303.6066339314 0.00046 1 Florian Signoret 1359 2457342.3043250386 0.00056 2 Marc Bretton 1359 2457342.3091750382 0.00051 1 Marc Bretton 1373 2457372.408995888 0.0007 2 David Molina 1379 2457385.306216248 0.00088 3 Marc Bretton 1379 2457385.309306247 0.00019 1 Marc Bretton 1379 2457385.309576248 0.00022 1 Marc Bretton 1483 2457608.913251586 0.00068 2 Sean Curry 1483 2457608.9137915857 0.00057 2 Sean Curry 1507 2457660.5091125686 0.0005 2 Rene Roy 1507 2457660.5096525685 0.00048 1 Marc Deldem 1509 2457664.8103326466 0.0018 3 Martin Fowler 1520 2457688.4585830644 0.00024 1 Mark Salisbury 1522 2457692.762053138 0.00049 2 Michael Fleenor 1522 2457692.764633138 0.00162 3 Martin Fowler 1537 2457725.009333676 0.00018 1 Wonseok Kang 1553 2457759.4088157923 0.0005 1 Veli-Pekka Hentunen 1559 2457772.310505986 0.00025 1 Marc Bretton 
1667 2458004.511648076 0.0005 1 Ferran Grau Horta 1675 2458021.7109581344 0.00025 1 O. Cooper * 1706 2458088.361148248 0.00034 1 Pere Guerra 1840 2458376.4630261525 0.00051 1 Gerald Rousseau 1853 2458404.412495722 0.00077 3 Patrick Sogorb 1854 2458406.5629056883 0.00043 1 Veli-Pekka Hentunen 1856 2458410.8630956193 0.00031 1 Brennan Rodgers, Rina Rast 1873 2458447.417845008 0.00056 2 Yves Jongen 1893 2458490.411764231 0.00048 1 Stephanie Ferratfiat Dagot 1899 2458503.3130439823 0.00034 1 Yves Jongen 1906 2458518.3625036813 0.00069 3 Didier Laloum 2007 2458735.514808019 0.00029 1 Yves Jongen 2047 2458821.513585249 0.00044 1 Yves Jongen 2059 2458847.3117443733 0.00047 1 Juergen Dirscherl Table B2 continued Table B2 (continued) B2Epoch Transit Center (BJDTDB) Error Data Quality Observer 2060 2458849.464224299 0.00035 1 Yves Jongen 2066 2458862.3639038536 0.0004 1 Yves Jongen 2180 2459107.461904331 0.00086 3 Roman Ehrenberger 2180 2459107.465784331 0.00058 2 Jordi Lopesino 2186 2459120.3638937897 0.00056 2 Veli-Pekka Hentunen 2201 2459152.616462434 0.00038 1 Yves Jongen 2213 2459178.415271348 0.00079 2 Fabio Mortari 2213 2459178.416191348 0.00038 1 Nicholas Dahlke ** 2219 2459191.3146808017 0.00028 1 Veli-Pekka Hentunen 2222 2459197.7630805285 0.00135 3 Roger Dymock 2233 2459221.414159522 0.00073 2 Snaevarr Gudmundsson 2354 2459481.56350817 0.00055 2 Yves Jongen 2366 2459507.3662070585 0.00018 1 Anaël Wünsche Note-* E. Helou, J. Lowenthal, R. O'Connor, E. Papineau, A. Peck, L. Stephens, K. Walker ** Johanna Hipp, Annalotta Hipp Table B3. TrES-1 b ETD Transit Centers Epoch Transit Center (BJDTDB) Error Data Quality Observer -344 2452856.5293562952 0.0015 1 Charbonneau et al. (2005) -340 2452868.6510561793 0.0022 1 Charbonneau et al. (2005) -239 2453174.6871521757 0.0004 1 Charbonneau et al. (2005) -236 2453183.775952023 0.0005 1 Charbonneau et al. (2005) -235 2453186.806851972 0.0003 1 Charbonneau et al. (2005) -234 2453189.8361519207 0.0019 1 Charbonneau et al. 
(2005) -215 2453247.4082509303 0.0004 1 Charbonneau et al. (2005) -208 2453268.6206305576 0.00073 3 Walsh B. * -1 2453895.8437196026 0.00018 1 Winn et al. (2007) 0 2453898.8741595424 0.00014 1 Winn et al. (2007) 1 2453901.904469481 0.00019 1 Winn et al. (2007) 3 2453907.9648193605 0.00034 1 Narita et al. (2007) 148 2454347.3239720967 0.00028 1 Andreev M. ** 149 2454350.3537120596 0.00036 1 Andreev M. ** 151 2454356.41492443 0.0001 1 Hrudkova et al. (2008) 152 2454359.44506443 0.00015 1 Hrudkova et al. (2008) 153 2454362.47499443 0.0002 1 Hrudkova et al. (2008) 353 2454968.4888713406 0.00053 3 Trnka J. 384 2455062.4200319457 0.00037 1 LB *** 384 2455062.420881945 0.00046 2 Dřevěný R., Kalisch T. 384 2455062.4216419454 0.00053 2 Sauer T. Table B3 continued B3 Table B3 (continued) B3Epoch Transit Center (BJDTDB) Error Data Quality Observer 626 2455795.698733075 0.00055 2 Walter B. ** † 626 2455795.6997930747 0.00064 2 Walter B. ** † 626 2455795.700673075 0.00053 1 Walter B. ** † 717 2456071.4345378657 0.00055 2 Carreño A. 718 2456074.463297911 0.00112 3 Emering F. 729 2456107.794528403 0.00032 1 Shadic S. 777 2456253.2406419395 0.00105 3 Sokov E. N. 832 2456419.892113591 0.00051 2 Hose K. 852 2456480.493183976 0.00028 1 Nicolas E. 852 2456480.493443976 0.00035 1 Salisbury M. 854 2456486.5557740084 0.00039 1 Schteinman G. M. 879 2456562.3030643417 0.00081 1 Bahar E. * † † 952 2456783.5003144294 0.00043 1 Zibar M. 952 2456783.500524429 0.00055 2 Marchini A. 952 2456783.5016644294 0.00079 3 Dittler U. 985 2456883.4922939567 0.00064 2 Horta F. G. 986 2456886.522203938 0.00056 2 Antonio Zanardo 990 2456898.6428938615 0.00049 2 Alton K. B. 
1087 2457192.5608708654 0.00039 1 David Molina 1188 2457498.5967773143 0.00047 1 Marc Bretton 1217 2457586.4680355466 0.00038 1 Marc Bretton 1217 2457586.4689355465 0.00053 2 Ramon Naves 1350 2457989.4687585095 0.00046 2 Mario Morales 1450 2458292.4783824445 0.00038 1 Yves Jongen 1482 2458389.4359507435 0.00111 3 Rene Roy 1490 2458413.6778503587 0.00047 3 Benjamen Smith 1581 2458689.414436853 0.00039 1 Yves Jongen 1583 2458695.473676796 0.00042 1 Alberto Tomatis 1583 2458695.4738267963 0.0005 2 Massaro Salvatore 1584 2458698.504006768 0.00056 2 Mario Morales 1584 2458698.5042267684 0.0005 2 Aleksandra Selezneva 1584 2458698.5047567682 0.0004 1 Yves Jongen 1585 2458701.5347167403 0.00056 2 Yves Jongen 1585 2458701.53555674 0.00051 2 Vicenç Ferrando 1682 2458995.451815497 0.0003 1 Stephane Ferratfiat 1682 2458995.4520054967 0.00036 1 Stephane Ferratfiat 1683 2458998.4816754963 0.00046 2 Anaël Wünsche 1684 2459001.511775497 0.00018 1 Marc Bretton 1713 2459089.3832856687 0.00046 2 Veli-Pekka Hentunen 1713 2459089.383555669 0.00044 1 Yves Jongen 1815 2459398.4497184516 0.00032 1 Anaël Wünsche Table B3 continued Table B3 (continued) B3Epoch Transit Center (BJDTDB) Error Data Quality Observer 1817 2459404.5098685343 0.00045 1 Samuel Diaz Lopez 1817 2459404.5103085344 0.00073 2 Javier de Elias 1844 2459486.32228978 0.00053 1 Alessandro Marchini 1845 2459489.350529831 0.00108 3 Giuseppe Conzo, Zlatko Orbanić 1845 2459489.352429831 0.00047 2 Francesco Scaggiante † † † 1845 2459489.3524598307 0.00038 1 Ivo Peretto Note-* Balonek T., Iadanza K. ** Kuznietsova Y., Krushevska V. *** JT, MK, RD, TK, BH, MM, HK, JS (names unknown) ** † Strickland W., Soriano R. * † † Sevim K., Bastuk O. † † † Danilo Zardin, Marco Fiaschi Table B4. TrES-2 b ETD Transit Centers Epoch Transit Center (BJDTDB) Error Data Quality Observer 0 2453957.6355417324 0.00038 1 Holman et al. (2007) 13 2453989.7536113295 0.00029 1 Holman et al. (2007) 15 2453994.694681268 0.00031 1 Holman et al. 
(2007) 34 2454041.6365406993 0.0003 1 Holman et al. (2007) 242 2454555.5269558984 0.00123 3 Brát L. 268 2454619.760645541 0.0013 2 Gary 281 2454651.876345383 0.0007 2 Moon 293 2454681.523145253 0.0021 3 Naves 316 2454738.3511950565 0.0009 3 Kocián R. 391 2454923.644556494 0.0007 2 AXA 393 2454928.588676497 0.00112 3 Přibík V. 399 2454943.4139565085 0.00138 3 Brát L. 404 2454955.76324952 5e-05 1 Kepler 405 2454958.233956523 5.1e-05 1 Kepler 406 2454960.704515526 5e-05 1 Kepler 407 2454963.1751195285 5e-05 1 Kepler 408 2454965.6456875317 5e-05 1 Kepler 409 2454968.1163725345 5e-05 1 Kepler 410 2454970.586968538 5e-05 1 Kepler 410 2454970.587256538 0.001 2 Gregorio 411 2454973.057587541 5.1e-05 1 Kepler 412 2454975.5282265446 0.00074 2 Marino G. 412 2454975.5282355445 5.1e-05 1 Kepler 413 2454977.998809548 5e-05 1 Kepler 414 2454980.4652565513 0.0013 3 naves 414 2454980.4686565516 0.0017 3 Srdoc 414 2454980.4689565515 0.0013 2 Gregorio Table B4 continued Table B4 (continued) B4Epoch Transit Center (BJDTDB) Error Data Quality Observer 414 2454980.4693715516 5.1e-05 1 Kepler 415 2454982.9399685552 5.1e-05 1 Kepler 416 2454985.4106815592 5e-05 1 Kepler 417 2454987.881248563 5e-05 1 Kepler 418 2454990.3518565674 5.1e-05 1 Kepler 419 2454992.8225505715 5e-05 1 Kepler 420 2454995.2930915756 5.1e-05 1 Kepler 421 2454997.7636165805 0.00035 1 Scuderi et al. (2009) 421 2454997.7636835803 5.1e-05 1 Kepler 429 2455017.525956621 0.001 2 Gregorio 433 2455027.408156644 0.0019 3 Srdoc 438 2455039.7613566765 0.0011 3 Garlitz 438 2455039.7655566763 0.0007 2 Norby 438 2455039.766826676 0.00096 3 Vander Haagen G. 440 2455044.70385669 0.0008 3 Norby 442 2455049.6501567042 0.00085 3 Vander Haagen G. 446 2455059.5231967345 0.00076 3 Lomoz F. 446 2455059.530516735 0.00137 3 Manfred Raetz 450 2455069.411906767 0.00069 3Šárka Dyčková 548 2455311.5317081646 0.00077 2 Brát L. 550 2455316.4772882015 0.00094 2 Zambelli R. 555 2455328.8263382935 0.00093 3 Shadick S. 
557 2455333.764658331 0.00107 3 Shadick S., Patrick M. 567 2455358.47312852 0.00071 3 Trnka J. 582 2455395.536908818 0.00071 2 Naves R. 589 2455412.8277789643 0.00064 2 Hose K. 605 2455452.358239319 0.00134 3 Lopresti C. 610 2455464.7104694373 0.00101 3 Shadick S. 612 2455469.6508994848 0.00115 3 Wiggins P. 701 2455689.536021845 0.00134 3 Brát L. 708 2455706.830652037 0.00072 3 Garlitz J. 712 2455716.715242147 0.00064 3 Dax T. 720 2455736.4814623673 0.00111 3 Rui B. 737 2455778.4785528374 0.00124 3 Centenera F. 742 2455790.8312829765 0.0008 3 Shadic S. 744 2455795.7746130326 0.00117 3 Shadic S. 758 2455830.3589634267 0.00083 3 Dangl G. 758 2455830.362813427 0.00078 3 Brát L. 767 2455852.5972936843 0.00086 3 Fountain Ch. 777 2455877.3005739725 0.00022 1 Fernand Emering 837 2456025.5398756764 0.00067 2 Carreño A. Table B4 continued B4 Table B4 (continued) B4Epoch Transit Center (BJDTDB) Error Data Quality Observer 1470 2457589.4390679193 0.00056 2 Marc Bretton 1485 2457626.495617455 0.00065 2 David Molina 1485 2457626.496167455 0.00091 3 Martin Mašek, Petr Mrňák 1487 2457631.437037392 0.00064 2 David Molina 1508 2457683.321076738 0.00035 1 Marc Bretton 1508 2457683.321246738 0.00044 1 Marc Bretton 1621 2457962.500754582 0.00028 1 Marc Bretton 1723 2458214.502671218 0.00109 3 Eric Girardin 1723 2458214.5035112184 0.00131 3 Yves Jongen 1740 2458256.501560668 0.00047 1 Marc Bretton 1752 2458286.149110282 0.0011 3 Wonseok Kang 1755 2458293.5594901857 0.00116 2 David Molina 1757 2458298.505780122 0.00077 3 Yves Jongen 1776 2458345.4450595295 0.0007 2 David Molina 1791 2458382.5033190865 0.00078 2 Pere Guerra 1793 2458387.445629029 0.00036 1 Marc Bretton 1795 2458392.386858972 0.00066 2 Mario Morales 1802 2458409.6819687765 0.00093 3 Rina Rast 1872 2458582.624527017 0.00063 2 Yves Jongen 1874 2458587.564286971 0.0006 2 Yves Jongen 1878 2458597.4470168785 0.00072 3 Veli-Pekka Hentunen 1880 2458602.393456833 0.0005 1 Veli-Pekka Hentunen 1893 2458634.506636544 0.00102 3 Bruno 
Christmann 1895 2458639.448856501 0.00098 3 Yves Jongen 1895 2458639.449196501 0.0008 2 Anael Wunsche 1912 2458681.4450561474 0.0009 2 Riccardo Papini 1927 2458718.5081258593 0.00072 2 Yves Jongen 1944 2458760.5085155675 0.00073 2 Yves Jongen 1948 2458770.390205505 0.00053 1 Marc Bretton 2012 2458928.5119447894 0.00124 3 Yves Jongen 2065 2459059.449774492 0.00037 1 Ferran Grau Horta 2080 2459096.5095544807 0.00101 3 Yves Jongen 2082 2459101.4532344816 0.00039 1 Anaël Wünsche 2084 2459106.3934644833 0.00058 2 Yves Jongen 2084 2459106.3955944832 0.00086 3 Thomas Grunge 2086 2459111.3326244857 0.00104 3 Francesco Scaggiante ** 2163 2459301.5743450187 0.00063 2 Yves Jongen 2180 2459343.569035228 0.00071 2 Giorgio Baj 2199 2459390.5132154953 0.00095 3 Alberto García Sánchez 2199 2459390.514115495 0.00098 3 Enrique Arce Mansego 2201 2459395.4489055253 0.00085 2 Javier de Elias Table B4 continued Table B4 (continued) B4Note-* Cihan Tugrul Tezcan, Ozgur Basturk ** Danilo Zardin, Marco FiaschiEpoch Transit Center (BJDTDB) Error Data Quality Observer 2201 2459395.4549155254 0.00076 2 Giuseppe Marino 2254 2459526.393296553 0.00054 2 Rene Roy Table B5. TrES-3 b ETD Transit Centers Epoch Transit Center (BJDTDB) Error Data Quality Observer -270 2454185.911171321 0.000198 1 Sozzeti et al. (2009) -260 2454198.97388822 0.000223 1 Sozzetti et al. (2009) -248 2454214.6470391033 0.00028 1 Sozzetti et al. (2009) -247 2454215.952821094 0.000214 1 Sozzetti et al. (2009) -229 2454239.462400929 0.00047 2 Fabjan T. et al. -13 2454521.5984502416 0.00131 3 Brát L. -3 2454534.663170264 0.00017 1 RISE telescope -2 2454535.9689862663 0.000166 1 Sozzetti et al. (2009) 11 2454552.9497113023 0.000147 1 Sozzetti et al. (2009) 24 2454569.929829345 0.000153 1 Sozzetti et al. (2009) 43 2454594.7466834197 0.000253 1 Sozzetti et al. 
(2009) 59 2454615.6462704926 0.00017 1 RISE telescope 68 2454627.4033405394 0.0015 3 Srdoc 72 2454632.6268705605 0.00011 1 RISE telescope 78 2454640.4645405933 0.0009 3 Naves 79 2454641.769740599 0.0008 3 Foote 85 2454649.6070806347 0.00013 1 RISE telescope 86 2454650.9127406403 0.0008 2 Moon 88 2454653.525440653 0.0018 3 Mendez 88 2454653.525780653 0.00037 1 RISE telescope 95 2454662.6697006966 0.00034 1 RISE telescope 101 2454670.4979407364 0.0012 3 Naves 101 2454670.5070407363 0.00033 1 RISE telescope 104 2454674.4249707568 0.00052 1 RISE telescope 111 2454683.568080806 0.00018 1 RISE telescope 114 2454687.4885408278 0.0016 3 Mendez 294 2454922.599484535 0.00079 3 Brát L. 294 2454922.6002645353 0.00054 2 Trnka J. 297 2454926.5188245806 0.00075 3 Přibík V. 306 2454938.274124717 0.00038 1 Cao Ch. 310 2454943.499854779 0.0012 3 Wardak 311 2454944.8055747943 0.00136 3 Tieman B. 311 2454944.806154794 0.0013 3 Dvorak Table B5 continued Table B5 (continued) B5Epoch Transit Center (BJDTDB) Error Data Quality Observer 313 2454947.416364825 0.00071 2 Trnka J. 313 2454947.4169648252 0.00089 3 Fabjan T., Mihelčič M. 313 2454947.417434825 0.00158 3 Marek P. 313 2454947.417494825 0.00078 3 Brát L. 319 2454955.2550449194 0.00063 2 Cao Ch. 333 2454973.541655144 0.0008 2 Gregorio 336 2454977.460565193 0.00089 3 Brát L. 336 2454977.4606551933 0.001 2 Gregorio 336 2454977.4606651934 0.00029 1 GajdošŠ., Jakšová I. 336 2454977.4610851933 0.00067 3 Trnka J. 340 2454982.6839552596 0.0007 2 Norby 349 2454994.4399554105 0.0007 2 Naves 363 2455012.726155651 0.0005 2 Norby 372 2455024.4880558103 0.001 3 Srdoc 372 2455024.48895581 0.0009 2 Naves 385 2455041.4638560433 0.001 3 Naves 385 2455041.4642160437 0.00112 3 Benishek V. 388 2455045.3823360982 0.00062 3 Hynek T. 389 2455046.6894461173 0.001 3 Tieman B. 389 2455046.689976117 0.00058 2 Vander Haagen G. 411 2455075.422316529 0.0006 2 GajdošŠ., Jakšová I. 428 2455097.629506857 0.00108 3 Shadick S. 578 2455293.5568600697 0.0006 3 Dangl G. 
579 2455294.8645900916 0.00038 1 Jae Woo Lee et al. (2010) 591 2455310.5381603586 0.00068 3 Trnka J. 591 2455310.538200359 0.00067 3 Brát L. 595 2455315.7620004476 0.00061 2 Westall K. 598 2455319.6814405145 0.00037 2 Vanhuysse M. 604 2455327.5166506474 0.00054 3 Saral G. 628 2455358.867121182 0.00018 1 Jae Woo Lee et al. (2010) 631 2455362.7861512485 0.00045 2 Garlitz I. 633 2455365.396041293 0.0007 3 Saral G. 654 2455392.8274417645 0.00042 2 Ken Hose 695 2455446.3814127035 0.00037 1 Scarmato T. 709 2455464.6684130295 0.00059 2 Shadick S. 834 2455627.941185886 0.00045 2 Shadic S. 837 2455631.8583559515 0.00027 1 Tanya Dax * 847 2455644.919696168 0.00093 3 Shadic S. 862 2455664.5143364887 0.00065 3 Jánov R. 876 2455682.8004767834 0.00032 1 Garlitz J. 886 2455695.8620169912 0.0006 2 Shadic S., Tuplip C. Table B5 continued B5 Table B5 (continued) B5Epoch Transit Center (BJDTDB) Error Data Quality Observer 891 2455702.3935770947 0.00065 3 Ayiomamitis A. 924 2455745.4979577563 0.00043 2 Gillier Ch., Montaigut R. 940 2455766.3966180654 0.00118 3 Ruocco N. 993 2455835.6262490456 0.00046 2 Shadic S. 1005 2455851.29864926 0.00045 1 Sauer T. 1127 2456010.6533011654 0.00034 1 Horta F. G. 1130 2456014.5716912034 0.00052 2 Horta F. G. 1130 2456014.5725812037 0.00044 1 Martineli F. 1159 2456052.4534515436 0.00073 3 Lomoz F. 1169 2456065.5130416513 0.00043 2 Polák J. 1173 2456070.738331693 0.00084 3 Carroll J. G. 1209 2456117.7616236177 0.00031 1 Hutcheson M. 1218 2456129.5160036976 0.00033 1 Marino G. 1218 2456129.5166736976 0.0006 3 Arena C. 1219 2456130.8231337066 0.00064 2 Curry S. 1221 2456133.433113724 0.00061 2 Romanyuk Y. 1221 2456133.4348537237 0.00104 3 Emering F. 1222 2456134.7410537326 0.00062 2 L. Shannon ** 1245 2456164.783843919 0.00051 2 Connall J. 1260 2456184.3737040292 0.00079 3 Flechsig G. 1415 2456386.834894545 0.00061 3 Kehusmaa P., Harlingten C. 1427 2456402.509924532 0.00047 2 Romanyuk Y. 1427 2456402.5104345316 0.00068 2 Horta F. G. 
1438 2456416.877084512 0.00031 1 Shadic S. 1441 2456420.796114505 0.00062 3 Shadic S. 1443 2456423.4085645 0.00071 2 Carreño A. 1464 2456450.8378044358 0.00066 2 Shadic S. 1464 2456450.838564436 0.00047 2 Kehusmaa P., Harlingten C. 1479 2456470.4304043753 0.00052 2 Marino G. 1489 2456483.492974329 0.00039 1 Salisbury M. 1489 2456483.496254328 0.00062 2 Naves R. 1515 2456517.455024188 0.00057 2 Marino G. 1518 2456521.3718141704 0.0003 1 Marino G. 1518 2456521.3720741705 0.00039 2 Scaggiante F., Zardin D. 1531 2456538.3525040895 0.00084 3 Mollica M., Laudisio F. 1532 2456539.6582840835 0.00047 2 Benni P. 1541 2456551.412894024 0.0005 2 Garcia F. 1554 2456568.3934539333 0.0005 2 Pagel L. 1570 2456589.2942738147 0.00033 1 Salisbury M. 1685 2456739.506592602 0.00058 2 Kleschonok V. 1725 2456791.7512520305 0.00091 3 Shadic S., Aziz U. Table B5 continued B5 Table B5 (continued) B5Epoch Transit Center (BJDTDB) Error Data Quality Observer 1728 2456795.6714519854 0.00076 3 Benni P. 1734 2456803.5081218933 0.00051 2 Zibar M. 1773 2456854.452751263 0.00046 2 Gonzalez J. 1838 2456939.3499600985 0.00049 1 Bretton M. 
1950 2457085.644847847 0.00085 2 David Molina Alonso 1980 2457124.8286271747 0.00057 2 Drew Nikirk 1980 2457124.829207175 0.00061 3 Tatiana Kopchuk 1980 2457124.8298971746 0.00059 3 Matt B 1980 2457124.8309971746 0.00073 2 Kirsti Long 1980 2457124.831517175 0.00045 2 Sara Wisk 2002 2457153.566776662 0.00042 1 Marc Deldem 2008 2457161.4013765194 0.00046 2 Marc Bretton 2008 2457161.4016965195 0.00075 3 Francesco Scaggiante, Danilo Zardin 2016 2457171.8535663285 0.00041 2 Stan Shadick, Ashley Stock 2031 2457191.446515968 0.00046 1 Faustino Garcia 2041 2457204.5094073005 0.00086 3 David Molina 2042 2457205.8134472766 0.00023 1 Oleg Mazurenko 2044 2457208.426267228 0.00069 2 Marc Bretton 2044 2457208.4290972278 0.00028 1 Marc Bretton 2051 2457217.5705770585 0.00049 2 Marc Bretton 2052 2457218.8748970344 0.00054 2 Duane Lee, Sophia Dragowsky 2055 2457222.793576961 0.00032 1 Oleg Mazurenko 2074 2457247.611346499 0.00057 3 Dennis Conti 2093 2457272.4293260365 0.00024 1 Marc Bretton 2093 2457272.4295060365 0.0004 1 Mark Salisbury 2135 2457327.2897050064 0.00027 1 Marc Bretton 2260 2457490.562961855 0.00068 3 Marcos Michel 2276 2457511.4615814444 0.00051 2 Marc Bretton 2290 2457529.747891084 0.00044 2 Oleg Mazurenko 2315 2457562.40261044 0.00065 3 Francesco Scaggiante, Danilo Zardin 2315 2457562.4050604394 0.00041 1 Marc Bretton 2327 2457578.07526013 0.00033 1 Wonseok Kang 2351 2457609.4260295145 0.00031 1 Marc Bretton 2351 2457609.4291395145 0.00072 3 Ramon Naves 2547 2457865.4380364544 0.00024 1 Marc Bretton 2555 2457875.8861362766 0.00062 2 Martin Fowler 2570 2457895.479235946 0.00085 3 Ferratfiat Stephanie 2570 2457895.4804859464 0.00018 1 Marc Bretton 2583 2457912.460325666 0.00042 2 Jean-Luc Martin 2596 2457929.44031539 0.00042 1 Mario Morales Aimar 2603 2457938.5839852435 0.00039 1 David Molina Table B5 continued Table B5 (continued) B5Epoch Transit Center (BJDTDB) Error Data Quality Observer 2609 2457946.422665119 0.00017 1 Marc Bretton 2609 2457946.423585119 0.00048 
1 David Molina 2621 2457962.095454874 0.00018 1 Wonseok Kang 2623 2457964.7088248334 0.00075 3 Serge Bergeron 2642 2457989.525624457 0.00084 3 Rafael Castillo 2658 2458010.424194148 0.00041 1 Mark Salisbury 2802 2458198.5157017983 0.00045 2 Roman Ponča 2825 2458228.556611485 0.00072 2 Derek OKeeffe 2825 2458228.5578814847 0.00065 2 Matthieu Bachschmidt 2838 2458245.5304113147 0.00093 3 Adam Nowak 2841 2458249.4591012765 0.00044 2 Yves Jongen 2863 2458278.192341003 0.0003 1 Wonseok Kang 2864 2458279.5022409908 0.00038 1 Yves Jongen 2867 2458283.4174509556 0.00048 1 David Molina 2867 2458283.4201409556 0.0004 1 Yves Jongen 2877 2458296.483640839 0.0004 1 Yves Jongen 2887 2458309.5413207286 0.00022 1 Marc Bretton 2887 2458309.5447507286 0.00034 1 Yves Jongen 2890 2458313.462190696 0.00046 1 Yves Jongen 2892 2458316.0719906753 0.00022 1 Wonseok Kang 2897 2458322.603670623 0.00086 3 David Molina 2899 2458325.215280602 0.00028 1 Wonseok Kang 2903 2458330.4397405623 0.00109 3 F. Lomoz 2914 2458344.809140457 0.00076 3 Ram Goel 2929 2458364.4008903247 0.00019 1 Marc Bretton 2929 2458364.401270325 0.00042 1 Yves Jongen 2942 2458381.3818202205 0.0005 2 Yves Jongen 3054 2458527.6744596735 0.00029 1 Yves Jongen 3057 2458531.5942996666 0.00056 2 Yves Jongen 3083 2458565.5540296226 0.00042 1 Pere Guerra 3083 2458565.554309623 0.00039 1 Yves Jongen 3122 2458616.493019618 0.00063 2 Anael Wunsche 3132 2458629.5572196287 0.00035 1 Yves Jongen 3135 2458633.476209633 0.00036 1 Yves Jongen 3138 2458637.394659637 0.00038 1 Yves Jongen 3146 2458647.8428396513 0.00016 1 Brennan Rodgers, Rina Rast 3158 2458663.5175896776 0.0006 2 Alessandro Marchini 3158 2458663.5181596777 0.00067 3 Bruno Christmann 3174 2458684.4154097247 0.00051 2 F. Salvaggio *** 3174 2458684.416349725 0.00033 1 Yves Jongen 3174 2458684.4175497247 0.00034 1 Anael Wunsche Table B5 continued Table B5 (continued) B5Note-* Stacy Irwin, and Karissa Haire. ** R. Campbell, D. Russell, and W. Strickland. *** M. 
Banfi, R.Papini, and G. Marino.Table B6. TrES-5 b ETD Transit CentersEpoch Transit Center (BJDTDB) Error Data Quality Observer 3187 2458701.3968897727 0.00032 1 Yves Jongen 3187 2458701.4002997726 0.00102 3 Juan Martínez Sánchez 3197 2458714.459789816 0.00039 1 Yves Jongen 3210 2458731.4375198805 0.00074 2 Josep Gaitan 3210 2458731.439279881 0.00059 2 Alberto Tomatis 3213 2458735.357599897 0.00039 1 Alberto García Sánchez 3213 2458735.357859897 0.00039 1 Yves Jongen 3240 2458770.6236100714 0.00072 2 N. Thomas 3252 2458786.3000701647 0.00091 3 Jens Jacobsen 3252 2458786.300100165 0.00046 2 Anael Wunsche 3252 2458786.300140165 0.00039 1 Manfred Raetz 3377 2458949.5718215886 0.00027 1 Manfred Raetz 3377 2458949.573211589 0.00039 1 Yves Jongen 3377 2458949.5733115887 0.00068 3 Etienne Bertrand 3378 2458950.8789316025 0.00057 2 Yves Jongen 3390 2458966.5522717745 0.00028 1 Manfred Raetz 3406 2458987.4505720134 0.00063 3 Francois Regembal 3406 2458987.4508820134 0.00104 3 Jens Hennig 3406 2458987.4520520135 0.00089 3 Guy Brabant 3407 2458988.7580320286 0.00067 2 N. Thomas, S. Dahlke 3468 2459068.4314830746 0.00105 3 Ehrenberger R. 3468 2459068.435843074 0.00054 2 Alberto García Sánchez 3478 2459081.498203266 0.00024 1 Marc Bretton 3494 2459102.3963535833 0.0014 3 Martin Tylšar 3632 2459282.6490468206 0.0004 2 Etienne Bertrand 3632 2459282.6503368206 0.00035 1 Yves Jongen 3635 2459286.565336899 0.00062 2 Serguei Jourba 3635 2459286.569526899 0.00057 2 Yves Jongen 3648 2459303.550437244 0.00053 2 Yves Jongen 3664 2459324.448087675 0.00042 2 Veli-Pekka Hentunen 3690 2459358.4083683873 0.00047 2 Giuseppe Marino 3710 2459384.533928947 0.00085 3 Alessandro Marchini 3723 2459401.5131093175 0.00051 2 Samuel Diaz Lopez Epoch Transit Center (BJDTDB) Error Data Quality Observer 544 2456249.595185279 0.00108 3 Shadic S. 
Table B6 continued B6 Table B6 (continued) B6Epoch Transit Center (BJDTDB) Error Data Quality Observer 1292 2457358.317884283 0.00092 3 Marc Bretton 1363 2457463.554524246 0.00062 2 Veli-Pekka Hentunen 1371 2457475.4124742285 0.00055 2 Veli-Pekka Hentunen 1375 2457481.343554218 0.00085 3 Veli-Pekka Hentunen 1419 2457546.56186406 0.00125 3 David Molina 1421 2457549.5267740507 0.00039 1 Marc Bretton 1432 2457565.828953998 0.00056 2 Oleg Mazurenko 1446 2457586.5831739255 0.0008 3 David Molina 1452 2457595.4753238917 0.00085 3 David Molina 1481 2457638.4573837165 0.00112 3 David Molina 1487 2457647.3534436775 0.00042 2Š. Gajdoš, J.Šubjak 1489 2457650.3178736647 0.0012 3 Veli-Pekka Hentunen 1516 2457690.339813482 0.00052 2 ESSEIVA Nicolas 1518 2457693.304213468 0.00028 1 Marc Bretton 1547 2457736.2894932576 0.00025 1 Marc Bretton 1585 2457792.6119745243 0.00074 3 Veli-Pekka Hentunen 1651 2457890.4441538425 0.00035 1 Marc Bretton 1682 2457936.391023461 0.00052 2 Yenal Ogmen 1686 2457942.3198134094 0.00053 2 Yenal Ogmen 1693 2457952.6964833187 0.00038 1 Michael Fleenor 1700 2457963.0729932273 0.00047 2 Wonseok Kang 1734 2458013.4698527735 0.00059 3 Ferran Grau Horta 1741 2458023.8438126785 0.00057 3 J Garlitz 1746 2458031.2557226103 0.00045 2 Yenal Ogmen 1749 2458035.701562569 0.00063 3 JGarlitz 1765 2458059.419232348 0.00078 3 Mark Salisbury 1769 2458065.3475322914 0.00083 3 Mario Morales 1842 2458173.5505112167 0.00119 3 Veli-Pekka Hentunen 1909 2458272.864280099 0.00114 3 Joe Garliz 1929 2458302.50680974 0.00055 2 Mark Salisbury 1933 2458308.436299667 0.00059 2 Yves Jongen 1933 2458308.436689667 0.00042 1 Marc Bretton 1935 2458311.3995296303 0.0005 2 Yenal Ogmen 1945 2458326.2240494466 0.00044 2 Wonseok Kang 1956 2458342.5263592442 0.0004 1 Marti Poch 1964 2458354.3845690964 0.00058 3 Yenal Ogmen 1987 2458388.4777786722 0.00026 1 Marc Bretton 2016 2458431.462358143 0.00056 2 Mark Salisbury 2153 2458634.5293655614 0.00059 2 Pere Guerra 2153 2458634.5294755613 0.00063 2 
Anael Wunsche 2155 2458637.495375522 0.00024 1 Marc Bretton Table B6 continued Table B6 (continued) B6Table B7. WASP-4 b ETD Transit CentersEpoch Transit Center (BJDTDB) Error Data Quality Observer 2155 2458637.4954055217 0.00051 2 Yves Jongen 2182 2458677.5164949866 0.00087 3 Ramon Naves 2184 2458680.478934947 0.00099 3 Bruno Christmann 2186 2458683.4463249077 0.00054 2 Yves Jongen 2215 2458726.430114338 0.00046 2 Yves Jongen 2217 2458729.3947442994 0.00041 2 Yves Jongen 2217 2458729.3952342994 0.00026 1 Marc Bretton 2221 2458735.3223542213 0.00042 2 Yenal Ogmen 2238 2458760.5221838932 0.00058 2 Anaël Wünsche 2279 2458821.2931831274 0.00045 2 Mark Salisbury 2352 2458929.4972218275 0.00074 3 Veli-Pekka Hentunen 2414 2459021.394720729 0.00074 2 Nello Ruocco 2433 2459049.5604904005 0.0005 2 Vicenç Ferrando 2437 2459055.487410333 0.00053 2 Yves Jongen 2460 2459089.5777099533 0.00072 3 Yves Jongen 2466 2459098.473869858 0.00032 1 Marc Bretton 2468 2459101.4400298265 0.00054 2 Yves Jongen 2472 2459107.3671997637 0.00035 1 Manfred Raetz 2474 2459110.333919733 0.00041 1 Manfred Raetz 2493 2459138.4938994455 0.00061 2 Ferran Grau Horta 2605 2459304.503627991 0.00052 2 Snaevarr Gudmundsson 2611 2459313.398387921 0.00068 3 Veli-Pekka Hentunen 2727 2459485.3388267807 0.00032 1 Mark Salisbury 2727 2459485.338856781 0.00052 2 Veli-Pekka Hentunen 2752 2459522.3984666145 0.00049 2 Anaël Wünsche 2775 2459556.487556493 0.00064 2 Snaevarr Gudmundsson 2785 2459571.308756449 0.0004 2 Snaevarr Gudmundsson Epoch Transit Center (BJDTDB) Error Data Quality Observer -1433 2453963.108629995 0.00081 1 Gillon et al. (2008) -1133 2454364.5772199957 0.00075 1 Gillon et al. (2008) -1132 2454365.9153884756 0.00025 3 Wilson et al. (2008) -1130 2454368.5924399947 0.00022 1 Gillon et al. (2008) -1128 2454371.2681200043 0.00033 1 Gillon et al. (2008) -1109 2454396.6954099988 5.1e-05 1 Gillon et al. (2008) -884 2454697.7981701 5.5e-05 1 Winn et al. (2009) -846 2454748.6511103 7.2e-05 1 Winn et al. 
(2009) -843 2454752.6659099977 0.00071 1 Dragomir et al. (2011) Table B7 continued Table B7 (continued) B7Epoch Transit Center (BJDTDB) Error Data Quality Observer -594 2455085.886044746 0.0038 1 Dragomir et al. (2011) -586 2455096.59164467 0.0008 3 Tifner -370 2455385.6495647277 0.00027 1 Sauer T. -340 2455425.7960650325 0.00011 1 Eduardo Fernández-Lajús * -304 2455473.9723555245 0.00034 1 Milne G. -301 2455477.9863855722 0.00057 2 Curtis I. -283 2455502.075825877 0.00039 2 Tan TG -280 2455506.090525931 0.00044 2 Curtis I. -61 2455799.1626514993 0.0004 1 TG Tan 225 2456181.897293772 0.00066 3 Evans P. 505 2456556.6021742392 0.00019 1 Colazo C.A., Schneiter, E.M. 517 2456572.659514626 0.00028 1 Miler E. 567 2456639.5727661964 0.00033 1 Miler E. 742 2456873.763190332 0.00027 1 Eduardo Fernández-Lajús * 751 2456885.8073904864 0.00035 1 Mašek M. ** 754 2456889.820350537 0.00021 1 Eduardo Fernández-Lajús * 754 2456889.821500537 8e-05 1 Colazo C. A., Villarreal C. 768 2456908.5547107635 0.00038 1 Quiñones C. 768 2456908.5560507635 0.00059 2 Quiñones C. 804 2456956.7325712824 0.00079 2 Quiñones C. 804 2456956.7344312826 0.0006 2 Quiñones C. 1023 2457249.8046937543 0.00014 1 Eduardo Fernández-Lajús * 1555 2457961.745391956 0.00069 2 Phil Evans 1587 2458004.567670575 0.00047 2 Phil Evans 1599 2458020.6263900483 0.00044 2 H. 
Durantini Luca 1599 2458020.6268100482 0.0005 1 Phil Evans 1840 2458343.1408083774 0.00024 1 Carl R Knight 1887 2458406.038555963 0.00031 1 Carl R Knight 2202 2458827.5814704555 0.0003 1 Yves Jongen 2389 2459077.829033093 0.00037 1 Yves Jongen 2409 2459104.595852447 0.00039 1 Yves Jongen 2412 2459108.6087323534 0.00061 2 Yves Jongen 2415 2459112.622142261 0.00056 2 Yves Jongen 2424 2459124.668001988 0.00038 1 Yves Jongen 2474 2459191.577950603 0.0004 1 Yves Jongen 2483 2459203.623060375 0.00055 2 Yves Jongen 2652 2459429.784257368 0.0004 1 Yves Jongen 2658 2459437.814577308 0.0004 1 Yves Jongen 2670 2459453.8723172 0.00038 1 Anaël Wünsche 2675 2459460.56482716 0.00057 2 Yves Jongen 2690 2459480.6370270588 0.00041 2 Anaël Wünsche 2690 2459480.637467059 0.00047 2 Yves Jongen 2693 2459484.6515770415 0.00036 1 Yves Jongen 2696 2459488.667047026 0.00026 1 Yves Jongen 2699 2459492.680807011 0.00028 1 Yves Jongen 2746 2459555.5779369324 0.00027 1 Yves Jongen
Table B8. WASP-10 b ETD Transit Centers
Epoch Transit Center (BJDTDB) Error Data Quality Observer
0 2454357.85878443 0.00042 1 Christian et al. (2008)
342 2455415.57155999 0.00014 1 Maciejewski et al. (2011)
Note-* M. Gaude, C. Gillier, J. Michelet, JP. Roux ** Lycée Notre Dame de Toutes Aides, France *** N. Ferreira, E. Helou, J. Lowenthal, R. O ** † M. Banfi, R. Papini, F. Salvaggio * † † B.A. Dumitru, D. Bertesteanu
Table B9.
WASP-12 b ETD Transit CentersEpoch Transit Center (BJDTDB) Error Data Quality Observer 1450 2458842.306615231 0.0023 3 Jean Plazas 1450 2458842.3124452312 0.0004 1 Yves Jongen 1527 2459080.4521757816 0.00075 2 Veli-Pekka Hentunen 1527 2459080.4541857815 0.00032 1 Yves Jongen 1530 2459089.7315554284 0.00046 1 Yves Jongen 1537 2459111.385484616 0.00058 2 Pavel Pintr 1538 2459114.475044502 0.00043 1 VojtěchŠkolník 1540 2459120.6585942735 0.0004 1 Yves Jongen 1558 2459176.3277522773 0.00187 3 Valere Perroud 1558 2459176.328612277 0.00033 1 Yves Jongen 1568 2459207.254571207 0.00049 1 Snaevarr Gudmundsson 1636 2459417.5608246773 0.00037 1 TJMS GST 1658 2459485.6009329166 0.00064 2 Kaeouach Aziz 1667 2459513.4359022724 0.00034 1 Mark salisbury 1667 2459513.4376022723 0.00042 1 M. Popescu * † † 1677 2459544.3634616127 0.00062 2 Snaevarr Gudmundsson 1678 2459547.454961551 0.00024 1 Mark Salisbury Epoch Transit Center (BJDTDB) Error Data Quality Observer 0 2454508.97685446 0.0002 1 Hebb et al. (2008) 300 2454836.4033931005 0.0006 2 Hentunen 304 2454840.7686030436 0.00047 1 Tucker et al. (2011) 304 2454840.771193043 0.001 3 Gary 366 2454908.437992057 0.001 3 Naves 387 2454931.3581816694 0.00098 3 Brát L. 601 2455164.923956134 0.00149 3 Shadick S. 601 2455164.9243161334 0.00129 3 Stan Shadick 608 2455172.561815919 0.00044 1 Tucker et al. (2011) 608 2455172.562785919 0.00014 1 Ingemyr M. 632 2455198.756735166 0.00141 3 Tieman B. 661 2455230.4067300046 0.00011 1 Maciejewski et al. (2011) 683 2455254.418869997 0.00014 1 Maciejewski et al. (2011) 685 2455256.596033418 0.00059 3 Latham C., Schwieterman E. 693 2455265.3340531443 0.00129 3 Kučáková H. 914 2455506.53328486 0.00112 3 Naves R. 
Table B9 continued B9 Table B10 (continued) B10Epoch Transit Center (BJDTDB) Error Data Quality Observer 4613 2459281.4245372773 0.00029 1 MarSEC *** 4623 2459289.5593274455 0.0005 2 Paul Benni 4629 2459294.4405775447 0.00046 2 Yves Jongen https://doi.org/10.5281/zenodo.7098460 It is important to note that the BIC is one of many valid model comparison statistics. http://var2.astro.cz/ETD/etd.php?STARNAME=WASP-12& PLANET=b http://var2.astro.cz/ETD/etd.php?STARNAME=HAT-P-32& PLANET=b Table B4 continued Table B6 continued Table B8 continued Bretton M.Table B8 continued Table B9 continued . R Alonso, T M Brown, G Torres, 10.1086/425256The Astrophysical Journal. 613153Alonso, R., Brown, T. M., Torres, G., et al. 2004, The Astrophysical Journal, 613, L153, doi: 10.1086/425256 . Ö Baştürk, S Yalçınkaya, E M Esmer, 10.1093/mnras/staa1758MNRAS. 4964174Baştürk,Ö., Yalçınkaya, S., Esmer, E. M., et al. 2020, MNRAS, 496, 4174, doi: 10.1093/mnras/staa1758 . R V Baluev, E N Sokov, V S Shaidulin, 10.1093/mnras/stv788MNRAS. 4503101Baluev, R. V., Sokov, E. N., Shaidulin, V. S., et al. 2015, MNRAS, 450, 3101, doi: 10.1093/mnras/stv788 . R V Baluev, E N Sokov, H R A Jones, 10.1093/mnras/stz2620MNRAS. 490Baluev, R. V., Sokov, E. N., Jones, H. R. A., et al. 2019, MNRAS, 490, 1294, doi: 10.1093/mnras/stz2620 . R V Baluev, E N Sokov, S Hoyer, 10.1093/mnrasl/slaa069MNRAS. Letters. 49611Baluev, R. V., Sokov, E. N., Hoyer, S., et al. 2020, MNRAS. Letters, 496, L11, doi: 10.1093/mnrasl/slaa069 . S C C Barros, G Boué, N P Gibson, 10.1093/mnras/stt111MNRAS. 4303032Barros, S. C. C., Boué, G., Gibson, N. P., et al. 2013, MNRAS, 430, 3032, doi: 10.1093/mnras/stt111 . K Batygin, P H Bodenheimer, G P Laughlin, F T O&apos;donovan, D Charbonneau, G Á Bakos, 10.1086/519793ApJ. 66337Batygin, K., Bodenheimer, P. H., & Laughlin, G. P. 2016, O'Donovan, F. T., Charbonneau, D., Bakos, G.Á., et al. 2007, ApJ, 663, L37, doi: 10.1086/519793 . 
F T O&apos;donovan, D Charbonneau, G Mandushev, 10.1086/509123The Astrophysical Journal. 65161O'Donovan, F. T., Charbonneau, D., Mandushev, G., et al. 2006, The Astrophysical Journal, 651, L61, doi: 10.1086/509123 . K C Patra, J N Winn, M J Holman, M Gillon, A Burdanov, 10.3847/1538-3881/ab7374The Astronomical Journal. 159150Patra, K. C., Winn, J. N., Holman, M. J., Gillon, M., & Burdanov, A. 2020, The Astronomical Journal, 159, 150, doi: 10.3847/1538-3881/ab7374 . K C Patra, J N Winn, M J Holman, 10.3847/1538-3881/aa6d75The Astronomical Journal. 154Patra, K. C., Winn, J. N., Holman, M. J., et al. 2017, The Astronomical Journal, 154, 4, doi: 10.3847/1538-3881/aa6d75 . K Penev, L G Bouma, J N Winn, J Hartman, 10.3847/1538-3881/aaaf71The Astronomical Journal. 155165Penev, K., Bouma, L. G., Winn, J. N., & Hartman, J. D. 2018, The Astronomical Journal, 155, 165, doi: 10.3847/1538-3881/aaaf71 . R Petrucci, E Jofré, Y Gómez Maqueo Chew, 10.1093/mnras/stz3034MNRAS. Petrucci, R., Jofré, E., Gómez Maqueo Chew, Y., et al. 2019, MNRAS, doi: 10.1093/mnras/stz3034 . R Petrucci, E Jofré, M Schwartz, Astrophysical journal. Letters. 23Petrucci, R., Jofré, E., Schwartz, M., et al. 2013, Astrophysical journal. Letters, 779, L23 . S Poddaný, L Brat, O Pejcha, 10.1051/epjconf/20101106008EPJ Web of Conferences. 116008Poddaný, S., Brat, L., & Pejcha, O. 2011, EPJ Web of Conferences, 11, 6008, doi: 10.1051/epjconf/20101106008 . S Poddaný, L Brát, O Pejcha, 10.1016/j.newast.2009.09.001New Astronomy. 15297Poddaný, S., Brát, L., & Pejcha, O. 2010, New Astronomy, 15, 297, doi: 10.1016/j.newast.2009.09.001 . Ç Püsküllü, F Soydugan, A Erdem, E Budding, 10.1016/j.newast.2017.04.001NewA. 5539Püsküllü, Ç ., Soydugan, F., Erdem, A., & Budding, E. 2017, NewA, 55, 39, doi: 10.1016/j.newast.2017.04.001 . M Rabus, R Alonso, H J Deeg, 10.1017/S1743921308026859Proceedings of the International Astronomical Union. 4432Rabus, M., Alonso, R., Deeg, H. J., et al. 
2008, Proceedings of the International Astronomical Union, 4, 432, doi: 10.1017/S1743921308026859 . M Rabus, H J Deeg, R Alonso, J A Belmonte, J M Almenara, 10.1051/0004-6361/200912252Astronomy and Astrophysics. 5081011Rabus, M., Deeg, H. J., Alonso, R., Belmonte, J. A., & Almenara, J. M. 2009, Astronomy and Astrophysics, 508, 1011, doi: 10.1051/0004-6361/200912252 . S Raetz, G Maciejewski, C Ginski, 10.1093/mnras/stu1505MNRAS. 4441351Raetz, S., Maciejewski, G., Ginski, C., et al. 2014, MNRAS, 444, 1351, doi: 10.1093/mnras/stu1505 . D Ragozzine, A S Wolf, 10.1088/0004-637X/698/2/1778The Astrophysical Journal. 6981778Ragozzine, D., & Wolf, A. S. 2009, The Astrophysical Journal, 698, 1778, doi: 10.1088/0004-637X/698/2/1778 . D Ricci, F G Ramón-Fox, C Ayala-Loera, 10.1086/680233Ricci, D., Ramón-Fox, F. G., Ayala-Loera, C., et al. 2014, doi: 10.1086/680233 G R Ricker, J N Winn, R Vanderspek, 10.1117/1.JATIS.1.1.014003Journal of astronomical telescopes, instruments, and systems, 1, 014003. Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of astronomical telescopes, instruments, and systems, 1, 014003, doi: 10.1117/1.JATIS.1.1.014003 . S Schröter, J H M M Schmitt, H M Müller, 10.1051/0004-6361/201118536Astronomy and Astrophysics. 539Schröter, S., Schmitt, J. H. M. M., & Müller, H. M. 2012, Astronomy and Astrophysics, 539, A97, doi: 10.1051/0004-6361/201118536 . M Seeliger, D Dimitrov, D Kjurkchieva, 10.1093/mnras/stu567MNRAS. 441304Seeliger, M., Dimitrov, D., Kjurkchieva, D., et al. 2014, MNRAS, 441, 304, doi: 10.1093/mnras/stu567 . M Seeliger, M Kitze, R Errmann, 10.1093/mnras/stv1187MNRAS. 4514060Seeliger, M., Kitze, M., Errmann, R., et al. 2015, MNRAS, 451, 4060, doi: 10.1093/mnras/stv1187 . E N Sokov, I A Sokova, V V Dyachenko, 10.1093/mnras/sty1615MNRAS. 480291Sokov, E. N., Sokova, I. A., Dyachenko, V. V., et al. 2018, MNRAS, 480, 291, doi: 10.1093/mnras/sty1615 . E Sonbas, N Karaman, A Özdönmez, 10.1093/mnras/stab3270MNRAS. 
5095102Sonbas, E., Karaman, N.,Özdönmez, A., et al. 2021, MNRAS, 509, 5102, doi: 10.1093/mnras/stab3270 . C Sousa-Silva, L K Mckemmish, K L Chubb, 10.1088/1361-6552/aa8f2aPhysics Education. 5315020Sousa-Silva, C., McKemmish, L. K., Chubb, K. L., et al. 2018, Physics Education, 53, 015020, doi: 10.1088/1361-6552/aa8f2a . J Southworth, M Dominik, U G Jørgensen, 10.1093/mnras/stz2602MNRAS. 4904230Southworth, J., Dominik, M., Jørgensen, U. G., et al. 2019, MNRAS, 490, 4230, doi: 10.1093/mnras/stz2602 . A Sozzetti, G Torres, D Charbonneau, 10.1088/0004-637X/691/2/1145The Astrophysical Journal. 6911145Sozzetti, A., Torres, G., Charbonneau, D., et al. 2009, The Astrophysical Journal, 691, 1145, doi: 10.1088/0004-637X/691/2/1145 . G Tinetti, P Drossart, P Eccleston, 10.1007/s10686-018-9598-xExperimental Astronomy. 46135Tinetti, G., Drossart, P., Eccleston, P., et al. 2018, Experimental Astronomy, 46, 135, doi: 10.1007/s10686-018-9598-x . G Tinetti, P Eccleston, C Haswell, arXiv:2104.04824arXiv e-printsTinetti, G., Eccleston, P., Haswell, C., et al. 2021, arXiv e-prints, arXiv:2104.04824. https://arxiv.org/abs/2104.04824 . J D Turner, L Flagg, A Ridden-Harper, R Jayawardhana, 10.3847/1538-3881/ac686fThe Astronomical Journal. 163Turner, J. D., Flagg, L., Ridden-Harper, A., & Jayawardhana, R. 2022, The Astronomical Journal, 163, doi: 10.3847/1538-3881/ac686f . J D Turner, A Ridden-Harper, R Jayawardhana, 10.3847/1538-3881/abd178Turner, J. D., Ridden-Harper, A., & Jayawardhana, R. 2020, doi: 10.3847/1538-3881/abd178 . M Vanko, G Maciejewski, M Jakubík, 10.1093/mnras/stt502MNRAS. 432944Vanko, M., Maciejewski, G., Jakubík, M., et al. 2013, MNRAS, 432, 944, doi: 10.1093/mnras/stt502 . D M Wilson, M Gillon, C Hellier, 10.1086/586735The Astrophysical Journal. 675113Wilson, D. M., Gillon, M., Hellier, C., et al. 2008, The Astrophysical Journal, 675, L113, doi: 10.1086/586735 . I Wong, A Shporer, S Vissapragada, 10.3847/1538-3881/ac5680The Astronomical Journal. 
163Wong, I., Shporer, A., Vissapragada, S., et al. 2022, The Astronomical Journal, 163, doi: 10.3847/1538-3881/ac5680 . S W Yee, J N Winn, H A Knutson, 10.3847/2041-8213/ab5c16Astrophysical journal. Letters. 8885Yee, S. W., Winn, J. N., Knutson, H. A., et al. 2019, Astrophysical journal. Letters, 888, L5, doi: 10.3847/2041-8213/ab5c16 . R T Zellem, K A Pearson, E Blaser, 10.1088/1538-3873/ab7ee7PASP. 13254401Zellem, R. T., Pearson, K. A., Blaser, E., et al. 2020, PASP, 132, 054401, doi: 10.1088/1538-3873/ab7ee7 . S Zhao, J Jiang-Hui, D Yao, 10.1016/j.chinastron.2018.01.007Chinese Astronomy and Astrophysics. 42Zhao, S., Jiang-hui, J., & Yao, D. 2018, Chinese Astronomy and Astrophysics, 42, 101, doi: 10.1016/j.chinastron.2018.01.007
Title: Transfinite Milnor invariants for 3-manifolds
Authors: Jae Choon Cha, Kent E. Orr
DOI: 10.4171/jems/1328
arXiv: 2002.03208 (https://arxiv.org/pdf/2002.03208v2.pdf)
Corpus ID: 211068693
Transfinite Milnor invariants for 3-manifolds

Jae Choon Cha and Kent E. Orr

Sep 2021

Abstract. In his 1957 paper, John Milnor introduced link invariants which measure the homotopy class of the longitudes of a link relative to the lower central series of the link group. Consequently, these invariants determine the lower central series quotients of the link group. This work has driven decades of research with profound influence. One of Milnor's original problems remained unsolved: to extract similar invariants from the transfinite lower central series of the link group. We reformulate and extend Milnor's invariants in the broader setting of 3-manifolds, with his original invariants as special cases. We present a solution to Milnor's problem for general 3-manifold groups, developing a theory of transfinite invariants and realizing nontrivial values.

Introduction

In John Milnor's 1954 Ph.D. thesis [Mil57], he introduced link invariants obtained from the lower central series of the fundamental group. Milnor's work vastly extended the classical linking number, and has influenced decades of fundamental research. Roughly speaking, Milnor's invariants inductively measure whether the fundamental group of the exterior of a given link has the same lower central series quotients as that of the free group [Mil57]. Another key feature of Milnor's invariants, due to Stallings [Sta65], is their invariance under link concordance, and more generally under homology cobordism of the link exterior. Invariance under homology cobordism seeds a fundamental connection between Milnor's invariants and the topology of 4-manifolds. Although seldom noted, the first part of Milnor's paper [Mil57] concerns fundamental groups of exteriors of links in an arbitrary 3-manifold, while the latter part of the paper, as well as most subsequent research of others, focuses on the special case of links in $S^3$.
The following problem posed by Milnor in [Mil57] has remained unsolved for more than 60 years.

Milnor's Problem [Mil57, p. 52, Problem (b)]. Find a method of attacking the transfinite lower central series quotients and extracting information from it.

Our contribution. In this paper, we develop new families of transfinite invariants for closed, orientable 3-manifolds. For one family of these invariants we find striking parallels to Milnor's link invariants, leading us to name that family of invariants Milnor invariants of 3-manifolds. The Milnor invariants we introduce are indexed by arbitrary ordinal numbers called the length of the invariant. This allows one to extend the integer grading in Milnor's original work. Our invariants include classical Milnor invariants as a special case. We show that our invariants are highly nontrivial even at infinite ordinals. Thus, we view these invariants as presenting a solution to Milnor's problem within the broad context of oriented closed 3-manifolds.

Indeed, we define four closely related invariants. The invariant we call the Milnor invariant is denoted by $\bar\mu_\kappa(M)$, where $\kappa$ is the length. The invariant $\bar\mu_\kappa(M)$ has the following features.

(i) Invariance: $\bar\mu_\kappa(M)$ is invariant under homology cobordism of 3-manifolds.

(ii) Classical case: Milnor's original link invariants arise as special cases, for finite length.

(iii) Determination: the invariant of length $\kappa$ vanishes if and only if the transfinite lower central quotient of length $\kappa+1$ is the expected one, so the invariants inductively determine the transfinite lower central series quotients.

(iv) Gropes: vanishing of the invariant admits a geometric characterization in terms of gropes. Moreover, this extends to the transfinite length case, using an appropriate notion of transfinite gropes.

(v) Realization: $\bar\mu_\kappa(M)$ lives in a set $R_\kappa(\Gamma)/{\approx}$, whose elements are explicitly characterized. Every element in $R_\kappa(\Gamma)/{\approx}$ is realized as $\bar\mu_\kappa(M)$ for some closed 3-manifold $M$.

This shows that many fundamental characterizing properties of Milnor's link invariants generalize to our 3-manifold Milnor invariants, thereby extending Milnor's theory across all ordinals and 3-manifolds.
Using realizability from (v), we show the aforementioned result that the transfinite theory is highly nontrivial even at infinite ordinals: we exhibit infinitely many explicit 3-manifolds $M$ with vanishing $\bar\mu_\kappa(M)$ for all finite $\kappa$ but with non-vanishing, pairwise distinct $\bar\mu_\omega(M)$ for the first transfinite ordinal $\omega$.

We also define and study a "universal" transfinite invariant, which generalizes Levine's link invariant [Lev89a] over algebraic closures to the case of 3-manifolds. We prove that this universal invariant is highly nontrivial, even for 3-manifolds for which all transfinite Milnor invariants vanish. As mentioned earlier, for links, whether Levine's invariant can be non-zero remains open.

We define two additional invariants central to our paper. The following section describes all four invariants and provides precise statements of our main results as well as applications, (i)-(v). The new results of this paper, especially the framework of transfinite invariants, open multiple avenues for future research. We discuss a small portion of these, including potential applications to link concordance and to Whitney towers, at the end of the paper.

Acknowledgements

The first named author was partly supported by NRF grant 2019R1A3B2067839. The second named author was partly supported by Simons Foundation Grants 209082 and 430351. The second author gratefully acknowledges support from SFB 1085 'Higher Invariants' funded by the Deutsche Forschungsgemeinschaft DFG, University of Regensburg.

Subsequent to the development of our theory, Sergei Ivanov and Roman Mikhailov have begun studying the Bousfield-Kan completion of 3-manifolds [IM]. Their work seems to relate mysteriously with this paper. Their result inspired our use of the examples $M_k$ in Section 13. We thank Sergei and Roman for bringing these examples to our attention. We thank the referee who read our manuscript with keen understanding and insight, and offered numerous valuable comments.
Statements of main results

In this section, we describe our main results. In Section 2.1, we provide a quick review of homology localization. In Sections 2.2-2.9, we define four invariants for 3-manifolds and present their key features, including 3-manifold Milnor invariants. In Sections 2.10-2.11, we discuss examples exhibiting rich information extracted from these invariants.

Throughout this paper, we consider only compact oriented manifolds unless stated otherwise explicitly. The notation $H_*(-)$ denotes homology with integral coefficients.

Homology localization of groups

We begin with a brief introduction to the role of locally finite homology localization of groups, also known as algebraic closure. Readers who are already familiar with this might prefer to skip to the last paragraph (or the last sentence) of this subsection.

The invariance of the original Milnor invariants under concordance and homology cobordism follows from a well-known result of Stallings that the lower central quotients $\pi_1(-)/\pi_1(-)_k$ are preserved under homology equivalence of spaces for all $k < \infty$ [Sta65]. (See also [Cas75].) By contrast, the transfinite lower central quotient $\pi_1(-)/\pi_1(-)_\kappa$ is not invariant under homology cobordism (or homology equivalence). For instance, this follows from an example of Hillman [Hil81]. To extract information invariant under concordance of links and homology cobordism of 3-manifolds, we follow an approach suggested in work of Vogel [Vog78] and Levine [Lev89a, Lev89b], using homology localization of groups.

In general, localization is defined for a given collection $\Omega$ of morphisms in a category $\mathcal{C}$. Briefly, a localization designates a functor $E\colon \mathcal{C} \to \mathcal{C}$ equipped with a natural transformation $A = 1_{\mathcal{C}}(A) \to E(A)$ such that (i) $E(\varphi)$ is an equivalence for all morphisms $\varphi$ in $\Omega$, and (ii) $E$ is universal (initial) among those satisfying (i). A precise definition will be stated in Section 3.
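Here and below, the (transfinite) lower central series is the standard one, defined by transfinite induction; we record the definition, which the discussion above uses implicitly:

```latex
G_1 = G, \qquad
G_{\lambda+1} = [G,\, G_\lambda], \qquad
G_\lambda = \bigcap_{\mu < \lambda} G_\mu \quad \text{for a limit ordinal } \lambda.
```

Thus $G_\omega = \bigcap_{k < \omega} G_k$, and the series can continue to decrease strictly beyond $\omega$.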
Observe that a homology equivalence $X \to Y$ of spaces gives rise to a group homomorphism $\pi_1(X) \to \pi_1(Y)$ which induces an isomorphism on $H_1(-)$ and an epimorphism on $H_2(-)$. We call a group homomorphism with this homological property 2-connected. Due to an unpublished manuscript of Vogel [Vog78] and an independent approach of Levine [Lev89b, Lev89a], there exists a localization, in the category of groups, for the collection of 2-connected homomorphisms $\varphi\colon A \to B$ with $A$ and $B$ finitely presented. (For those who are familiar with Bousfield's HZ localization [Bou74, Bou75], we remark that the key difference of the Vogel-Levine localization from the HZ case is the finite presentability of $A$ and $B$, which turns out to provide a crucial advantage for applications to compact manifolds.)

We observe that Levine's version in [Lev89a] differs slightly from what we use here. With applications to link concordance in mind, he adds the additional requirement that the image $\varphi(A)$ normally generates $B$. (This reflects the property that meridians for a link normally generate the link group $\pi_1(S^3 \smallsetminus L)$.) Levine was aware of both notions of localization. The first detailed exposition of what we use can be found in [Cha08]. We denote this homology localization by $G \to \hat G$ in this paper. See Section 3 for more details.

The following two properties of the homology localization $\hat G$ are essential for our purpose. For brevity, denote the transfinite lower central subgroup $(\hat G)_\kappa$ by $\hat G_\kappa$.

(i) A 2-connected homomorphism $G \to \Gamma$ between finitely presented groups induces an isomorphism $\hat G/\hat G_\kappa \to \hat\Gamma/\hat\Gamma_\kappa$ for every ordinal $\kappa$.

(ii) When $G$ is finitely presented, $\hat G/\hat G_k \cong G/G_k$ for all $k$ finite.

See Section 3, especially Corollary 3.2. So, $\hat G/\hat G_\kappa$ is a transfinite generalization of the finite lower central quotients $G/G_k$, which remains invariant under homology cobordism of compact manifolds for every ordinal $\kappa$.
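To illustrate how properties (i) and (ii) enter downstream arguments (the following chain is our paraphrase of the standard argument, not a displayed formula of the paper): if $W$ is a homology cobordism between closed 3-manifolds $M$ and $N$, then the inclusion-induced homomorphisms $\pi_1(M) \to \pi_1(W)$ and $\pi_1(N) \to \pi_1(W)$ are 2-connected, so property (i) gives, for every ordinal $\kappa$,

```latex
\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa
\xrightarrow{\ \cong\ }
\widehat{\pi_1(W)}/\widehat{\pi_1(W)}_\kappa
\xleftarrow{\ \cong\ }
\widehat{\pi_1(N)}/\widehat{\pi_1(N)}_\kappa .
```

Composing one isomorphism with the inverse of the other yields an isomorphism between the transfinite lower central quotients of $\pi_1(M)$ and $\pi_1(N)$; this is the mechanism behind the homology cobordism invariance results below.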
In this regard, $\hat G/\hat G_\kappa$ is a correct generalization of $G/G_k$ for studies related to homology cobordism, concordance, and disk embedding in dimension 4. From now on, "transfinite lower central quotient" in this paper means $\hat G/\hat G_\kappa$, instead of $G/G_\kappa$, where $\hat G$ is the integer coefficient Vogel-Levine homology localization as constructed in [Cha08].

Definition of the transfinite invariants

Milnor's original work [Mil57] compares the lower central quotients $\pi/\pi_k$ of a link group $\pi = \pi_1(S^3 \smallsetminus L)$ with that of the trivial link, namely the free nilpotent quotients $F/F_k$, inductively on $k$. We provide a relative theory, comparing the lower central quotients of other 3-manifolds to that of a fixed 3-manifold we choose arbitrarily. For instance, when studying links, we can begin with 0-surgery on a nontrivial link, and compare its lower central series quotients to that of other links. By replacing a 3-manifold group with its localization, we extend this theory throughout the transfinite lower central series.

Fix a closed 3-manifold $Y$, which will play the role analogous to the trivial link in Milnor's work. Denote $\Gamma = \pi_1(Y)$. Suppose $M$ is another closed 3-manifold with $\pi = \pi_1(M)$. Our invariants compare the transfinite lower central quotients with that of $\Gamma$. Indeed, we define and study four invariants of $M$: (1) a $\theta$-invariant $\theta_\kappa(M)$ defined as a 3-dimensional homology class, (2) a reduced version of $\theta_\kappa(M)$ living in a certain "cokernel," (3) a 3-manifold Milnor invariant $\bar\mu_\kappa(M)$, and (4) a universal $\theta$-invariant $\theta(M)$. The first three invariants are indexed by arbitrary ordinals $\kappa$. In Sections 2.2-2.9, we describe the definitions and state their key features.

We begin with $\theta_\kappa(M)$. Fix an arbitrary ordinal $\kappa$, and suppose the 3-manifold group $\pi$ admits an isomorphism $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Then $\theta_\kappa(M)$ is defined to be the image of the fundamental class $[M]$ under the composition $H_3(M) \to H_3(\hat\pi/\hat\pi_\kappa) \xrightarrow{f_*} H_3(\hat\Gamma/\hat\Gamma_\kappa)$ induced by the canonical map $\pi \to \hat\pi/\hat\pi_\kappa$ and $f$.

The value of $\theta_\kappa(M)$ in $H_3(\hat\Gamma/\hat\Gamma_\kappa)$ depends on the choice of $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$, and could be denoted $\theta_\kappa(M, f)$.
We choose to omit the reference to $f$ to simplify notation, but we emphasize to the reader that this indeterminacy is often nontrivial. If we choose to remove indeterminacy, we can do so by comparing possible choices for $f$. Doing so, we obtain an invariant of 3-manifolds defined from $\theta_\kappa(M)$ by taking the value of $\theta_\kappa(M)$ in the orbit space $H_3(\hat\Gamma/\hat\Gamma_\kappa)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$ of the action of automorphisms of $\hat\Gamma/\hat\Gamma_\kappa$, thus providing an alternative definition of $\theta_\kappa(M)$ which is independent of the choice of $f$. It turns out that both versions (with and without indeterminacy) are useful, as we discuss below. We will refer to these invariants as the $\theta_\kappa$-invariants of $M$ (relative to $\Gamma$).

Invariance under homology cobordism

Theorem A. The class $\theta_\kappa(M)$ is invariant under homology cobordism. More precisely, if $M$ and $N$ are homology cobordant 3-manifolds with $\pi = \pi_1(M)$ and $G = \pi_1(N)$, then for every ordinal $\kappa$, the following hold.

(1) There is an isomorphism $\phi\colon \hat G/\hat G_\kappa \xrightarrow{\cong} \hat\pi/\hat\pi_\kappa$. Consequently $\theta_\kappa(N)$ is defined if and only if $\theta_\kappa(M)$ is defined.

(2) If $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ is an isomorphism, then $\theta_\kappa(M, f) = \theta_\kappa(N, f \circ \phi)$ in $H_3(\hat\Gamma/\hat\Gamma_\kappa)$, where $\phi$ is the isomorphism in (1).

(3) If $\theta_\kappa(M)$ and $\theta_\kappa(N)$ are defined using arbitrary isomorphisms $\hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ and $\hat G/\hat G_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$, then $\theta_\kappa(M) = \theta_\kappa(N)$ in $H_3(\hat\Gamma/\hat\Gamma_\kappa)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$.

We remark that the isomorphism $\phi$ in (1) and (2) depends on a choice of a homology cobordism. The statement (3) provides an invariant independent of choice. The proof of Theorem A is given in Section 4. It is a straightforward consequence of the definition of the invariant and basic properties of homology localization.

Determination of transfinite lower central quotients

Define the set of homology classes which are realizable by $\theta_\kappa$ to be

(2.1)  $R_\kappa(\Gamma) = \bigl\{\theta \in H_3(\hat\Gamma/\hat\Gamma_\kappa) \bigm| \theta = \theta_\kappa(M)$ for some closed 3-manifold $M$ equipped with an isomorphism $\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa \bigr\}$.
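For later reference, the projection between successive quotients is compatible with the $\theta$-invariants; in our notation (this display is our paraphrase of how the sets of realizable classes are compared):

```latex
p\colon \hat\Gamma/\hat\Gamma_{\kappa+1} \longrightarrow \hat\Gamma/\hat\Gamma_\kappa
\quad\text{induces}\quad
p_*\colon H_3(\hat\Gamma/\hat\Gamma_{\kappa+1}) \to H_3(\hat\Gamma/\hat\Gamma_\kappa),
\qquad
p_*\bigl(\theta_{\kappa+1}(M, g)\bigr) = \theta_\kappa(M, g_0),
```

where $g_0$ is the isomorphism induced by $g$. In particular $p_*$ carries $R_{\kappa+1}(\Gamma)$ into $R_\kappa(\Gamma)$, which is the function between realizable sets used in the next subsection.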
Not all homology classes are necessarily realizable. That is, $R_\kappa(\Gamma) \ne H_3(\hat\Gamma/\hat\Gamma_\kappa)$ in general. Nor is $R_\kappa(\Gamma)$ necessarily a subgroup. See Theorem G below, and Sections 10 and 11. Nonetheless, one can straightforwardly verify that the projection $\hat\Gamma/\hat\Gamma_{\kappa+1} \to \hat\Gamma/\hat\Gamma_\kappa$ induces a function $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)$. Although $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)\}$ is not well defined in the usual way because of the lack of a natural group structure, we can define a notion of vanishing in the cokernel as follows:

Definition 2.2. We say that a class $\theta \in R_\kappa(\Gamma)$ vanishes in $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)\}$ if $\theta$ lies in the image of $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)$.

That is, the invariant $\theta_\kappa(M)$ vanishes in the cokernel if there is a closed 3-manifold $N$ for which $\theta_{\kappa+1}(N)$ is defined (relative to $\Gamma$) and $\theta_{\kappa+1}(N) \in R_{\kappa+1}(\Gamma) \subset H_3(\hat\Gamma/\hat\Gamma_{\kappa+1})$ is sent to $\theta_\kappa(M) \in R_\kappa(\Gamma) \subset H_3(\hat\Gamma/\hat\Gamma_\kappa)$ under the projection-induced homomorphism.

We now state the second main result.

Theorem B. Suppose $M$ is a closed 3-manifold and $\pi = \pi_1(M)$ is endowed with an isomorphism $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Then the following are equivalent.

(1) There exists a lift $\hat\pi/\hat\pi_{\kappa+1} \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_{\kappa+1}$ of $f$ which is an isomorphism.

(2) The invariant $\theta_\kappa(M)$ vanishes in $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)\}$.

As stated in Theorem C below, it is possible to remove the restriction in Theorem B that the next stage isomorphism $\hat\pi/\hat\pi_{\kappa+1} \cong \hat\Gamma/\hat\Gamma_{\kappa+1}$ is a lift, by taking the value of $\theta_\kappa(M)$ modulo the action of $\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$, which is independent of the choice of $\hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. To state the result, we use the following definition: a class $\theta$ vanishes in $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)\}$ if it lies in the image of the composition $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma) \to R_\kappa(\Gamma)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$.

Theorem C. Suppose $M$ is a closed 3-manifold with $\pi = \pi_1(M)$ which admits an isomorphism $\hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Then the following are equivalent.

(1) $\hat\pi/\hat\pi_{\kappa+1}$ is isomorphic to $\hat\Gamma/\hat\Gamma_{\kappa+1}$ (via any isomorphism not required to be a lift).
(2) The invariant $\theta_\kappa(M)$ vanishes in $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)\}$.

The proof is straightforward, using Theorem B.

Proof of Theorem C. Suppose $g\colon \hat\pi/\hat\pi_{\kappa+1} \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_{\kappa+1}$ is an isomorphism. Let $g_0\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ be the isomorphism induced by $g$, and consider $\theta_{\kappa+1}(M) = \theta_{\kappa+1}(M, g)$ and $\theta_\kappa(M) = \theta_\kappa(M, g_0)$. Then $\theta_\kappa(M)$ is the image of $\theta_{\kappa+1}(M)$ under $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)$. This shows (1) ⇒ (2). For the converse, suppose the invariant $\theta_\kappa(M) = \theta_\kappa(M, f)$ vanishes in the cokernel of $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$, where $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ is an arbitrary isomorphism. By composing an automorphism on $\hat\Gamma/\hat\Gamma_\kappa$ with $f$, we may assume that $\theta_\kappa(M)$ lies in the image of $R_{\kappa+1}(\Gamma)$. By Theorem B, there is a lift $\hat\pi/\hat\pi_{\kappa+1} \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_{\kappa+1}$ of $f$.

The notion of vanishing in the cokernel generalizes to an equivalence relation $\sim$ on the set $R_\kappa(\Gamma)$, which we describe below. Recall that if $\theta \in R_\kappa(\Gamma)$, we have $\theta = \theta_\kappa(M)$ for some closed 3-manifold $M$ equipped with an isomorphism $f\colon \widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Let $I_\theta$ be the image of the composition

$R_{\kappa+1}(\pi_1(M)) \longrightarrow R_\kappa(\pi_1(M)) \xrightarrow[\ f_*\ ]{\cong} R_\kappa(\Gamma).$

We show that $\{I_\theta \mid \theta \in R_\kappa(\Gamma)\}$ is a partition of the set $R_\kappa(\Gamma)$ in Lemma 5.3. Consider the associated equivalence relation:

Definition 2.3. Define $\sim$ on $R_\kappa(\Gamma)$ by $\theta \sim \theta'$ if $\theta' \in I_\theta$.

We prove the following result in Section 5.2.

Corollary D. Suppose $M$ and $N$ are closed 3-manifolds with $\pi = \pi_1(M)$ and $G = \pi_1(N)$, which are equipped with isomorphisms $\hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ and $\hat G/\hat G_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Then, there is an isomorphism $\hat\pi/\hat\pi_{\kappa+1} \xrightarrow{\cong} \hat G/\hat G_{\kappa+1}$ which is a lift of the composition $\hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa \xrightarrow{\cong} \hat G/\hat G_\kappa$ if and only if $\theta_\kappa(M) \sim \theta_\kappa(N)$ in $R_\kappa(\Gamma)$.

Note that for a class $\theta \in R_\kappa(\Gamma)$, we have $\theta \sim \theta_\kappa(Y)$ if and only if $\theta$ vanishes in the cokernel of $R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)$. Here $\theta_\kappa(Y)$ is defined using the identity map $\widehat{\pi_1(Y)}/\widehat{\pi_1(Y)}_\kappa \to \hat\Gamma/\hat\Gamma_\kappa$. So, Corollary D generalizes Theorem B.
Milnor invariants of 3-manifolds

Now we define Milnor invariants of 3-manifolds. It combines the features of Theorem C and Corollary D in a natural way. Once again, we remind the reader of our hypothesis. We fix a 3-manifold $Y$ and let $\Gamma = \pi_1(Y)$. We assume that $M$ is a 3-manifold with $\pi = \pi_1(M)$, $\kappa$ is an ordinal, and we have an isomorphism $f\colon \hat\pi/\hat\pi_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. The invariant $\theta_\kappa(M) \in R_\kappa(\Gamma)$ was defined in Definition 2.1. Here $R_\kappa(\Gamma) \subset H_3(\hat\Gamma/\hat\Gamma_\kappa)$ is the subset of realizable classes defined by (2.1). The following is a coarser version of the equivalence relation $\sim$ on $R_\kappa(\Gamma)$ in Definition 2.3.

Definition 2.4. Let $\theta, \theta' \in R_\kappa(\Gamma)$. Write $\theta \approx \theta'$ if there is $\gamma \in \operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$ such that $\gamma_*(\theta') \sim \theta$.

That is, choosing a 3-manifold $M$ equipped with an isomorphism $f\colon \widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$ that satisfies $\theta = \theta_\kappa(M)$, we have $\theta \approx \theta'$ if and only if there is $\gamma \in \operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$ such that

$\gamma_*(\theta') \in \operatorname{Im}\bigl\{ R_{\kappa+1}(\pi_1(M)) \to R_\kappa(\pi_1(M)) \xrightarrow[\ f_*\ ]{\cong} R_\kappa(\Gamma) \bigr\}.$

Since $\sim$ is an equivalence relation and $\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$ is a group, it follows that $\approx$ is an equivalence relation too.

Definition 2.5. The Milnor invariant of length $\kappa$ for $M$ is defined by

$\bar\mu_\kappa(M) := [\theta_\kappa(M)] \in R_\kappa(\Gamma)/{\approx}.$

Here $[\theta_\kappa(M)]$ is the equivalence class of $\theta_\kappa(M) \in R_\kappa(\Gamma)$ under $\approx$.

We have that $\theta$ vanishes in $\operatorname{Coker}\{R_{\kappa+1}(\Gamma) \to R_\kappa(\Gamma)/\operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)\}$ in the sense of Section 2.4 if and only if $\theta \approx \theta_\kappa(Y)$ in $R_\kappa(\Gamma)$. If $\theta_\kappa(M) \approx \theta_\kappa(Y)$, we say that $\bar\mu_\kappa(M)$ vanishes, or $M$ has vanishing Milnor invariant of length $\kappa$.

Theorem E. Let $M$ be a 3-manifold such that $\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \cong \hat\Gamma/\hat\Gamma_\kappa$. Then $\bar\mu_\kappa(M)$ is a well-defined homology cobordism invariant, and the following are equivalent.

(1) $\bar\mu_\kappa(M)$ vanishes.

(2) $\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_{\kappa+1} \cong \hat\Gamma/\hat\Gamma_{\kappa+1}$ (via any isomorphism not required to be a lift).

(3) The invariant $\bar\mu_{\kappa+1}(M)$ is defined.

In addition, for $M$ and $N$ such that $\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \cong \widehat{\pi_1(N)}/\widehat{\pi_1(N)}_\kappa \cong \hat\Gamma/\hat\Gamma_\kappa$, the following two conditions are equivalent.
(4) $\bar\mu_\kappa(M) = \bar\mu_\kappa(N)$ in $R_\kappa(\Gamma)/{\approx}$.

(5) $\widehat{\pi_1(M)}/\widehat{\pi_1(M)}_{\kappa+1} \cong \widehat{\pi_1(N)}/\widehat{\pi_1(N)}_{\kappa+1}$.

Proof. The equivalence of (1)-(3) is the conclusion of Theorem C. Suppose (4) holds. Fix an isomorphism $f\colon \widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. By definition, $\theta_\kappa(M, f) \sim \gamma_*(\theta_\kappa(N, g))$ for some $\gamma \in \operatorname{Aut}(\hat\Gamma/\hat\Gamma_\kappa)$ and $g\colon \widehat{\pi_1(N)}/\widehat{\pi_1(N)}_\kappa \xrightarrow{\cong} \hat\Gamma/\hat\Gamma_\kappa$. Then $\gamma_*(\theta_\kappa(N, g)) = \theta_\kappa(N, \gamma \circ g)$. By Corollary D, it follows that (5) holds. Conversely, when (5) holds, let $\phi$ be the induced isomorphism $\widehat{\pi_1(N)}/\widehat{\pi_1(N)}_\kappa \xrightarrow{\cong} \widehat{\pi_1(M)}/\widehat{\pi_1(M)}_\kappa$. Since $\phi$ lifts, $\theta_\kappa(M, f) \sim \theta_\kappa(N, f \circ \phi)$ by Corollary D. So (4) holds.

Examples showing the nontriviality of the 3-manifold $\bar\mu_\kappa$-invariant of transfinite length are given in Section 2.11. See Theorem L. Section 9 explains how classical Milnor invariants are special cases of the above theory associated to finite ordinals. (See also [Orr89, Lev89a, IO01].) Section 2.7 below states that the $\bar\mu_\kappa$-invariant connects to a notion of transfinite gropes, with details in Section 6.

A transfinite tower interpretation

Corollary D and Theorem E may be viewed as classifications of towers of transfinite lower central quotients of 3-manifold groups. Briefly, we address the following problem: classify extensions of length $\kappa + 1$, by 3-manifold groups, of the length $\kappa$ tower of the transfinite lower central quotients

(2.2)  $\hat\Gamma/\hat\Gamma_\kappa \to \cdots \to \hat\Gamma/\hat\Gamma_\omega \to \cdots \to \hat\Gamma/\hat\Gamma_2 \to \hat\Gamma/\hat\Gamma_1 = \{1\}, \qquad \hat\Gamma/\hat\Gamma_2 \cong \Gamma/\Gamma_2, \ \ \hat\Gamma/\hat\Gamma_1 \cong \Gamma/\Gamma_1,$

of a given 3-manifold group $\Gamma = \pi_1(Y)$. To be precise, we introduce some abstract terminology defined as follows:

(i) A length $\kappa$ tower in a category $\mathcal C$ is a functor $A$ of the (opposite) category of ordinals $\{\lambda \mid \lambda \le \kappa\}$, with arrows $\lambda \to \lambda'$ for $\lambda' \le \lambda$ as morphisms, into $\mathcal C$. Denote it by $\{A(\lambda)\}_{\lambda \le \kappa}$ or $\{A(\lambda)\}$.

(ii) A $\kappa$-equivalence between two towers $\{A(\lambda)\}$ and $\{A'(\lambda)\}$ is a natural equivalence $\phi = \{\phi_\lambda\colon A(\lambda) \xrightarrow{\cong} A'(\lambda)\}_{\lambda \le \kappa}$ between the two functors, that is, each $\phi_\lambda$ is an equivalence and $\phi_\lambda$ is a lift of $\phi_{\lambda'}$ for $\lambda' \le \lambda \le \kappa$.
Say $\{A(\lambda)\}$ and $\{A'(\lambda)\}$ are $\kappa$-equivalent if there is a $\kappa$-equivalence between them.

(iii) A length $\kappa+1$ extension of a length $\kappa$ tower $\{A(\lambda)\}_{\lambda \le \kappa}$ is a length $\kappa+1$ tower $\{B(\lambda)\}_{\lambda \le \kappa+1}$ equipped with a $\kappa$-equivalence between $\{A(\lambda)\}_{\lambda \le \kappa}$ and $\{B(\lambda)\}_{\lambda \le \kappa}$.

(iv) Two length $\kappa + 1$ extensions $\{B(\lambda)\}_{\lambda \le \kappa+1}$ and $\{B'(\lambda)\}_{\lambda \le \kappa+1}$ of a tower of length $\kappa$, $\{A(\lambda)\}_{\lambda \le \kappa}$, are equivalent if the composition $B(\kappa) \xrightarrow{\cong} A(\kappa) \xrightarrow{\cong} B'(\kappa)$ lifts to an equivalence $B(\kappa + 1) \xrightarrow{\cong} B'(\kappa + 1)$.

In this paper, towers and their extensions will always be transfinite lower central quotient towers $\{\hat\pi/\hat\pi_\lambda\}_{\lambda \le \kappa}$ of 3-manifold groups $\pi$. (In this case, $\{\hat\pi/\hat\pi_\lambda\}_{\lambda \le \kappa}$ and $\{\hat G/\hat G_\lambda\}_{\lambda \le \kappa}$ are $\kappa$-equivalent if and only if $\hat\pi/\hat\pi_\kappa$ and $\hat G/\hat G_\kappa$ are isomorphic.) We define a length $\kappa + 1$ extension of (2.2) by a 3-manifold group to be a length $\kappa + 1$ extension of the form $\{\hat\pi/\hat\pi_\lambda\}$ where $\pi = \pi_1(M)$ for some closed 3-manifold $M$.

For towers of 3-manifold groups, the following two problems are formulated naturally:

(1) Classify length $\kappa + 1$ extensions of a given fixed tower of length $\kappa$, modulo equivalence of extensions in the sense of (iv).

(2) Classify length $\kappa + 1$ towers whose length $\kappa$ subtowers are $\kappa$-equivalent to a given fixed tower of length $\kappa$, modulo $(\kappa + 1)$-equivalence in the sense of (ii).

The following results are immediate consequences of Corollary D and Theorem E.

Corollary F. For every ordinal $\kappa$, the following hold.

(1) The set of classes $\{$length $\kappa + 1$ extensions of (2.2) by 3-manifold groups$\}$ modulo equivalence of length $(\kappa + 1)$-extensions of (2.2) is in one-to-one correspondence with $R_\kappa(\Gamma)/{\sim}$, via the invariant $\theta_\kappa$.

(2) The set of classes $\{$length $\kappa + 1$ towers of 3-manifold groups with length $\kappa$ subtower $\kappa$-equivalent to (2.2)$\}$ modulo $(\kappa + 1)$-equivalence is in one-to-one correspondence with $R_\kappa(\Gamma)/{\approx}$, via the 3-manifold Milnor invariant $\bar\mu_\kappa$.

Remark 2.6. The two classifications in Corollary F(1) and (2) are indeed not identical.
More precisely, the natural surjection from the set of classes in Corollary F(1) onto that in (2), or equivalently the surjection R κ (Γ)/∼ → R κ (Γ)/≈, is not injective in general. In fact, for the first transfinite ordinal ω, Theorem I below presents an explicit 3-manifold example for which R ω (Γ)/∼ is an infinite set but R ω (Γ)/≈ is a singleton. Transfinite gropes and the invariants In this paper, we also introduce a previously unexplored notion of transfinite gropes (see Section 6.2), and relate them to the transfinite Milnor invariants. Once again, this extends well known results concerning classical Milnor invariant of links and the existence of finite (asymmetric) gropes. For instance, Freedman and Teichner [FT95] and Conant, Schneiderman and Teichner as summarized in [CST11], as well as work of the first author [Cha18]. In [FT95], for finite k, a grope corresponding to the kth term of the lower central series is called a grope of class k. Briefly, we extend this to the case of an arbitrary transfinite ordinal κ, to define a notion of a grope of (transfinite) class κ. We say that a 4-dimensional cobordism W between two 3-manifolds M and N is a grope cobordism of class κ if H 1 (M ) → H 1 (W ) and H 1 (N ) → H 1 (W ) are isomorphisms and the cokernels of H 2 (M ) → H 2 (W ) and H 2 (N ) → H 2 (W ) are generated by homology classes represented by gropes of class κ. See Definitions 6.5, 6.6 and 6.9 for precise descriptions. Transfinite gropes give another characterization of the equivalent properties in Theorems C and E, as stated below. Addendum to Theorems C and E. Suppose M is a closed 3-manifold such that π 1 (M )/ π 1 (M ) κ is isomorphic to Γ/ Γ κ . Then the following is equivalent to the properties (1) and (2) in Theorem C, and to the properties (1)-(3) of Theorem E. (0) There is a grope cobordism of class κ + 1 between M and another closed 3-manifold N satisfying π 1 (N )/ π 1 (N ) κ+1 ∼ = Γ/ Γ κ+1 . Its proof is given in Section 6.2. 
As a key ingredient of the proof, we develop and use a transfinite generalization of a well-known theorem of Stallings and Dwyer [Sta65, Dwy75]. Since we believe that it will be useful for other applications in the future as well, we present the statement here. Theorem 6.1. Let κ > 1 be an arbitrary ordinal. Suppose f : π → G is a group homomorphism inducing an isomorphism H 1 (π) ∼ = − → H 1 (G). In addition, if κ is a transfinite ordinal, suppose G is finitely generated. Then f induces an isomorphism π/ π κ ∼ = − → G/ G κ if and only if f induces an epimorphism H 2 ( π) −→ H 2 ( G)/ Ker{H 2 ( G) → H 2 ( G/ G λ )} for all ordinals λ < κ. See also Corollaries 6.3, 6.4 and 6.8 in Section 6. Realization of the invariants Our next result is an algebraic characterization of the classes in R κ (Γ). Denote by tH * (−) the torsion subgroup of H * (−). Theorem G. Let κ ≥ 2 be an arbitrary ordinal. A class θ ∈ H 3 ( Γ/ Γ κ ) lies in R κ (Γ) if and only if the following two conditions hold. (1) The cap product ∩ θ : tH^2 ( Γ/ Γ κ ) −→ tH 1 ( Γ/ Γ κ ) ∼ = tH 1 (Γ) is an isomorphism. (2) The composition of ∩ θ : H^1 ( Γ/ Γ κ ) −→ H 2 ( Γ/ Γ κ ) with the projection H 2 ( Γ/ Γ κ ) −→ H 2 ( Γ/ Γ κ )/ Ker{H 2 ( Γ/ Γ κ ) → H 2 ( Γ/ Γ λ )} is surjective for all ordinals λ < κ. We remark that the definition of R κ (Γ) given in (2.1) still makes sense even when Γ is not a 3-manifold group. (In this case R κ (Γ) may be empty.) Theorem G holds for any finitely presented group Γ. The conditions (1) and (2) in Theorem G may be viewed as Poincaré duality properties imposed on the given class θ via the cap product. Also note that if κ is a discrete ordinal, "for all ordinals λ < κ" in (2) can be replaced with "for λ = κ − 1." For a finite ordinal κ, Theorem G is essentially due to Turaev [Tur84]. Our new contribution in Theorem G is to extend his result transfinitely. The proof of Theorem G is given in Section 7.
Among other ingredients, the transfinite generalization of the Stallings-Dwyer theorem [Sta65, Dwy75] stated above as Theorem 6.1 plays a key role in the proof of Theorem G. Universal θ-invariant We remark that if M is equipped with an isomorphism f : π ∼ = − → Γ so that θ(M ) is defined, then f induces an isomorphism π/ π κ ∼ = − → Γ/ Γ κ , and thus the invariant θ κ (M ) is defined for all ordinals κ. Moreover, θ κ (M ) is the image of θ(M ) under R(Γ) → R κ (Γ) induced by the projection Γ → Γ/ Γ κ . Since this factors through R κ+1 (Γ), it follows that θ κ (M ) vanishes in the cokernel of R κ+1 (Γ) → R κ (Γ), or equivalently θ κ (M ) ∼ θ κ (Y ) in R κ (Γ), for every ordinal κ. Consequently, μ κ (M ) vanishes for all κ if θ(M ) is defined. It seems to be hard to prove or disprove the converse. Similarly to the θ κ -invariants (see Theorem A), θ(M ) ∈ R(Γ)/Aut( Γ) is a homology cobordism invariant. We prove this in Theorem 8.1. Also, we prove a realization theorem characterizing homology classes in R(Γ), which is analogous to Theorem G. Theorem H. A homology class θ ∈ H 3 ( Γ) lies in R(Γ) if and only if the following two conditions hold. (1) The cap product ∩ θ : tH^2 ( Γ) → tH 1 ( Γ) ∼ = tH 1 (Γ) is an isomorphism. (2) The cap product ∩ θ : H^1 ( Γ) → H 2 ( Γ) is surjective. We prove Theorem H in Section 8. We remark that Levine proved a realization theorem for his link invariant which lives in H 3 ( F ) [Lev89a]: for all θ ∈ H 3 ( F ), there is a link L for which his invariant is defined and equal to θ. Theorem H says that in the case of general 3-manifolds, not all homology classes in H 3 are necessarily realizable. An example is given in Section 12. It is an open problem whether Levine's link invariant in [Lev89a] is nontrivial. In Theorem J below, for the 3-manifold case, we show that θ(M ) is nontrivial.
A torus bundle example This section gives a complete and careful analysis of one example which illustrates the full collection of transfinite invariants considered in this paper. For the underlying 3-manifold, a torus bundle over a circle, the fundamental group of the fiber is a module over the group ring of covering translations, facilitating our computation of the group localization. We thus compute and analyze the full array of invariants under consideration -θ, µ and θ. Moreover, this example illustrates several fundamental features of the invariants, including: (i) nontriviality of θ ω for the first transfinite ordinal ω, (ii) nontriviality of θ even when all finite length θ and µ vanish, and (iii) torsion values of finite length θ and µ. Let Y be the torus bundle over S 1 with monodromy −1 0 0 −1 . That is, viewing S 1 as the unit circle in the complex plane, Y = S 1 × S 1 × [0, 1]/(z −1 , w −1 , 0) ∼ (z, w, 1). Let Γ = π 1 (Y ) be the fundamental group of the torus bundle. In our earlier work [CO13], we computed the homology localization Γ. Using this, it is not hard to compute its transfinite lower central quotients and see that Γ is transfinitely nilpotent. Indeed, Γ ω+1 is trivial. Our computation starts from this. The first transfinite invariant. For the first transfinite ordinal ω, we compute the homology H 3 ( Γ/ Γ ω ) and its subset of realizable classes R ω (Γ). Moreover we completely determine the two equivalence relations ∼ and ≈ on R ω (Γ), which were defined in Sections 2.4 and 2.5. The computation especially tells us the following. Theorem I. For the torus bundle group Γ, the following hold. (1) The set R ω (Γ)/∼ of equivalence classes of realizable values of θ ω is infinite. Consequently, by Corollary F(1), there are infinitely many distinct equivalence classes of length ω + 1 extensions, by 3-manifolds, of the length ω tower { Γ/ Γ λ } λ≤ω of the torus bundle Y (in the sense of Section 2.6). (2) The set R ω (Γ)/≈ is a singleton. 
Consequently,μ ω (M ) ∈ R ω (Γ)/≈ vanishes whenever it is defined. Also, for all closed 3-manifold groups π such that the length ω tower { π/ π λ } λ≤ω is ω-equivalent to that of Γ (in the sense of Section 2.6), the length ω + 1 tower { π/ π λ } λ≤ω+1 is automatically (ω + 1)-equivalent to that of Γ, by Corollary F (2). Theorem I(1) illustrates that the transfinite θ-invariant of length ω provides highly nontrivial information, even when the transfinite Milnor invariantμ of the same length vanishes. Examples with nonvanishing transfinite Milnor invariants will be given in Section 2.11 below. See Theorem L. The tower interpretations in Theorem I particularly tell us the following: there are 3-manifold groups π such that there is an isomorphism π/ π ω ∼ = − → Γ/ Γ ω which does not lift to an isomorphism between π/ π ω+1 and Γ/ Γ ω+1 but π/ π ω+1 and Γ/ Γ ω+1 are isomorphic. Theorem I is an immediate consequence of Theorem 11.1 and Corollary 11.2, which presents the outcome of our computation for Γ/ Γ ω . See Section 11 for full details. The universal invariant. We also carry out computation of the invariant θ over the homology localization of torus bundle group. Among consequences of the computation, we have the following. Theorem J. For the torus bundle group Γ, the set R(Γ)/Aut( Γ) of realizable values of θ modulo the automorphism action is infinite. This detects the existence of infinitely many distinct homology cobordism classes of closed 3-manifolds M with π = π 1 (M ), such that π ∼ = Γ and thus θ κ (M ) is defined and vanishes in Coker{R κ+1 (Γ) → R κ (Γ)} for all ordinals κ. In particular, for every ordinal κ, the Milnor invariantμ κ (M ) vanishes for these 3-manifolds M . This illustrates that the invariant θ is highly nontrivial for 3-manifolds for which all (transfinite) Milnor type invariants vanish. This may be compared with the case of Levine's link invariant θ(L) ∈ H 3 ( F ) where F is a free group [Lev89a]. 
(For 0-surgery on a link, this invariant is equivalent to J. Y. Le Dimet's link concordance invariant defined in [LD88].) The fundamental question of Levine in [Lev89a], which is still left open, asks whether θ(L) can be nontrivial. Due to Levine's realization result in [Lev89a], this is equivalent to whether H 3 ( F ) is nontrivial. Our result shows that in the case of general 3-manifold groups, the answer is affirmative, even modulo the automorphism action. Theorem J is a consequence of Theorems 12.1 and 12.2. Indeed, in Section 12, we provide a complete computation of R( Γ) and the action of Aut( Γ). Finite length invariants. The torus bundle example also reveals interesting aspects of finite length case of the Milnor type invariant. Our computation of the invariant θ k for finite k proves the following result. Theorem K. For every finite k, the set of realizable classes R k (Γ) is finite, and thus the set of equivalence classes R k (Γ)/∼ is finite. Moreover, 2 ≤ #(R k (Γ)/∼) ≤ 7 · 2 4(k−2) + 1. Consequently, by Corollary F(1), the number of equivalence classes of length k + 1 extensions, by 3-manifold groups, of the length k tower Γ/Γ k → · · · → Γ/Γ 1 (in the sense of Section 2.6) is between 2 and 7 · 2 4(k−2) + 1 inclusively. Theorem K is a consequence of Theorem 10.1 and Corollary 10.3 in Section 10, which provide a more detailed description of the structure of R k (Γ) and related objects. Remark 2.8. Recall that, for m-component links with vanishing Milnor invariants of length < k, the Milnor invariants of length k are integer-valued, and consequently, they are either all trivial, or have infinitely many values. (Indeed the Milnor invariants of length k span a free abelian group of known finite rank. See [Orr89].) However, for the torus bundle case in Theorem K, it turns out that the finite length θ k invariants live in torsion groups, in fact, finite 2-groups. 
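The 2-groups here can be traced back to the monodromy of Y. Since the monodromy is T = −I, the element t − 1 acts on A = Z² as multiplication by −2, so I^k A = 2^k A, and the torsion in each finite lower central quotient is a 2-group. The following sketch (our own illustration in Python, not code from the paper) checks the underlying matrix identity:

```python
# Illustrative sketch (ours, not from the paper): for Y with monodromy
# T = -I, the augmentation ideal I = (t - 1) acts on A = Z^2 by
# (t - 1)a = Ta - a = -2a, so I^k A = 2^k A and each finite quotient
# Gamma/Gamma_{k+1} = (A / 2^k A) x| Z has torsion part (Z/2^k)^2.

def mat_mul(X, Y):
    # 2x2 integer matrix multiplication
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

T = [[-1, 0], [0, -1]]                    # monodromy of the torus bundle Y
M = [[T[i][j] - (1 if i == j else 0) for j in range(2)] for i in range(2)]

P = [[1, 0], [0, 1]]
for k in range(1, 8):
    P = mat_mul(P, M)                     # matrix of (t - 1)^k acting on A
    assert P == [[(-2) ** k, 0], [0, (-2) ** k]]   # so I^k A = 2^k A
```

Since the intersection of the subgroups 2^k A is zero, this is also consistent with the residual nilpotence criterion recalled later (Section 3), and the I-adic completion of A becomes a 2-adic object.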
This leads us to questions related to potential applications to link concordance, and to the higher order Arf invariant conjecture asked by Conant, Schneiderman and Teichner. In the final section of this paper, we discuss these questions, together with other questions arising from our work. Modified torus bundle examples We now consider a family of modified torus bundles {M r | r is an odd integer}, to show that transfinite Milnor invariants of 3-manifolds are nontrivial in general. The modified torus bundles are obtained by changing just one entry in the monodromy matrix of the previous example Y : M r has monodromy −1 r 0 −1 . That is, M r = S 1 × S 1 × [0, 1]/(z −1 , z r w −1 , 0) ∼ (z, w, 1). We remark that discussions with Sergei Ivanov and Roman Mikhailov led us to consider this modification. They studied the Bousfield-Kan completion of 3-manifold groups, with π 1 (M r ) as main examples [IM]. Fix an odd integer d, and choose Y = M d as the basepoint manifold, to which other manifolds M r are to be compared. Let Γ = π 1 (M d ). We prove the following. Theorem L. For any odd integer r,μ k (M r ) ∈ R k (Γ)/≈ is defined and vanishes for every finite k. Moreover, π 1 (M r )/ π 1 (M r ) ω ∼ = Γ/ Γ ω and thusμ ω (M r ) is defined. Butμ ω (M r ) =μ ω (M s ) in R ω (Γ)/≈ if and only if the rational number |r/s| is a square. In particular, the set of realizable values R ω (Γ)/≈ of the 3-manifold Milnor invariant is infinite, and there are infinitely many homology cobordism classes of 3-manifolds with the same finite length Milnor invariants but distinct Milnor invariants of length ω. The following is a consequence of Theorem L combined with Corollary F(2): there are infinitely many 3-manifold groups π, such that the lower central series quotient towers { π/ π κ } κ≤ω of length ω are mutually ω-equivalent (in the sense of Section 2.6), but the length ω + 1 towers { π/ π κ } κ≤ω+1 are not pairwise (ω + 1)-equivalent. 
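Theorem L's criterion is mechanical to test: for odd integers r and s, |r/s| is a square in Q if and only if |rs| is a perfect square, since |r/s| = |rs|/s². A hedged Python sketch (the function name is ours):

```python
# Hedged illustration of Theorem L's criterion (helper name is ours):
# mu_omega(M_r) = mu_omega(M_s) iff |r/s| is a square in Q, which for
# integers is equivalent to |r*s| being a perfect square.
from math import isqrt

def same_mu_omega(r, s):
    assert r % 2 != 0 and s % 2 != 0      # Theorem L concerns odd r, s
    m = abs(r * s)                        # |r/s| = |r*s| / s**2
    return isqrt(m) ** 2 == m

assert same_mu_omega(3, 27)               # |3/27| = (1/3)**2
assert same_mu_omega(3, -27)              # the sign is absorbed by |.|
assert not same_mu_omega(3, 5)            # 15 is squarefree
assert not same_mu_omega(1, 7)            # distinct classes: 1, 3, 5, 7, ...
```

The squarefree parts of the odd integers thus index infinitely many distinct classes, matching the infinitude statement in Theorem L.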
Indeed, we show that R ω (Γ)/≈ is equal to Z × (2) = {a/b ∈ Q | a, b ∈ 2Z + 1} modulo the (multiplicative) subgroup ±(Z × (2) ) 2 = {±α 2 | α ∈ Z × (2) }. So R ω (Γ)/≈ is an infinite set. Homology localization of groups In this section we review basic facts on the homology localization of groups, and prove some results which will be useful in later sections. All results in this section were known to J. P. Levine. We include these results here for completeness, since group localization and the results herein play an essential role in this paper. Preliminaries We begin with the definition of the homology localization which we use. Recall that a group homomorphism π → G is 2-connected if it induces an isomorphism on H 1 (−; Z) and an epimorphism on H 2 (−; Z). Let Ω be the collection of 2-connected homomorphisms between finitely presented groups. A group Γ is local with respect to Ω, or simply local, if for every α : A → B in Ω and every homomorphism f : A → Γ, there is a unique homomorphism g : B → Γ satisfying g • α = f . A localization with respect to Ω is defined to be a pair (E, ι) of a functor E from the category of groups to the full subcategory of local groups and a natural transformation ι = {ι G : G → E(G)} satisfying the following: for each homomorphism f : G → Γ with Γ local, there is a unique homomorphism g : E(G) → Γ such that g • ι G = f . In this paper, we denote E(G) by G. It is a straightforward exercise that a localization is unique if it exists. The existence of a localization with respect to Ω is due to Vogel and Levine. Indeed, in his unpublished manuscript [Vog78], Vogel developed a general theory of localization of spaces with respect to homology, and its group analogue is the localization we discuss. In [Lev89b], Levine developed an alternative approach using certain systems of equations over a group to define a notion of "algebraic closure."
He showed that it exists and is equal to the localization with respect to the subset of our Ω consisting of α : A → B in Ω such that α(A) normally generates B. Although the modified closure with respect to our Ω (that is, omitting the normal closure condition) was known to Levine, this theory first appeared with proof in a paper [Cha08]. As a useful overview on homology localization for geometric topologists, the readers are referred to [CO12, Section 2]. The following properties of the homology localization π are essential for our purpose. Theorem 3.1. (1) If π → G is a 2-connected homomorphism between finitely presented groups, then it induces an isomorphism π ∼ = − → G. (2) For a finitely presented group G, there is a sequence G = P (1) → P (2) → · · · → P (k) → · · · of 2-connected homomorphisms of finitely presented groups P (k) such that the localization G → G is equal to the colimit homomorphism G → colim k P (k). Consequently, G → G is 2-connected. Corollary 3.2. (1) For a finitely presented group G, the homomorphism G → G induces an isomorphism G/G k → G/ G k for each k < ∞. (2) A homology equivalence X → Y between finite CW-complexes X and Y with π = π 1 (X) and G = π 1 (Y ) gives rise to isomorphisms π ∼ = − → G and π/ π κ ∼ = − → G/ G κ for each ordinal κ. Proof. (1) From Theorem 3.1(2), it follows that G → G is 2-connected. By Stallings' Theorem [Sta65], G → G induces an isomorphism G/G k → G/ G k . (2) The induced homomorphism π → G is 2-connected, since K(π, 1) and K(G, 1) are obtained by attaching cells of dimension ≥ 3 to X and Y . Since X and Y are finite, it follows that π ∼ = G by Theorem 3.1(1). Therefore π/ π κ ∼ = G/ G κ for every κ. Acyclic equations and induced epimorphisms on localizations J. P. Levine first proved the following result in [Lev94], in much greater generality than stated here. His proof involved group localization determined by closure with respect to contractible equations, not acyclic equations. For this reason, we include a brief proof here.
However, this Lemma was certainly known to Levine. Lemma 3.3. Suppose f : π → G is a group homomorphism inducing an epimorphism f * : H 1 (π) → H 1 (G), where G is finitely generated. Then the induced homomorphism f : π → G is surjective. A system of equations over a group G is a collection S = {x i = w i } n i=1 , where each x i is a formal variable and w i = w i (x 1 , . . . , x n ) is an element of the free product G * F of G and the free group F = F x 1 , . . . , x n on x 1 , . . . , x n . A solution {g i } n i=1 to the system S is defined to be an ordered tuple of n elements g i ∈ G such that g i = w i (g 1 , . . . , g n ) for i = 1, . . . , n. A group homomorphism φ : G → Γ induces φ * id : G * F → Γ * F , which sends a system of equations S over G to a system φ(S) := {x i = (φ * id)(w i )} over Γ. If {g i } is a solution to S, then {φ(g i )} is a solution to φ(S). Following [Cha08, Definition 4.1], we say an equation x i = w i (x 1 , . . . , x n ) is null-homologous, or acyclic, if w i lies in the kernel of the projection G * F −→ F −→ H 1 (F ) = F/[F, F ]. A group G is Z-closed if every system of acyclic equations over G has a unique solution in G. We remark that these definitions are variations of Levine's notion of contractible equations and algebraic closure in [Lev89a]. Theorem 3.4 ([Lev89a, Cha08]). (1) A group G is local if and only if G is Z-closed. In particular, every system of acyclic equations over G has a unique solution in G. (2) Every element in G is a solution of a system of acyclic equations over G. More precisely, for each g ∈ G, there is a system of acyclic equations S = {x i = w i } n i=1 over G such that the system ι G (S) over G has a solution {g i } n i=1 with g 1 = g. For the proofs of (1) and (2), see [Cha08, Theorem 6.1, Proposition 6.6]. We remark that these proofs follow Levine's approach in [Lev89a, Propositions 3 and 6]. Proof of Lemma 3.3. Suppose f : π → G induces an epimorphism f * : H 1 (π) → H 1 (G). Fix a finite set {a 1 , . . . , a k } which generates G. We begin by writing equations over G which have {a j } as a solution. Let F = F y 1 , . . . , y k . For each a j , since f * is surjective, a j = f (b j ) · c j for some b j ∈ π and c j ∈ [G, G].
Write c j as a product of commutators in the generators a ±1 i , to choose a word u j = u j (y 1 , . . . , y k ) in [F, F ] such that c j = u j (a 1 , . . . , a k ). Let S 0 be the system of the acyclic equations {y j = b j · u j } k j=1 over π. Then {a j } is a solution to the system f (S 0 ) = {y j = f (b j ) · u j } over G. Applying ι G : G → G, it follows that {ι G (a j )} is a solution to the system ι G f (S 0 ) over G. Now, to show that f : π → G is surjective, fix g ∈ G. By Theorem 3.4(2), there is a system S = {x i = w i } n i=1 of acyclic equations over G, with w i ∈ G * F x 1 , . . . , x n , such that the system ι G (S) over G has a solution {g i } with g i ∈ G, g 1 = g. Substitute each occurrence of the generator a j in the word w i with b j · u j , to obtain a new word v i = v i (x n , . . . , x n , y 1 , . . . , y k ). Now consider the system of n+ k equations S ′ = {x i = v i } n i=1 ∪{y j = b j u j } k j=1 , over the group π. Apply the homomorphism ι π : π → π to obtain the system ι π (S ′ ) over π. By Theorem 3.4(2), ι π (S ′ ) has a solution in π, say {r i } n i=1 ∪{s j } k j=1 . That is, r i = ι π v i (r 1 , . . . , r n , s 1 , . . . , s k ) and s j = ι π b j ·u j (s 1 , . . . , s k ). Now, apply f : π → G to the system ι π (S ′ ). By the functoriality of the localization, we have f ι π (S ′ ) = ι G f (S ′ ), and it has { f (r i )} ∪ { f (s j )} as a solution in G. The last k equations of ι G f (S ′ ) form the system ι G f (S 0 ) . By the uniqueness of a solution for ι G f (S 0 ), we have f (s j ) = ι G (a j ). By the uniqueness of a solution for f (S ′ ), it follows that f (r i ) = g i . In particular, f (r 1 ) = g 1 = g. This proves that f : π → G is surjective. Transfinite lower central quotients of local groups are local It is well known that a nilpotent group is local, or equivalently the lower central quotient G/G k of an arbitrary group G is local for all finite k. (See, for instance, [Lev89a,p. 573].) 
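As a concrete finite illustration of this nilpotence (our own example, not one from the paper), one can compute a lower central series directly. For the dihedral group D4 of order 8, the series G ⊃ [G, G] ⊃ [G, [G, G]] terminates at the trivial group in two steps:

```python
# Finite illustration (ours): D4, the dihedral group of order 8, with
# elements (r, f), r mod 4, f in {0, 1}, is nilpotent, so its lower
# central series G = G_1 > G_2 > ... reaches the trivial group.

def mul(a, b):
    (r1, f1), (r2, f2) = a, b
    return ((r1 + (r2 if f1 == 0 else -r2)) % 4, f1 ^ f2)

def inv(a):
    r, f = a
    return ((-r) % 4, 0) if f == 0 else a   # reflections are involutions

G = [(r, f) for r in range(4) for f in range(2)]

def generated(gens):
    # close a generating set under multiplication (enough in a finite group)
    S = {(0, 0)} | set(gens)
    while True:
        T = {mul(x, y) for x in S for y in S}
        if T <= S:
            return S
        S |= T

def next_term(H):
    # [G, H]: subgroup generated by commutators [g, h] = g h g^-1 h^-1
    comms = [mul(mul(g, h), mul(inv(g), inv(h))) for g in G for h in H]
    return generated(comms)

series, H = [set(G)], set(G)
while len(H) > 1:
    H = next_term(H)
    series.append(H)

assert [len(H) for H in series] == [8, 2, 1]   # G_2 = {1, r^2}, G_3 = {1}
```

Each finite quotient G/G_k of such a group is nilpotent and hence local, in line with the statement above.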
But it is no longer true for the ordinary transfinite lower central quotients G/G κ . For instance, for a free group F of rank > 1, F/F ω ∼ = F and this is not local. However, for local groups the following is true. Lemma 3.5. If G is a local group, then G/G κ is local for every ordinal κ ≥ 1. In particular, G/ G κ is local for every group G. We will use the equation-based approach to prove this. Proof. Suppose S = {x i = w i (x 1 , . . . , x n )} n i=1 is a system of acyclic equations over G/G κ . It suffices to show that S has a unique solution in G/G κ . For the existence, lift S to a system over G, by replacing each element of G/G κ which appears in the words w i with a pre-image in G. Since G is local, there is a solution for the lift, and the image of the solution under the projection G → G/G κ is a solution for S. To prove the uniqueness, we proceed by transfinite induction. First, for κ = 1, G/G κ = {e} and thus everything is unique. Suppose κ ≥ 2 and suppose the solution of a system of acyclic equations over G/G λ is unique for all λ < κ. Suppose {x i = g i } and {x i = g ′ i } are two solutions in G/G κ for a given system S of acyclic equations. Suppose that κ is a discrete ordinal. Let p : G/G κ → G/G κ−1 be the projection. Since {x i = p(g i )} and {x i = p(g ′ i )} are solutions of p(S) over G/G κ−1 , p(g i ) = p(g ′ i ) by the uniqueness over G/G κ−1 . So g ′ i = g i c i for some c i ∈ G κ−1 /G κ . Since G κ−1 /G κ is central in G/G κ and the image of w i under (G/G κ ) * F → F lies in [F, F ], it follows that g ′ i = w i (g ′ 1 , . . . , g ′ n ) = w i (g 1 c 1 , . . . , g n c n ) = w i (g 1 , . . . , g n ) = g i . Now, suppose κ is a limit ordinal. For λ < κ, let p λ : G/G κ → G/G λ be the projection. Since {x i = p λ (g i )} and {x i = p λ (g ′ i )} are solutions of p λ (S), it follows that p λ (g i ) = p λ (g ′ i ) by the uniqueness of a solution over G/G λ . That is, g −1 i g ′ i ∈ Ker p = G λ /G κ for each λ < κ. 
Since G κ = ∩ λ<κ G λ , it follows that g i = g ′ i in G/G κ . We remark that when κ is a discrete ordinal in the above proof, the existence of a solution can also be shown under an induction hypothesis that G/G κ−1 is local, without assuming that G is local. Indeed, if {x i = h i } is a solution for p(S) over G/G κ−1 , then for any choice of h ′ i ∈ p −1 (h i ) ⊂ G/G κ , it turns out that the elements g i = w i (h ′ 1 , . . . , h ′ n ) form a solution {x i = g i } for the given S, by a similar argument to the uniqueness proof. See [Lev89a, Proposition 1(c)]. On the other hand, when κ is a limit ordinal, the assumption that G is local is essential for the existence (and necessary; recall the example of F ∼ = F/F ω ). Closure in the completion For a group G, let G = lim ← − k<∞ G/G k be the nilpotent completion. It is well known that G is a local group, essentially by Stallings' theorem. Therefore, there is a unique homomorphism G → G which is compatible with the natural maps from G. Following Levine's approach in [Lev89b], define G = Im{ G → G}. We call G the closure in the completion. It is straightforward to verify that Ker{ G → G} = G ω , that is, G ∼ = G/ G ω , using Stallings' theorem. For later use, we will discuss a special case of a metabelian extension. Let G be an abelian group and A be a ZG-module. Denote the semi-direct product by A ⋊ G. Let ǫ : ZG → Z be the augmentation map, and I := Ker ǫ be the augmentation ideal. Then the lower central subgroup (A ⋊ G) k+1 is equal to I k A, so (A ⋊ G)/(A ⋊ G) k+1 = (A/I k A) ⋊ G. It follows that the nilpotent completion of A ⋊ G is A ⋊ G, where A := lim ← − k<∞ A/I k A is the I-adic completion of A. Also, A ⋊ G is residually nilpotent if and only if ∩ k<∞ I k A = 0. Let S := {r ∈ ZG | ǫ(r) = 1}, and let S −1 A = {a/s | a ∈ A, s ∈ S} be the classical localization with respect to S. By multiplying a and s by −1, one sees that S −1 A is equal to the localization with respect to a larger subset {r ∈ ZG | ǫ(r) = ±1}. Theorem 3.6. Suppose ∩ k<∞ I k A = 0.
Then A ⋊ G = S −1 A ⋊ G. Theorem 3.6 is due to Levine [Lev94, Proposition 3.2]. Indeed, he gave a proof (of a more general statement) for the localization defined in [Lev89a,Lev89b], but just by modifying it slightly, his argument applies to the case of the localization we use (which is defined in [Cha08]) as well. For concreteness and for the reader's convenience, we provide a quick proof. Proof of Theorem 3.6. By [CO13, Theorem A.2], S −1 A ⋊ G is a local group, since S −1 A is the cohn localization of the ZG-module A and G is abelian and thus local. It follows that there is a unique homomorphism A ⋊ G → S −1 A ⋊ G making the following diagram commutative: A ⋊ G A ⋊ G S −1 A ⋊ G We claim that A ⋊ G → S −1 A ⋊ G is surjective. To show this, it suffices to verify that every a/s ∈ S −1 A lies in the image of A ⋊ G. Observe that x = a/s a solution of the equation x = w(x), where w(x) = a + (1 − s)x. Write 1 − s = i n i g i , n i ∈ Z, g i ∈ G. Then, in multiplicative notation, w(x) = a i g i x ni g −1 i , a word in (A ⋊ G) * F x . Since ǫ(s) = 1, we have i n i = 0. That is, the equation x = w(x) over A ⋊ G is acyclic. Therefore, there is a solution z ∈ A ⋊ G for x = w(x), and z must be sent to a/s ∈ S −1 A, since a/s is a solution for x = w(x) in the local group S −1 A ⋊ G. This proves the claim. We claim that A → A = lim ← − k<∞ A/I k A factors through S −1 A. To show this, it suffices to prove that every s ∈ S is invertible in ZG = lim ← − k<∞ ZG/I k . Indeed this is a known fact verified by an elementary argument as follows. Since ǫ(s) = 1, 1 − s ∈ I. So, writing (1 − s) k = 1 − r k · s with r k ∈ ZG, we have r k · s ≡ 1 mod I k . Also, r k+1 ≡ r k mod I k , so (r k ) ∈ lim ← − k<∞ ZG/I k is a multiplicative inverse of the given s. By the second claim, there is a natural homomorphism S −1 A ⋊ G → A ⋊ G = A ⋊ G. Since I k A = 0, the map A → A is injective, and thus S −1 A → A is injective. It follows that S −1 A ⋊ G → A ⋊ G is injective. 
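The elementary inverse (r_k) constructed in the second claim is just the geometric series in 1 − s. A toy Python check (our own encoding, with G = ⟨t⟩ infinite cyclic and s = t, written in the variable x = 1 − t) illustrates it:

```python
# Toy check (ours) of the geometric-series inverse used in the proof:
# in ZG with G = <t>, take s = t, so eps(s) = 1 and 1 - s lies in I.
# Writing x = 1 - t, the partial sums u_k = 1 + x + ... + x^(k-1)
# satisfy s * u_k = 1 - x^k, i.e. s * u_k = 1 mod I^k, so s becomes
# invertible in the I-adic completion lim ZG/I^k.

def poly_mul(p, q):
    # multiply polynomials in x given as coefficient lists
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

s = [1, -1]                       # s = 1 - x  (that is, s = t)
for k in range(1, 8):
    u = [1] * k                   # u_k = 1 + x + ... + x^(k-1)
    prod = poly_mul(s, u)         # expect 1 - x^k
    assert prod == [1] + [0] * (k - 1) + [-1]
    assert all(c == 0 for c in prod[1:k])   # s*u_k - 1 lies in (x^k) = I^k
```

The compatible partial inverses (u_k) assemble into an inverse of s in the inverse limit, exactly as in the claim.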
Now, consider A ⋊ G −→ S −1 A ⋊ G −→ A ⋊ G. The first arrow is surjective by the first claim, and the second arrow is injective, so S −1 A ⋊ G is the image of A ⋊ G in A ⋊ G. That is, S −1 A ⋊ G = A ⋊ G. Invariance under homology cobordism In this section we give a proof of Theorem A, which says that θ κ is invariant under homology cobordism. Indeed, it is a straightforward consequence of the definition and the key property of the homology localization. We provide details for concreteness. Fix a group Γ and an ordinal κ. Recall that for a closed 3-manifold M with π = π 1 (M ) which is equipped with an isomorphism f : π/ π κ ∼ = − → Γ/ Γ κ , the invariant θ κ (M ) is defined to be the image of the fundamental class of M under H 3 (M ) −→ H 3 (π) −→ H 3 ( π) −→ H 3 ( π/ π κ ) f * − −→ H 3 ( Γ/ Γ κ ). Proof of Theorem A. Suppose M and N are homology cobordant closed 3-manifolds with π = π 1 (M ), G = π 1 (N ). Theorem A(1) asserts that there is an isomorphism φ : G/ G κ ∼ = − → π/ π κ . Let W be a homology cobordism between M and N . Then, by Corollary 3.2(2), the inclusions of M and N into W induce isomorphisms π/ π κ ∼ = π 1 (W )/ π 1 (W ) κ ∼ = G/ G κ . Let φ : G/ G κ ∼ = − → π/ π κ be the composition. This is the promised isomorphism. Suppose f : π/ π κ ∼ = − → Γ/ Γ κ is an isomorphism. Let θ κ (M ) and θ κ (N ) be the invariants defined using the isomorphisms f and f • φ. Theorem A(2) asserts that θ κ (M ) = θ κ (N ) in H 3 ( Γ/ Γ κ ). To show this, consider the following commutative diagram. H 3 (M ) H 3 ( π/ π κ ) H 3 (W ) H 3 ( π 1 (W )/ π 1 (W ) κ ) H 3 ( Γ/ Γ κ ) H 3 (N ) H 3 ( G/ G κ ) i * ∼ = ∼ = f * ∼ = j * ∼ = ∼ = (f •φ) * Since the fundamental classes satisfy i * [M ] − j * [N ] = ∂[W ] = 0 in H 3 (W ), θ κ (M ) − θ κ (N ) = 0 in H 3 ( Γ/ Γ κ ). 
From this, it also follows that θ κ (M ) = θ κ (N ) in H 3 ( Γ/ Γ κ )/Aut( Γ/ Γ κ ) even when θ κ (M ) and θ κ (N ) are defined using arbitrarily given isomorphisms π/ π κ ∼ = − → Γ/ Γ κ and G/ G κ ∼ = − → Γ/ Γ κ . Bordism and transfinite lower central quotients The goal of this section is to prove Theorem B and Corollary D. Proof of Theorem B Recall that Theorem B says that if M is a closed 3-manifold with π = π 1 (M ) endowed with an isomorphism f : π/ π κ ∼ = − → Γ/ Γ κ , the following are equivalent: (1) There exists a lift π/ π κ+1 ∼ = − → Γ/ Γ κ+1 of f which is an isomorphism. (2) The invariant θ κ (M ) vanishes in Coker{R κ+1 (Γ) → R κ (Γ)}. In our proof, it is essential to use the fact that H 3 (−) is isomorphic to the oriented bordism group Ω SO 3 (−), to obtain a 4-dimensional bordism from condition (2). More specifically, for another closed 3-manifold N with G = π 1 (N ) equipped with g : G/ G κ ∼ = − → Γ/ Γ κ , we have θ κ (N ) = θ κ (M ) in H 3 ( Γ/ Γ κ ) if and only if there is a 4-dimensional bordism W between (M, f ) and (N, g) over the group Γ/ Γ κ . The core of the proof of Theorem B consists of careful analysis of the relationship between such a bordism W and the involved fundamental groups. We begin with a general lemma, for which 4-dimensional duality plays a crucial role. Lemma 5.1. Suppose W is a 4-dimensional cobordism between two closed 3-manifolds M and N . That is, ∂W = N ⊔ −M . Suppose A is an arbitrary abelian group. Let ∂ : H 2 (W, ∂W ; A) → H 1 (∂W ; A) be the boundary homomorphism of the long exact sequence of (W, ∂W ). If the composition Im ∂ ֒→ H 1 (∂W ; A) = H 1 (M ; A) ⊕ H 1 (N ; A) −→ H 1 (M ; A) of the inclusion and the projection p is injective, then Ker{H^2 (W ; A) → H^2 (M ; A)} ⊂ Ker{H^2 (W ; A) → H^2 (N ; A)}. Proof. Consider the following diagram.
H^2 (W ; A) −→ H^2 (∂W ; A) = H^2 (M ; A) ⊕ H^2 (N ; A) −→ H^2 (M ; A), H 2 (W, ∂W ; A) −→ H 1 (∂W ; A) = H 1 (M ; A) ⊕ H 1 (N ; A) −→ H 1 (M ; A). Here the top arrows are k * and i * , the bottom arrows are ∂ and p, and the two rows are identified columnwise by the vertical Poincaré duality isomorphisms P D W , P D ∂W , P D M ⊕ P D N and P D M . Here i * and k * are inclusion-induced, and P D • denotes the Poincaré duality isomorphism, that is, P D −1 • (c) = c ∩ [•], where [•] is the fundamental class. We have (5.1) Ker{H^2 (W ; A) i * k * − − −→ H^2 (M ; A)} = P D W (Ker p • ∂) by the diagram, = P D W (Ker ∂) since p| Im ∂ is injective. Apply the same argument to N in place of M to obtain (5.2) Ker{H^2 (W ; A) → H^2 (N ; A)} = P D W (Ker q • ∂) ⊃ P D W (Ker ∂), where q : H 1 (∂W ; A) = H 1 (M ; A) ⊕ H 1 (N ; A) −→ H 1 (N ; A) is the projection onto the second factor. From (5.1) and (5.2), the conclusion follows immediately. Theorem B will be proven as a consequence of the following result. Theorem 5.2. Suppose κ ≥ 2 and M and N are closed 3-manifolds with π = π 1 (M ) and G = π 1 (N ) which are endowed with isomorphisms f : π/ π κ ∼ = − → Γ/ Γ κ and g : G/ G κ ∼ = − → Γ/ Γ κ . Define θ κ (M ) and θ κ (N ) using f and g. If θ κ (M ) = θ κ (N ) in H 3 ( Γ/ Γ κ ), then the isomorphism f −1 g : G/ G κ ∼ = − −→ π/ π κ lifts to an isomorphism G/ G κ+1 ∼ = − −→ π/ π κ+1 . Proof. Since H 3 ( Γ/ Γ κ ) is equal to the oriented bordism group Ω SO 3 ( Γ/ Γ κ ), there exists a 4-dimensional bordism W , over Γ/ Γ κ , between M and N . We begin with some claims. Claim 1. For any abelian group A, the inclusions i : M ֒→ W and j : N ֒→ W induce injections i * : H 1 (M ; A) → H 1 (W ; A) and j * : H 1 (N ; A) → H 1 (W ; A). To show this, consider the commutative diagram in which the composition H 1 (M ; A) = H 1 (π; A) −→ H 1 ( π; A) −→ H 1 ( π/ π κ ; A) −→ H 1 ( Γ/ Γ κ ; A) of the natural maps and f * factors through i * : H 1 (M ; A) → H 1 (W ; A) = H 1 (π 1 (W ); A). When A = Z, the leftmost horizontal arrow is an isomorphism by Theorem 3.1, and the middle horizontal arrow is an isomorphism too since κ ≥ 2. The rightmost horizontal arrow, f * , is an isomorphism since so is f .
Therefore, the composition H 1 (M ; A) → H 1 ( Γ/ Γ κ ; A) is an isomorphism for A = Z, and consequently it is an isomorphism for an arbitrary A by the universal coefficient theorem. From this and the above diagram, it follows that i * is injective. The injectivity of j * is shown in the same way, using N in place of M . This proves Claim 1.

Claim 2. For any abelian group A, Ker{i * : H 2 (W ; A) → H 2 (M ; A)} = Ker{j * : H 2 (W ; A) → H 2 (N ; A)}.

To show this, we use the notation of Lemma 5.1. Let ∂ : H 2 (W, ∂W ; A) → H 1 (∂W ; A) be the boundary map, and let p and q be the projections of H 1 (∂W ; A) = H 1 (M ; A) ⊕ H 1 (N ; A) onto the first and second factor respectively. By Lemma 5.1, it suffices to show that the restrictions p| Im ∂ and q| Im ∂ are injective. In our case, Im ∂ = Ker{H 1 (∂W ; A) → H 1 (W ; A)} = {(x, y) ∈ H 1 (M ; A) ⊕ H 1 (N ; A) | i * (x) + j * (y) = 0}, where i * : H 1 (M ; A) → H 1 (W ; A) and j * : H 1 (N ; A) → H 1 (W ; A). So, for (x, y) ∈ Im ∂, if 0 = p(x, y) = x, then j * (y) = −i * (x) = 0, and thus y = 0 since j * is injective by Claim 1. This shows that p| Im ∂ is injective. The same argument shows that q| Im ∂ is injective. This completes the proof of Claim 2.

Let A = π κ / π κ+1 , and realize the short exact sequence 1 −→ A −→ π/ π κ+1 −→ π/ π κ −→ 1 as a fibration B( π/ π κ+1 ) → B( π/ π κ ) with fiber B(A). We will use the following basic facts from obstruction theory. A map f : X → B( π/ π κ ) of a CW-complex X gives an obstruction class o X ∈ H 2 (X; A) which vanishes if and only if there is a lift X → B( π/ π κ+1 ). In our case, the coefficient system {A} is untwisted on B( π/ π κ ) since the abelian subgroup A = π κ / π κ+1 is central in π/ π κ+1 . So, o X determines a homotopy class of a map φ X : X → K(A, 2), which is null-homotopic if and only if f lifts. Conversely, φ X determines o X . Namely, o X is the image of id A under Hom(A, A) = Hom(H 2 (K(A, 2)), A) = H 2 (K(A, 2); A) −→ H 2 (X; A), the last map being induced by φ X .
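The facts just recalled can be condensed into a standard formula. The following is our paraphrase of this standard material, not a verbatim statement from the source; we write \hat\pi for the localization that the extracted text denotes by " π".

```latex
% Our paraphrase of the obstruction-theory setup (standard facts).
% The central extension
\[
  1 \longrightarrow A \longrightarrow \hat\pi/\hat\pi_{\kappa+1}
    \longrightarrow \hat\pi/\hat\pi_{\kappa} \longrightarrow 1,
  \qquad A = \hat\pi_{\kappa}/\hat\pi_{\kappa+1}\ \text{central},
\]
% is classified by a class e in H^2 of the base, and for a CW-complex X:
\[
  o_X \;=\; f^*(e) \in H^2(X; A), \qquad
  f\colon X \to B(\hat\pi/\hat\pi_{\kappa})\ \text{lifts to}\
  B(\hat\pi/\hat\pi_{\kappa+1}) \iff o_X = 0.
\]
```

Here the class e corresponds to the map φ of the text, and its pullback o_X to the map φ X .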
By the naturality of the obstruction class o X , φ X is the composition X −→ B( π/ π κ ) −→ K(A, 2) of f and φ, where φ = φ B( π/ π κ ) is the map associated to the identity of B( π/ π κ ).

Consider the following specific lifting problem, which is for X = N : the map N → B( π/ π κ ) in question is the composition N → B( G/ G κ ) −→ B( Γ/ Γ κ ) −→ B( π/ π κ ) of the maps induced by g and f −1 , and the sought lift is a map N → B( π/ π κ+1 ). Here the composition N −→ W → B( Γ/ Γ κ ), whose second map is obtained from the fact that W is a bordism over Γ/ Γ κ , is homotopic to N → B( G/ G κ ) −→ B( Γ/ Γ κ ).

Claim 3. There exists a lift N → B( π/ π κ+1 ).

To prove this, note that the obstruction o N is the image of id A under the composition Hom(A, A) = H 2 (K(A, 2); A) −→ H 2 (B( π/ π κ ); A) −→ H 2 (B( Γ/ Γ κ ); A) −→ H 2 (W ; A) −→ H 2 (N ; A), whose middle arrow is (f −1 ) * and last arrow is j * . Thus, o N vanishes if and only if the image of id A in H 2 (W ; A) lies in the kernel of the map j * : H 2 (W ; A) → H 2 (N ; A). To show that this is the case, consider the analogous lifting problem for M in place of N . Since M → B(π) → B( π) → B( π/ π κ+1 ) is a lift for M , the obstruction o M vanishes; that is, the image of id A in H 2 (W ; A) lies in the kernel of i * : H 2 (W ; A) → H 2 (M ; A). By Claim 2, this kernel is equal to the kernel of j * . Therefore o N vanishes. This proves Claim 3.

Claim 4. There is a lift α : G/ G κ+1 → π/ π κ+1 of f −1 g : G/ G κ ∼ = − → π/ π κ ; that is, α fits into a commutative square whose vertical arrows are the projections G/ G κ+1 → G/ G κ and π/ π κ+1 → π/ π κ and whose bottom arrow is f −1 g.

To show this, first take the homomorphism G → π/ π κ+1 induced by the lift N → B( π/ π κ+1 ) in Claim 3. It is a lift of f −1 g : G/ G κ ∼ = − → π/ π κ . Since π/ π κ+1 is local by Lemma 3.5, G → π/ π κ+1 induces a homomorphism G → π/ π κ+1 . It induces a desired homomorphism α : G/ G κ+1 → π/ π κ+1 , since G κ+1 ⊂ G is sent into ( π/ π κ+1 ) κ+1 = π κ+1 / π κ+1 = {e}.

Our goal is to show that the lift α in Claim 4 is an isomorphism. For this purpose, exchange the roles of M and N and apply the same argument, to obtain a lift of g −1 f : π/ π κ → G/ G κ , and call it β : π/ π κ+1 → G/ G κ+1 .

Claim 5. The composition αβ : π/ π κ+1 → π/ π κ+1 is an isomorphism.

To prove this, consider the commutative diagram with exact rows
1 → π κ / π κ+1 → π/ π κ+1 → π/ π κ → 1
1 → π κ / π κ+1 → π/ π κ+1 → π/ π κ → 1
whose vertical arrows are the restriction αβ| π κ / π κ+1 , the map αβ, and the identity of π/ π κ . Here, the right square commutes since α and β are lifts of f −1 g and g −1 f and thus αβ is a lift of the identity.
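The identity [a i u i , b i v i ] = [a i , b i ] for central u i , v i , which is invoked in the next step of the argument, can be verified in one line (our addition):

```latex
% For a, b in a group and u, v central elements:
\[
  [au, bv] \;=\; a u\, b v\, u^{-1} a^{-1}\, v^{-1} b^{-1}
          \;=\; \bigl(u v u^{-1} v^{-1}\bigr)\, a b a^{-1} b^{-1}
          \;=\; [a, b],
\]
% moving the central elements u, v to the front, where they cancel.
```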
By the five lemma, if the left vertical arrow π κ / π κ+1 → π κ / π κ+1 is an isomorphism, then αβ is an isomorphism too. We will indeed show that αβ restricts to the identity on the (larger) subgroup π 2 / π κ+1 . Suppose g ∈ π 2 / π κ+1 . Write g as a product g = ∏ i [a i , b i ] of commutators, where a i , b i ∈ π/ π κ+1 . Since αβ is a lift of the identity of π/ π κ , we have αβ(a i ) = a i u i and αβ(b i ) = b i v i for some u i , v i ∈ π κ / π κ+1 . Since π κ / π κ+1 is central in π/ π κ+1 , [a i u i , b i v i ] = [a i , b i ]. It follows that αβ(g) = ∏ i [αβ(a i ), αβ(b i )] = ∏ i [a i u i , b i v i ] = ∏ i [a i , b i ] = g. This completes the proof of Claim 5.

Now, by Claim 5, α is injective and β is surjective. Exchange the roles of α and β and apply the same argument, to conclude that α is surjective and β is injective. Therefore α and β are isomorphisms. This completes the proof of Theorem 5.2.

Proof of Theorem B. Suppose M is a closed 3-manifold with π = π 1 (M ), which is endowed with an isomorphism f : π/ π κ ∼ = − → Γ/ Γ κ . Suppose f lifts to an isomorphism f̃ : π/ π κ+1 ∼ = − → Γ/ Γ κ+1 . Then θ κ+1 (M, f̃ ) is sent to θ κ (M, f ) under R κ+1 (Γ) → R κ (Γ). Therefore θ κ (M ) = θ κ+1 (M, f̃ ) vanishes in the cokernel of R κ+1 (Γ) → R κ (Γ). For the converse, suppose θ κ (M ) = θ κ (M, f ) vanishes in the cokernel of R κ+1 (Γ) → R κ (Γ). This means that there is a closed 3-manifold N with G = π 1 (N ) which is endowed with an isomorphism g̃ : G/ G κ+1 ∼ = − → Γ/ Γ κ+1 , such that θ κ+1 (N, g̃ ) is sent to θ κ (M, f ) under R κ+1 (Γ) → R κ (Γ). Let g : G/ G κ ∼ = − → Γ/ Γ κ be induced by g̃. Then θ κ (N, g) = θ κ (M, f ), and thus it follows that g −1 f lifts to an isomorphism π/ π κ+1 ∼ = − → G/ G κ+1 , by Theorem 5.2. Compose this lift with g̃ : G/ G κ+1 ∼ = − → Γ/ Γ κ+1 , to obtain an isomorphism π/ π κ+1 ∼ = − → Γ/ Γ κ+1 which is a lift of f .

Proof of Corollary D

Recall from Definition 2.3 that the equivalence relation ∼ on R κ (Γ) is defined as follows.
For θ ∈ R κ (Γ), there is a closed 3-manifold M with π = π 1 (M ), which is equipped with an isomorphism f : π/ π κ ∼ = − → Γ/ Γ κ , such that θ κ (M ) = θ. Let I θ be the image of (5.3) R κ+1 (π) −→ R κ (π) ≈ − −→ f * R κ (Γ). Lemma 5.3. The set I θ is well-defined, and I φ = I θ whenever φ ∈ I θ . From Lemma 5.3, it follows that the sets I θ form a partition of R κ (Γ). On R κ (Γ), we write θ ∼ φ if I θ = I φ . Proof of Lemma 5.3. Let θ ∈ R κ (Γ) and (M, f ) be as above, and suppose N is a closed 3-manifold with G = π 1 (N ) equipped with an isomorphism g : G/ G κ ∼ = − → Γ/ Γ κ such that θ κ (N, g) lies in the image of the map (5.3). Then there is an isomorphism lift f −1 g : G/ G κ+1 ∼ = − → π/ π κ+1 of f −1 g : G/ G κ ∼ = − → π/ π κ , by Theorem B applied to (N, f −1 g). The induced functions on R * (−) form the following commutative diagram, where all horizontal arrows are bijective. R κ+1 (G) R κ+1 (π) R κ (G) R κ (Γ) R κ (π) ≈ (f −1 g) * ≈ g * ≈ f * Indeed, if P is a closed 3-manifold equipped with h : π 1 (P )/ π 1 (P ) κ+1 ∼ = − → G/ G κ+1 which induces h : π 1 (P )/ π 1 (P ) κ ∼ = − → G/ G κ , then the images of θ κ+1 (P, h) ∈ R κ+1 (G) under the arrows in the above diagram are given by several θ-invariants of the same P , as shown below: θ κ+1 (P, h) θ κ+1 (P, (f −1 g) • h) θ κ (P, h) θ κ (P, gh) θ κ (P, f −1 gh) (f −1 g) * g * f * As an immediate consequence of the commutativity, we have (5.4) Im{R κ+1 (G) → R κ (G) → R κ (Γ)} = Im{R κ+1 (π) → R κ (π) → R κ (Γ)}. Now, writing φ = θ κ (N, g), the left and right hand sides of (5.4) are I φ and I θ , respectively. This shows the assertion I φ = I θ . Also, when θ κ (N, g) = θ, (5.4) shows that I θ is well-defined, independent of the choice of (M, f ). 
Once we formulate the above setup, it is rather straightforward to obtain Corollary D, which asserts the following: suppose M and N are closed 3-manifolds with fundamental groups π = π 1 (M ) and G = π 1 (N ), which are equipped with isomorphisms f : π/ π κ ∼ = − → Γ/ Γ κ and g : G/ G κ ∼ = − → Γ/ Γ κ . Then f −1 g lifts to an isomorphism G/ G κ+1 ∼ = − → π/ π κ+1 if and only if θ κ (M ) ∼ θ κ (N ) in R κ (Γ).

Proof of Corollary D. Let f * : R κ (π) → R κ (Γ) be the induced bijection. By definition, θ κ (N ) lies in the subset I θ κ (M ) of R κ (Γ) if and only if f −1 * θ κ (N ) ∈ R κ (π) lies in the image of R κ+1 (π) → R κ (π); in other words, f −1 * θ κ (N ) = 0 in the cokernel of R κ+1 (π) → R κ (π). This is the case if and only if f −1 g lifts to an isomorphism G/ G κ+1 ∼ = − → π/ π κ+1 , by applying Theorem B to (N, f −1 g).

Transfinite Stallings-Dwyer theorem and transfinite gropes

The goal of this section is to provide transfinite generalizations of a well-known result of Stallings [Sta65] and Dwyer [Dwy75], and to relate them to a notion of transfinite gropes, which we also define in this section. In Section 6.2, we prove the Addendum to Theorems C and E, as stated in Section 2.7. The transfinite generalizations of the Stallings-Dwyer theorem will also be used in the proof of the realization theorems in Section 7.

6.1. Algebraic statements

Theorem 6.1 (Transfinite Stallings-Dwyer). Let κ > 1 be an arbitrary ordinal. Suppose f : π → G is a group homomorphism inducing an isomorphism H 1 (π) ∼ = − → H 1 (G). If κ is an infinite ordinal, suppose G is finitely generated. Then f induces an isomorphism π/ π κ ∼ = − → G/ G κ if and only if f induces an epimorphism
(6.1) H 2 ( π) −→ H 2 ( G)/ Ker{H 2 ( G) → H 2 ( G/ G λ )}
for all ordinals λ < κ.

Note that if κ is a discrete ordinal, then the homomorphism (6.1) is surjective for all λ < κ if and only if it is surjective for λ = κ − 1.
In particular, if κ is finite, then by Corollary 3.2(1), Theorem 6.1 specializes to the Stallings-Dwyer theorem [Sta65, Dwy75]: for a homomorphism f : π → G which induces an isomorphism H 1 (π) ∼ = − → H 1 (G), f induces an isomorphism π/π k ∼ = G/G k if and only if f induces an epimorphism H 2 (π) −→ H 2 (G)/ Ker{H 2 (G) → H 2 (G/G k−1 )}.

Before proving Theorem 6.1 in Section 6.3, we record some consequences. We will use the following notation, which is a transfinite generalization of the notation used in Dwyer [Dwy75, p. 178].

Definition 6.2 (Transfinite Dwyer kernel). Suppose G is a group, and κ > 1 is an ordinal. The transfinite Dwyer kernel is defined by
ψ κ (G) = Ker{H 2 (G) → H 2 (G/G κ−1 )} if κ is a discrete ordinal, and
ψ κ (G) = ⋂ λ<κ ψ λ (G) if κ is a limit ordinal.
More generally, for a space X with π = π 1 (X), define ψ κ (X) by
ψ κ (X) = Ker{H 2 (X) → H 2 (π) → H 2 (π/π κ−1 )} if κ is a discrete ordinal, and
ψ κ (X) = ⋂ λ<κ ψ λ (X) if κ is a limit ordinal.
That is, ψ κ (BG) = ψ κ (G).

Corollary 6.3. Suppose f : π → G induces an isomorphism H 1 (π) → H 1 (G), κ > 1, and suppose G is finitely presented if κ is transfinite. If the composition
(6.2) H 2 ( π) −→ H 2 ( G) −→ H 2 ( G)/ψ κ ( G),
whose first arrow is f * , is surjective, then f induces an isomorphism π/ π κ ∼ = − → G/ G κ .

Note that Corollary 6.3 assumes the surjectivity of a single homomorphism (6.2), instead of the surjectivity of infinitely many homomorphisms (6.1) in Theorem 6.1, for the limit ordinal case.

Proof. If κ is a discrete ordinal, the codomain of (6.2) is equal to that of (6.1), and thus the corollary follows from Theorem 6.1. If κ is a limit ordinal, H 2 ( G)/ψ κ ( G) surjects onto H 2 ( G)/ Ker{H 2 ( G) → H 2 ( G/ G λ )} for all λ < κ. From this and Theorem 6.1, the corollary follows.

In practice, it may be difficult to verify the hypothesis that (6.1) or (6.2) is surjective, since localizations are involved. The following variation does not involve localizations in the hypothesis.

Corollary 6.4.
Suppose f : π → G induces an isomorphism H 1 (π) → H 1 (G). Suppose κ > 1 and G is finitely presented. If the composition H 2 (π) −→ H 2 (G) −→ H 2 (G)/ψ κ (G), whose first arrow is f * , is surjective, then f induces an isomorphism π/ π κ ∼ = − → G/ G κ .

Proof. Consider the commutative diagram whose rows are H 2 (π) → H 2 (G) → H 2 (G)/ψ κ (G) and H 2 ( π) → H 2 ( G) → H 2 ( G)/ψ κ ( G), and whose vertical arrows are induced by localization. Since G is finitely presented, the middle vertical arrow is surjective by Theorem 3.1(2), and consequently the right vertical arrow is surjective. It follows that the bottom horizontal composition is surjective if the top horizontal composition is surjective. So Corollary 6.3 implies Corollary 6.4.

Transfinite gropes

In this subsection we relate transfinite lower central quotients to a transfinite version of gropes, using the results in Section 6.1. The main statement is Corollary 6.8. This is a transfinite generalization of the finite-case approach of Freedman and Teichner [FT95, Section 2]. We begin with new definitions. In what follows, a symplectic basis on a surface of genus g designates a collection of simple closed curves a i , b i (i = 1, . . . , g) such that a i and b i are transverse and intersect exactly once for all i, and (a i ∪ b i ) ∩ (a j ∪ b j ) = ∅ for i ≠ j.

Definition 6.5 (Transfinite gropes). (1) Suppose Σ → X is a map of a connected surface Σ with connected or empty boundary to a space X. For a discrete ordinal κ > 1, we say that the map Σ → X supports a grope of class κ, or a κ-grope for short, if there is a symplectic basis {a i , b i } on Σ such that a i bounds a (κ − 1)-grope in X, in the sense defined below, for each i. (2) A loop γ in X bounds a grope of class λ, that is, bounds a λ-grope, if either (a) λ = 1, (b) λ > 1 is a discrete ordinal and there is a map of a surface to X which is bounded by γ and supports a λ-grope, or (c) λ is a limit ordinal and γ bounds a µ-grope for each µ < λ.

Definition 6.6 (The grope class of a second homology class). Let κ > 1.
A homology class σ ∈ H 2 (X) is represented by a κ-grope, or is of class κ, if either (1) κ is a discrete ordinal and σ is represented by a map of a closed surface supporting a κ-grope, or (2) κ is a limit ordinal and σ is represented by a λ-grope for every λ < κ. We remark that for finite k, if Σ → X supports a k-grope in our sense, then a map of a k-grope in the sense of [FT95, Section 2] is obtained by stacking the inductively given surfaces along basis curves, and vice versa. Proposition 6.7. (1) For κ ≥ 1, a loop γ in X bounds a κ-grope if and only if [γ] ∈ π 1 (X) κ . (2) For κ > 1, a class σ ∈ H 2 (X) lies in the transfinite Dwyer kernel ψ κ (X) if and only if σ is represented by a κ-grope. For finite κ, Proposition 6.7(2) appeared earlier in [FT95, Lemma 2.3]. The following is an immediate consequence of Corollary 6.4 and Proposition 6.7. Corollary 6.8. Suppose κ > 1 is an arbitrary ordinal and f : X → Y is a map of a space X to a finite CW complex Y which induces an isomorphism H 1 (X) ∼ = − → H 1 (Y ). If Coker{H 2 (X) → H 2 (Y )} is generated by classes represented by κ-gropes in Y , then f induces an isomorphism π 1 (X)/ π 1 (X) κ ∼ = − −→ π 1 (Y )/ π 1 (Y ) κ . Proof of Proposition 6.7. From the definitions, (1) follows straightforwardly by transfinite induction. To prove (2), we proceed by transfinite induction too. Since π 1 (X) 1 = π 1 (X), every σ ∈ H 2 (X) lies in ψ 2 (X), and is represented by a 2-grope. So, (2) holds for κ = 2. Suppose κ > 2 and (2) holds for all ordinals less than κ. If κ is a limit ordinal, then by definition, σ ∈ H 2 (X) is in ψ κ (X) if and only σ ∈ ψ λ (X) for all λ < κ. By the induction hypothesis, it holds if and only if σ is represented by a λ-grope for all λ < κ. By the definition, it holds if and only if σ is represented by a κ-grope. This shows that (2) holds for κ. If κ > 2 is a discrete ordinal, the finite case argument given in [FT95, Proof of Lemma 2.3] can be carried out. 
We provide details for the reader's convenience. Let π = π 1 (X). Suppose σ ∈ H 2 (X) is represented by a κ-grope, that is, σ is the class of a map Σ → X of a surface admitting geometrically symplectic basis {a i , b i } such that each a i bounds a (κ − 1)-grope in X. By (1), [a i ] ∈ π 1 (X) κ−1 , and so a i is null-homotopic in B(π/π κ−1 ). By surgery on Σ along the a i , it follows that the image of σ in H 2 (B(π/π κ−1 )) is a spherical class, and thus trivial. This shows that σ lies in ψ κ (X). For the converse, suppose a class represented by a map Σ → X of a surface Σ lies in ψ κ (X). Attach 2-cells to X along generators of π κ−1 , and attach more cells of dimension ≥ 3, to construct B(π/π κ−1 ). Since Σ is null-homologous in B(π/π κ−1 ) (and since H 3 = Ω SO 3 ), Σ → X ֒→ B(π/π κ−1 ) extends to a compact 3-manifold R bounded by Σ. We may assume that the center of each cell which we attached to X is a regular value of R → B(π/π κ−1 ). Remove, from R, tubular neighborhoods of the inverse images of the centers. This gives a bordism over X between Σ → X and a map of a union of tori and spheres. Spheres support a κ-grope by definition. Since the meridian of each torus bounds a disk in B(π/π κ−1 ), the meridian bounds a (κ − 1)-grope in X by (1). By definition, it follows that the tori support a κ-grope. This completes the proof. As an application, we give a proof of the addendum to Theorems C and E stated in Section 2.7. We first define a terminology used in the statement. Recall that a cobordism W between M and N is an H 1 -cobordism if inclusions induce isomorphisms H 1 (M ) ∼ = H 1 (W ) ∼ = H 1 (N ). Definition 6.9. Let κ be an ordinal. An H 1 -cobordism W between M and N is a grope cobordism of class κ if each of Coker{H 2 (M ) → H 2 (W )} and Coker{H 2 (N ) → H 2 (W )} is generated by homology classes in H 2 (W ) represented by κ-gropes. Now, the addendum to Theorems C and E says the following: let Γ be a group and κ be an arbitrarily given ordinal. 
Suppose M is a closed 3-manifold with π = π 1 (M ) which is equipped with an isomorphism π/ π κ ∼ = − → Γ/ Γ κ . Then the following are equivalent. (0) There is a grope cobordism of class κ + 1 between M and another closed 3-manifold N satisfying π 1 (N )/ π 1 (N ) κ+1 ∼ = Γ/ Γ κ+1 . (1) π/ π κ+1 is isomorphic to Γ/ Γ κ+1 . ( 2) The invariant θ κ (M ) vanishes in Coker{R κ+1 (Γ) → R κ (Γ)/Aut( Γ/ Γ κ )}. Proof. We have already shown that (1) and (2) π/ π κ+1 ∼ = π 1 (W )/ π 1 (W ) κ+1 ∼ = π 1 (N )/ π 1 (N ) κ+1 by Corollary 6.8. It follows that (1) holds. Proof of the algebraic statement Now, we prove the main algebraic statement of this section. Proof of Theorem 6.1. First, we assert that the surjectivity of H 1 (π) → H 1 (G) implies that π/ π κ → G/ G κ is surjective. Indeed, if κ is finite, then the assertion is a well known fact obtained from a standard commutator identity. For the reader's convenience, we describe an outline of the argument. If a i ≡ b i mod G 2 , then we have [a 1 , [a 2 , . . . , [a k−1 , a k ] . . . ]] ≡ [b 1 , [b 2 , . . . , [b k−1 , b k ] . . . ]] mod G k+1 . From this it follows that π k /π k+1 → G k /G k+1 is surjective for all finite k. The surjectivity of π/π k → G/G k is obtained by applying the five lemma, inductively, to the following diagram. 1 π k−1 /π k π/π k π/π k−1 1 1 G k−1 /G k G/G k G/G k−1 1 When κ is an infinite ordinal, since G is assumed to be finitely generated, π → G is surjective if H 1 (π) → H 1 (G) is surjective, by Lemma 3.3. It follows that π/ π κ → G/ G κ is surjective. Therefore, under the assumption that π/ π κ → G/ G κ is surjective, it suffices to prove that the following two conditions are equivalent: (i) κ π/ π κ → G/ G κ is injective, or equivalently is an isomorphism. (ii) κ H 2 ( π) → H 2 ( G)/K λ ( G) is surjective for all λ < κ, where K λ ( G) := Ker{H 2 ( G) → H 2 ( G/ G λ )}. We proceed by transfinite induction on the ordinal κ. 
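The discrete-ordinal step of the induction below rests on Stallings' exact sequence. For reference, it can be written with the H 0 term identified with coinvariants; this identification is standard and is our addition, with \hat\pi denoting the localization written " π" in the text.

```latex
% Stallings' exact sequence for 1 -> N -> Gamma -> Gamma/N -> 1,
% with H_0(Gamma/N; H_1(N)) identified with N/[Gamma, N]:
\[
  H_2(\Gamma) \longrightarrow H_2(\Gamma/N) \longrightarrow N/[\Gamma, N]
  \longrightarrow H_1(\Gamma) \longrightarrow H_1(\Gamma/N).
\]
% Applied with Gamma = \hat\pi and N = \hat\pi_{\kappa-1} for a
% discrete ordinal kappa, the third term is
%   \hat\pi_{\kappa-1}/[\hat\pi, \hat\pi_{\kappa-1}]
%     = \hat\pi_{\kappa-1}/\hat\pi_{\kappa},
% which is the quotient appearing in the proof.
```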
For κ = 2, (i) κ holds since π/ π 2 = H 1 (π) ∼ = H 1 (G) = G/ G 2 , and (ii) κ holds too, since H 2 ( G)/K 1 ( G) is trivial. Fix an ordinal κ ≥ 3, and let f : π → G be a homomorphism which satisfies the hypothesis of Theorem 6.1. Suppose that (i) κ ′ and (ii) κ ′ are equivalent for all κ ′ < κ. If κ is a discrete ordinal, then we proceed similarly to the original argument of Stallings and Dwyer [Sta65,Dwy75], as described below. First, note that (ii) λ holds for all λ < κ if and only if (ii) κ−1 holds, when κ is discrete. Recall, for a normal subgroup N of a group Γ, the Lyndon-Hochschild-Serre spectral sequence for the short exact sequence 1 → N → Γ → Γ/N → 1 gives rise to an exact sequence H 2 (Γ) −→ H 2 (Γ/N ) −→ H 0 (Γ/N ; H 1 (N )) −→ H 1 (Γ) −→ H 1 (Γ/N ) which is called Stallings' exact sequence [Sta65]. Apply this to (Γ, N ) = ( π, π κ−1 ) and ( G, G κ−1 ), to obtain the following diagram with exact rows. 0 H 2 ( π)/K κ−1 ( π) H 2 ( π/ π κ−1 ) π κ−1 / π κ 0 0 H 2 ( G)/K κ−1 ( G) H 2 ( G/ G κ−1 ) G κ−1 / G κ 0 If (i) κ holds, then (i) κ−1 holds too. If (ii) κ holds, then (ii) κ−1 holds too and consequently (i) κ−1 holds by the induction hypothesis. So, in either case, we may assume that (i) κ−1 holds. Then the middle vertical arrow of the diagram is an isomorphism. By the snake lemma, it follows that (6.3) Ker{ π κ−1 / π κ → G κ−1 / G κ } ∼ = Coker{H 2 ( π)/K κ−1 ( π) → H 2 ( G)/K κ−1 ( G)}. Since π/ π κ−1 ∼ = G/ G κ−1 by (i) κ−1 , (i) κ holds if and only if the left hand side of (6.3) is trivial. Also, (ii) κ holds if and only if the right hand side of (6.3) is trivial. It follows that (i) κ and (ii) κ are equivalent. Now, suppose that κ is a limit ordinal. Suppose (i) κ holds. For each λ < κ, since κ is a limit ordinal, λ + 1 < κ. So (i) κ implies (i) λ+1 . By the induction hypothesis, it follows that (ii) λ+1 holds. In particular, H 2 ( π) → H 2 ( G)/K λ ( G) is surjective. This shows that (ii) κ holds. For the converse, suppose (ii) κ holds. 
For each λ < κ, (ii) κ implies (ii) λ , and thus (i) λ holds by the induction hypothesis. That is, f induces an isomorphism π/ π λ ∼ = − → G/ G λ . Therefore, if g ∈ Ker{ π/ π κ → G/ G κ }, then g ∈ Ker{ π/ π κ → π/ π λ } for all λ < κ. Since π κ = λ<κ π λ , it follows that g is trivial. This proves that π/ π κ → G/ G κ is injective, and thus (i) κ holds. This completes the proof of Theorem 6.1. We remark that the above proof of the equivalence of (i) κ and (ii) κ indeed shows the following statement (just by replacing π and G with P and Z below), which we record as a lemma for later use in this paper. Lemma 6.10. Suppose κ > 1 and f : P → Z is a group homomorphism which induces an epimorphism P/P κ → Z/Z κ and an isomorphism H 1 (P ) ∼ = − → H 1 (Z) . Then the following are equivalent: (i) f induces an isomorphism P/P κ → Z/Z κ . (ii) f induces an epimorphism H 2 (P ) → H 2 (Z)/K λ (Z) for all λ < κ, where K λ (Z) := Ker{H 2 (Z) → H 2 (Z/Z λ )}. Realization of transfinite invariants In this section, we prove Theorem G stated in Section 2.8, which characterizes the realizable classes θ in H 3 ( Γ/ Γ κ ). In the proof of Theorem G, we will use the following lemmas. The first lemma provides a finitely generated approximation of the transfinite lower central quotients of the localization, along the lines of Theorem 3.1(2). Lemma 7.1. Suppose G is a finitely presented group, κ > 1 is an ordinal, and H is a finitely generated subgroup in G/ G κ . Then H is contained in a finitely generated subgroup Q in G/ G κ such that the inclusion induces an isomorphism H 1 (Q) → H 1 ( G/ G κ ). Proof. Since H is finitely generated, there is a 2-connected homomorphism P → G of a finitely presented group P such that the image of P → G → G/ G κ contains H, by Theorem 3.1 (2). Let Q be the image of P → G/ G κ . Since P → Q is surjective, H 1 (P ) → H 1 (Q) is surjective. 
Since the composition P → Q → G/ G κ induces an isomorphism on H 1 , H 1 (P ) → H 1 (Q) is injective, and consequently H 1 (P ) ∼ = H 1 (Q) ∼ = H 1 ( G/ G κ ) under the induced homomorphisms.

Lemma 7.2. Consider any ordinal κ. Suppose π is finitely generated, G is finitely presented, and f : π → G/ G κ is a group homomorphism which induces an epimorphism H 1 (π) → H 1 ( G/ G κ ) = H 1 (G). Then f induces an epimorphism π → G/ G κ .

Proof. Since G/ G κ is trivial for κ = 1, we may assume that κ ≥ 2. Recall from Lemma 3.5 that the transfinite lower central quotient of a local group is local. So, in our case, G/ G κ is local, and thus there is an induced homomorphism π → G/ G κ by the universal property of π; that is, f factors as π → π → G/ G κ . To show that π → G/ G κ is surjective, it suffices to prove that every finitely generated subgroup H in G/ G κ is contained in the image of π → G/ G κ . Since H and π are finitely generated, there is a finitely generated subgroup Q in G/ G κ such that the inclusion induces an isomorphism H 1 (Q) ∼ = − → H 1 ( G/ G κ ) and both H and f (π) are contained in Q, by Lemma 7.1. Consider the commutative diagram formed by π → Q ֒→ G/ G κ together with the localizations π → π and Q → Q and the induced maps π → Q → G/ G κ . Since H 1 (π) → H 1 ( G/ G κ ) is surjective, it follows that H 1 (π) → H 1 (Q) is surjective. Therefore, by Lemma 3.3, π → Q is surjective. Since the given subgroup H ⊂ G/ G κ is contained in Q, it follows that H is contained in the image of π. This completes the proof.

Another key ingredient of our proof of Theorem G is the following "homology surgery" result for 3-manifolds over a finitely generated fundamental group, which is due to Turaev [Tur84]. As mentioned in Section 2, we denote the torsion subgroup of H * (−) by tH * (−).

Now we are ready to start the proof of Theorem G. Recall from Section 2.4 that the set R κ (Γ) of realizable classes is defined to be the collection of θ ∈ H 3 ( Γ/ Γ κ ) such that θ = θ κ (M ) for some closed 3-manifold M with π = π 1 (M ) equipped with an isomorphism π/ π κ ∼ = − → Γ/ Γ κ .
Here, Γ is a fixed finitely presented group. Let κ ≥ 2. Theorem G says that θ ∈ R κ (Γ) if and only if the following two conditions hold. (1) The cap product ∩ θ : tH 2 ( Γ/ Γ κ ) −→ tH 1 ( Γ/ Γ κ ) ∼ = tH 1 (Γ) is an isomorphism. (2) The composition H 1 ( Γ/ Γ κ ) ∩ θ − −→ H 2 ( Γ/ Γ κ ) pr − −→ H 2 ( Γ/ Γ κ )/K λ ( Γ/ Γ κ ) is surjective for all λ < κ, where K λ ( Γ/ Γ κ ) = Ker{H 2 ( Γ/ Γ κ ) → H 2 ( Γ/ Γ λ )}. Proof of Theorem G. For the only if direction, suppose θ ∈ R κ (Γ). Choose a closed 3-manifold M with π = π 1 (M ) and an isomorphism f : π/ π κ ∼ = − → Γ/ Γ κ such that θ κ (M ) = θ. That is, θ = φ * [M ] where φ * : H 3 (M ) → H 3 ( Γ/ Γ κ ) is induced by the composition φ : M −→ Bπ −→ B π −→ B( π/ π κ ) f − −→ ≃ B( Γ/ Γ κ ). Then, the following diagram is commutative. tH 2 (M ) tH 2 ( Γ/ Γ κ ) tH 1 (M ) tH 1 ( Γ/ Γ κ ) ∩ [M] φ * ∩ θ φ * The cap product ∩ [M ] is an isomorphism by Poincaré duality. The bottom arrow φ * is an isomorphism since H 1 (M ) = H 1 (π) = H 1 ( π/ π κ ) and f : π/ π κ → Γ/ Γ κ is an isomorphism. From this it also follows that the top arrow φ * is an isomorphism, since tH 2 (−) = Ext(H 1 (−), Z). Therefore, ∩ θ is an isomorphism. This shows that (1) holds. To show that (2) holds, suppose λ < κ and consider the following commutative diagram. (7.1) H 1 (M ) H 1 ( Γ/ Γ κ ) H 2 (M ) H 2 ( Γ/ Γ κ ) H 2 ( Γ/ Γ κ )/K λ ( Γ/ Γ κ ) H 2 (π) H 2 ( π) H 2 ( π/ π κ ) ∩ [M] ∩ θ pr•(∩ θ) φ * φ * pr By Poincaré duality, ∩ [M ] is an isomorphism. Since H 1 (−) = Hom(H 1 (−), Z) and H 1 (M ) = H 1 ( π/ π κ ) → H 1 ( Γ/ Γ κ ) is an isomorphism, the top arrow φ * is an isomorphism. Also, the assumption that f : π/ π κ → Γ/ Γ κ is an isomorphism implies that the composition H 2 ( π) → H 2 ( π/ π κ ) → H 2 ( Γ/ Γ κ )/K λ ( Γ/ Γ κ ) is surjective, by applying Lemma 6.10 to the composition π → π/ π κ ∼ = − → Γ/ Γ κ . 
Since H 2 (M ) → H 2 (π) and H 2 (π) → H 2 ( π) are surjective (see Theorem 3.1(2) for the latter), it follows that the composition pr • (∩ θ) in (7.1) is surjective. This proves that (2) holds.

It remains to show the if direction. Suppose (1) and (2) hold for a given class θ ∈ H 3 ( Γ/ Γ κ ). Since H 3 = Ω SO 3 , there is a map ψ : N → B( Γ/ Γ κ ) of a closed 3-manifold N such that ψ * [N ] = θ. We will invoke Turaev's homology surgery for 3-manifolds (Lemma 7.3) to alter (N, ψ). Note that Γ/ Γ κ is not finitely generated in general, and thus Lemma 7.3 does not apply directly over B( Γ/ Γ κ ). So we proceed as follows, using a finitely generated approximation. Apply Lemma 7.1 to choose a finitely generated subgroup Q in Γ/ Γ κ such that the inclusion induces an isomorphism H 1 (Q) ∼ = − → H 1 ( Γ/ Γ κ ) and π 1 (N ) → Γ/ Γ κ factors through Q. Let ψ ′ : N → B(π 1 (N )) → B(Q) be the composition, and consider the commutative square whose vertical arrows are ∩ ψ ′ * [N ] : tH 2 (Q) → tH 1 (Q) and ∩ θ = ∩ ψ * [N ] : tH 2 ( Γ/ Γ κ ) → tH 1 ( Γ/ Γ κ ), and whose horizontal arrows are induced by the inclusion Q ֒→ Γ/ Γ κ . The two horizontal arrows and the right vertical arrow ∩ θ are isomorphisms, by our choice of Q and the hypothesis (1); hence ∩ ψ ′ * [N ] is an isomorphism as well, and Lemma 7.3 applies to (N, ψ ′ ) over the finitely generated group Q. It yields a closed 3-manifold M together with a map M → B(Q) which induces an isomorphism on H 1 and carries [M ] to ψ ′ * [N ]; let φ : M → B( Γ/ Γ κ ) be the composition with B(Q) → B( Γ/ Γ κ ), so that φ * [M ] = θ.

Let π = π 1 (M ), and consider π → Γ/ Γ κ induced by φ. It gives rise to a homomorphism π → Γ/ Γ κ since Γ/ Γ κ is local by Lemma 3.5. Consider the diagram (7.1) again. Now, we have that the composition pr • (∩ θ) is surjective by the hypothesis (2). Note that this surjection is equal to the composition of the six arrows along the counterclockwise outermost path from H 1 ( Γ/ Γ κ ) to H 2 ( Γ/ Γ κ )/K λ ( Γ/ Γ κ ) in (7.1). So, the map H 2 ( π) → H 2 ( Γ/ Γ κ )/K λ ( Γ/ Γ κ ), which is the last one applied in the composition, is surjective. By applying Lemma 6.10 to π → Γ/ Γ κ , it follows that φ induces an isomorphism π/ π κ ∼ = − → Γ/ Γ κ . Therefore θ = φ * [M ] lies in R κ (Γ). This completes the proof of Theorem G.

Universal θ-invariant

We begin by recalling the definition of the universal θ-invariant from Definition 2.7.
As before, let Γ be a finitely presented group. Suppose M is a closed 3-manifold with π = π 1 (M ) equipped with an isomorphism f : π → Γ. Motivated by Levine's link invariant in [Lev89a], define θ(M ) ∈ H 3 ( Γ) to be the image of [M ] ∈ H 3 (M ) under H 3 (M ) −→ H 3 (π) −→ H 3 ( π) −→ H 3 ( Γ), where the last arrow is the isomorphism f * . The value of θ(M ) depends on the choice of f , while its image in H 3 ( Γ)/Aut( Γ) is independent of the choice of f . The following is analogous to Theorem A. We omit the proof, since the argument is exactly the same as that of Theorem A. Let R(Γ) be the collection of classes θ ∈ H 3 ( Γ) such that there exists a closed 3-manifold M with π = π 1 (M ) endowed with an isomorphism π ∼ = − → Γ for which θ(M ) = θ. We will give a proof of Theorem H stated in Section 2.9. For the reader's convenience, we recall the statement: a homology class θ ∈ H 3 ( Γ) lies in R(Γ) if and only if the following two conditions hold. (1) The cap product ∩ θ : tH 2 ( Γ) → tH 1 ( Γ) ∼ = tH 1 (Γ) is an isomorphism. (2) The cap product ∩ θ : H 1 ( Γ) → H 2 ( Γ) is surjective.

Proof of Theorem H. We will first prove the only if part, using an argument almost identical to the proof of Theorem G. Suppose M is a closed 3-manifold with π = π 1 (M ) and f : π ∼ = − → Γ is an isomorphism such that θ(M ) = θ, and write φ * for the maps induced by the composition M → Bπ → B( π) → B( Γ). Consider the commutative square whose vertical arrows are ∩ [M ] : tH 2 (M ) → tH 1 (M ) and ∩ θ : tH 2 ( Γ) → tH 1 ( Γ), whose top arrow is φ * : tH 2 ( Γ) → tH 2 (M ) and whose bottom arrow is φ * : tH 1 (M ) → tH 1 ( Γ). By Poincaré duality, ∩ [M ] is an isomorphism. The arrow φ * is an isomorphism since f is an isomorphism. Using tH 2 (−) = Ext(H 1 (−), Z), it follows that φ * is an isomorphism. So, by the commutativity, ∩ θ is an isomorphism. This shows that (1) holds.

To show that (2) holds, consider the commutative square whose vertical arrows are ∩ [M ] : H 1 (M ) → H 2 (M ) and ∩ θ : H 1 ( Γ) → H 2 ( Γ), whose top arrow is φ * : H 1 ( Γ) → H 1 (M ) and whose bottom arrow is φ * : H 2 (M ) → H 2 ( Γ). The arrow φ * is an isomorphism since f is an isomorphism, and ∩ [M ] is an isomorphism by Poincaré duality. Since H 2 (M ) → H 2 (π) and H 2 (π) → H 2 ( π) are surjective, φ * is surjective. So ∩ θ is surjective. That is, (2) holds.

Now, we will prove the if part.
Our argument will be different from the proof of Theorem G. Suppose θ ∈ H 3 ( Γ) is a homology class satisfying the conditions (1) and (2). Choose a sequence of 2-connected homomorphisms of finitely presented groups Γ = P (1) −→ P (2) −→ · · · −→ P (ℓ) −→ · · · such that Γ = colim ℓ P (ℓ), by using Theorem 3.1(2). Since H 3 ( Γ) is the colimit of H 3 (P (ℓ)), the class θ lies in the image of H 3 (P (ℓ 0 )) for some ℓ 0 . Let P = P (ℓ 0 ) for brevity. Denote P → Γ by ι, and write θ = ι * (σ), where σ ∈ H 3 (P ). We claim that we may assume that ι * : H 2 (P ) → H 2 ( Γ) is an isomorphism. To prove this, first recall that H 2 (P ) → H 2 ( Γ) is surjective by the choice of the sequence {P (ℓ)}. Let N be the kernel of H 2 (P ) → H 2 ( Γ). Since P is finitely presented, H 2 (P ) is a finitely generated abelian group, and thus N is finitely generated. Since H 2 ( Γ) is the colimit of H 2 (P (ℓ)), it follows that the image of N under H 2 (P ) → H 2 (P (ℓ 1 )) is trivial for some ℓ 1 ≥ ℓ 0 . Since H 2 (P ) → H 2 (P (ℓ 1 )) is surjective, we have H 2 (P (ℓ 1 )) ∼ = H 2 (P )/N ∼ = H 2 ( Γ). Replacing P by P (ℓ 1 ), the claim is obtained. We will use Turaev's homology surgery, over the finitely presented group P . Choose a map ψ : N → BP of a closed 3-manifold N such that ψ * [N ] = σ, using that Ω SO 3 (P ) = H 3 (P ). Consider the following commutative diagram: H 1 (M ) H 1 (P ) H 1 ( Γ) H 2 (M ) H 2 (P ) H 2 ( Γ) ∩ [M] ∩ σ φ * ∩ θ ι * φ * ι * The free group case and Milnor's link invariant In this section we discuss the case when Γ is a free group, and show that our invariants of finite length applied to the zero framed surgery manifold of a link are equivalent to Milnor's link invariants and Orr's homotopy theoretic reformulation of the Milnor invariant. Most of the results from this section appear in [Orr89,IO01,Lev89a,Lev89b]. However, relating prior work to the results herein seems non-trivial. 
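A computation underlying the treatment of the free group case in this section is Hopf's formula applied to the free nilpotent quotients; spelled out (a standard computation; our addition):

```latex
% Hopf's formula: H_2(F/R) is isomorphic to (R \cap [F, F]) / [F, R]
% for F free. For the free nilpotent quotient F/F_k, k >= 2, take R = F_k:
\[
  H_2(F/F_k) \;\cong\; \frac{F_k \cap [F, F]}{[F, F_k]}
             \;=\; \frac{F_k}{F_{k+1}},
\]
% using F_k \subseteq [F, F] = F_2 for k >= 2 and [F, F_k] = F_{k+1}.
% Under this identification, H_2(F/F_k) -> H_2(F/F_{k-1}) is induced by
% the inclusion F_k \subseteq F_{k-1}, whose image in F_{k-1}/F_k is
% trivial; hence the induced map on H_2 is zero.
```

This is the vanishing used in the proof of Lemma 9.1 below.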
This section will highlight and clarify new perspectives on Milnor's link invariants. We proceed as follows. Fix a positive integer m, and as the "basepoint" manifold, let Y be the connected sum of m copies of S¹ × S². Then π_1(Y) = F, the free group on m generators. In this case, we have the following useful property.

Lemma 9.1. For finite k ≥ 2, R_k(F) = H_3(F/F_k).

Proof. Recall that H_2(F/F_k) = F_k/F_{k+1} by Hopf's theorem. Thus the projection induces a zero homomorphism H_2(F/F_k) → H_2(F/F_{k−1}). From this and the fact that H_1(F) = Z^m is torsion free, it follows that R_k(F) = H_3(F/F_k), by Theorem G.

So, R_k(F) is an abelian group, and consequently Coker{R_{k+1}(F) → R_k(F)} is an abelian group too. We remark that the structure of this cokernel was computed in [Orr89, IO01]. By Lemma 9.1, the cokernel is given by

Coker{R_{k+1}(F) → R_k(F)} = Coker{H_3(F/F_{k+1}) → H_3(F/F_k)}.

Lemma 9.2. For k ≥ 2, an isomorphism f : π/π_k → F/F_k lifts to an isomorphism π/π_{k+1} → F/F_{k+1} if and only if there is an isomorphism π/π_{k+1} ≅ F/F_{k+1}.

Proof. The only if part is trivial. For the if part, observe that a homomorphism F → F induces an isomorphism F/F_k → F/F_k if and only if it induces an isomorphism H_1(F) → H_1(F), by Stallings' Theorem [Sta65], since H_2(F) = 0. It follows that every automorphism of F/F_k lifts to an automorphism of F/F_{k+1} for k ≥ 2. The conclusion is a straightforward consequence of this: if g : π/π_{k+1} −≅→ F/F_{k+1} is an isomorphism, then choose an automorphism lift φ̂ : F/F_{k+1} → F/F_{k+1} of the automorphism φ = f ḡ⁻¹, where ḡ : π/π_k −≅→ F/F_k is the induced isomorphism. Then the composition φ̂ ∘ g is an isomorphism which is a lift of f ḡ⁻¹ ∘ ḡ = f.

Using the results stated in Section 2 and the above lemmas on the free group, we compare the lower central quotients π/π_k of a 3-manifold group π = π_1(M) with the free nilpotent quotient F/F_k. For the initial case k = 2, π/π_k is isomorphic to F/F_k if and only if H_1(π) ≅ Z^m. The following theorem deals with the induction step. Theorem 9.3.
Suppose M is a closed 3-manifold with π = π_1(M), equipped with an isomorphism f : π/π_k −≅→ F/F_k, k ≥ 2. Then the following are equivalent.

(1) The given f lifts to an isomorphism π/π_{k+1} −≅→ F/F_{k+1}.
(2) There is an isomorphism π/π_{k+1} ≅ F/F_{k+1} (which is not necessarily a lift).

Now, we apply the above to links. For an m-component link L in S³, let M_L be the zero framed surgery manifold of L. Note that if L is the trivial link, then M_L is equal to the 3-manifold Y that we use in this section. In [Mil57], Milnor defined his concordance invariants, which we now call Milnor's numerical invariants. These invariants arise as coefficients of the Magnus expansion evaluated on homotopy classes of longitudes of a link. More precisely, for an m-component link L, the Magnus expansion is defined by sending the ιth meridian to 1 + t_ι and extending it multiplicatively, and for a sequence ι_1, ..., ι_k of integers ι_j ∈ {1, ..., m}, Milnor's numerical invariant of length k, μ̄_L(ι_1, ..., ι_k), is the coefficient of t_{ι_1} ⋯ t_{ι_{k−1}} in the Magnus expansion of the ι_k th longitude of L. Milnor's numerical invariants of length k are well-defined as integers for L if all Milnor's numerical invariants of length < k vanish for L. One can find details in [Mil57].

Theorem 9.4. Suppose L is a link with m components. For any finite k ≥ 2, the following are equivalent.

(1)_k There is an isomorphism π_1(S³∖L)/π_1(S³∖L)_{k+1} ≅ F/F_{k+1}.
(2)_k The zero linking longitudes of L lie in π_1(S³∖L)_k.
(3)_k There is an isomorphism π_1(M_L)/π_1(M_L)_k ≅ F/F_k.
(4)_k Milnor's numerical invariants of length k + 1 are well-defined for L as integers.

If the above (1)_k–(4)_k hold, then (1)_{k+1}–(4)_{k+1} and the following (5)_{k+1}–(8)_{k+1} are equivalent.

(5)_{k+1} Milnor's numerical invariants of length k + 1 vanish for L.
(6)_{k+1} For some f : π_1(M_L)/π_1(M_L)_k −≅→ F/F_k, θ_k(M_L, f) vanishes in Coker{R_{k+1}(F) → R_k(F)}.
(7)_{k+1} For all f : π_1(M_L)/π_1(M_L)_k −≅→ F/F_k, θ_k(M_L, f) vanishes in Coker{R_{k+1}(F) → R_k(F)}.
(8)_{k+1} The invariant θ_k(M_L) vanishes in Coker{R_{k+1}(F) → R_k(F)/Aut(F/F_k)}.

Proof. The equivalence of (1)_k–(4)_k is a folklore consequence of Milnor's theorem [Mil57]:

π_1(S³∖L)/π_1(S³∖L)_{k+1} ≅ ⟨F | F_{k+1}, [w_1, x_1], ..., [w_m, x_m]⟩

where x_i and w_i correspond to a meridian and zero linking longitude of the ith component of L, respectively. Indeed, since F/F_{k+1} is Hopfian, the right hand side, which is a quotient of F/F_{k+1}, is isomorphic to F/F_{k+1} if and only if [w_i, x_i] ∈ F_{k+1} for all i. A standard application of the Magnus expansion, or the Hall basis theorem, shows that [w_i, x_i] ∈ F_{k+1} if and only if w_i ∈ F_k. Also, since π_1(M_L) is the quotient of π_1(S³∖L) by the normal subgroup generated by the longitudes, we have π_1(M_L)/π_1(M_L)_k ≅ ⟨F | F_k, w_1, ..., w_m⟩ by Milnor's theorem. Thus π_1(M_L)/π_1(M_L)_k ≅ F/F_k if and only if w_i ∈ F_k, and this is the case if and only if π_1(S³∖L)/π_1(S³∖L)_{k+1} ≅ F/F_{k+1} by the above. Also, it is known that Milnor's invariants of length k + 1 are well-defined integers without ambiguity if and only if w_i ∈ F_k [Mil57]. This shows that (1)_k–(4)_k are equivalent. Milnor also showed that his invariants of length k + 1 vanish if and only if w_i ∈ F_{k+1} [Mil57]. It follows that (5)_{k+1} is equivalent to (1)_{k+1}–(4)_{k+1}. By Theorem 9.3, each of (6)_{k+1}–(8)_{k+1} is equivalent to (3)_{k+1}. This completes the proof.

In what follows we discuss the relationship of our invariants and the link invariant defined in [Orr89]. Let L be a link for which Milnor's invariants of length ≤ k vanish. Let E_L be the exterior of L, and G = π_1(E_L) = π_1(S³∖L).
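The Magnus expansion argument above is easy to experiment with on truncated noncommutative power series. The following Python sketch is our own illustration (the representation of series as dicts keyed by tuples of generator indices is an assumption of this sketch, not notation from the paper); it expands the commutator [x_1, x_2], which with suitable conventions is a longitude of the Borromean rings, and exhibits μ̄(1, 2, 3) = 1:

```python
N = 3  # truncation degree of the noncommutative power series

def mul(f, g):
    # product of truncated series; keys are tuples of generator indices
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            if len(a) + len(b) <= N:
                h[a + b] = h.get(a + b, 0) + ca * cb
    return {k: v for k, v in h.items() if v}

def gen(i, inverse=False):
    # Magnus expansion of x_i -> 1 + t_i and x_i^{-1} -> 1 - t_i + t_i^2 - ...
    if not inverse:
        return {(): 1, (i,): 1}
    return {(i,) * d: (-1) ** d for d in range(N + 1)}

def magnus(word):
    # word is a list of (generator index, exponent +-1)
    f = {(): 1}
    for i, e in word:
        f = mul(f, gen(i, e < 0))
    return f

longitude = [(1, 1), (2, 1), (1, -1), (2, -1)]  # the commutator [x1, x2]
f = magnus(longitude)
```

The degree-one coefficients of f vanish, reflecting that the longitude lies in F_2, while the coefficient of t_1 t_2 is 1 and that of t_2 t_1 is −1.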
Let K k be the mapping cone of the inclusion m S 1 = B(F ) → B(F/F k ), and let j : B(F/F k ) → K k be the inclusion. By Milnor's result [Mil57] (or by Theorem 9.4), there is an isomorphism F/F ℓ ∼ = − → G/G ℓ which takes generators of F to meridians, for ℓ ≤ k + 1. When ℓ = k, this gives rise to a map E L −→ B(G) −→ B(G/G k ) ≃ − −→ B(F/F k ) j − −→ K k which sends meridians to null-homotopic loops. So this extends to a map ψ : S 3 → K k . Denote the homotopy class of this extension by θ k (L) = [ψ] ∈ π 3 (K k ). This is the invariant defined and studied in [Orr89]. Recall from the proof of Theorem 9.4 that G/G k ∼ = − → F/F k induces f : π 1 (M L )/π 1 (M L ) k ∼ = − → F/F k . Consider θ k (M L ) = θ k (M L , f ). To compare θ k (L) with θ k (M L ), we will use arguments which are already known to experts. Let h : π 3 (K k ) → H 3 (K k ) be the Hurewicz homomorphism. Note that the inclusion j induces an isomorphism j * : R k (F ) = H 3 (F/F k ) → H 3 (K k ), since K k is obtained from B(F/F k ) by attaching 2-cells. We claim that our θ k (M L ) and Orr's θ k (L) are identical in H 3 (K k ). That is, θ k (M L ) = j −1 * h(θ k (L)) . The claim is verified as follows. Attach m 2-handles to S 3 × [0, 1] along the zero-framing of the link L ⊂ S 3 = S 3 × 1, to obtain a 4-dimensional cobordism W between S 3 and M L . Let φ : M L → B(F/F k ) be the map induced by the above f : π 1 (M L )/π 1 (M L ) k ∼ = − → F/F k . Since φ restricts to ψ : E L → B(F/F k ) and since W is obtained by attaching m dual 2-handles to M L × [0, 1] along meridians, M L φ − → B(F/F k ) j − → K k extends to a map W → K k which restricts to ψ : S 3 → K k . This gives us the following commutative diagram. M L W S 3 B(F/F k ) K k φ ψ j From the diagram, the assertion j * (θ k (M L )) = h(θ k (L)) follows. In addition, θ k (M L ) = 0 in the cokernel of H 3 (F/F k ) → H 3 (F/F k ) if and only if θ k (L) = 0 in the cokernel of π 3 (K k+1 ) → π 3 (K k ). 
It follows immediately from the above and from the known fact that the composition j⁻¹_* h : π_3(K_k) → H_3(F/F_k) induces an isomorphism between the cokernels [Orr89, IO01]. Consequently, the equivalence of (5), (6) and (7) in Theorem 9.4 subsumes the following result of Orr [Orr89]: for a link L, the Milnor invariants of length k + 1 vanish if and only if θ_k(L) = 0 in Coker{π_3(K_{k+1}) → π_3(K_k)}. We remark that the same argument shows that Levine's link invariant θ(L) ∈ H_3(F̂) defined in [Lev89a] can be identified with our final invariant θ(M_L) of the zero-framed surgery manifold M_L.

Remark 9.5. Results of this section for general closed 3-manifolds and zero surgery manifolds of links hold for transfinite ordinals k, if one uses Ĝ/Ĝ_k instead of G/G_k for G = F and G = π_1(M_L), as we always do in this paper. More precisely, we have the following.

(1) Lemma 9.2 is true for transfinite k. To prove this, one uses our Theorem 6.1 instead of Stallings' Theorem in the above proof of Lemma 9.2 (and uses that H_2(F̂) = 0).
(2) Theorem 9.3 is true for transfinite k. To prove this, one uses the transfinite version of Lemma 9.2 instead of Lemma 9.2 in the above proof of Theorem 9.3.
(3) Theorem 9.4 is true for transfinite k, if one removes the conditions (1)_k, (2)_k, (4)_k and (5)_{k+1}. The proof is the same as the finite case given above.

On the other hand, for links, we do not know whether the transfinite case of the full version of Theorem 9.4 is true. In particular, the following question seems interesting: are our invariants of the zero surgery manifold M_L determined by the homotopy class of the longitudes of L, relative to the transfinite lower central series of the group localization? (See the condition (2)_k in Theorem 9.4.)
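Returning to the proof of Lemma 9.1: Hopf's theorem identifies H_2(F/F_k) with F_k/F_{k+1}, a free abelian group whose rank is given by Witt's formula (1/k) Σ_{d|k} μ(d) m^{k/d} for the free Lie ring. A small Python sketch (the helper names are ours, purely illustrative) computing these ranks:

```python
def mobius(n):
    # Möbius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def witt_rank(m, k):
    # rank of F_k/F_{k+1} = H_2(F/F_k) for the free group F of rank m,
    # by Witt's formula
    divisors = [d for d in range(1, k + 1) if k % d == 0]
    return sum(mobius(d) * m ** (k // d) for d in divisors) // k
```

For instance witt_rank(2, 2) = 1, reflecting the single basic commutator [x_1, x_2] of weight two on two generators.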
We also note that in [IO01], Milnor's link invariants are interpreted as a spanning set for the set of cocycles in H 3 (F/F k ), allowing one to compute Milnor's numerical invariants from the Milnor invariants defined and studied in this paper. Explicit formulae for these cocycles are derived and evaluated on an Igusa Picture representing the homology class θ k (M L ). So, we may also ask: can one read the homotopy class of the longitudes using 3-dimensional cocycles in H 3 ( F / F κ ) for any ordinal κ, thus establishing a numerical formulation for transfinite Milnor's invariants of links? These problems remain open for transfinite ordinals, and possibly hinge on obtaining a deeper computational understanding of the transfinite lower central series of local groups, and especially free local groups. Torus bundle example: invariants of finite length Let Y be the torus bundle with monodromy h : T 2 → T 2 given by −1 0 0 −1 . That is, (10.1) Y = T 2 × [0, 1]/(h(x), 0) ∼ (x, 0). Let Γ = π 1 (Y ) be the fundamental group. The group Γ is an HNN extension Z 2 ⋊Z of π 1 (T 2 ) = Z 2 by Z = t , which acts on Z 2 by t(a, b)t −1 = (−a, −b). The goal of this section is to study the invariant θ k of finite length over the torus bundle group Γ. The cases of transfinite length invariants and the final invariant are investigated in Sections 11, 12 and 13. Readers eager to see the transfinite case may wish to skip this section on a first reading. The following theorem summarizes the result of our computation of finite length invariants. In what follows, Z d = Z/dZ denotes the finite cyclic group of order d, and Z × d = {r ∈ Z d | gcd(r, d) = 1} denotes the multiplicative group of units in Z d . Theorem 10.1. For finite k ≥ 2, the following hold. (1) The third homology is given by H 3 (Γ/Γ k ) = (Z 2 k−1 ) 4 . (2) The set of realizable classes in H 3 (Γ/Γ k ) is given by R k (Γ) = {(a, b, c, r) ∈ (Z 2 ) 4 | ac + b + r = 1} for k = 2, (Z 2 k−1 ) 3 × Z × 2 k−1 for 3 ≤ k < ∞. 
(3) The map R_{k+1}(Γ) → R_k(Γ) induced by the projection Γ/Γ_{k+1} → Γ/Γ_k is given by

(Z_4)³ × Z_4^× −→ {(a, b, c, r) ∈ (Z_2)⁴ | ac + b + r = 1},  (a, b, c, r) ↦ (0, 0, 0, r),  for k = 2,

(Z_{2^k})³ × Z_{2^k}^× −→ (Z_{2^{k−1}})³ × Z_{2^{k−1}}^×,  (a, b, c, r) ↦ (2a, 2b, 2c, r),  for 3 ≤ k < ∞.

(4) For every automorphism φ on Γ/Γ_k, the induced bijection φ_* : R_k(Γ) → R_k(Γ) sends Im{R_{k+1}(Γ) → R_k(Γ)} onto itself. Consequently, θ ∈ R_k(Γ) vanishes in Coker{R_{k+1}(Γ) → R_k(Γ)} if and only if φ_*(θ) does.

Using Theorem 10.1(4), we can also obtain the following estimate of the number of isomorphism classes of the (k+1)st lower central quotients of 3-manifold groups with the same kth lower central quotient as that of the torus bundle.

Corollary 10.3. For each finite k ≥ 2,

2 ≤ #( {π/π_{k+1} | π = π_1(M) for a closed 3-manifold M such that π/π_k ≅ Γ/Γ_k} / isomorphism ) ≤ 7·2^{4(k−2)} + 1.

Proof. By Theorem 10.1(3) and (4), there is a class θ ∈ R_k(Γ) which does not vanish in the cokernel of R_{k+1}(Γ) → R_k(Γ)/Aut(Γ/Γ_k). From this, it follows that there exist at least two isomorphism classes of π/π_{k+1} with π = π_1(M) for some closed 3-manifold M such that π/π_k ≅ Γ/Γ_k, by Theorem C. This proves the lower bound in the statement. By Theorem 10.1(2) and (3), we have

#R_k(Γ) = (2^{k−1})³ · 2^{k−2},  # Im{R_{k+1}(Γ) → R_k(Γ)} = (2^{k−2})⁴.

By definition, θ ∈ R_k(Γ) is equivalent to θ_k(Y) if and only if θ lies in the image of R_{k+1}(Γ). So, it follows that

#(R_k(Γ)/∼) ≤ #R_k(Γ) − # Im{R_{k+1}(Γ) → R_k(Γ)} + 1 = 7·2^{4(k−2)} + 1.

By Corollary F(2), the number of isomorphism classes of π/π_{k+1} concerned in the statement is bounded above by #(R_k(Γ)/≈), which is in turn bounded above by #(R_k(Γ)/∼). From this, the desired upper bound is obtained.

Indeed, by Corollary F(1), and by the upper bound on #(R_k(Γ)/∼) in the last step of the above proof, it follows that Theorem K holds, which asserts that

2 ≤ #{equivalence classes of length k+1 extensions of {Γ/Γ_λ}_{λ≤k}} ≤ 7·2^{4(k−2)} + 1.
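The count in the proof of Corollary 10.3 rests on the arithmetic identity (2^{k−1})³·2^{k−2} − (2^{k−2})⁴ = 7·2^{4(k−2)}; the following throwaway check (ours, not part of the argument) confirms it for small k:

```python
for k in range(2, 30):
    num_Rk = (2 ** (k - 1)) ** 3 * 2 ** (k - 2)   # #R_k(Γ)
    num_im = (2 ** (k - 2)) ** 4                  # #Im{R_{k+1}(Γ) → R_k(Γ)}
    # the bound used in the proof of Corollary 10.3
    assert num_Rk - num_im + 1 == 7 * 2 ** (4 * (k - 2)) + 1
```

Equivalently, 2^{4k−5} − 2^{4k−8} = 2^{4k−8}(8 − 1) = 7·2^{4(k−2)}.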
We remark that the estimates in Corollary 10.3 (and that in Theorem K) are not sharp. Further investigation of the equivalence relation and automorphism action on R k (Γ) gives us improved bounds. We do not address this here. The rest of this section is devoted to the proof of Theorem 10.1. We begin with the lower central quotient computation. For (a, b) ∈ Z 2 ⊂ Γ, we have [t, (a, b)] = (−2a, −2b). By using this equation inductively, it follows that the kth lower central subgroup of Γ is given by (10.2) Γ k = 2 k−1 Z 2 ⊂ Z 2 ⊂ Γ. Consequently, the lower central quotient is given by Γ/Γ k = (Z 2 k−1 ) 2 ⋊ Z, where Z = t acts on (Z 2 k−1 ) 2 by t(a, b)t −1 = (−a, −b). The remainder of this section is devoted to the proof of Theorem 10.1. In Section 10.1, we compute the homology of Γ/Γ k and prove Theorem 10.1(1). In Section 10.2, we study the cap product structure on Γ/Γ k and prove Theorem 10.1(2) and (3). In Section 10.3, we study the action of Aut(Γ/Γ k ) on R k (Γ) and prove Theorem 10.1(4). Cell structure of B(Γ/Γ k ) and homology To compute homology of Γ/Γ k , we will use cellular chain complexes. Although spectral sequences provide an alternative approach for HNN extensions, the cellular method turns out to be more efficient for our purpose. We will use the following standard facts. (1) For the finite cyclic group Z d = g | g d of order d, B(Z d ) has a cell structure with exactly one i-cell e i in each dimension i ≥ 0. The boundary operator of the cellular chain complex C • (B(Z d ); Z[Z d ]) is given by ∂e 2i+1 = (1 − g)e 2i , ∂e 2i = (1 + g + · · · + g d−1 )e 2i−1 . (2) Let G = A⋊Z be an HNN extension of an abelian group A determined by an automorphism h : A → A, that is, Z = t acts on A by tat −1 = h(a). For a given cell structure of B(A), we may assume that h is realized by a cellular map h : B(A) → B(A). 
Then B(G) has an associated cell structure, whose n-cells are of the form e p × ǫ q with p + q = n, q = 0, 1, e p a p-cell of B(A) and ǫ q (q = 0, 1) an abstract q-cell. The boundary operator of C • (BG; ZG) is given by ∂(e p × ǫ 0 ) = (∂e p ) × ǫ 0 , ∂(e p × ǫ 1 ) = (∂e p ) × ǫ 1 + (−1) p (t · e p × ǫ 0 − h(e p ) × ǫ 0 ). Let d = 2 k−1 and write Z 2 d = (Z d ) 2 for brevity. Take the product B(Z 2 d ) = B(Z d ) × B(Z d ) of the cell complex in (1), and construct B(Γ/Γ k ) = B(Z 2 d ⋊Z) using (2). Cells of dimension n in B(Γ/Γ k ) are of the form e i × e j × ǫ q with i + j + q = n, q = 0, 1. The negation homomorphism h(g) = g −1 on Z d induces (the chain homotopy class of) the chain map C • (Z d ; Z[Z d ]) → C • (Z d ; Z[Z d ]) given by h(e 2k−1 ) = (−1) k g −1 e 2k−1 , h(e 2k ) = (−1) k e 2k . and the monodromy h : B(Z 2 d ) → B(Z 2 d ) is given by h(e i × e j ) = h(e i ) × h(e j ). Using this together with the above (1), (2) and the product boundary formula, it is straightforward to compute the cellular chain complex C • (Γ/Γ k ; Z[Γ/Γ k ]). Applying the augmentation Z[Γ/Γ k ] → Z, it is seen that C • (Γ/Γ k ) = C • (Γ/Γ k ; Z) has the following boundary operators in dimension ≤ 4. 
∂ 1 : e 1 × e 0 × ǫ 0 → 0 e 0 × e 1 × ǫ 0 → 0 e 0 × e 0 × ǫ 1 → 0 , ∂ 2 : e 2 × e 0 × ǫ 0 → d · e 1 × e 0 × ǫ 0 e 1 × e 1 × ǫ 0 → 0 e 0 × e 2 × ǫ 0 → d · e 0 × e 1 × ǫ 0 e 1 × e 0 × ǫ 1 → −2e 1 × e 0 × ǫ 0 e 0 × e 1 × ǫ 1 → −2e 0 × e 1 × ǫ 0 , ∂ 3 : e 3 × e 0 × ǫ 0 → 0 e 2 × e 1 × ǫ 0 → d · e 1 × e 1 × ǫ 0 e 1 × e 2 × ǫ 0 → −d · e 1 × e 1 × ǫ 0 e 0 × e 3 × ǫ 0 → 0 e 2 × e 0 × ǫ 1 → d · e 1 × e 0 × ǫ 1 +2 · e 2 × e 0 × ǫ 0 e 1 × e 1 × ǫ 1 → 0 e 0 × e 2 × ǫ 1 → d · e 0 × e 1 × ǫ 1 +2 · e 0 × e 2 × ǫ 0 , ∂ 4 : e 4 × e 0 × ǫ 0 → d · e 3 × e 0 × ǫ 0 e 3 × e 1 × ǫ 0 → 0 e 2 × e 2 × ǫ 0 → d · e 1 × e 2 × ǫ 0 +d · e 2 × e 1 × ǫ 0 e 1 × e 3 × ǫ 0 → 0 e 0 × e 4 × ǫ 0 → d · e 0 × e 3 × ǫ 0 e 3 × e 0 × ǫ 1 → 0 e 2 × e 1 × ǫ 1 → d · e 1 × e 1 × ǫ 1 e 1 × e 2 × ǫ 1 → −d · e 1 × e 1 × ǫ 1 e 0 × e 3 × ǫ 1 → 0 . The homology groups H i (Z 2 d ⋊ Z) (i ≤ 3) are immediately obtained from this. H 1 (Z 2 d ⋊ Z) = Z 2 2 , (10.3) H 2 (Z 2 d ⋊ Z) = Z 2 2 × Z d , (10.4) H 3 (Z 2 d ⋊ Z) = Z 4 d . (10.5) This shows Theorem 10.1(1). In addition, the four Z d factors of H 3 (Z 2 d ⋊ Z) are respectively generated by (10.6) ξ 1 = e 3 × e 0 × ǫ 0 , ξ 2 = e 2 × e 1 × ǫ 0 + e 1 × e 2 × ǫ 0 , ξ 3 = e 0 × e 3 × ǫ 0 , ζ = e 1 × e 1 × ǫ 1 . Here, the basis element ζ ∈ H 3 (Γ/Γ k ) is the image of the fundamental class [Y ] ∈ H 3 (Y ) under H 3 (Y ) → H 3 (Γ/Γ k ). In other words, θ k (Y ) = ζ. To verify this, observe that Y is a subcomplex of B(Γ/Γ k ) consisting of cells e i × e j × ǫ q with i, j, q ∈ {0, 1}. By computing H i (Y ) using this subcomplex, it is seen that e 1 × e 1 × ǫ 1 generates H 3 (Y ) = Z. Also, viewing B(Z 2 d ) as a subcomplex of B(Z 2 d ⋊ Z), it is seen that the subgroup generated by ξ 1 , ξ 2 and ξ 3 is the isomorphic image of H 3 (Z 2 d ) under the inclusion-induced map. The above chain level computation also enables us to compute the projection-induced homomorphism H 3 (Γ/Γ k+1 ) → H 3 (Γ/Γ k ). First, consider the projection Z 2d → Z d . 
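The homology groups (10.3)–(10.5) can be double-checked mechanically: the torsion coefficients of H_i are the invariant factors (> 1) of an integer matrix for ∂_{i+1}, and the free rank is nullity(∂_i) − rank(∂_{i+1}). The Python sketch below is our own helper (not from the paper); the matrices encode the boundary operators above for k = 3, d = 4, in the basis orderings listed, and recover tH_1 = (Z_2)², H_2 = (Z_2)² × Z_d and H_3 = (Z_d)⁴, together with the free rank one part of H_1 coming from the ǫ¹-direction:

```python
def invariant_factors(M):
    # invariant factors (Smith normal form diagonal) of an integer matrix
    A = [row[:] for row in M]
    R, C = len(A), len(A[0])
    facs, r = [], 0
    while r < R and r < C:
        pivot = min(((abs(A[i][j]), i, j) for i in range(r, R)
                     for j in range(r, C) if A[i][j]), default=None)
        if pivot is None:
            break
        _, pi, pj = pivot
        A[r], A[pi] = A[pi], A[r]
        for row in A:
            row[r], row[pj] = row[pj], row[r]
        dirty = False
        for i in range(r + 1, R):           # clear the pivot column
            q = A[i][r] // A[r][r]
            for j in range(r, C):
                A[i][j] -= q * A[r][j]
            dirty |= A[i][r] != 0
        for j in range(r + 1, C):           # clear the pivot row
            q = A[r][j] // A[r][r]
            for i in range(r, R):
                A[i][j] -= q * A[i][r]
            dirty |= A[r][j] != 0
        if dirty:
            continue                        # smaller remainders remain; re-pick
        facs.append(abs(A[r][r]))
        r += 1
    return facs

rank = lambda M: len(invariant_factors(M))

d = 4  # the case k = 3
# boundary matrices (rows = target basis, columns = source basis)
d2 = [[d, 0, 0, -2, 0],
      [0, 0, d, 0, -2],
      [0, 0, 0, 0, 0]]
d3 = [[0, 0, 0, 0, 2, 0, 0],
      [0, d, -d, 0, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, 2],
      [0, 0, 0, 0, d, 0, 0],
      [0, 0, 0, 0, 0, 0, d]]
d4 = [[d, 0, 0, 0, 0, 0, 0, 0, 0],
      [0, 0, d, 0, 0, 0, 0, 0, 0],
      [0, 0, d, 0, 0, 0, 0, 0, 0],
      [0, 0, 0, 0, d, 0, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, d, -d, 0],
      [0, 0, 0, 0, 0, 0, 0, 0, 0]]
```

Then sorted(invariant_factors(d4)) = [4, 4, 4, 4] recovers H_3 = (Z_d)⁴, invariant_factors(d3) gives the torsion (Z_2)² × Z_d of H_2, and ∂_2 yields torsion (Z_2)² with free rank 3 − rank(∂_2) = 1 for H_1.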
Abuse notation to denote the i-cells of B(Z rd ) and B(Z d ) by the same symbol e i . A routine computation shows that the induced chain map C • (Z 2d ) → C • (Z d ) is given by e i → 2 ⌊i/2⌋ · e i . (For instance, e 1 → e 1 while e 2 → 2e 2 .) From this, it follows that the projection Γ/Γ k+1 = Z 2 2d ⋊ Z −→ Γ/Γ k = Z 2 d ⋊ Z induces the chain map C • (Γ/Γ k+1 ) → C • (Γ/Γ k ) given by e i × e j × ǫ q → 2 ⌊i/2⌋+⌊j/2⌋ · e i × e j × ǫ q . Therefore, H 3 (Γ/Γ k+1 ) → H 3 (Γ/Γ k ) is the homomorphism (10.7) ξ i → 2 · ξ i for i = 1, 2, 3, ζ → ζ. Realizable classes Now we compute the realizable classes in H 3 (Γ/Γ k ). Fix θ ∈ H 3 (Z 2 d ⋊ Z) = H 3 (Γ/Γ k ) where d = 2 k−1 with k ≥ 2 as before. To apply Theorem G, we will investigate the following cap product maps. ∩ θ : tH 2 (Z 2 d ⋊ Z) −→ tH 1 (Z 2 d ⋊ Z) (10.8) ∩ θ : H 1 (Z 2 d ⋊ Z) −→ H 2 (Z 2 d ⋊ Z)/K k−1 (Γ/Γ k ) (10.9) Here, K k−1 (Γ/Γ k ) is the kernel of H 2 (Z 2 d ⋊ Z) = H 2 (Γ/Γ k ) → H 2 (Γ/Γ k−1 ). Case 1. Suppose k ≥ 3, that is, d = 2 k−1 is divisible by 4. Recall that H 3 (Z 2 d ⋊ Z) has basis {ξ 1 , ξ 2 , ξ 3 , ζ} described in (10.6). Let θ ∈ H 3 (Z 2 d ⋊ Z) be a class which is a linear combination of ξ 1 , ξ 2 and ξ 3 . Since each ξ i is of the form • × • × ǫ 0 in (10.6), ξ i lies in the image of the inclusion-induced map i * : H 3 (Z 2 d ) → H 3 (Z 2 d ⋊ Z). Write θ = i * (z) for some z ∈ H 3 (Z 2 d ) . Consider the following commutative diagram. Z 2 2 = tH 2 (Z 2 d ⋊ Z) tH 1 (Z 2 d ⋊ Z) = Z 2 2 Z 2 d = tH 2 (Z 2 d ) tH 1 (Z 2 d ) = Z 2 d ∩θ i * ∩z i * Here, tH 1 (Z 2 d ⋊ Z) = Z 2 2 by (10.3), H 1 (Z 2 d ) = Z 2 d obviously, so tH 2 (Z 2 d ⋊ Z) = Ext(H 1 (Z 2 d ⋊ Z), Z) = Z 2 2 and tH 2 (Z 2 d ) = Z 2 d . Let c ∈ tH 2 (Z 2 d ⋊Z) . Since 2c = 0 and all order 2 elements in tH 2 ( Z 2 d ) = Z 2 d are multiples of d/2, i * (c) is a multiple of d/2. So, i * (i * (c) ∩ z) = c ∩ θ is a multiple of d/2, which is a multiple of 2 since d = 2 k−1 with k ≥ 3. 
It follows that c∩θ = 0, since it lies in tH 1 (Z 2 d ⋊Z) = Z 2 2 . This shows that the cap product (10.8) is zero. Also, the cap product (10.9) is zero since H 1 (Z 2 d ) = 0 and the following diagram commutes. H 1 (Z 2 d ⋊ Z) H 2 (Z 2 d ⋊ Z) 0 = H 1 (Z 2 d ) H 2 (Z 2 d ) ∩θ i * ∩z i * Now, consider a class of the form θ = rζ with r ∈ Z. Since ζ is the image of the fundamental class [Y ] ∈ H 3 (Γ), ζ is realizable, that is, ζ ∈ R k (Γ). By Theorem G, the cap product (10.8) is an isomorphism for θ = ζ. From this it follows that (10.8) is an isomorphism for θ = rζ if and only if r is odd, since tH 1 (Z 2 d ⋊ Z) is a 2-group by (10.4). Also, the cap product (10.9) is surjective for the realizable class θ = ζ ∈ R k (Γ), by Theorem G. From this it follows that (10.9) is surjective for θ = rζ if r is odd, since H 2 (Z 2 d ⋊ Z) is a 2-group. Combine the above conclusions, to obtain the following: for an arbitrary class θ = aξ 1 + bξ 2 + cξ 3 + rζ ∈ H 3 (Z 2 d ⋊ Z), the above (10.8) is an isomorphism and (10.9) is surjective if and only if r is odd. Applying Theorem G, this proves Theorem 10.1(2) for k ≥ 3. Case 2. Suppose k = 2, that is, d = 2 k−1 = 2. In this case, first note that Γ/Γ k−1 = Γ/Γ 1 is trivial by definition, and thus the cap product (10.9) is onto the trivial group. So, it suffices to determine when the cap product (10.8) is an isomorphism. Observe that the semi-direct product Z 2 2 ⋊ Z is equal to the ordinary product Z 2 2 × Z since −a = a in Z 2 . This enables us to compute the cap product directly using the standard product cell structures. To prevent confusion from the semi-direct product case, denote the i-cell of B(Z) = S 1 by u i (i = 0, 1), while cells of B(Z 2 ) are denoted by e i as before. It is well known that Here and in what follows, for brevity, we use the convention that e i = 0 for i < 0 and u i = 0 for i / ∈ {0, 1}. Using this notation, the cap product is given by (e i ) * ∩ e j = (−1) i(j−i) · e j−i , (u i ) * ∩ u j = u j−i . 
Therefore, the cap product on the product B(Z 2 2 × Z) is as follows: (10.10) (e i × e j × u p ) * ∩ (e k × e ℓ × u q ) = (−1) jk+pk+pl · e k−i × e ℓ−j × u q−p . Note that the product cell structure we use here is different from the HNN extension cell structure we used in Case 1. To compute the cap product for the basis elements in (10.6) which are expressed in terms of the cells e i × e j × ǫ q , we need to rewrite them in terms of the product cells e i × e j × u q . The three basis elements ξ 1 , ξ 2 and ξ 3 in (10.6) are already in this form, since ǫ 0 is identical with u 0 . To make the computation for ζ = e 1 × e 2 × ǫ 1 simpler, consider the projection Z 2 ⋊ Z → Z 2 2 ⋊ Z = Z 2 2 × Z. It is straightforward to verify that this induces the homotopy class of a chain map C • (B(Z 2 ⋊ Z); Z[Z 2 ⋊ Z]) −→ C • (B(Z 2 2 × Z); Z[Z 2 2 × Z] ) which is given by e i × e j × ǫ q → e i × e j × u q in dimension i + j + q ≤ 1 and by e 1 × e 0 × ǫ 1 → e 1 × e 0 × u 1 − e 2 × e 0 × u 0 e 0 × e 1 × ǫ 1 → e 0 × e 1 × u 1 − e 0 × e 2 × u 0 e 1 × e 1 × ǫ 1 → e 1 × e 1 × u 1 − e 1 × e 2 × u 0 − e 2 × (g · e 1 ) × u 0 in dimensions 2 and 3. So, applying the augmentation, ζ = e 1 × e 1 × ǫ 1 is expressed, in the product complex C • (B(Z 2 2 × Z); Z), as (10.11) ζ = e 1 × e 1 × u 1 − e 1 × e 2 × u 0 − e 2 × e 1 × u 0 . Now we are ready to compute the cap product (10.8). By a routine computation, it is verified that H 2 (Z 2 2 ⋊ Z) = Z 2 2 with basis {(e 2 × e 0 × u 0 ) * , (e 0 × e 2 × u 0 ) * }, tH 1 (Z 2 2 ⋊ Z) = Z 2 2 with basis {e 1 × e 0 × u 0 , e 0 × e 1 × u 0 }. Let θ = aξ 1 + bξ 2 + cξ 3 + rζ ∈ H 3 (Z 2 2 ⋊ Z). From (10.10) and (10.11), it follows that the cap product ∩ θ in (10.8) is given by a b − r b − r c with respect to the above basis. It follows that ∩ θ is an isomorphism if and only if ac + b + r is odd. This completes the proof of Theorem 10.1(2) for k = 2. 
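Since (b − r)² ≡ b + r (mod 2), the determinant of the 2×2 matrix just obtained is ac + b + r modulo 2, and a matrix over the field Z_2 is invertible exactly when its determinant is 1. A brute-force check over all (a, b, c, r) (illustrative only, not part of the proof):

```python
from itertools import product

for a, b, c, r in product(range(2), repeat=4):
    # matrix of the cap product (10.8) for theta = a ξ1 + b ξ2 + c ξ3 + r ζ,
    # reduced mod 2
    M = [[a, (b - r) % 2],
         [(b - r) % 2, c]]
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % 2
    # invertible over Z_2 exactly when ac + b + r is odd
    assert (det == 1) == ((a * c + b + r) % 2 == 1)
```

This confirms that exactly the classes with ac + b + r = 1 in Z_2 are realizable for k = 2.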
Once R k (Γ) is computed as above, Theorem 10.1(3) follows immediately from the description of the projection-induced homomorphism in (10.7). Automorphism action on the realizable classes As above, let d = 2 k−1 , and write Γ/ Γ k = Z 2 d ⋊ Z. Suppose φ : Z 2 d ⋊ Z → Z 2 d ⋊ Z is an automorphism. It induces an automorphism φ * : H 3 (Γ/Γ k ) → H 3 (Γ/Γ k ), which restricts to a bijection φ * : R k (Γ) → R k (Γ). Our goal is to show Theorem 10.1(4), which says that φ * sends Im{R k+1 (Γ) → R k (Γ)} onto itself bijectively. As the first step, we claim that φ sends the subgroup Z 2 d ⊂ Z 2 d ⋊ Z isomorphically onto Z 2 d itself. To see this, observe that Z 2 d is the kernel of the horizontal composition in the following diagram: Z 2 d ⋊ Z H 1 (Z 2 d ⋊ Z) = Z 2 d × Z H 1 (Z 2 d ⋊ Z; Q) = Q Z 2 d ⋊ Z H 1 (Z 2 d ⋊ Z) = Z 2 d × Z H 1 (Z 2 d ⋊ Z; Q) = Q φ φ * φ * Since vertical arrows are automorphisms, the claim follows from the commutativity of the diagram. Recall from Section 10.1 that the subgroup ξ 1 , ξ 2 , ξ 3 is the (isomorphic) image of H 3 (Z 2 d ) in H 3 (Z 2 d ⋊ Z). So, by the claim, φ * ξ 1 , ξ 2 , ξ 3 is equal to ξ 1 , ξ 2 , ξ 3 . From this it follows that (10.12) φ * 2ξ 1 , 2ξ 2 , 2ξ 3 = 2ξ 1 , 2ξ 2 , 2ξ 3 . In addition, recall from Section 10.1 that ζ ∈ H 3 (Z 2 d ⋊ Z) = H 3 (Γ/Γ k ) is the image of the fundamental class [Y ] ∈ H 3 (Γ). The quotient homomorphism Γ → Γ/Γ k factors through Γ/Γ k+1 , and [Y ] is sent into R k+1 (Γ) by definition. From this, it follows that ζ lies in the image of R k+1 (Γ). By Theorem 10.1(3), Im{R k+1 (Γ) → R k (Γ)} = {η + rζ | η ∈ 2ξ 1 , 2ξ 2 , 2ξ 3 , r ∈ 2Z + 1}. So, φ * (ζ) = η 0 + r 0 ζ for some η 0 ∈ 2ξ 1 , 2ξ 2 , 2ξ 3 and some odd r 0 . Now, for an arbitrary η + rζ ∈ Im{R k+1 (Γ) → R k (Γ)} with η ∈ 2ξ 1 , 2ξ 2 , 2ξ 3 and r odd, we have φ * (η + rζ) = φ * (η) + rη 0 + rr 0 ζ. Here, φ * (η) + rη 0 ∈ 2ξ 1 , 2ξ 2 , 2ξ 3 by (10.12), and rr 0 is obviously odd. This shows that φ * sends Im{R k+1 (Γ) → R k (Γ)} into itself. 
Since φ * is one-to-one and Im{R k+1 (Γ) → R k (Γ)} is a finite set, it follows that φ * restricts to a bijection of Im{R k+1 (Γ) → R k (Γ)} onto itself. This completes the proof of Theorem 10.1(4). Torus bundle example: invariants of transfinite length In this section, we study transfinite invariants over the torus bundle Y defined in (10.1): Y = T 2 × [0, 1]/(h(x), 0) ∼ (x, 0) where h = −1 0 0 −1 . As before, let Γ = π 1 (Y ). Denote the ring of integers localized at the prime 2 by Z (2) = {a/b | a ∈ Z, b ∈ 2Z + 1}. Let Z × (2) = {a/b | a, b ∈ 2Z + 1} be the multiplicative group of units. The following theorem summarizes our computation of transfinite invariants over the torus bundle. Theorem 11.1. For transfinite ordinals κ, the following hold. (1) The third homology of Γ/Γ κ is given by H 3 ( Γ/ Γ κ ) = Z (2) for κ = ω, (Z (2) /Z) × Z for κ ≥ ω + 1. (2) The set of realizable classes in H 3 (Γ/Γ κ ) is given by R κ (Γ) = Z ×(2) for κ = ω, (Z (2) /Z) × {±1} for κ ≥ ω + 1. (3) The map R κ+1 (Γ) → R κ (Γ) induced by Γ/Γ κ+1 → Γ/Γ κ is given by            (Z (2) /Z) × {±1} −→ Z × (2) (x, ǫ) −→ ǫ for κ = ω, (Z (2) /Z) × {±1} −→ (Z (2) /Z) × {±1} (x, ǫ) −→ (x, ǫ) for κ ≥ ω + 1. (4) On R ω (Γ) = Z × (2) , the equivalence relation ∼ is given by r ∼ r ′ if and only if r = ±r ′ . On R κ (Γ) with κ ≥ ω + 1, r ∼ r ′ for all r, r ′ ∈ R κ (Γ). (5) The automorphism group Aut( Γ/ Γ ω ) acts on R ω (Γ) transitively. Consequently R ω (Γ)/≈ is trivial. Combining Theorem 11.1(4) and (5) with Corollary F(1) and Theorem C respectively, the following statements are immediately obtained. Recall that the notion of extensions of a transfinite lower central quotient tower and their equivalence were introduced in Section 2.6. Corollary 11.2. 
(1) The set length ω + 1 extensions, by 3-manifolds, of the length ω tower Γ/ Γ ω → · · · → Γ/Γ 1 = {1} equivalence of length ω + 1 extensions is in one-to-one correspondence with the infinite set (Z × (2) ) >0 := {a/b | a, b ∈ 2Z + 1, a, b > 0}. (2) If M is a closed 3-manifold with π = π 1 (M ) such that π/ π ω ∼ = Γ/ Γ ω , then π/ π κ ∼ = Γ/ Γ κ for every ordinal κ ≥ ω + 1. This illustrates that the classification of tower extensions from length ω to ω + 1 may have a completely different nature from the determination of the isomorphism class of the (ω + 1)st lower central quotient for a given ωth lower central quotient. For the case of the torus bundle group Γ, Corollary 11.2(1) tells us that the former has infinitely many solutions, while the latter has a unique solution by Corollary 11.2(2). In particular, over Γ,μ κ (M ) is trivial for all infinite ordinals κ wheneverμ κ (M ) is defined. We remark that our proof of Theorem L in Section 13 presents modified torus bundle groups, over which there are infinitely many 3-manifolds M with nontrivialμ ω (M ). The remaining part of this section is devoted to the proof of Theorem 11.1. In Section 11.1, we describe the homology localization Γ and its transfinite lower central series. It turns out that the transfinite lower central series stabilizes at length ω + 1 with Γ ω+1 = {1}. In Section 11.2, we study the homology and cap product structure of Γ/ Γ ω and prove Theorem 11.1(1) and (2) for κ = ω. In Sections 11.3 and 11.4, we study the homology and the cap product structure of Γ, respectively, and prove Theorem 11.1(1) and (2) for κ ≥ ω + 1 and Theorem 11.1(3). In Section 11.5, we study the equivalence relation ∼ on R ω (Γ) and prove Theorem 11.1(4) and (5). Homology localization of the torus bundle group We start by reviewing the computation of the homology localization Γ of the torus bundle group Γ, from our earlier work [CO13]. The result expresses Γ as a colimit of finitely presented groups. 
(Such a colimit expression of the localization exists for any finitely presented group by Theorem 3.1(2), but finding an explicit description is nontrivial in general.) For a positive odd integer ℓ, let

(11.1) Γ(ℓ) = ⟨u, v, t | tut⁻¹u, tvt⁻¹v, [u,v]^{ℓ²}, [[u,v],u], [[u,v],v], [[u,v],t]⟩.

It is straightforward to see that Γ(1) = Γ and that the map Γ(ℓ) → Γ(rℓ) sending t, u, and v to t, u^r, and v^r respectively is a well-defined inclusion for all odd r, ℓ ≥ 1. The groups Γ(ℓ) with these inclusions form a direct system. Observe, from the presentation (11.1), that [u,v] ∈ Γ(ℓ) generates a finite cyclic subgroup of order ℓ², which is normal in Γ(ℓ), and the quotient of Γ(ℓ) by this cyclic subgroup is isomorphic to the semi-direct product Z²⋊Z, where Z² is generated by u, v and Z is generated by t, which acts on Z² by negation. Note that the restriction of Γ(ℓ) → Γ(rℓ) to the cyclic subgroup generated by [u,v] is the homomorphism Z_{ℓ²} → Z_{(rℓ)²} given by 1 ↦ r², and Γ(ℓ) → Γ(rℓ) induces a map Z²⋊Z → Z²⋊Z on the quotients, which is given by (a, b, c) ↦ (ra, rb, c). So, if we identify Z_{ℓ²} with (1/ℓ²)Z/Z := ((1/ℓ²)Z)/Z = {a/ℓ² | a ∈ Z}/Z under [u,v] ↦ 1/ℓ², and identify Z²⋊Z with ((1/ℓ)Z)²⋊Z under u ↦ (1/ℓ, 0, 0), v ↦ (0, 1/ℓ, 0), t ↦ (0, 0, 1), then we obtain the following commutative diagram with exact rows, in which the vertical arrows are the natural inclusions (the identity on the Z factor):

(11.2)
1 −→ (1/ℓ²)Z/Z −→ Γ(ℓ) −→ ((1/ℓ)Z)²⋊Z −→ 1
1 −→ (1/(rℓ)²)Z/Z −→ Γ(rℓ) −→ ((1/(rℓ))Z)²⋊Z −→ 1

Taking the colimit, we obtain the following central extension.

(11.3) 1 −→ Z_(2)/Z −→ Γ̂ −→ Z²_(2)⋊Z −→ 1.

Using this, we can compute the transfinite lower central subgroups of Γ̂.

Lemma 11.4. The first transfinite lower central subgroup Γ̂_ω is equal to the subgroup Z_(2)/Z. For κ ≥ ω + 1, Γ̂_κ is trivial.

Proof. We claim that Γ(ℓ)_ω is the subgroup (1/ℓ²)Z/Z. To prove this, we will first verify Γ(ℓ)_ω ⊂ (1/ℓ²)Z/Z. Recall from (10.2) that the rth lower central subgroup (Z²⋊Z)_r of Γ = Z²⋊Z is equal to (2^{r−1}Z)².
So Z²⋊Z is residually nilpotent. That is, (Z²⋊Z)_ω is trivial. From this and (11.2), it follows that Γ(ℓ)_ω lies in the subgroup (1/ℓ²)Z/Z. For the reverse inclusion, first verify that u^{2^{r−1}} ∈ Γ(ℓ)_r by induction, using the identity [t, u^{2^{r−1}}] = u^{−2^r}. So [u^{2^{r−1}}, v] = [u,v]^{2^{r−1}} lies in Γ(ℓ)_{r+1}. Since [u,v] has order ℓ² and ℓ is odd, it implies that [u,v] ∈ Γ(ℓ)_{r+1}. Since this holds for all r, it follows that [u,v] ∈ Γ(ℓ)_ω. In other words, (1/ℓ²)Z/Z ⊂ Γ(ℓ)_ω. This shows the claim that Γ(ℓ)_ω = (1/ℓ²)Z/Z. From the claim, the promised conclusion Γ̂_ω = Z_(2)/Z is obtained by taking colimit. Since [u,v] is central, Γ(ℓ)_{ω+1} is trivial, and thus Γ(ℓ)_κ is trivial for all κ ≥ ω + 1. Take colimit to obtain that Γ̂_κ is trivial for all κ ≥ ω + 1.

11.2. Third homology and realizable classes for κ = ω

The goal of this subsection is to investigate the homology and cap product structure of Γ̂/Γ̂_ω and prove Theorem 11.1(1) and (2) for κ = ω. We begin with the homology computation for Γ̂/Γ̂_ω. By (11.3) and Lemma 11.4, we have Γ̂/Γ̂_ω = Z²_(2)⋊Z, where Z acts on Z²_(2) by negation. The Lyndon-Hochschild-Serre spectral sequence for the HNN extension

1 −→ Z²_(2) −→ Γ̂/Γ̂_ω −→ Z −→ 1

gives the Wang exact sequence

(11.4) H_3(Z²_(2)) −→ H_3(Γ̂/Γ̂_ω) −→ H_2(Z²_(2)) −(1−t_*)→ H_2(Z²_(2)) −→ H_2(Γ̂/Γ̂_ω) −→ H_1(Z²_(2)) −(1−t_*)→ H_1(Z²_(2))

where t_* : H_i(Z²_(2)) → H_i(Z²_(2)) is the map induced by negation (a, b) → (−a, −b) on Z²_(2). Using that Z²_(2) is the colimit of ((1/d)Z)² ≅ Z², it is straightforward to compute the following homology groups of Z²_(2): H_3(Z²_(2)) = 0, H_2(Z²_(2)) = Z_(2), H_1(Z²_(2)) = Z²_(2). Moreover, 1 − t_* on H_2(Z²_(2)) is zero, while 1 − t_* on H_1(Z²_(2)) is multiplication by 2. From this and (11.4), it follows that

(11.5) H_3(Γ̂/Γ̂_ω) = H_2(Z²_(2)) = Z_(2), H_2(Γ̂/Γ̂_ω) = H_2(Z²_(2)) = Z_(2), H_1(Γ̂/Γ̂_ω) = Z_2² × Z.

This shows Theorem 11.1(1) for κ = ω.
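The step in the proof of Lemma 11.4 that passes from [u,v]^{2^{r−1}} ∈ Γ(ℓ)_{r+1} to [u,v] ∈ Γ(ℓ)_{r+1} uses that 2^{r−1} is a unit modulo the odd order ℓ² of [u,v], so this power generates the same cyclic subgroup. A quick check (ours, purely illustrative):

```python
from math import gcd

for l in range(1, 40, 2):              # odd l
    n = l * l                          # order of [u, v] in Γ(l)
    for r in range(1, 10):
        e = 2 ** (r - 1)
        assert gcd(e, n) == 1          # 2^{r-1} is a unit mod l^2
        # hence [u,v]^e generates the whole cyclic group of order n
        assert {(e * i) % n for i in range(n)} == set(range(n))
```

The same observation is what makes the odd-torus-bundle modification in Section 13 behave differently at the prime 2.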
Now we investigate cap products to compute the set of realizable classes R_ω(Γ). First note that θ = 1 ∈ Z_(2) = H_3(Γ̂/Γ̂_ω) is the image of the fundamental class [Y] ∈ H_3(Γ), and thus it lies in R_ω(Γ) by definition. So, by Theorem G, for θ = 1,

(11.6) ∩θ : tH_2(Γ̂/Γ̂_ω) = Z_2^2 → tH_1(Γ̂/Γ̂_ω) = Z_2^2

is an isomorphism. Also,

(11.7) ∩θ : H_1(Γ̂/Γ̂_ω) = Z → H_2(Γ̂/Γ̂_ω)/Ker{H_2(Γ̂/Γ̂_ω) → H_2(Γ̂/Γ̂_k)}

is surjective for all finite k. That is, Im{∩1} = H_2(Γ̂/Γ̂_ω)/Ker. Consider an arbitrary θ := a/d ∈ Z_(2) = H_3(Γ̂/Γ̂_ω) with d odd. Then, since the codomain of (11.6) is a finite abelian 2-group, the cap product ∩θ = ad·(∩1) in (11.6) is an isomorphism if and only if a is odd. Moreover, if a is odd, then the cap product in (11.7) satisfies

Im{∩(a/d)} = (a/d)·Im{∩1} = (a/d)·(H_2(Γ̂/Γ̂_ω)/Ker) = H_2(Γ̂/Γ̂_ω)/Ker

where the last equality holds since a/d is invertible in H_2(Γ̂/Γ̂_ω) = Z_(2). So, by applying Theorem G, θ = a/d ∈ H_3(Γ̂/Γ̂_ω) lies in R_ω(Γ) if and only if a is odd. This proves Theorem 11.1(2) for κ = ω.

11.3. Third homology for κ ≥ ω + 1

The goal of this subsection is to investigate the homology of Γ̂ and prove Theorem 11.1(1) for κ ≥ ω + 1. Recall that Γ̂/Γ̂_κ = Γ̂ for κ ≥ ω + 1 by Lemma 11.4, and that Γ̂ = colim Γ(ℓ) by Theorem 11.3, where Γ(ℓ) is the group defined by (11.1). We restate (11.1) for the reader's convenience.

(11.1) Γ(ℓ) = ⟨u, v, t | tut^{-1}u, tvt^{-1}v, [u,v]^{ℓ^2}, [[u,v],u], [[u,v],v], [[u,v],t]⟩

To understand the homology of Γ̂, it is useful to consider an HNN extension described below. Let A(ℓ) be the subgroup of Γ(ℓ) generated by u and v, following [CO13]. From the presentation (11.1), it is immediately seen that A(ℓ) is a normal subgroup of Γ(ℓ), and Γ(ℓ)/A(ℓ) is the infinite cyclic group generated by t:

(11.8) 1 → A(ℓ) → Γ(ℓ) → Z → 1

Note that Γ(ℓ) → Γ(rℓ) induces an isomorphism Γ(ℓ)/A(ℓ) → Γ(rℓ)/A(rℓ) = Z sending t to t.
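The direct-system maps Γ(ℓ) → Γ(rℓ) send u, v to u^r, v^r, and hence [u,v] to [u^r, v^r] = [u,v]^{r^2}. This commutator identity can be checked in a small finite model of the subgroup generated by u and v (a Heisenberg-type group whose commutator [u,v] is central of order ℓ^2); the model below is a sketch of my own, not code from the paper. An element u^a v^b [u,v]^c is encoded as (a, b, c mod l2).

```python
# Finite Heisenberg-type model: u^a v^b [u,v]^c with [u,v] central of order l2,
# and group law matching the one used for the group A later in the paper.

def heisenberg_ops(l2):
    def mul(g, h):
        a, b, c = g
        p, q, s = h
        return (a + p, b + q, (c + s - b * p) % l2)
    def comm(g, h):                        # [g, h] = g h g^{-1} h^{-1}
        a, b, c = g
        p, q, s = h
        g_inv = (-a, -b, (-c - a * b) % l2)
        h_inv = (-p, -q, (-s - p * q) % l2)
        return mul(mul(g, h), mul(g_inv, h_inv))
    return mul, comm

for ell, r in [(3, 5), (5, 3), (7, 9)]:
    l2 = (r * ell) ** 2                    # order of [u,v] in A(r*ell)
    mul, comm = heisenberg_ops(l2)
    u_to_r, v_to_r = (r, 0, 0), (0, r, 0)  # images of u and v
    # [u,v] maps to [u^r, v^r] = [u,v]^{r^2}:
    assert comm(u_to_r, v_to_r) == (0, 0, r * r % l2)
```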
Let A = colim A(ℓ), and take the colimit of (11.8) to obtain the following:

1 → A → Γ̂ → Z → 1

The Lyndon-Hochschild-Serre spectral sequence for this HNN extension gives the following Wang exact sequence:

(11.9) ··· → H_i(A) --(1-t_*)--> H_i(A) → H_i(Γ̂) → H_{i-1}(A) --(1-t_*)--> H_{i-1}(A) → ···

where t_* is induced by the conjugation by t on A. To compute the homology of Γ̂ using (11.9), we first compute the homology of A. From (11.1), it follows that [u,v] ∈ A(ℓ) generates a finite cyclic normal subgroup of order ℓ^2, and A(ℓ) is a central extension, by this cyclic subgroup, of the free abelian group of rank two generated by u and v. Since u → u^r, v → v^r and [u,v] → [u^r, v^r] = [u,v]^{r^2} under Γ(ℓ) → Γ(rℓ), we have the following commutative diagram, whose left and right vertical maps are multiplication by r^2 and r respectively:

(11.10)
1 → Z_{ℓ^2} → A(ℓ) → Z^2 → 1
1 → Z_{(rℓ)^2} → A(rℓ) → Z^2 → 1

Consider the Lyndon-Hochschild-Serre spectral sequence of the top row of (11.10):

(11.11) E^2_{p,q} = H_p(Z^2) ⊗ H_q(Z_{ℓ^2}) ⟹ H_n(A(ℓ))

The E^2 and E^∞ = E^3 pages for q ≤ 3 are as follows (columns p = 0, 1, 2; rows q = 3, 2, 1, 0 from top):

(11.12)
E^2:
q = 3: Z_{ℓ^2}, Z^2_{ℓ^2}, Z_{ℓ^2}
q = 2: 0, 0, 0
q = 1: Z_{ℓ^2}, Z^2_{ℓ^2}, Z_{ℓ^2}
q = 0: Z, Z^2, Z
with the differential d^2_{2,0} : E^2_{2,0} → E^2_{0,1};
E^∞ = E^3:
q = 3: Z_{ℓ^2}, Z^2_{ℓ^2}, Z_{ℓ^2}
q = 2: 0, 0, 0
q = 1: 0, Z^2_{ℓ^2}, Z_{ℓ^2}
q = 0: Z, Z^2, ℓ^2 Z.

All entries in (11.12) are immediately obtained from (11.11), possibly except E^3_{p,q} for (p, q) = (2, 0) and (0, 1). To verify these, observe that E^∞_{0,1} = E^3_{0,1} must vanish since H_1(A(ℓ)) = Z^2 and E^∞_{1,0} = E^2_{1,0} = Z^2. From this it follows that the differential d^2_{2,0} is surjective, so its kernel E^3_{2,0} is the subgroup ℓ^2 Z of Z. From the E^∞ page, it follows that H_2(A(ℓ)) is an extension of Z^2_{ℓ^2} = H_1(Z^2) ⊗ H_1(Z_{ℓ^2}) by ℓ^2 Z ⊂ Z = H_2(Z^2). From (11.10), it follows that the map H_1(Z^2) → H_1(Z^2) induced by A(ℓ) → A(rℓ) is multiplication by r. Taking colimits and comparing the Wang sequence (11.9) with the corresponding sequence for ℓ = 1, we obtain the following commutative diagram with exact rows:

0 → H_3(Y) → H_2(A(1)) → 0
0 → H_3(A) → H_3(Γ̂) → H_2(A) → 0

where the horizontal arrow H_3(Y) → H_2(A(1)) and the vertical map H_2(A(1)) → H_2(A) are isomorphisms, and the remaining vertical map is H_3(Y) → H_3(Γ̂). From this, it is straightforward to see that the composition of the inverses of the two isomorphisms and H_3(Y) → H_3(Γ̂) is a splitting.
It gives an identification

(11.18) H_3(Γ̂) = (Z_(2)/Z) × Z

such that (0, 1) ∈ (Z_(2)/Z) × Z represents the image of the fundamental class [Y]. This shows Theorem 11.1(1) for κ ≥ ω + 1. For use in the next subsection, we compute H_2(Γ̂) here. By taking the abelianization, it is straightforward to see that H_1(A) = Z^2_(2) and t_* on H_1(A) is the negation. So, from the Wang sequence (11.9), it follows that

(11.19) H_2(Γ̂) = H_2(A) = H_2(A(1)) = H_2(Z^2) = Z.

11.4. Realizable classes for κ ≥ ω + 1

In this subsection, we study the set of realizable classes R_κ(Γ) for κ ≥ ω + 1, to prove Theorem 11.1(2) for κ ≥ ω + 1 and Theorem 11.1(3). To determine realizable classes in H_3(Γ̂/Γ̂_κ) = H_3(Γ̂) for κ ≥ ω + 1, consider the cap products

(11.20) ∩θ : tH_2(Γ̂) → tH_1(Γ̂) = tH_1(Γ) = Z_2^2,

(11.21) ∩θ : H_1(Γ̂) → H_2(Γ̂)/Ker{H_2(Γ̂) → H_2(Γ̂/Γ̂_ω)}

for θ ∈ H_3(Γ̂) = (Z_(2)/Z) × Z. By (11.19), H_2(Γ̂) = H_2(Z^2) = Z. By (11.5), H_2(Γ̂/Γ̂_ω) = H_2(Z^2_(2)) = Z_(2). Since Z^2 ↪ Z^2_(2) induces the standard inclusion on these H_2, Ker{H_2(Γ̂) → H_2(Γ̂/Γ̂_ω)} is trivial, and thus the codomain of (11.21) is equal to H_2(Γ̂) = Z. From this, it follows that the cap product (11.21) is zero for θ = (x, 0) ∈ H_3(Γ̂) = (Z_(2)/Z) × Z, since x is torsion. By Theorem G, the cap product (11.21) is surjective for θ = (0, 1), since this θ lies in R_κ(Γ). So, for a general class θ = (x, r) ∈ H_3(Γ̂) = (Z_(2)/Z) × Z, (11.21) is surjective if and only if r = ±1. By Theorem G, it is the case if and only if θ ∈ R_κ(Γ). This proves Theorem 11.1(2) for κ = ω + 1. For κ > ω + 1, the computation proceeds along the same lines. The only exception is that we need to replace Ker{H_2(Γ̂) → H_2(Γ̂/Γ̂_ω)} in (11.21) by Ker{H_2(Γ̂) → H_2(Γ̂/Γ̂_λ)} with λ < κ. But, since the kernel is already trivial for λ = ω by the above computation, the same argument applies to the case of κ > ω + 1 as well. This completes the proof of Theorem 11.1(2) for κ ≥ ω + 1.
To compute the map R_{ω+1}(Γ) → R_ω(Γ) induced by the projection Γ̂ = Γ̂/Γ̂_{ω+1} → Γ̂/Γ̂_ω, recall from (11.18) that H_3(Γ̂) = (Z_(2)/Z) × Z, where the Z factor is identified with H_2(A) = H_2(Z^2) via the Wang sequence (11.9). Also, recall from (11.5) that H_3(Γ̂/Γ̂_ω) = H_2(Z^2_(2)) = Z_(2). So, H_3(Γ̂) = (Z_(2)/Z) × Z → H_3(Γ̂/Γ̂_ω) = Z_(2) is given by (a, r) → r, and R_{ω+1}(Γ) → R_ω(Γ) is the restriction. This shows Theorem 11.1(3) for κ = ω. Since Γ̂/Γ̂_κ = Γ̂ for κ ≥ ω + 1, Theorem 11.1(3) for κ ≥ ω + 1 is obviously true.

11.5. Equivalence relation and automorphism action for κ = ω

In this subsection, we investigate the equivalence relation ∼ on R_ω(Γ) and prove Theorem 11.1(4) and (5). Recall that R_ω(Γ) = Z^×_(2) ⊂ H_3(Γ̂/Γ̂_ω) = Z_(2) by Theorem 11.1(1) and (2). Fix θ = p/q ∈ R_ω(Γ), where p, q ∈ 2Z + 1. To determine the equivalence class of θ as a subset of R_ω(Γ), we will use an automorphism of Γ̂/Γ̂_ω, which is equal to Z^2_(2) ⋊ Z by Lemma 11.4. Define φ_{p/q} : Z^2_(2) ⋊ Z → Z^2_(2) ⋊ Z by φ_{p/q}(a, b, r) = ((p/q)·a, b, r) for a, b ∈ Z_(2), r ∈ Z. It is straightforward to verify that φ_{p/q} is an automorphism with inverse φ_{p/q}^{-1} = φ_{q/p}. We claim that φ_{p/q} induces 1 → p/q = θ on H_3(Z^2_(2) ⋊ Z) = Z_(2). To see this, observe that the restriction of φ_{p/q} to the subgroup Z^2_(2) induces an automorphism of H_2(Z^2_(2)) = Z_(2) given by 1 → p/q. Since H_3(Z^2_(2) ⋊ Z) = H_2(Z^2_(2)) by (11.5), the claim follows from this. To avoid confusion, for a closed 3-manifold M with π = π_1(M) equipped with an isomorphism f : π̂/π̂_ω → Γ̂/Γ̂_ω, denote the invariant θ_ω(M) by θ_ω(M, f) temporarily. Then, for M = Y, the above claim implies that θ_ω(Y, φ_{p/q}) = θ, since 1 ∈ Z_(2) = H_3(Γ̂/Γ̂_ω) represents the image of the fundamental class [Y]. So, by definition, the equivalence class I_θ = {θ′ | θ′ ∼ θ} of θ in R_ω(Γ) is equal to the image of the following composition.
R_{ω+1}(Γ) = (Z_(2)/Z) × {±1} → R_ω(Γ) = Z^×_(2) --(φ_{p/q})_*--> R_ω(Γ) = Z^×_(2)

Here, the projection-induced map R_{ω+1}(Γ) → R_ω(Γ) is (a, ±1) → ±1 by Theorem 11.1(3). From this, it follows that I_θ = {θ, -θ}. This completes the proof of Theorem 11.1(4). In addition, using the above argument, it is straightforward to show Theorem 11.1(5), which asserts that the action of Aut(Γ̂/Γ̂_ω) on the set of realizable classes R_ω(Γ) is transitive. Indeed, for an arbitrary θ = p/q ∈ Z^×_(2) = R_ω(Γ), since the above automorphism φ_{p/q} on Γ̂/Γ̂_ω satisfies φ_{p/q}(1) = θ, it follows that θ and 1 have the same orbit. So the action is transitive.

12. Torus bundle example: the universal θ-invariant

We continue the study of the localization of the fundamental group Γ of the torus bundle Y defined in (10.1). The goal of this section is to understand the final invariant θ defined over Γ̂ and prove Theorem J, which we state again below for the reader's convenience. Recall that R(Γ) is the set of realizable values of θ. Theorem J says: R(Γ)/Aut(Γ̂) is infinite. This detects the existence of infinitely many distinct homology cobordism classes of closed 3-manifolds M with π = π_1(M) such that π̂ ≅ Γ̂, and thus θ_κ(M) is defined and vanishes in Coker{R_{κ+1}(Γ) → R_κ(Γ)} for all ordinals κ. In particular, for every ordinal κ, the Milnor invariant μ̄_κ(M) vanishes for these 3-manifolds M. We begin with computation of realizable classes in H_3(Γ̂). Recall that H_3(Γ̂) = (Z_(2)/Z) × Z by Theorem 11.1(1).

Theorem 12.1. R(Γ) = (Z_(2)/Z) × {±1} ⊂ H_3(Γ̂).

Proof. In the argument used to prove Theorem 11.1(2) in Section 11.4, we have shown that a homology class θ ∈ H_3(Γ̂) = (Z_(2)/Z) × Z lies in (Z_(2)/Z) × {±1} if and only if ∩θ : tH_2(Γ̂) → tH_1(Γ̂) is an isomorphism and ∩θ : H_1(Γ̂) → H_2(Γ̂) is an epimorphism. By Theorem H, it follows that R(Γ) = (Z_(2)/Z) × {±1}.

The next theorem describes the action of Aut(Γ̂) on H_3(Γ̂) and R(Γ). To state the result, we use the following notation.
For a group G, denote the abelianization by G^{ab}. Recall from Section 11.3 that Γ̂ is an HNN extension of a subgroup A such that A^{ab} = Z^2_(2) with basis {u, v}. We will show, in Lemma 12.3, that if f : Γ̂ → Γ̂ is an automorphism, then f induces an automorphism f_A ∈ GL(2, Z_(2)) on A^{ab} = Z^2_(2) satisfying det f_A = ±1, and f induces an automorphism f_Z on the quotient Γ̂/A = Z. Define

δ_f := det f_A ∈ {1, -1}, ε_f := f_Z(1) ∈ {1, -1}.

One readily sees that Aut(Γ̂) → {-1, 1}^2 ≅ Z_2^2 given by f → (δ_f, ε_f) is a surjective group homomorphism onto the Klein 4-group. Indeed, for a given pair (δ, ε) ∈ {1, -1}^2, the automorphism Γ → Γ defined by u → u^δ, v → v, t → t^ε gives rise to an automorphism f : Γ̂ → Γ̂ satisfying (δ_f, ε_f) = (δ, ε).

Theorem 12.2. Suppose f is an automorphism on Γ̂. Then the induced automorphism f_* on H_3(Γ̂) = (Z_(2)/Z) × Z is given by f_*(a, n) = (δ_f · a, δ_f · ε_f · n). Consequently, there are bijections

H_3(Γ̂)/Aut(Γ̂) ≈ {(a, n) ∈ Z_(2) × Z | 0 ≤ a < 1/2, n ≥ 0},
R(Γ)/Aut(Γ̂) ≈ {a ∈ Z_(2) | 0 ≤ a < 1/2}.

The first statement says that the natural map Aut(Γ̂) → Aut(H_3(Γ̂)) factors through the Klein 4-group {1, -1}^2, via f → (δ_f, ε_f). The two bijections in Theorem 12.2 are immediately obtained from the first statement. The first sentence of Theorem J, which asserts that R(Γ)/Aut(Γ̂) is infinite, is an immediate consequence of Theorem 12.2. Also, from this, the second statement of Theorem J follows immediately by Theorem 8.1. The remaining part of this section is devoted to the proof of the first statement of Theorem 12.2. Recall that Γ(ℓ) is the subgroup of Γ̂ given by (11.1), Γ(1) = Γ, and Γ̂ is the colimit of Γ(ℓ). If f : Γ̂ → Γ̂ is an automorphism, then for each odd ℓ ≥ 1, f(Γ(ℓ)) ⊂ Γ(rℓ) for some odd r ≥ 1, since Γ(ℓ) is finitely generated. The restriction f : Γ(ℓ) → Γ(rℓ) induces isomorphisms on H_1 and H_2, since so does the colimit inclusion Γ(ℓ) → Γ̂ for every ℓ.
This leads us to investigate 2-connected homomorphisms f : Γ(ℓ) → Γ(rℓ). We begin with a characterization. Recall from the presentation (11.1) that Γ(ℓ) has generators u, v and t. Let A(ℓ) be the subgroup generated by u and v, as done in Section 11.3.

Lemma 12.3. A homomorphism f : Γ(ℓ) → Γ(rℓ) is 2-connected if and only if it is of the form

(12.1) f(t) = t^ε u^p v^q [u,v]^j, f(u) = u^a v^b [u,v]^m, f(v) = u^c v^d [u,v]^n

where ε, a, b, c, d, j, m, n are integers satisfying

(12.2) ε = ±1, ad - bc = ±r^2, 2m ≡ aq - bp + ab, 2n ≡ cq - dp + cd mod (rℓ)^2.

Often we will abuse the notation to denote by f the automorphism of Γ̂ induced by a 2-connected map f : Γ(ℓ) → Γ(rℓ). Note that if f is given by (12.1), then it induces the automorphism (1/r)[a c; b d] on H_1(A) = Z^2_(2) and 1 → ε on Z = Γ̂/A. So, we have

(12.3) δ_f = (ad - bc)/r^2, ε_f = ε.

Proof of Lemma 12.3. Observe that any g ∈ Γ(ℓ) can be written as g = t^ε u^p v^q [u,v]^j, by using the defining relations in (11.1). Also, t^ε u^p v^q [u,v]^j lies in the subgroup A(ℓ) if and only if ε = 0. We claim that f sends A(ℓ) to A(rℓ). From the claim, it follows that f(t), f(u) and f(v) are of the form of (12.1) for some exponents (without enforcing (12.2) for now). To show the claim, consider the first rational derived subgroup of a group G, which is defined to be the kernel of the natural map G → H_1(G) ⊗ Q. That is, it is the minimal normal subgroup of G such that the quotient is abelian and torsion free. It is straightforward to see that the first rational derived subgroup of Γ(ℓ) is A(ℓ); since a 2-connected homomorphism preserves first rational derived subgroups, the claim follows.

Lemma 12.5. Suppose f : Γ(1) → Γ(r) is a 2-connected homomorphism given by (12.1), and let f′ : Γ(1) → Γ(r) be the homomorphism obtained from f by dropping the factor [u,v]^j, that is,

(12.4) f′(t) = t^ε u^p v^q, f′(u) = f(u), f′(v) = f(v).

Then f′ is 2-connected, and f and f′ induce the same homomorphism f_* = f′_* : H_3(Γ(1)) → H_3(Γ(r)).

Proof. By Lemma 12.3, the assignment (12.4) gives a well-defined 2-connected homomorphism, since the conditions in (12.2) do not involve the exponent j. Recall that BΓ(1) = Y = T^2 × [0,1]/(h(z), 0) ∼ (z, 1), where h : T^2 → T^2 = S^1 × S^1 is the monodromy h(ζ, ξ) = (ζ^{-1}, ξ^{-1}). Here S^1 is regarded as the unit circle in C. Use (1, 1, 0) as a basepoint of BΓ(1).
Choose maps BΓ(1) → BΓ(r) realizing f and f′, and denote them by f and f′, abusing the notation. Let T^3 = T^2 × S^1, and use (1, 1, 1) ∈ T^3 as a basepoint. Denote by x, y the standard basis of π_1(T^2) = Z^2, and denote by s the generator of π_1(S^1) = Z, so that x, y and s form a basis of π_1(T^3). The element [u,v] ∈ Γ(r) is central by (11.1), and f(u), f(v) ∈ Γ(r) commute since u, v ∈ Γ(1) commute. It follows that there is a map g : T^3 → BΓ(r) which induces π_1(T^3) = Z^3 → Γ(r) given by x → f(u)^{-1}, y → f(v)^{-1} and s → [u,v]^j. By a homotopy if necessary, we may assume g|_{T^2 × 1} is equal to f′|_{T^2 × 1}, since f = f′ on u and v. Define F : BΓ(1) → BΓ(r) to be the composition

F : BΓ(1) = T^2 × [0,1]/(h(z), 0) ∼ (z, 1)
  --q--> T^2 × [0, 1/2]/(h(z), 0) ∼ (z, 1/2) ∪_{T^2 × 1/2} T^2 × [1/2, 1]/(z, 1/2) ∼ (z, 1) = Y ∪_{T^2} T^3
  --f′ ∪ g--> BΓ(r)

where q is the quotient map induced by (z, t) → (z, t). Observe that the induced map F : Γ(1) → Γ(r) satisfies F(u) = f′(u) = f(u), F(v) = f′(v) = f(v), and F(t) = f′(t)g(s) = f(t). It follows that f and F are homotopic. Therefore, on H_3, we have f_*[Y] = F_*[Y] = f′_*[Y] + g_*[Y] ∈ H_3(Γ(r)). So it suffices to prove that g_* : H_3(T^3) → H_3(Γ(r)) is zero. To show this, first observe that g : π_1(T^3) → Γ(r) sends π_1(T^3) = Z^3 to the subgroup A(r). In addition, it induces a morphism of central extensions:

(12.5)
0 → Z → Z^3 → Z^2 → 0
0 → Z_{r^2} → A(r) → Z^2 → 0

Here the top row corresponds to the trivial fibration S^1 ↪ T^3 → T^2, and the bottom row is the exact sequence in (11.10). The leftmost and rightmost vertical maps are multiplication by j and the matrix [a c; b d], by the definition of g and the description (12.1) of f; the middle vertical map is induced by g. The map g induces a morphism of the spectral sequences. In particular, on E^2_{2,1}, g induces a map

(12.6) Z = H_2(Z^2) ⊗ H_1(Z) → H_2(Z^2) ⊗ H_1(Z_{r^2}) = Z_{r^2}.
This is scalar multiplication by (ad - bc)j, by the above descriptions of the vertical maps in (12.5). By Lemma 12.3, ad - bc = ±r^2. It follows that (12.6) is a zero map. Since E^2_{2,1} for Z^3 is equal to H_3(T^3), it follows that g_* : H_3(T^3) → H_3(A(r)) is zero.

The next step of our reduction is described by the following lemma.

Lemma 12.6. Suppose f : Γ = Γ(1) → Γ(r) is a 2-connected homomorphism. Then there is a 2-connected homomorphism f′ : Γ → Γ(r) such that f′(t) = t^{ε_f}, δ_f = δ_{f′} and f_* = f′_* on H_3(Γ).

Proof. Let ε = ε_f, and apply Lemmas 12.3 and 12.5 to assume

(12.7) f(t) = t^ε u^p v^q, f(u) = u^a v^b [u,v]^m, f(v) = u^c v^d [u,v]^n.

We claim that we may assume that both p and q are even in (12.7). To show this, consider φ : Γ → Γ given by φ(t) = tu, φ(u) = u, φ(v) = v. It is a well-defined 2-connected homomorphism by Lemma 12.3. Moreover, it induces the identity on H_3(Γ) = Z. This can be seen geometrically, by inspecting the fundamental class [Y] under an appropriate map BΓ = Y → Y realizing φ. Alternately, use the Wang sequence for the extension 1 → A(1) → Γ → Z to identify H_3(Γ) with H_2(A(1)) = H_2(Z^2) = Z, and use that φ|_{A(1)} is the identity. Now, since φ_* = id on H_3, replacing f by compositions of f with φ and its analogues does not change f_* on H_3, and suitable compositions make p and q even; the claim follows.

13. Nontrivial transfinite Milnor invariants

The goal of this section is to prove Theorem L stated in Section 2.11, which gives an infinite family of 3-manifolds with vanishing Milnor invariants of finite length, but distinct nontrivial transfinite Milnor invariants of length ω. As mentioned in Section 2.11, we do so by using a family of 3-manifolds {M_d | d ∈ 2Z + 1}: M_d is defined to be the torus bundle T^2 × [0,1]/(h_d(z), 0) ∼ (z, 1), with monodromy h_d = [-1 d; 0 -1]. Note that M_d is obtained from the original torus bundle Y studied in the previous sections, by modifying the (1,2)-entry of the monodromy from 0 to d. Fix an odd integer d. We will use M_d as the basepoint manifold to which other 3-manifolds M_r are compared.
That is, let Γ = π_1(M_d). Our main goal of this section is to prove Theorem L, which asserts the following:

(1) For every odd integer r, μ̄_k(M_r) is defined and vanishes for all finite k. Moreover, π̂_1(M_r)/π̂_1(M_r)_ω ≅ Γ̂/Γ̂_ω, so μ̄_ω(M_r) is defined.

(2) But, for odd r and s, μ̄_ω(M_r) = μ̄_ω(M_s) if and only if |r/s| is a square in Z^×_(2). In particular, μ̄_ω(M_r) is nontrivial if and only if |r/d| is not a square.

(3) Indeed, the set of realizable values of the Milnor invariant of length ω, R_ω(Γ)/≈, is in 1-1 correspondence with Z^×_(2)/±(Z^×_(2))^2. Here ±(Z^×_(2))^2 := {±α^2 | α ∈ Z^×_(2)}.

For every a/b ∈ Z^×_(2) with a, b ∈ 2Z + 1, we have a/b ≡ |ab| mod ±(Z^×_(2))^2 (multiplicatively), and in the prime factorization of the integer |ab|, one can assume that each prime has exponent at most one, modulo squares. So R_ω(Γ)/≈ is in bijection with the set of odd positive integers which have no repeated primes in their factorization. To show Theorem L, we will compute the realizable classes and the equivalence relations ∼ and ≈ for the modified torus bundle case. In fact, both the arguments for the computation and their outcomes are very close to the original torus bundle case d = 0. However, the modified case has a small but important difference: the action of Aut(Γ̂/Γ̂_ω) on R_ω(Γ) turns out to have smaller orbits. See Theorem 13.1(2) below and compare it with Theorem 11.1(5). From this the nontriviality of the length ω Milnor invariants will be obtained. More specifically, we will show the following.

Theorem 13.1. Let Γ = π_1(M_d) as above, d odd. Then, the following hold.

(1) Each of H_3(Γ̂/Γ̂_ω), H_3(Γ̂/Γ̂_{ω+1}), R_ω(Γ), R_{ω+1}(Γ), the map R_{ω+1}(Γ) → R_ω(Γ) and the equivalence relation ∼ on R_ω(Γ) is identical with that given in Theorem 11.1: that is, H_3(Γ̂/Γ̂_ω) = Z_(2), H_3(Γ̂/Γ̂_{ω+1}) = (Z_(2)/Z) × Z, R_ω(Γ) = Z^×_(2), R_{ω+1}(Γ) = (Z_(2)/Z) × {±1}.
The map R_{ω+1}(Γ) → R_ω(Γ) is (x, ε) → ε, and on R_ω(Γ) = Z^×_(2), θ ∼ θ′ if and only if θ = ±θ′.

(2) The orbits of the action of Aut(Γ̂/Γ̂_ω) on R_ω(Γ) = Z^×_(2) are given by: φ(θ) = θ′ for some φ ∈ Aut(Γ̂/Γ̂_ω) if and only if θ/θ′ is a square. Consequently, on R_ω(Γ), θ ≈ θ′ if and only if θ/θ′ = ±α^2 for some α ∈ Z^×_(2).

(3) For every odd integer r, there is an isomorphism f : π̂_1(M_r)/π̂_1(M_r)_ω → Γ̂/Γ̂_ω such that θ_ω(M_r) = θ_ω(M_r, f) = r/d ∈ Z^×_(2) = R_ω(Γ). So μ̄_ω(M_r) = r/d = rd and μ̄_ω(M_{rd}) = r in Z^×_(2)/±(Z^×_(2))^2 = R_ω(Γ)/≈.

Theorem L follows immediately from Theorem 13.1(2) and (3). The remaining part of this section is devoted to the proof of Theorem 13.1. In Section 13.1, we compute the transfinite lower central quotients Γ̂/Γ̂_ω and Γ̂/Γ̂_{ω+1}. In Section 13.2, we prove Theorem 13.1(1). In Section 13.3, we prove Theorem 13.1(2) and (3).

13.1. Transfinite lower central series quotients of the localization

The group Γ = π_1(M_d) is the semi-direct product Z^2 ⋊ Z = Z^2 ⋊_{h_d} Z, where the generator t of Z acts on Z^2 by h_d = [-1 d; 0 -1]. In what follows, we will compute Γ̂/Γ̂_ω and Γ̂/Γ̂_{ω+1}. Recall that the group A = colim_{ℓ odd} A(ℓ) was defined in the beginning of Section 11.3. One can write

A = {x^α y^β [x,y]^γ | α, β ∈ Z_(2), γ ∈ Z_(2)/Z}

where the group operation is given by

x^α y^β [x,y]^γ · x^λ y^μ [x,y]^ζ = x^{α+λ} y^{β+μ} [x,y]^{γ+ζ-βλ}.

The group A has Z^2 as a subgroup, which is generated by x and y. (Note that [x,y], the commutator of the elements x and y, is trivial in A, since integer powers of [x,y] vanish in Z_(2)/Z.) Also, Z_(2)/Z = {[x,y]^γ} ⊂ A is a central subgroup, and A/(Z_(2)/Z) = Z^2_(2). Recall that, for the d = 0 case, we proved that Γ̂ = Γ̂/Γ̂_{ω+1} is equal to the semi-direct product A ⋊_{h_0} Z, where the action of the generator t of Z on A is given by h_0 = [-1 0; 0 -1], that is, t·x = x^{-1}, t·y = y^{-1}. See Section 11.3. We will prove a similar result for the modified torus bundle case. For this purpose, we need to extend the action of t = h_d on Z^2 to A.
Being an extension, t·x^n = x^{-n}, t·y^n = x^{dn} y^{-n} must be satisfied for every integer n, but it can be seen that a naive attempt to define t·x^{1/n} = x^{-1/n}, t·y^{1/n} = x^{d/n} y^{-1/n} does not give a group homomorphism t : A → A. Instead, we use the following lemma, which can be verified by a direct computation. To state it, we need the fact that the multiplication by 2, Z_(2)/Z → Z_(2)/Z, is an isomorphism, so γ/2 ∈ Z_(2)/Z is well-defined for every γ ∈ Z_(2)/Z.

Lemma 13.2. The map t : A → A defined by

t·(x^α y^β [x,y]^γ) = x^{-α+dβ} y^{-β} [x,y]^{γ + dβ^2/2}

is a group isomorphism which extends t = h_d : Z^2 → Z^2.

Define a semi-direct product A ⋊ Z = A ⋊_{h_d} Z by using the action of t in the lemma. The subgroup Z_(2)/Z is central in A ⋊ Z, and the quotient (A ⋊ Z)/(Z_(2)/Z) is the semi-direct product Z^2_(2) ⋊ Z = Z^2_(2) ⋊_{h_d} Z, which is defined using the action of t = h_d on Z^2_(2). In what follows, we omit h_d in the semi-direct product notation.

Theorem 13.3. Γ̂/Γ̂_ω = Z^2_(2) ⋊ Z, and Γ̂/Γ̂_{ω+1} = A ⋊ Z. The natural maps of Γ = Z^2 ⋊ Z into Γ̂/Γ̂_ω and Γ̂/Γ̂_{ω+1} are the inclusions.

Indeed, it can also be shown that Γ̂ = A ⋊ Z and Γ̂_{ω+1} = {1}, by modifying the arguments used in [CO13]. Since we do not use this stronger fact, we will just provide a proof of Theorem 13.3 only.

Proof of Theorem 13.3. First, we will compute Γ̂/Γ̂_ω using Theorem 3.6. For this, we need to compute the classical module localization S^{-1}Z^2, where S = {s(t) ∈ Z[t^{±1}] | s(1) = ±1}. As a Z[t^{±1}]-module, Z^2 is presented by the matrix tI - h_d = [t+1 -d; 0 t+1]. So, Z^2 is annihilated by the determinant, t^2 + 2t + 1. Observe that for each s(t) ∈ S, s(t)s(t^{-1}) ∈ S. So, S^{-1}Z^2 is equal to T^{-1}Z^2, where T = {±s(t)s(t^{-1}) | s(t) ∈ S}. An element p(t) ∈ T satisfies p(t) = p(t^{-1}), so p(t) = a_0 + Σ_{i>0} a_i (t + t^{-1})^i with p(1) = a_0 + Σ_{i>0} 2^i a_i = ±1.
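The homomorphism property asserted in Lemma 13.2 can be checked directly. The following sketch (my own, not code from the paper) models elements x^α y^β [x,y]^γ of A as triples of rationals, with the third coordinate taken mod 1 (so it lives in Q/Z, which contains Z_(2)/Z), and verifies t(gh) = t(g)t(h) on many sample elements.

```python
from fractions import Fraction as F
from itertools import product

# Group law of A: (a,b,c) * (l,m,z) = (a+l, b+m, c+z-b*l), third coord mod 1.
def mul(g, h):
    a, b, c = g
    l, m, z = h
    return (a + l, b + m, (c + z - b * l) % 1)

# The map of Lemma 13.2: t.(x^a y^b [x,y]^c) = x^{-a+db} y^{-b} [x,y]^{c+db^2/2}.
def t_action(g, d):
    a, b, c = g
    return (-a + d * b, -b, (c + d * b * b / 2) % 1)

d = 3                                  # an odd monodromy parameter
vals = [F(1, 3), F(-2, 5), F(0), F(4, 7)]      # exponents with odd denominator
gammas = [F(0), F(1, 3), F(2, 5)]
pairs = list(product(vals, repeat=2))
for a, b in pairs:
    for l, m in pairs:
        for c in gammas:
            for z in gammas:
                g, h = (a, b, c), (l, m, z)
                # t is a homomorphism: t(g*h) = t(g)*t(h)
                assert t_action(mul(g, h), d) == mul(t_action(g, d), t_action(h, d))
```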
Since t + t^{-1} acts on Z^2 by multiplication by -2, it follows that multiplication by p(t) on Z^2 is equal to multiplication by a_0 + Σ_{i>0} (-2)^i a_i, which is an odd integer. Conversely, an arbitrary odd integer r can be written as r = ±1 + 4k, so the multiplication by r on Z^2 is equal to multiplication by s(t) := ±1 - k(t + t^{-1} - 2), which lies in S. It follows that S^{-1}Z^2 = (2Z+1)^{-1}Z^2 = Z^2_(2). Also, for the augmentation ideal I = {p(t) ∈ Z[t^{±1}] | p(1) = 0}, an element p(t) ∈ I^{2k} is of the form p(t) = (t-1)^{2k} q(t) = (t + t^{-1} - 2)^k · t^k q(t). Since t + t^{-1} acts on Z^2 by multiplication by -2, it follows that I^{2k}Z^2 ⊂ 4^k Z^2, and consequently, ∩_{k<∞} I^k Z^2 = 0. So Γ = Z^2 ⋊ Z is residually nilpotent. For later use, note that the same argument proves that Z^2_(2) ⋊ Z is residually nilpotent too. Now, by Theorem 3.6, the closure in the completion is given by

Γ̂/Γ̂_ω = closure of Z^2 ⋊ Z = S^{-1}Z^2 ⋊ Z = Z^2_(2) ⋊ Z.

This proves the first conclusion. To compute Γ̂/Γ̂_{ω+1}, we claim the following:

(1) H_2(A ⋊ Z) = H_2(A) = H_2(Z^2) = H_2(Z^2 ⋊ Z) = Z,
(2) (A ⋊ Z)_ω = Z_(2)/Z and (A ⋊ Z)_{ω+1} = {1}.

Before proving the claims, we will derive the second conclusion of the theorem. Since Z_(2)/Z is a central abelian subgroup of A ⋊ Z and the quotient (A ⋊ Z)/(Z_(2)/Z) = Z^2_(2) ⋊ Z is a local group, A ⋊ Z is local, by [CO13, Theorem A.2, Lemma A.4]. So, the inclusion Γ = Z^2 ⋊ Z ↪ A ⋊ Z induces Γ̂ → A ⋊ Z. We will apply the standard Stallings argument to Γ̂ → A ⋊ Z. We have already shown that Γ̂/Γ̂_ω = Z^2_(2) ⋊ Z. Combining this with claim (2) above, it follows that Γ̂/Γ̂_ω ≅ (A ⋊ Z)/(A ⋊ Z)_ω. Since the composition H_2(Γ) → H_2(Γ̂) → H_2(A ⋊ Z) is an isomorphism by the first claim, H_2(Γ̂) → H_2(A ⋊ Z) is surjective. So, by Stallings' work [Sta65], Γ̂/Γ̂_{ω+1} ≅ (A ⋊ Z)/(A ⋊ Z)_{ω+1}. By the second claim, it follows that Γ̂/Γ̂_{ω+1} ≅ A ⋊ Z. Therefore, to complete the proof, it only remains to show the claims. We begin with the first claim, which concerns H_2.
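Two arithmetic facts used in this step can be verified with small matrix computations, working with h_d in place of the abstract variable t. The sketch below is my own and not code from the paper; it checks that t + t^{-1} acts on Z^2 as multiplication by -2, and that s(t) = ±1 - k(t + t^{-1} - 2) acts as multiplication by the odd integer r = ±1 + 4k.

```python
# 2x2 integer matrix helpers.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lincomb(sa, A, sb, B):   # sa*A + sb*B
    return [[sa * A[i][j] + sb * B[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]

for d in [1, 3, 5, -7]:
    h = [[-1, d], [0, -1]]
    h_inv = [[-1, -d], [0, -1]]          # inverse of h (det h = 1)
    assert matmul(h, h_inv) == I
    tpt = lincomb(1, h, 1, h_inv)        # the action of t + t^{-1}
    assert tpt == [[-2, 0], [0, -2]]     # multiplication by -2, for every d
    for r in [3, -5, 7, 11, -13]:        # odd integers
        eps = 1 if r % 4 == 1 else -1
        k = (r - eps) // 4               # r = eps + 4k
        tm2 = lincomb(1, tpt, -2, I)     # action of t + t^{-1} - 2 (= -4I)
        s_at_h = lincomb(eps, I, -k, tm2)   # s(t) = eps - k(t + t^{-1} - 2)
        assert s_at_h == [[r, 0], [0, r]]   # s(t) acts as multiplication by r
```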
In fact, H_*(A ⋊ Z) can be computed using the Wang sequence

(13.1) ··· → H_2(A) --(1-t_*)--> H_2(A) → H_2(A ⋊ Z) → H_1(A) --(1-t_*)--> H_1(A) → ···

similarly to Section 11.3. By (11.14), H_2(A) = H_2(Z^2) = Z. Since h_d has determinant one, t_* = id on H_2(A) and thus 1 - t_* = 0. Also, H_1(A) = Z^2_(2), and t_* on H_1(A) is given by h_d. So, 1 - t_* = [2 -d; 0 2] on H_1(A), and this is injective. Therefore, from (13.1), it follows that H_2(A ⋊ Z) = H_2(A) = Z. We remark that while our monodromy h_d (d odd) is different from the d = 0 case, H_2(A ⋊ Z) remains the same as the d = 0 case given in (11.19). For the second claim, we proceed similarly to the proof of Lemma 11.4. We have already shown that (A ⋊ Z)/(Z_(2)/Z) = Z^2_(2) ⋊ Z is residually nilpotent. So, (A ⋊ Z)_ω ⊂ Z_(2)/Z. For the reverse inclusion, observe that [x^β, t] = x^{-2β}, so by induction, x^{2^k β} ∈ (A ⋊ Z)_{k+1}. Thus [x,y]^{2^k β} = [x^{2^k β}, y] lies in (A ⋊ Z)_{k+2} for all β ∈ Z_(2). For every γ ∈ Z_(2), there is β ∈ Z_(2) such that 2^k β ≡ γ mod Z, since 2 is invertible in Z_(2)/Z. It follows that [x,y]^γ = [x,y]^{2^k β} ∈ (A ⋊ Z)_{k+2}. Since this holds for every k, [x,y]^γ ∈ (A ⋊ Z)_ω. This shows that (A ⋊ Z)_ω = Z_(2)/Z. Finally, since [x,y]^γ is central in A ⋊ Z, (A ⋊ Z)_{ω+1} = {1}. This completes the proof of the claims.

13.2. Homology and realizable classes

We will prove Theorem 13.1(1). To compute H_3(Γ̂/Γ̂_ω), we use the Wang sequence for Γ̂/Γ̂_ω = Z^2_(2) ⋊ Z, similarly to Section 11.2. Indeed, the Wang sequence was already given in (11.4):

0 = H_3(Z^2_(2)) → H_3(Γ̂/Γ̂_ω) → H_2(Z^2_(2)) --(1-t_*)--> H_2(Z^2_(2)) → H_2(Γ̂/Γ̂_ω) → H_1(Z^2_(2)) --(1-t_*)--> H_1(Z^2_(2)) → H_1(Γ̂/Γ̂_ω) → Z → 0

Here, the difference from Section 11.2 is that t_* is induced by h_d = [-1 d; 0 -1]. So, 1 - t_* on H_2(Z^2_(2)) = Z_(2) is zero, and 1 - t_* on H_1(Z^2_(2)) = Z^2_(2) is [2 -d; 0 2].
It follows that

(13.3) H_3(Γ̂/Γ̂_ω) = H_2(Z^2_(2)) = Z_(2), H_2(Γ̂/Γ̂_ω) = H_2(Z^2_(2)) = Z_(2), H_1(Γ̂/Γ̂_ω) = Z_4 × Z.

Note that H_i(Γ̂/Γ̂_ω) remains the same as that of the original torus bundle in Section 11.2 for i = 2, 3, while H_1(Γ̂/Γ̂_ω) is altered since d is odd. Compare (13.3) with (11.5). But H_1(Γ̂/Γ̂_ω) is still a finite abelian 2-group. By this, the analysis of the cap products (11.6) and (11.7) (which uses Theorem G) in Section 11.2 applies to our case without any modification. This shows that R_ω(Γ) = Z^×_(2). To compute H_3(Γ̂/Γ̂_{ω+1}), we proceed similarly to Section 11.3. For Γ̂/Γ̂_{ω+1} = A ⋊ Z, we have the Wang sequence

(11.9) ··· → H_i(A) --(1-t_*)--> H_i(A) → H_i(Γ̂/Γ̂_{ω+1}) → H_{i-1}(A) --(1-t_*)--> H_{i-1}(A) → ···

where t_* is again induced by h_d = [-1 d; 0 -1]. We have H_3(A) = H_3(Z_(2)/Z) = Z_(2)/Z by (11.16). Since the subgroup Z_(2)/Z ⊂ A is generated by the elements [x,y]^γ, on which our t acts trivially, t_* on H_3(A) is the identity. Also, since H_2(A) = H_2(Z^2) = Z by (11.14), t_* on H_2(A) is the identity too. It follows that H_3(Γ̂/Γ̂_{ω+1}) = (Z_(2)/Z) × Z, the same as (11.18) in Section 11.3. Also, for θ ∈ H_3(Γ̂/Γ̂_{ω+1}), the analysis of the cap products

∩θ : tH_2(Γ̂/Γ̂_{ω+1}) → tH_1(Γ̂/Γ̂_{ω+1}) = tH_1(Γ),
∩θ : H_1(Γ̂/Γ̂_{ω+1}) → H_2(Γ̂/Γ̂_{ω+1})/Ker{H_2(Γ̂/Γ̂_{ω+1}) → H_2(Γ̂/Γ̂_ω)}

in Section 11.4 carries over to our case without modification, using that tH_1(Γ) = Z_4 is a finite abelian 2-group. This shows that R_{ω+1}(Γ) = (Z_(2)/Z) × {±1} ⊂ H_3(Γ̂/Γ̂_{ω+1}). Note that we have shown that R_ω(Γ) and R_{ω+1}(Γ) are the same as those of the original torus bundle case (d = 0). So, by the argument in the last paragraph, R_{ω+1}(Γ) → R_ω(Γ) is also the same as in the original torus bundle case. To complete the proof of Theorem 13.1(1), it remains to determine the equivalence relation ∼ on R_ω(Γ) = Z^×_(2) ⊂ H_3(Γ̂/Γ̂_ω) = Z_(2).
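The Smith normal form computation behind the H_1 term in (13.3) can be checked numerically. The following sketch (my own, not code from the paper) confirms that over Z the matrix 1 - t_* = [2 -d; 0 2] with d odd has invariant factors (1, 4), so its cokernel is Z_4; odd numbers are invertible in Z_(2), so the same holds over Z_(2). For d = 0 one gets (2, 2) and cokernel Z_2 x Z_2, matching H_1 in (11.5).

```python
from math import gcd

def invariant_factors_2x2(A):
    """Invariant factors (d1, d2) with d1 | d2 of an integer 2x2 matrix
    with nonzero determinant: d1 = gcd of the entries, d1*d2 = |det|."""
    (a, b), (c, d) = A
    det = abs(a * d - b * c)
    d1 = gcd(gcd(a, b), gcd(c, d))
    return (d1, det // d1)

for d in [1, 3, 5, 7, -9]:
    # d odd: Smith normal form diag(1, 4), cokernel Z_4
    assert invariant_factors_2x2([[2, -d], [0, 2]]) == (1, 4)

# d = 0 (the Section 11 case): diag(2, 2), cokernel Z_2 x Z_2
assert invariant_factors_2x2([[2, 0], [0, 2]]) == (2, 2)
```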
Let θ = a/b ∈ R_ω(Γ) = Z^×_(2) with a, b odd integers. To compute the equivalence class of θ, we will first find a 3-manifold realizing θ. Recall that M_d is the modified torus bundle, with monodromy h_d = [-1 d; 0 -1], and that Γ = π_1(M_d). For another odd integer r, which will be specified later, consider the 3-manifold M_r. By Theorem 13.3 applied to r instead of d, we have π̂_1(M_r)/π̂_1(M_r)_ω = Z^2_(2) ⋊_{h_r} Z. Because the following observation will also be used later, we state it as a lemma.

Lemma 13.4. Let α, β ∈ Z^×_(2). Then φ = φ_{α,β} : Z^2_(2) ⋊_{h_r} Z → Z^2_(2) ⋊_{h_d} Z given by φ(a, b, n) = (α·a, β·b, n) is a group isomorphism if and only if dβ = rα. When it is the case, the induced isomorphism

φ_* : H_3(π̂_1(M_r)/π̂_1(M_r)_ω) = Z_(2) → H_3(Γ̂/Γ̂_ω) = Z_(2)

is multiplication by αβ.

Proof. Since the monodromies are h_d = [-1 d; 0 -1] and h_r = [-1 r; 0 -1] and φ = [α 0; 0 β] on Z^2_(2), our φ is an isomorphism between the semi-direct products if and only if the matrix identity h_d φ = φ h_r holds. From this, the first conclusion follows immediately, using the condition d ≠ 0. Since H_3(π̂_1(M_r)/π̂_1(M_r)_ω) = H_2(Z^2_(2)) = Z_(2) by (13.3), and since the restriction φ|_{Z^2_(2)} is [α 0; 0 β], the induced map φ_* on H_3 is multiplication by det φ|_{Z^2_(2)} = αβ.

For our purpose, let r = abd, α = 1/b and β = a. By Lemma 13.4, φ = φ_{α,β} is an isomorphism, and φ_* on H_3 is multiplication by a/b. Since the fundamental class [M_r] is equal to 1 ∈ Z_(2) = H_3(π̂_1(M_r)/π̂_1(M_r)_ω), it follows that the value of the invariant θ_ω(M_r) = θ_ω(M_r, φ) defined using the isomorphism φ is equal to the class θ = a/b ∈ R_ω(Γ). Therefore, the equivalence class of θ with respect to ∼ is equal to the image of the composition

R_{ω+1}(π_1(M_r)) = (Z_(2)/Z) × {±1} → R_ω(π_1(M_r)) = Z^×_(2) --φ_*, ≅--> R_ω(Γ) = Z^×_(2)

by Definition 2.4.
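The matrix identity in the proof of Lemma 13.4 is easy to verify numerically. The following sketch (my own, not code from the paper) checks that φ = diag(α, β) satisfies h_d φ = φ h_r exactly when dβ = rα, using the choice α = 1, β = r/d from the text.

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for d, r in [(1, 3), (3, 5), (5, -7), (3, 3)]:
    h_d = [[F(-1), F(d)], [F(0), F(-1)]]
    h_r = [[F(-1), F(r)], [F(0), F(-1)]]
    # alpha = 1, beta = r/d satisfies d*beta = r*alpha:
    phi = [[F(1), F(0)], [F(0), F(r, d)]]
    assert matmul(h_d, phi) == matmul(phi, h_r)
    # perturbing beta breaks the identity (here d != 0 is essential):
    bad = [[F(1), F(0)], [F(0), F(r, d) + 1]]
    assert matmul(h_d, bad) != matmul(bad, h_r)
```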
Since the first arrow is (x, ±1) → ±1 and the second arrow is multiplication by θ = a/b, it follows that θ ∼ θ′ if and only if θ′ = ±θ. This completes the proof of Theorem 13.1(1).

13.3. Automorphism action and Milnor invariants

Recall that Γ = π_1(M_d) where d is fixed. We will prove Theorem 13.1(2). Suppose φ : Γ̂/Γ̂_ω → Γ̂/Γ̂_ω = Z^2_(2) ⋊ Z is an automorphism. Similarly to the proof of Lemma 12.3, we have that φ restricts to an automorphism on the subgroup Z^2_(2), since Z^2_(2) is the first rational derived subgroup of Γ̂/Γ̂_ω. Write φ|_{Z^2_(2)} = [α β; γ δ] ∈ GL(2, Z_(2)). For the generator t of the Z factor of Z^2_(2) ⋊ Z, we have that φ(0, t) = (v, t^ε) for some v ∈ Z^2_(2) and ε ∈ {±1}, since φ induces an automorphism on the quotient (Γ̂/Γ̂_ω)/Z^2_(2) = Z. Since φ is a group homomorphism on the semi-direct product with respect to the monodromy h_d, the matrix identity φh_d = h_d^ε φ must be satisfied. By comparing the matrix entries, it implies that φ|_{Z^2_(2)} = [α β; 0 εα]. (Here one uses the assumption that d is nonzero!) From this, it follows that the induced automorphism φ_* on H_3(Γ̂/Γ̂_ω) = H_2(Z^2_(2)) = Z_(2) is equal to multiplication by ε · det φ|_{Z^2_(2)} = α^2. Note that α ∈ Z^×_(2) since φ|_{Z^2_(2)} is invertible over Z_(2). Conversely, the above computation also shows that for any square α^2 ∈ Z^×_(2), there is an automorphism φ on Γ̂/Γ̂_ω = Z^2_(2) ⋊ Z such that φ_* on H_3 is multiplication by α^2. For instance, by setting β = 0 and ε = 1, the automorphism φ given by φ|_{Z^2_(2)} = [α 0; 0 α] and φ(0, t) = (0, t) has that property. From the above, Theorem 13.1(2) follows immediately: for θ, θ′ ∈ Z^×_(2) = R_ω(Γ), φ(θ) = θ′ for some φ ∈ Aut(Γ̂/Γ̂_ω) if and only if θ/θ′ is a square in Z^×_(2). By the above computation of the equivalence relation ∼ and by Definition 2.4, it also follows that θ ≈ θ′ in R_ω(Γ) if and only if θ/θ′ = ±α^2 for some α ∈ Z^×_(2). Finally, we will prove Theorem 13.1(3). Recall that d is the fixed odd integer.
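The entry comparison above can be confirmed by a brute-force search. The sketch below (my own, not code from the paper) checks, for a sample of matrices over Z_(2), that φ = [α β; γ δ] satisfies φh_d = h_d^ε φ exactly when γ = 0 and δ = εα, and that in that case ε · det φ = α^2.

```python
from fractions import Fraction as F
from itertools import product

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

d = 3                                   # an odd monodromy parameter
h_d = [[F(-1), F(d)], [F(0), F(-1)]]
h_d_inv = [[F(-1), F(-d)], [F(0), F(-1)]]
vals = [F(-1), F(0), F(1), F(1, 3), F(5)]

for alpha, beta, gamma, delta in product(vals, repeat=4):
    phi = [[alpha, beta], [gamma, delta]]
    for eps, h_eps in [(1, h_d), (-1, h_d_inv)]:
        # phi h_d = h_d^eps phi holds exactly when gamma = 0, delta = eps*alpha
        holds = matmul(phi, h_d) == matmul(h_eps, phi)
        assert holds == (gamma == 0 and delta == eps * alpha)
        if holds:
            det = alpha * delta - beta * gamma
            assert eps * det == alpha ** 2   # phi_* on H_3 is mult. by alpha^2
```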
Let r be an arbitrary odd integer. Let θ = r/d ∈ R ω ( Γ/ Γ ω ) = Z × (2) . Apply Lemma 13.4, for (α, β) = (1, r/d), to obtain the isomorphism φ = φ α,β : π 1 (M r )/ π 1 (M r ) ω −→ Γ/ Γ ω . Furthermore, Lemma 13.4 says that φ * on H 3 is multiplication by αβ = r/d ∈ Z × (2) . Since the fundamental class is [M r ] = 1 ∈ Z (2) , we have θ ω (M r ) = θ ω (M r , φ) = φ * (1) = r/d. This completes the proof of Theorem 13.1, the last theorem of this paper.

Questions

We list some questions which naturally arise from this work. (1) Can one interpret the invariants θ k (M ) and μ k (M ) of finite length (i.e. k < ∞) as Gusarov-Vassiliev finite type invariants in an appropriate sense? We remark that θ k (M ) and μ k (M ) are invariant under Habiro-Gusarov clasper surgery, which is now often called Y k -equivalence. More precisely, the following hold. Fix a closed 3-manifold group Γ, and let M and M ′ be two closed 3-manifolds which are Y k−1 -equivalent. Then θ k (M ) is defined if and only if θ k (M ′ ) is defined, and when they are defined, θ k (M ) = θ k (M ′ ) in R k (Γ)/Aut(Γ/Γ k ) if M and M ′ are Y k -equivalent. The following three questions are relevant. (2) Can one extract the invariants θ k (M ) and μ k (M ) of finite length from (some variant of) the Kontsevich integral, or related quantum invariants? (3) Our results strongly suggest that there should be a notion of transfinite type invariants. Can one interpret the transfinite length invariants θ κ (M ) and μ κ (M ) as finite type invariants? If not, can one generalize the notion of finite type invariants of 3-manifolds to a suitable notion of "transfinite type" invariants, so that the invariants θ κ (M ) of transfinite length can be viewed as invariants of transfinite type? (4) Can we extend the definition of the Kontsevich integral (of 3-manifolds or links) to a transfinite version? The following addresses the (non)triviality of the transfinite invariants of a given length.
(5) For every (countable) ordinal κ, is there a closed 3-manifold group Γ for which the sets R κ (Γ)/∼ and R κ (Γ)/≈ have more than one element? Milnor's original work [Mil57] combined with Orr's result [Orr89] tells us that the answer to (5) is affirmative for finite κ. See also Theorem K in this paper. Theorems I and L show that the answer is affirmative for κ = ω. (6) Is there a countable ordinal ̟ such that if Γ is a 3-manifold group and M is a 3-manifold equipped with an isomorphism π 1 (M )/ π 1 (M ) ̟ → Γ/ Γ ̟ for which μ ̟ (M ) vanishes (over Γ), then μ κ (M ) is defined and vanishes (over Γ) for every κ > ̟? (7) Do θ κ and μ κ (M ) (with κ either finite or transfinite) reveal new information on link concordance? Regarding (7), consider the following. Fix rational numbers a 1 /b 1 , . . . , a m /b m ∈ Q. For a given m-component link L, perform Dehn filling on the exterior of the link, with slopes a i /b i , to obtain a closed 3-manifold. Call it M L . Fix a link L 0 , and let Y = M L0 , Γ = π 1 (Y ). To compare a given link L with the link L 0 , consider the invariants θ κ (M L ) and μ κ (M L ), over the group Γ, as link invariants. It seems particularly interesting whether θ κ and μ κ of transfinite length give a new nontrivial link invariant in this way. In addition, the finite length case may also have some interesting potential applications. Recall from Section 10 that there are examples for which the finite length invariants θ k live in finite abelian groups, and thus have torsion values. (8) Do θ k and μ k of finite length give new torsion-valued link concordance invariants? The following is closely related to (8). In [CST12] (see also the survey [CST11] of a series of related papers), Conant, Schneiderman and Teichner proposed a higher order version of the classical Arf invariant for links. It may be viewed as certain 2-torsion valued information extracted from Whitney towers and gropes in 4-space.
A key conjecture in the theory of Whitney towers is whether the higher order Arf invariants are nontrivial. (9) Are the invariants θ k and μ k related to the higher order Arf invariants? More specifically, can one show the conjectural nontriviality of the higher order Arf invariants using these invariants (of certain 3-manifolds associated to links)? Also, the existence of transfinite Milnor invariants suggests the existence of transfinite Arf invariants. (10) Do transfinite Arf invariants of links and 3-manifolds exist? (11) If so, are these determined by the invariants, or some analogue of the invariants, θ κ ?

(i) Determination of lower central series quotients: μ κ (M ) inductively determine the isomorphism classes of the lower central series quotients, as do Milnor's link invariants. Furthermore, this inductive process extends to transfinite ordinals. (ii) Homology cobordism invariance: μ κ (M ) is invariant under homology cobordism, as are Milnor's link invariants. (iii) Specialization to Milnor's link invariants: μ κ (M ) with finite κ determines Milnor's link invariants, when M is the zero-surgery on a link in S 3 . (iv) Obstructions to gropes: like those of links, μ κ (M ) is an obstruction to building gropes.

The quotient R ω (Γ)/≈ can be naturally identified with the set of odd positive integers r with no repeated primes in the factorization. Such an r corresponds to the value of the length ω Milnor invariant μ ω (M rd ) of the 3-manifold M rd . See Theorem 13.1. So, the modified torus bundles explicitly realize nontrivial values of the transfinite Milnor invariant μ ω over the group Γ = π 1 (M d ).

The proof of Theorem 3.1(1) is obtained by a routine standard argument using the universal properties given in the definitions. We omit the details. For instance, see [Cha08, Proposition 6.4], [Lev89a, Proposition 5]. The proof of Theorem 3.1(2) is not straightforward and uses the actual construction of the localization. See [Cha08, Proposition 6.6], [Lev89a, Proposition 6].

Lemma 3.3.
If a group homomorphism π → G induces an epimorphism H 1 (π) → H 1 (G) and if G is finitely generated, then it induces an epimorphism π → G. The proof of Lemma 3.3 depends on an equation-based approach to the localization. In what follows, we give a quick review of definitions and results we need. Fix a group G. Following the idea of Levine [Lev89a] (see also Farjoun-Orr-Shelah [FOS89]), consider a system S = {x i = w i } of equations of the form x i = w i (x 1 , . . . , x n ), i = 1, . . . , n.

Definition 4.1. Two closed 3-manifolds M and N are homology cobordant if there is a 4-manifold W such that ∂W = M ⊔ −N and the inclusions induce isomorphisms H * (M ) ∼ = H * (W ) ∼ = H * (N ). Such a 4-manifold W is called a homology cobordism.

(not necessarily the above f and f • φ), since the orbit of θ κ (−) under the action of Aut( Γ/ Γ κ ) is independent of the choice of the isomorphism. This shows Theorem A(3). The left and middle squares commute since ∂[W ] = [∂W ] = [M ] ⊕ [N ]; the right square commutes since i * is equal to the projection onto the first factor. Since there is obviously a lift, the obstruction o M vanishes. On the other hand, by the above argument applied to this case, o M vanishes if and only if the image of id A in H 2 (W ; A) lies in the kernel of H 2 (W ; A) i * − → H 2 (M ; A). By Claim 2, it follows that the image of id A is contained in the kernel of H 2 (W ; A) → H 2 (N ; A) as well. That is, the obstruction o N vanishes too. This proves Claim 3. We show that (0) and (1), formulated in Section 2.4, are equivalent. Suppose (1) holds. Then M × [0, 1] is a grope cobordism of class κ + 1, and thus (0) holds. For the converse, suppose W is a grope cobordism of class κ + 1 given in (0). Since Coker{H 2 (M ) → H 2 (W )} and Coker{H 2 (N ) → H 2 (W )} are generated by (κ + 1)-gropes and H 1 (M ) ∼ = H 1 (W ) ∼ = H 1 (N ).

Lemma 7.3 (Turaev [Tur84, Lemma 2.2]).
Suppose g : N → X is a map of a closed 3-manifold N to a CW-complex X with finitely generated π 1 (X) such that the cap product ∩ g * [N ] : tH 2 (X) −→ tH 1 (X) is an isomorphism. Then (N, g) is bordant, over X, to a pair (M, f ) of a closed 3-manifold M and a map f : M → X which induces an isomorphism f * : H 1 (M ) ∼ = − → H 1 (X). By the fact tH 2 (−) = Ext(H 1 (−), Z) and by the hypothesis (1), ∩ ψ ′ * [N ] is an isomorphism too. Now apply Lemma 7.3 to (N, ψ ′ ) to produce a closed 3-manifold M endowed with a map M → B(Q) which induces an isomorphism on H 1 . Let φ : M → B(Q) → B( Γ/ Γ κ ) be the composition. It induces an isomorphism H 1 (M ) ∼ = − → H 1 (Q) ∼ = H 1 ( Γ/ Γ κ ). Also, since (M, φ) is bordant to (N, ψ), we have φ * [M ] = ψ * [N ] = θ.

Theorem 8.1. The invariant θ(M ) is invariant under homology cobordism in the following sense: (1) If M and N are homology cobordant 3-manifolds with π = π 1 (M ) and G = π 1 (N ), then there is an isomorphism φ : G ∼ = − → π, and consequently θ(M ) is defined if and only if θ(N ) is defined. (2) When θ(M ) and θ(N ) are defined using an isomorphism f : π ∼ = − → Γ and the composition f • φ, we have θ(M ) = θ(N ) in H 3 ( Γ). (3) When θ(M ) and θ(N ) are defined using arbitrary isomorphisms π ∼ = − → Γ and G ∼ = − → Γ, we have θ(M ) = θ(N ) in H 3 ( Γ)/Aut( Γ).

Let θ = θ(M ) ∈ H 3 ( Γ); that is, θ is the image of [M ] under the map induced by the composition φ : M → Bπ → B π f − → B Γ. Consider the following commutative diagram: By condition (1), ∩ θ is an isomorphism. The arrows ι * and ι * are isomorphisms since H 1 (P ) → H 1 ( Γ) is an isomorphism by the choice of {P (ℓ)} and tH 2 (−) = Ext(H 1 (−), Z). So, ∩ σ is an isomorphism. Apply Turaev's Lemma 7.3 to obtain a map φ : M → B(P ) of a closed 3-manifold M with π = π 1 (M ) such that (M, φ) is bordant to (N, ψ) over P and φ * : H 1 (M ) → H 1 (P ) is an isomorphism. We have φ * [M ] = ψ * [N ] = σ in H 3 (P ).
Consider the following diagram: By condition (2), ∩ θ is surjective. The arrows ι * and ι * are isomorphisms by the choice of {P (ℓ)} and by the claim. The arrow φ * is an isomorphism since φ induces an isomorphism on H 1 . By Poincaré duality, ∩ [M ] is an isomorphism. From these facts, it follows that φ * : H 2 (M ) → H 2 (P ) is surjective. So, by Theorem 3.1(1), φ * : π = π 1 (M ) → P induces an isomorphism π ∼ = − → P . Since ι induces P ∼ = − → Γ, it follows that ιφ : M → Γ induces an isomorphism π ∼ = − → Γ. Since φ * [M ] = σ, we have θ(M ) = ι * φ * [M ] = ι * σ = θ. This shows that θ ∈ R(Γ).

It is a free abelian group of rank mR(m, k) − R(m, k + 1), where R(m, n) := (1/n) Σ_{d|n} φ(d) · m^{n/d} and φ(d) is the Möbius function. The following is another useful feature of the case of the free group F .

Lemma 9.2. Suppose π is a group. Then, for finite k ≥ 2, every isomorphism f : π/π k ∼ = − → F/F k lifts to an isomorphism π/π k+1 ∼ = − → F/F k+1 if and only if there exists an isomorphism π/π k+1 ∼ = − → F/F k+1 (which is not required to be a lift).

(3) The invariant θ k (M, f ) vanishes in Coker{R k+1 (F ) → R k (F )}. (4) The invariant θ k (M ) vanishes in Coker{R k+1 (F ) → R k (F )/Aut(F/F k )}. (5) The invariant θ k (M, g) vanishes in Coker{R k+1 (F ) → R k (F )} for any isomorphism g : π/π k ∼ = − → F/F k . Proof. (1) and (2) are equivalent by Lemma 9.2. (1) and (3) are equivalent by Theorem B. (2) and (4) are equivalent by Theorem C. It follows that (2) implies (3) for any isomorphism f . In other words, (2) implies (5). Finally, (5) implies (3) obviously. From Theorem 9.4, it follows that all Milnor invariants of length k + 1 are defined without ambiguity if and only if θ k (M L ) is defined, and all Milnor invariants of length k + 1 vanish if and only if θ k (M L ) vanishes in Coker{R k+1 (F ) → R k (F )}.

θ vanishes in the cokernel of R k+1 (Γ) → R k (Γ) if and only if θ vanishes in the cokernel of R k+1 (Γ) → R k (Γ)/Aut(Γ/Γ k ).
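The rank formula R(m, n) above is easy to evaluate directly. The sketch below is illustrative (the function names are ours, and φ(d), described above as the Möbius function, is implemented as `mobius`):

```python
def mobius(n):
    # Moebius function by trial division: 0 on non-squarefree n,
    # otherwise (-1) to the number of prime factors
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0           # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result           # one remaining prime factor
    return result

def rank_R(m, n):
    # R(m, n) = (1/n) * sum over divisors d of n of mobius(d) * m^(n/d)
    total = sum(mobius(d) * m ** (n // d) for d in range(1, n + 1) if n % d == 0)
    assert total % n == 0          # the sum is always a multiple of n
    return total // n
```

For m = 2 this returns 2, 1, 2, 3, 6, 9 for n = 1, ..., 6, the familiar necklace-counting values.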
From Theorem 10.1(4) and Theorems B and C, the following corollary is immediately obtained.

Corollary 10.2. Let k ≥ 2 be finite. Suppose M is a closed 3-manifold with π = π 1 (M ) and f : π/π k ∼ = − → Γ/Γ k is an isomorphism. Then f lifts to an isomorphism f : π/π k+1 ∼ = − → Γ/Γ k+1 if and only if there is an isomorphism π/π k+1 ∼ = − → Γ/Γ k+1 (which is not required to be a lift).

The formulas ∆(e i ) = Σ_{p+q=i} (−1)^{pq} e p × e q and ∆(u i ) = Σ_{p+q=i} u p × u q are cellular approximations of the diagonal maps B(Z d ) → B(Z d ) × B(Z d ) and B(Z) → B(Z) × B(Z), and thus the chain level cup products of B(Z d ) and B(Z) defined using them are given by (e i ) * ∪ (e j ) * = (−1)^{ij} · (e i+j ) * and (u i ) * ∪ (u j ) * = (u i+j ) * .

Theorem 11.3 ([CO13, Theorem 3.1]). The homology localization of Γ is given as Γ −→ Γ = colim_{ℓ odd} Γ(ℓ).

If θ lies in the subgroup Z (2) /Z, then kθ = 0 for some odd k > 0. Since tH 1 ( Γ) is a 2-group, it follows that (11.20) is zero for θ ∈ Z (2) /Z. On the other hand, θ = (0, 1) ∈ H 3 ( Γ) = (Z (2) /Z) × Z lies in R κ (Γ) since θ is the image of the fundamental class [Y ]. So (11.20) is an isomorphism for θ = (0, 1) by Theorem G. Since tH 1 ( Γ) is a finite abelian 2-group, it follows that (11.20) is an isomorphism for θ = (0, r) if and only if r ∈ Z is odd. Combining these observations, it follows that (11.20) is an isomorphism for θ = (x, r) ∈ H 3 ( Γ) = (Z (2) /Z) × Z if and only if r ∈ Z is odd. Now consider the cap product (11.21) ∩ θ : H 1 ( Γ) −→ H 2 ( Γ)/ Ker{H 2 ( Γ) → H 2 ( Γ/ Γ ω )}. So f : Γ(ℓ) → Γ(rℓ) is 2-connected. Conversely, if f : Γ(ℓ) → Γ(rℓ) is 2-connected, then it induces an automorphism of Γ = colim Γ(ℓ).

Lemma 12.3. A homomorphism f : Γ(ℓ) → Γ(rℓ) is 2-connected if and only if f is given by (12.1).

(A ⋊ Z) ω = Z (2) /Z = {[x, y] γ }, (A ⋊ Z) ω+1 = {1}. Here, the equalities between H 2 (−) are induced by the inclusions of the groups: H 2 (A ⋊ Z) = H 2 (A) = H 2 (Z 2 ) = Z.
multiplication by r, while H 2 (Z 2 ) → H 2 (Z 2 ) = Z and H 1 (Z ℓ 2 ) → H 1 (Z (rℓ) 2 ) are multiplication by r 2 . So (11.10) gives rise to the diagram (11.13). The top row of (11.13) for ℓ = 1 provides an isomorphism H 2 (A(1)) = H 2 (Z 2 ) ∼ = − → Z. The colimit map Z = 1 2 Z → colim ℓ 2 Z is an isomorphism, since the map ·r 2 in (11.13) is an isomorphism for all r. On the other hand, since the map ·r 3 : Z 2 ℓ 2 → Z 2 (rℓ) 2 is zero whenever ℓ 2 divides r, colim Z 2 ℓ 2 vanishes. So, by taking the colimit of (11.13), we obtain an isomorphism (11.14). Also, (·r) : Z 2 → Z 2 induces multiplication by r 2 on H 2 (Z 2 ). From this, it follows that (11.10) gives rise to (11.15). Since the vertical map ·r 4 in (11.15) is trivial whenever ℓ divides r, the colimit of them is trivial. So, by taking the colimit of (11.15), we obtain an isomorphism (11.16). Examining the action of t, it follows that t * on H 3 (A) = Z (2) /Z is the identity. Now, use (11.14), (11.16) and the fact that 1 − t * = 0 on both H 2 (A) and H 3 (A) to extract the exact sequence (11.17) from the Wang sequence (11.9). To provide a fixed identification, we use a splitting described below. Recall that Γ(1) = Γ, so H 3 (Γ(1)) = H 3 (Γ) = H 3 (Y ) where Y is the torus bundle. Compare (11.17) with the Wang sequence associated to the exact sequence (11.8). The first rational derived subgroup is characteristic; in our case, for Γ(ℓ), the first rational derived subgroup is equal to A(ℓ). So f (A(ℓ)) ⊂ A(rℓ), as claimed. Next, we claim that a map of the free group on t, u and v to Γ(rℓ) given by (12.1) kills the relations of Γ(ℓ) if and only if ad − bc ≡ 0 mod r 2 . The claim is shown by a routine computation. The map sends the relation tut −1 u to an element which is trivial if and only if r 2 divides ad − bc, since [u, v] has order (rℓ) 2 in Γ(rℓ). This proves the claim. Recall that H 1 (Γ(ℓ)) = (Z 2 ) 2 × Z where the factors are generated by u, v, and t respectively.
So, when f is the homomorphism given by (12.1), f * on H 1 can be computed directly. Therefore, f induces an isomorphism on H 1 if and only if ǫ = ±1 and ad − bc is odd. To investigate the induced map on H 2 , first note that A(ℓ) ab is equal to Z 2 generated by u and v. We will use the fact that H 2 (Γ(ℓ)) can be identified with the subgroup ℓ 2 Z ⊂ Z = H 2 (A(ℓ) ab ). This can be proven by investigating the Wang sequence for the HNN extension (11.8). An alternative proof is as follows. Recall that H 2 (Γ) = H 2 ( Γ) = Z by (11.19). Since Γ = Γ(1) → Γ(ℓ) and Γ(ℓ) → Γ are 2-connected, it follows that H 2 (Γ(ℓ)) is equal to H 2 (Γ(1)), which is equal to H 2 (A(1)) = H 2 (Z 2 ) by (11.19). Note that Z 2 = A(1) → A(ℓ) ab = Z 2 is scalar multiplication by ℓ. So, H 2 (Γ(ℓ)) is the subgroup ℓ 2 Z ⊂ Z = H 2 (A(ℓ) ab ). Now, observe that H 2 (A(ℓ) ab ) = Z → H 2 (A(rℓ) ab ) = Z induced by f given by (12.1) is equal to multiplication by ad − bc. From this, it follows that f induces an epimorphism H 2 (Γ(ℓ)) → H 2 (Γ(rℓ)) if and only if ad − bc = ±r 2 .

Using Lemma 12.3, we will investigate the action of Aut( Γ) on the torsion part of H 3 ( Γ).

Lemma 12.4. For every f ∈ Aut( Γ), the induced map f * on tH 3 ( Γ) = Z (2) /Z is multiplication by δ f .

Proof. Fix an arbitrary odd ℓ ≥ 1. By Lemma 12.1, the given automorphism f on Γ restricts to a 2-connected homomorphism f | Γ(ℓ) : Γ(ℓ) → Γ(rℓ) for some odd r ≥ 1, and f | Γ(ℓ) is of the form of (12.1). Recall from Section 11.3 that [u, v] ∈ Γ(ℓ) generates a subgroup that we identified with Z ℓ 2 . So, these subgroups are regarded as subgroups of Z (2) /Z using (11.16). By (11.17) and (11.16), it follows that the induced map f * : tH 3 ( Γ) → tH 3 ( Γ) = Z (2) /Z is multiplication by δ f .

By (11.18), the Z factor of H 3 ( Γ) = (Z (2) /Z) × Z is generated by the image of the fundamental class [Y ] ∈ H 3 (Y ) = H 3 (Γ). The rest of this section is devoted to understanding the action of Aut( Γ) on this generator.
Since every automorphism of Γ is induced by a 2-connected map f : Γ = Γ(1) → Γ(r), it suffices to investigate f * [Y ] ∈ H 3 (Γ(r)). Our strategy is to simplify f given in Lemma 12.3 without altering f * [Y ]. We begin with elimination of the [u, v] j factor in f (t) in the general form (12.1).

Lemma 12.5. Let f : Γ(1) → Γ(r) be a 2-connected map given by (12.1), and let f ′ : Γ(1) → Γ(r) be the map obtained from f by removing the [u, v] j factor in f (t). Then f ′ * = f * on H 3 .

Proof. Consider the subgroup of Γ(ℓ) generated by u, v and t 2 , which corresponds to a double cover. Here A(ℓ) is generated by u and v, and the infinite cyclic group 2Z is generated by t 2 . The colimit A × 2Z = colim(A(ℓ) × 2Z) is an index two subgroup of Γ. Since f sends A(1) to A(r), f lifts to a homomorphism g : A(1) × 2Z → A(r) × 2Z. Compose them with the colimit maps A(r) × 2Z → colim(A(ℓ) × 2Z) = A × 2Z and Γ(r) → colim ℓ Γ(ℓ) = Γ, and take H 3 , to obtain the diagram (12.8). We will compare the composition of the top row with the homomorphism induced by the colimit i| A(1)×2Z : A(1) × 2Z → A × 2Z. The key property, which is a consequence of the hypothesis f (t) = t ǫ f , is that the lift g can be written as a product: g = (g| A(1) ) × (ǫ·), where g| A(1) is equal to the restriction f | A(1) : A(1) → A(r), and ǫ· : 2Z → 2Z is multiplication by ǫ := ǫ f . So, the induced map g * on H 3 is determined by g| A(1) and ǫ by the Künneth formula. More precisely, since A(1) = Z 2 , the composition of the top row of (12.8) is equal to the composition computed below.

Define a 2-connected homomorphism φ : Γ → Γ by φ(t) = tu, φ(u) = u, φ(v) = v; then f * = (f • φ) * on H 3 (Γ). Similarly, define a 2-connected homomorphism φ ′ : Γ → Γ by φ ′ (t) = tv, φ ′ (u) = u, φ ′ (v) = v. Then f * = (f • φ ′ ) * on H 3 , too.
We have that (f • φ)(t) = f (tu) = t ǫ u p v q · u a v b [u, v] m = t ǫ u p+a v q+b [u, v] m−aq and (f • φ ′ )(t) = f (tv) = t ǫ u p v q · u c v d [u, v] m = t ǫ u p+c v q+d [u, v] m−cq . By (12.2), ad − bc is odd. We assume a and d are odd and b is even, since the arguments for the other cases are identical. Then, composition with φ alters the parity of p and preserves the parity of q, and composition with φ ′ alters the parity of q (while the parity of p is left uncontrolled). So, by composition, we may assume that both p and q are even. Now, define ψ : Γ(r) → Γ(r) by ψ(g) = ugu −1 . Since conjugation induces the identity on H * (e.g., see [Wei94, p. 191]), we have (ψ • f ) * = f * on H 3 . Also, we have (ψ • f )(t) = u · t ǫ u p v q · u −1 = t ǫ u p−2 v q [u, v] q , (ψ • f )(u) = u · u a v b [u, v] m · u −1 = u a v b [u, v] m+b , (ψ • f )(v) = u · u c v d [u, v] n · u −1 = u c v d [u, v] n+d . This changes p to p − 2, without altering a, b, c, d, ǫ and q (but m and n are allowed to be altered). Apply Lemma 12.5 to eliminate [u, v] q in (ψ • f )(t). Note that a, b, c, d and ǫ are left unchanged. Finally, apply Lemma 12.5 to obtain the form of (12.7). This proves the claim.
Since φ, φ ′ , ψ and ψ ′ used above have ǫ • = 1 and δ • = 1, we have ǫ f ′ = ǫ f and δ f ′ = δ f .

As the final step of our analysis, we investigate the special case of 2-connected homomorphisms in Lemma 12.6. Let i : Γ = Γ(1) → colim Γ(ℓ) = Γ be the colimit map, and i * : H 3 (Γ) → H 3 ( Γ) be the induced map. Recall that the Z factor of H 3 ( Γ) = (Z (2) /Z) × Z is generated by the image of the fundamental class [Y ]. By (11.13) and (11.14), H 2 (A) is identified with the subgroup r 2 Z ⊂ Z = H 2 (A(r) ab ). In our case, the homomorphism H 2 (A(1)) = Z → H 2 (A(r) ab ) = Z induced by g| A(1) = f | A(1) is multiplication by the determinant of A(1) = Z 2 → A(r) ab = Z 2 , which is equal to δ f · r 2 by (12.3). From this, it follows that the composition of the top row of (12.8) is equal to δ f · ǫ · (i| A(1)×2Z ) * . Consequently, the composition of the bottom row of (12.8), which is the induced homomorphism f * : H 3 (Γ) → H 3 ( Γ), is equal to δ f · ǫ · i * on the image of H 3 (A(1) × 2Z) → H 3 (Γ(1)) = H 3 (Y ).
It follows that 2 · f * [Y ] = 2 · δ f · ǫ · i * [Y ] in H 3 ( Γ). Since multiplication by 2 is injective on H 3 ( Γ) = (Z (2) /Z) × Z, this gives f * [Y ] = δ f · ǫ · i * [Y ], which completes the proof.

Now we are ready to prove Theorem 12.2, which asserts that the action of f ∈ Aut( Γ) on H 3 ( Γ) = (Z (2) /Z) × Z is given by f * (a, n) = (δ f · a, δ f · ǫ f · n).

Proof of Theorem 12.2. By Lemma 12.4, the restriction of f * to tH 3 ( Γ) = Z (2) /Z is multiplication by δ f . So it remains to investigate f * on the generator (0, 1) ∈ (Z (2) /Z) × Z. By Lemmas 12.6 and 12.7, we may assume that the map H 3 (Γ) → H 3 ( Γ) induced by f sends the fundamental class [Y ] to δ f · ǫ f · i * [Y ]. Since (0, 1) ∈ (Z (2) /Z) × Z is the image of [Y ], it follows that f * (0, 1) = (0, δ f · ǫ f ). This completes the proof.

References

A. K. Bousfield, Homological localizations of spaces, groups, and π-modules, Localization in group theory and homotopy theory, and related topics (Sympos., Battelle Seattle Res. Center, Seattle, Wash., 1974), Lecture Notes in Math., Vol. 418, Springer, Berlin, 1974, pp. 22-30.
A. K. Bousfield, The localization of spaces with respect to homology, Topology 14 (1975), 133-150.
Andrew J. Casson, Link cobordism and Milnor's invariant, Bull. London Math. Soc. 7 (1975), 39-40.
Jae Choon Cha, Injectivity theorems and algebraic closures of groups with coefficients, Proc. London Math. Soc. 96 (2008), no. 1, 227-250.
Jae Choon Cha, Rational Whitney tower filtration of links, Math. Ann. 370 (2018), no. 3-4, 963-992.
Jae Choon Cha and Kent E. Orr, L 2 -signatures, homology localization, and amenable groups, Comm. Pure Appl. Math. 65 (2012), 790-832.
Jae Choon Cha and Kent E. Orr, Hidden torsion, 3-manifolds, and homology cobordism, J. Topol. 6 (2013), no. 2, 490-512.
Jim Conant, Rob Schneiderman, and Peter Teichner, Higher-order intersections in low-dimensional topology, Proc. Natl. Acad. Sci. USA 108 (2011), no. 20, 8131-8138.
James Conant, Rob Schneiderman, and Peter Teichner, Whitney tower concordance of classical links, Geom. Topol. 16 (2012), no. 3, 1419-1479.
William G. Dwyer, Homology, Massey products and maps between groups, J. Pure Appl. Algebra 6 (1975), no. 2, 177-190.
Emmanuel D. Farjoun and Roman Mikhailov, On the third homotopy group of Orr's space, Algebr. Geom. Topol. 18 (2018), no. 1, 569-582.
Emmanuel D. Farjoun, Kent E. Orr, and Saharon Shelah, Bousfield localization as an algebraic closure of groups, Israel J. Math. 66 (1989), no. 1-3, 143-153.
Michael H. Freedman and Peter Teichner, 4-manifold topology. II. Dwyer's filtration and surgery kernels, Invent. Math. 122 (1995), no. 3, 531-557.
M. A. Gutiérrez, Concordance and homotopy. I. Fundamental group, Pacific J. Math. 82 (1979), no. 1, 75-91.
Prudence Heck, Knot concordance in three manifolds, Ph.D. thesis, Indiana University, 2009.
Jonathan A. Hillman, Alexander ideals of links, Lecture Notes in Mathematics, vol. 895, Springer-Verlag, Berlin-New York, 1981.
Sergei Ivanov and Roman Mikhailov, Right exact group completion as a transfinite invariant of the homology equivalence, arXiv:1909.10181.
Kiyoshi Igusa and Kent E. Orr, Links, pictures and the homology of nilpotent groups, Topology 40 (2001), no. 6, 1125-1166.
Jean-Yves Le Dimet, Cobordisme d'enlacements de disques, Mém. Soc. Math. France (N.S.) (1988), no. 32, ii+92.
Jerome P. Levine, Link concordance and algebraic closure. II, Invent. Math. 96 (1989), no. 3, 571-592.
Jerome P. Levine, Link concordance and algebraic closure of groups, Comment. Math. Helv. 64 (1989), no. 2, 236-255.
Jerome P. Levine, Link invariants via the eta invariant, Comment. Math. Helv. 69 (1994), no. 1, 82-119.
John W. Milnor, Isotopy of links, Algebraic geometry and topology: A symposium in honor of S. Lefschetz, Princeton University Press, Princeton, N. J., 1957, pp. 280-306.
Kent E. Orr, New link invariants and applications, Comment. Math. Helv. 62 (1987), no. 4, 542-560.
Kent E. Orr, Homotopy invariants of links, Invent. Math. 95 (1989), no. 2, 379-394.
John Stallings, Homology and central series of groups, J. Algebra 2 (1965), 170-181.
V. G. Turaev, Nilpotent homotopy types of closed 3-manifolds, Topology (Leningrad, 1982), Lecture Notes in Math., vol. 1060, Springer, Berlin, 1984, pp. 355-366.
Pierre Vogel, Localization of spaces with respect to a class of maps, Preprint, Univ. de Nantes, 1978.
Charles A. Weibel, An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, Cambridge, 1994.
Can graph neural networks count substructures?

Zhengdao Chen (Courant Institute of Mathematical Sciences, New York University)
Lei Chen (Courant Institute of Mathematical Sciences, New York University)
Soledad Villar (Courant Institute of Mathematical Sciences and Center for Data Science, New York University)
Joan Bruna (Courant Institute of Mathematical Sciences and Center for Data Science, New York University; Institute for Advanced Study, Princeton)
The ability to detect and count certain substructures in graphs is important for solving many tasks on graph-structured data, especially in the contexts of computational chemistry and biology as well as social network analysis. Inspired by this, we propose to study the expressive power of graph neural networks (GNNs) via their ability to count attributed graph substructures, extending recent works that examine their power in graph isomorphism testing and function approximation. We distinguish between two types of substructure counting: matching-count and containment-count, and establish both positive and negative answers for popular GNN architectures. Specifically, we prove that Message Passing Neural Networks (MPNNs), 2-Weisfeiler-Lehman (2-WL) and 2-Invariant Graph Networks (2-IGNs) cannot perform matching-count of substructures consisting of 3 or more nodes, while they can perform containment-count of star-shaped substructures. We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with limited number of iterations. We then conduct experiments that support the theoretical results for MPNNs and 2-IGNs, and demonstrate that local relational pooling strategies inspired by Murphy et al. (2019) are more effective for substructure counting. In addition, as an intermediary step, we prove that 2-WL and 2-IGNs are equivalent in distinguishing non-isomorphic graphs, partly answering an open problem raised in Maron et al. (2019a).
arXiv: 2002.04025 (https://arxiv.org/pdf/2002.04025v2.pdf); Semantic Scholar corpus ID 211069434
Introduction

In recent years, graph neural networks (GNNs) have achieved empirical success on processing data from various fields such as social networks, quantum chemistry, particle physics, knowledge graphs and combinatorial optimization (Scarselli et al., 2008; Bruna et al., 2013; Duvenaud et al., 2015; Kipf and Welling, 2016; Defferrard et al., 2016; Bronstein et al., 2017; Dai et al., 2017; Nowak et al., 2017; Ying et al., 2018; Zhou et al., 2018; Choma et al., 2018; Zhang and Chen, 2018; You et al., 2018a,b, 2019; Yao et al., 2019; Ding et al., 2019; Stokes et al., 2020). Thanks to such progress, there has been growing interest in studying the expressive power of GNNs.

One line of work does so by studying their ability to distinguish non-isomorphic graphs. In this regard, Xu et al. (2018a) and Morris et al. (2019) show that GNNs based on neighborhood-aggregation schemes are at most as powerful as the classical Weisfeiler-Lehman (WL) test (Weisfeiler and Leman, 1968) and propose GNN architectures that can achieve such level of power. While graph isomorphism testing is very interesting from a theoretical viewpoint, one may naturally wonder how relevant it is to real-world tasks on graph-structured data. Moreover, WL is powerful enough to distinguish almost all pairs of non-isomorphic graphs except for rare counterexamples (Babai et al., 1980). Hence, from the viewpoint of graph isomorphism testing, existing GNNs are in some sense already not far from being maximally powerful, which could make the pursuit of more powerful GNNs appear unnecessary.

Another perspective is the ability of GNNs to approximate permutation-invariant functions on graphs. For instance, Maron et al.
(2019c) and Keriven and Peyré (2019) propose architectures that achieve universal approximation of permutation-invariant functions on graphs, though such models involve tensors with order growing in the size of the graph and are therefore impractical. Importantly, Chen et al. (2019b) establishes an equivalence between the ability to distinguish any pair of non-isomorphic graphs and the ability to approximate arbitrary permutation-invariant functions on graphs. Nonetheless, for GNNs used in practice, which are not universally approximating, more effort is needed to characterize what they can and cannot do. For example, Loukas (2019) shows that GNNs under certain assumptions are Turing universal but lose power when their depth and width are limited, though the arguments rely on the nodes all having distinct features and the focus is on the asymptotic depth-width tradeoff. Concurrently to our work, Garg et al. (2020) provide impossibility results for several classes of GNNs to decide graph properties including girth, circumference, diameter, radius, conjoint cycle, total number of cycles, and k-cliques.

Despite these interesting results, we still need a perspective for understanding the expressive power of different classes of GNNs in a way that is intuitive, relevant to goals in practice, and potentially helpful in guiding the search for more powerful architectures. Inspired by the relevance of detecting and counting graph substructures in applications, we propose to understand the power of GNN architectures via the substructures that they can and cannot count.

Also referred to by various names including graphlets, motifs, subgraphs and graph fragments, graph substructures are well-studied and relevant for graph-related tasks in computational chemistry (Deshpande et al., 2002; Murray and Rees, 2009; Duvenaud et al., 2015; Jin et al., 2018, 2019, 2020), computational biology (Koyutrk et al., 2004) and social network studies (Jiang et al., 2010).
In organic chemistry, for example, certain patterns of atoms called functional groups are usually considered indicative of the molecules' properties (Lemke, 2003; Pope et al., 2018). In the literature of molecular chemistry, substructure counts have been used to generate molecular fingerprints (Morgan, 1965; OBoyle and Sayle, 2016) and compute similarities between molecules (Alon et al., 2008; Rahman et al., 2009). In addition, for general graphs, substructure counts have been used to create graph kernels (Shervashidze et al., 2009) and compute spectral information (Preciado and Jadbabaie, 2010). The connection between GNNs and graph substructures is explored empirically by Ying et al. (2019) as a way to interpret the predictions made by GNNs. Thus, the ability of different GNN architectures to count graph substructures not only serves as an intuitive theoretical measure of their expressive power but also is highly relevant to real-world scenarios. While people have proposed variants of GNNs that take advantage of substructure information (Monti et al., 2018; Liu et al., 2018, 2019), they often rely on handcrafting rather than learning such information. More importantly, there is a lack of a systematic theoretical study of the ability of existing GNNs to count substructures.

In this work, we first build a theoretical framework for studying the ability of GNNs to count attributed substructures based on both function approximation and graph discrimination. In particular, we distinguish between containment-count and matching-count, corresponding to having subgraphs and induced subgraphs isomorphic to a given pattern, respectively. Next, we look at classical GNN architectures and prove the following results.

1. Focusing on matching-count, we establish that neither Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) nor 2nd-order Invariant Graph Networks (2-IGNs) (Maron et al., 2019c) can count any connected substructure of 3 or more nodes.
For any such pattern, we prove this by constructing a pair of graphs that provably cannot be distinguished by any MPNN or 2-IGN but have different matching-counts of the given pattern. This result points at an important class of simple-looking tasks that are provably hard for classical GNN architectures.

2. We show positive results for containment-count of star-shaped patterns by MPNNs and 2-IGNs, generalizing results in Arvind et al. (2018), as well as for both matching- and containment-count of size-k patterns by k-WL and k-IGNs. The latter result hints at a hierarchy of the increasing power of k-WL tests in terms of counting substructures, which would be more intuitive than the hierarchy in terms of distinguishing non-isomorphic graphs shown in Cai et al. (1992), and therefore concretely motivates the search for GNNs with higher expressive power than 2-WL or MPNNs.

3. While a tight negative result for general k-WL is difficult to obtain, we show that T iterations of k-WL cannot perform matching-count of any path pattern of (k+1)2^T or more nodes. This is relevant since real-life GNNs are often shallow, and it also demonstrates an interplay between k and depth.

We complement these theoretical results with synthetic experiments of counting triangles and stars in random graphs. In addition, while our negative theoretical results are worst-case in nature, the experiments illustrate an average-case difficulty for classical GNNs to count even the simplest graph substructures such as triangles.

On the other hand, instead of performing iterative equivariant aggregations of information as is done in MPNNs and IGNs, we propose a class of locally powerful models based on the observation that substructures present themselves in local neighborhoods known as egonets. One idea is to apply the Relational Pooling approach (Murphy et al., 2019) to egonets, resulting in a model we call Local Relational Pooling.
We demonstrate that it can perform both matching- and containment-count in the experiments.

2 Framework

2.1 Attributed graphs, (induced) subgraphs and two types of counting

An unattributed graph G with n nodes is usually denoted by G = (V, E), where typically V = [n] := {1, ..., n} is the vertex set and E ⊆ V^2 := V × V is the edge set. We define an attributed graph or weighted graph as G = (V, E, x, e), where in addition to V and E, we let x_i ∈ X represent the node feature (or node attribute) of node i, and e_{i,j} ∈ Y represent the edge feature of edge (i, j) if (i, j) ∈ E. For simplicity, we only consider undirected graphs (i.e., if (i, j) ∈ E then (j, i) ∈ E and e_{i,j} = e_{j,i}), and we do not allow self-connections (i.e., (i, i) ∉ E) or multi-edges (so that E is a well-defined set). Note that an unattributed graph can be viewed as an attributed graph with identical node and edge features. If a graph has only node features and no edge features, we can also represent it as G = (V, E, x).

Unlike the node and edge features, the indices of the nodes are not inherent properties of the graph. Rather, different ways of ordering the nodes result in different representations of the same underlying graph. This is characterized by the definition of graph isomorphism: two attributed graphs G^[1] = (V^[1], E^[1], x^[1], e^[1]) and G^[2] = (V^[2], E^[2], x^[2], e^[2]) are isomorphic if there exists a bijection π : V^[1] → V^[2] such that (1) (i, j) ∈ E^[1] if and only if (π(i), π(j)) ∈ E^[2], (2) x^[1]_i = x^[2]_{π(i)} for all i ∈ V^[1], and (3) e^[1]_{i,j} = e^[2]_{π(i),π(j)} for all (i, j) ∈ E^[1].

Before defining substructure counting, we first need to define subgraphs and induced subgraphs. For G = (V, E, x, e), a subgraph of G is any graph G^[S] = (V^[S], E^[S], x, e) with V^[S] ⊆ V and E^[S] ⊆ E. An induced subgraph of G is any graph G^[S'] = (V^[S'], E^[S'], x, e) with V^[S'] ⊆ V and E^[S'] = E ∩ (V^[S'])^2.
In words, the edge set of an induced subgraph needs to include all edges in E that have both end points belonging to V^[S']. Thus, an induced subgraph of G is also its subgraph, but the converse is not true.

We now define two types of counting attributed substructures, matching and containment, illustrated in Figure 1. Let G^[P] = (V^[P], E^[P], x^[P], e^[P]) be a (typically smaller) graph that we refer to as a pattern or substructure. We define C(G; G^[P]), called the containment-count of G^[P] in G, to be the number of subgraphs of G that are isomorphic to G^[P]. We define M(G; G^[P]), called the matching-count of G^[P] in G, to be the number of induced subgraphs of G that are isomorphic to G^[P]. Since all induced subgraphs are subgraphs, we always have M(G; G^[P]) ≤ C(G; G^[P]). Moreover, on a space of graphs G, we call M(·; G^[P]) the matching-count function of the pattern G^[P], and C(·; G^[P]) the containment-count function of G^[P].

[Figure 1: Illustration of the two types of counts of the pattern G^[P] in graphs G^[1] and G^[2]. The edge and node features are represented by colors. For G^[1], the matching-count M(G^[1]; G^[P]) = 0 but the containment-count C(G^[1]; G^[P]) = 1. For G^[2], M(G^[2]; G^[P]) = C(G^[2]; G^[P]) = 0 since the edge features do not match.]

To formalize the probe into whether certain GNN architectures can count different substructures, a natural question to study is whether they are able to approximate the matching-count and the containment-count functions arbitrarily well. Formally, given a target function g : G → R and a family of functions F, which in our case is typically the family of functions that a GNN architecture can represent, we say F is able to approximate g on G if for all ε > 0 there exists f ∈ F such that |g(G) − f(G)| < ε for all G ∈ G. However, such a criterion based on function approximation is hard to work with directly when we look at concrete examples later on.
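As a concrete reference for these definitions, the two counts can be computed by brute force on small graphs. The code below is our own illustration, not part of the paper; it uses networkx, and the function names are ours. `matching_count` enumerates node subsets and tests induced-subgraph isomorphism; `containment_count` counts monomorphisms of the pattern into the graph and divides out the pattern's automorphisms.

```python
import itertools
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def matching_count(G, P):
    """M(G; P): number of induced subgraphs of G isomorphic to P.

    Each induced subgraph corresponds to one subset of nodes of G.
    """
    k = P.number_of_nodes()
    return sum(1 for S in itertools.combinations(G.nodes, k)
               if nx.is_isomorphic(G.subgraph(S), P))

def containment_count(G, P):
    """C(G; P): number of (not necessarily induced) subgraphs of G isomorphic to P.

    Each such subgraph is hit by exactly |Aut(P)| monomorphisms from P into G.
    """
    monos = sum(1 for _ in GraphMatcher(G, P).subgraph_monomorphisms_iter())
    autos = sum(1 for _ in GraphMatcher(P, P).isomorphisms_iter())
    return monos // autos
```

For example, the 3-node path has matching-count 0 but containment-count 12 in the complete graph K_4, showing that M(G; G^[P]) ≤ C(G; G^[P]) can be strict. Enumeration like this only serves to generate ground truth; the approximation criterion itself remains hard to use directly.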
For this reason, below we will look for an alternative and equivalent definition from the perspective of graph discrimination.

From function approximation to graph discrimination

Say G is a space of graphs, and F is a family of functions from G to R. Given two graphs G^[1], G^[2] ∈ G, we say F is able to distinguish them if there exists f ∈ F such that f(G^[1]) ≠ f(G^[2]). Such a perspective has been explored in Chen et al. (2019b), for instance, to build an equivalence between function approximation and graph isomorphism testing by GNNs. In the context of substructure counting, it is clear that the ability to approximate the count functions entails the ability to distinguish graphs in the following sense:

Observation 1. If F is able to approximate the matching-count (or containment-count) function of a pattern G^[P] on the space G, then for all G^[1], G^[2] ∈ G such that M(G^[1]; G^[P]) ≠ M(G^[2]; G^[P]) (or C(G^[1]; G^[P]) ≠ C(G^[2]; G^[P])), they can be distinguished by F.

What about the converse? When the space G is finite, such as when the graphs have bounded numbers of nodes and the node as well as edge features belong to finite alphabets, we can show a slightly weaker statement than the exact converse. Following Chen et al. (2019b), we define an augmentation of families of functions using feed-forward neural networks as follows:

Definition 1. Given F, a family of functions from a space X to R, we consider an augmented family of functions, also from X to R, consisting of all functions of the form x ↦ h_NN([f_1(x), ..., f_d(x)]), where d ∈ N, f_1, ..., f_d ∈ F, and h_NN is a feed-forward neural network / multi-layer perceptron. When h_NN is restricted to have at most L layers, we denote this augmented family by F^{+L}.

Lemma 1. Suppose X is a finite space, g is a function on X, and F is a family of functions on X.
Then F^{+1} is able to approximate g on X if, for all x_1, x_2 ∈ X with g(x_1) ≠ g(x_2), there exists f ∈ F such that f(x_1) ≠ f(x_2).

Proof. Since X is a finite space, for some large enough integer d there exists a collection of d functions f_1, ..., f_d ∈ F such that, if we define the function f(x) = (f_1(x), ..., f_d(x)) ∈ R^d, then for all x_1, x_2 ∈ X, f(x_1) = f(x_2) ⇒ g(x_1) = g(x_2). (In fact, we can choose d ≤ |X|(|X| − 1)/2, since in the worst case we need one f_i per pair x_1, x_2 ∈ X with x_1 ≠ x_2.) Then there exists a well-defined function h from R^d to R such that g(x) = h(f(x)) for all x ∈ X. By the universal approximation power of neural networks, h can then be approximated arbitrarily well by some neural network h_NN.

Thus, in the context of substructure counting, we have the following observation.

Observation 2. Suppose G is a finite space. If for all G^[1], G^[2] ∈ G with M(G^[1]; G^[P]) ≠ M(G^[2]; G^[P]) (or C(G^[1]; G^[P]) ≠ C(G^[2]; G^[P])), F is able to distinguish G^[1] and G^[2], then F^{+1} is able to approximate the matching-count (or containment-count) function of the pattern G^[P] on G.

For many GNN families, F^{+1} in fact has the same expressive power as F. For example, consider F_MPNN, the family of all Message Passing Neural Networks on G. F^{+1}_MPNN consists of functions that run several MPNNs on the input graph in parallel and stack their outputs to pass through an MLP. However, running several MPNNs in parallel is equivalent to running one MPNN with larger dimensions of hidden states and messages, and moreover the additional MLP at the end can be merged into the readout function. The same holds for the family of all k-Invariant Graph Networks (k-IGNs). Hence, for such GNN families, we have an exact equivalence on finite graph spaces G. Therefore, we define substructure counting alternatively as follows, which is equivalent thanks to the results above and easier to work with when we study particular GNN architectures:

Definition 2.
We say F is able to perform matching-count (or containment-count) of a pattern G^[P] on G if for all G^[1], G^[2] ∈ G such that M(G^[1]; G^[P]) ≠ M(G^[2]; G^[P]) (or C(G^[1]; G^[P]) ≠ C(G^[2]; G^[P])), F is able to distinguish G^[1] and G^[2].

Another benefit of this definition is that it naturally allows us to also define the ability of graph isomorphism tests to count substructures. A graph isomorphism test, such as the Weisfeiler-Lehman (WL) test, takes as input a pair of graphs and returns whether or not they are believed to be isomorphic. Typically, the test will return true if the two graphs are indeed isomorphic but does not necessarily return false for every pair of non-isomorphic graphs. Given such a graph isomorphism test, we say it is able to perform matching-count (or containment-count) of a pattern G^[P] on G if for all G^[1], G^[2] ∈ G such that M(G^[1]; G^[P]) ≠ M(G^[2]; G^[P]) (or C(G^[1]; G^[P]) ≠ C(G^[2]; G^[P])), the test can tell these two graphs apart. Additional notations used in the proofs are given in Appendix A.

Message Passing Neural Networks and k-Weisfeiler-Lehman tests

Message Passing Neural Network (MPNN) is a generic model that incorporates many popular architectures, and it is based on learning local aggregations of information in the graph (Gilmer et al., 2017). When applied to an undirected graph G = (V, E, x, e), an MPNN with T layers is defined iteratively as follows. For t < T, to compute the message m_i^(t+1) and the hidden state h_i^(t+1) for each node i ∈ V at the (t+1)-th layer, we apply the following update rule:

m_i^(t+1) = Σ_{j ∈ N(i)} M_t(h_i^(t), h_j^(t), e_{i,j})
h_i^(t+1) = U_t(h_i^(t), m_i^(t+1))

where N(i) is the neighborhood of node i in G, M_t is the message function at layer t and U_t is the vertex update function at layer t. Finally, a graph-level prediction is computed as

ŷ = R({h_i^(T) : i ∈ V}),

where R is the readout function.
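As a sketch, one message-passing layer can be instantiated numerically as follows. This is our own toy example, not the paper's implementation: M_t is taken linear in the neighbor state, U_t is a tanh of a linear combination, edge features are omitted, and R is a sum; all names are ours.

```python
import numpy as np

def mpnn_layer(A, H, W_msg, W_upd):
    """One message-passing layer on adjacency matrix A and node states H.

    m_i = sum_{j in N(i)} (W_msg^T h_j)   (a linear message function M_t)
    h_i' = tanh(W_upd^T h_i + m_i)        (a simple update function U_t)
    """
    M = A @ (H @ W_msg)          # row i sums the transformed neighbor states
    return np.tanh(H @ W_upd + M)

def readout(H):
    """A permutation-invariant readout R: sum the final hidden states."""
    return H.sum(axis=0)
```

Permuting the node ordering permutes A and H consistently and leaves the readout unchanged, which is the permutation invariance required of a graph-level prediction.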
Typically, the hidden states at the first layer are set as h_i^(0) = x_i. Xu et al. (2018a) and Morris et al. (2019) show that, when the graphs' edges are unweighted, such models are at most as powerful as the Weisfeiler-Lehman (WL) test in distinguishing non-isomorphic graphs. We will prove an extension of this result that incorporates edge features, which MPNNs naturally accommodate, so that by examining the ability of 2-WL to count substructures, we can draw conclusions for MPNNs. Before that, we will first introduce the hierarchy of k-WL tests.

The hierarchy of k-Weisfeiler-Lehman (k-WL) tests

We will introduce the general k-WL test for k ∈ N*, applied to a pair of graphs G^[1] and G^[2]. Assume that the two graphs have the same number of vertices, since otherwise they can be told apart easily. Without loss of generality, we assume that they share the same set of vertex indices, V (but can differ in E, x or e). For each of the graphs, at iteration 0, the test assigns an initial color in some color space to every k-tuple in V^k according to its isomorphism type¹, and then updates the coloring in every iteration. For any k-tuple s = (i_1, ..., i_k) ∈ V^k, we let c_k^(t)(s) and c'_k^(t)(s) denote its colors at iteration t in G^[1] and G^[2], respectively. For w ∈ [k], define the neighborhood

N_w(s) = {(i_1, ..., i_{w−1}, j, i_{w+1}, ..., i_k) : j ∈ V}.

Given c_k^(t−1) and c'_k^(t−1), define

C_w^(t)(s) = Hash_{t,1}({c_k^(t−1)(s̃) : s̃ ∈ N_w(s)})
C'_w^(t)(s) = Hash_{t,1}({c'_k^(t−1)(s̃) : s̃ ∈ N_w(s)})

with "{}" representing a multiset, and Hash_{t,1} being some hash function that maps injectively from the space of multisets of colors to some intermediate space. Then let

c_k^(t)(s) = Hash_{t,2}(c_k^(t−1)(s), C_1^(t)(s), ..., C_k^(t)(s))
c'_k^(t)(s) = Hash_{t,2}(c'_k^(t−1)(s), C'_1^(t)(s), ..., C'_k^(t)(s))

where Hash_{t,2} maps injectively from its input space to the space of colors.
The test will terminate and return the result that the two graphs are not isomorphic if at some iteration t the following two multisets differ:

{c_k^(t)(s) : s ∈ V^k} ≠ {c'_k^(t)(s) : s ∈ V^k}.

Some properties of k-WL

For graphs with unweighted edges, 1-WL and 2-WL are known to have the same discriminative power (Maron et al., 2019b). For k ≥ 2, it is known that (k+1)-WL is strictly more powerful than k-WL, in the sense that there exist pairs of graphs distinguishable by the former but not the latter (Cai et al., 1992). Thus, with growing k, the set of k-WL tests forms a hierarchy with increasing discriminative power. Note that there has been a different definition of WL in the literature, sometimes known as Folklore Weisfeiler-Lehman (FWL), with different properties (Maron et al., 2019b; Morris et al., 2019). When people use the term "Weisfeiler-Lehman test" without specifying k, it usually refers to 1-WL, 2-WL or 1-FWL.

Theorem 1. If 2-WL cannot distinguish a pair of graphs G^[1] and G^[2], then no MPNN can distinguish them either.

Proof intuition: If 2-WL cannot distinguish the two graphs, then at any iteration t, {c_2^(t)(s) : s ∈ V^2} = {c'_2^(t)(s) : s ∈ V^2}. This guarantees the existence of a bijective map from pairs of nodes in G^[1] to pairs of nodes in G^[2] that preserves the coloring. Through examining the update rules of 2-WL and MPNNs, we will show by induction that for any MPNN, at the t-th layer, such a map will also preserve the hidden states of the nodes involved in the pair as well as the edge feature. This implies that any MPNN with t layers will return identical outputs when applied to the two graphs.

This result motivates us to study what patterns 2-WL can and cannot count in the next subsection.

Substructure counting by 2-WL and MPNNs

Whether or not 2-WL can perform matching-count of a pattern is completely characterized by the number of nodes in the pattern.
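Before stating the general result, a standard example illustrates the difficulty: the 6-cycle and the disjoint union of two triangles are both 2-regular, contain 0 and 2 triangles respectively, yet receive identical color histograms under 1-WL color refinement. The sketch below is our own code, and this classic pair is an illustration rather than the construction of the paper's Figure 2.

```python
from collections import Counter

def wl_colors(adj):
    """1-WL color refinement on an adjacency-list dict; returns the stable color histogram."""
    colors = {v: 0 for v in adj}  # uniform initial colors (unattributed nodes)
    for _ in range(len(adj)):
        # each node's signature: its color plus the multiset of neighbor colors
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
        new = {v: relabel[sigs[v]] for v in adj}
        if new == colors:   # refinement has stabilized
            break
        colors = new
    return Counter(colors.values())

hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Both graphs stabilize with all six nodes sharing one color, so the test cannot separate them despite the different triangle counts.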
Any connected pattern with 1 or 2 nodes (i.e., representing a node or an edge) can be easily counted by an MPNN with 0 and 1 layers of message-passing, respectively, or by 2-WL with 0 iterations². In contrast, for all other patterns, we provide the following negative result, to be proved in Appendix D.

Theorem 2. 2-WL cannot perform matching-count of any connected pattern with 3 or more nodes.

Proof intuition. Given any connected pattern of at least 3 nodes, we can construct a pair of graphs that have different matching-counts of the pattern but cannot be distinguished from each other by 2-WL. For instance, if we run 2-WL on the pair of graphs in Figure 2, then {c_2^(t)(s) : s ∈ V^2} = {c'_2^(t)(s) : s ∈ V^2} for all t, which implies that 2-WL cannot distinguish the two graphs.

Thus, together with Theorem 1, we have

Corollary 1. MPNNs cannot perform matching-count of any connected pattern with 3 or more nodes.

For containment-count, if both nodes and edges are unweighted, Arvind et al. (2018) show that the only patterns 1-WL (and equivalently 2-WL) can count are star-shaped patterns and pairs of disjoint edges. We prove the positive result that MPNNs can count star-shaped patterns even when node and edge features are allowed, utilizing a result in Xu et al. (2018a) that the message functions are able to approximate any function on multisets.

Theorem 3. MPNNs can perform containment-count of star-shaped patterns.

By Theorem 1, this implies that

Corollary 2. 2-WL can perform containment-count of star-shaped patterns.

² Rigorously, this is a special case of Theorem 4.

Substructure counting by k-WL

There have been efforts to extend the power of GNNs by going after k-WL for higher k, such as Morris et al. (2019). Thus, it is also interesting to study the patterns that k-WL can and cannot count. Firstly, since k-tuples are assigned initial colors based on their isomorphism types, the following is easily seen, and we provide a proof in Appendix F.

Theorem 4.
k-WL, at initialization, is able to perform both matching-count and containment-count of patterns consisting of at most k nodes.

This establishes a potential hierarchy of increasing power in terms of substructure counting by k-WL. However, tighter results can be much harder to achieve. For example, to show that 2-FWL (and therefore 3-WL) cannot count cycles of length 8, Fürer (2017) has to rely on performing computer counting on the classical Cai-Fürer-Immerman counterexamples to k-WL (Cai et al., 1992). We leave the pursuit of general and tighter characterizations of k-WL's substructure counting power for future research, but we are nevertheless able to provide a partial negative result concerning finite iterations of k-WL.

Definition 3. A path pattern of size m, denoted by H_m, is an unattributed graph H_m = (V^[H_m], E^[H_m]), where V^[H_m] = [m] and E^[H_m] = {(i, i+1) : 1 ≤ i < m} ∪ {(i+1, i) : 1 ≤ i < m}.

Theorem 5. Running T iterations of k-WL cannot perform matching-count of any path pattern of (k+1)2^T or more nodes.

The proof is given in Appendix G. This bound grows quickly when T becomes large. However, since in practice many if not most GNN models are designed to be shallow (Zhou et al., 2018; Wu et al., 2019), we believe this result is still relevant for studying finite-depth GNNs that are based on k-WL.

Invariant Graph Networks

Recently, diverging from the strategy of local aggregation of information as adopted by MPNNs and k-WL, an alternative family of GNN models called Invariant Graph Networks (IGNs) was introduced in Maron et al. (2018, 2019c,b). Here we restate its definition.

Definition 4. A k-th-order Invariant Graph Network (k-IGN) is a function F : R^{n^k × d_0} → R that can be decomposed in the following way:

F = m ∘ h ∘ L^(T) ∘ σ ∘ ··· ∘ σ ∘ L^(1),

where each L^(t) is a linear equivariant layer from R^{n^k × d_{t−1}} to R^{n^k × d_t}, σ is a pointwise activation function, h is a linear invariant layer from R^{n^k × d_T} to R, and m is an MLP.
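For intuition about the linear equivariant layers in Definition 4 in the case k = 2: such a layer commutes with simultaneous row and column permutation of an n × n tensor. A few members of the basis of such maps are easy to write down (Maron et al. (2018) show the full basis has 15 elements for k = 2, independent of n); the code below is our own sketch for the single-channel case.

```python
import numpy as np

def equivariant_ops(B):
    """A few linear permutation-equivariant maps R^{n x n} -> R^{n x n}.

    Each op satisfies op(P B P^T) = P op(B) P^T for any permutation matrix P.
    """
    n = B.shape[0]
    return [
        B,                                               # identity
        B.T,                                             # transpose
        np.diag(np.diag(B)),                             # keep only the diagonal
        np.tile(B.sum(axis=1, keepdims=True), (1, n)),   # broadcast row sums
        np.ones((n, n)) * B.sum(),                       # broadcast total sum
    ]
```

A 2-IGN layer is a learned linear combination of such basis operations applied channel-wise, followed by the pointwise nonlinearity σ.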
Maron et al. (2019c) show that if k is allowed to grow as a function of the size of the graphs, then k-IGNs can achieve universal approximation of permutation-invariant functions on graphs. Nonetheless, due to the quick growth of computational complexity and implementation difficulty as k increases, in practice it is hard to have k > 2, while for k = 2 the universal approximation power is proven to be lost (Chen et al., 2019b). However, it remains interesting to study what 2-IGNs are capable of doing, especially from the perspective of substructure counting.

Note that a 2-IGN takes as input a third-order tensor B^(0), defined for a given graph G = (V = [n], E, x, e) in the following way. Supposing without loss of generality that the node and edge features both have dimension d, we have B^(0) ∈ R^{n×n×(d+1)}, such that: for all i ∈ [n], B^(0)_{i,i,2:(d+1)} = x_i; and for all i, j ∈ [n] with i ≠ j, B^(0)_{i,j,1} = A_{i,j} and B^(0)_{i,j,2:(d+1)} = e_{i,j}. If we use B^(t) to denote the output of the t-th layer of the 2-IGN, then the layer outputs are obtained iteratively by

B^(t+1) = σ(L^(t)(B^(t))).    (1)

2-IGNs equivalent to 2-WL

Before studying how well 2-IGNs can count substructures, we first relate them to 2-WL. It is known that 2-IGNs are at least as powerful as 2-WL, while the other direction has remained an open problem (Maron et al., 2019c,a). Here we answer the question by proving the converse, that 2-IGNs are no more powerful than 2-WL. The full argument can be found in Appendix H.

Theorem 6. If two graphs G^[1] and G^[2] cannot be distinguished by the 2-WL test, then there is no 2-IGN that can distinguish them either.

Proof intuition: Given two nodes i, j ∈ V with i ≠ j, we can partition V^2 as the union of nine disjoint subsets: A_1 = {(i, j)}, A_2 = {(i, i)}, A_3 = {(j, j)}, A_4 = {(i, k) : k ∉ {i, j}}, A_5 = {(k, i) : k ∉ {i, j}}, A_6 = {(j, k) : k ∉ {i, j}}, A_7 = {(k, j) : k ∉ {i, j}}, A_8 = {(k, l) : k ≠ l and {k, l} ∩ {i, j} = ∅}, and A_9 = {(k, k) : k ∉ {i, j}}.
If 2-WL cannot distinguish the two graphs in t iterations, then there exists not only a color-preserving bijective map from pairs of nodes in G^[1] to pairs of nodes in G^[2], mapping (i, j) to some (i', j'), but also a color-preserving bijective map from A_w to A'_w for each w ∈ [9], where A'_w is the corresponding subset of V^2 associated with (i', j'). By the update rule of 2-IGNs, this allows us to show that B^(t)_{i,j} = B'^(t)_{i',j'}, and hence a t-layer 2-IGN cannot return distinct outputs when applied to the two graphs.

Substructure counting by 2-IGNs

Thanks to the equivalence shown above, the two following corollaries are direct consequences of Theorem 2 and Corollary 2, though we also provide a direct proof of Corollary 4 in Appendix I.

Corollary 4. 2-IGNs cannot perform matching-count of any connected pattern with 3 or more nodes.

Corollary 5. 2-IGNs can perform containment-count of star-shaped patterns.

Substructure counting by k-IGNs

Since k-IGNs are no less powerful than k-WL (Maron et al., 2019b), we have as a corollary of Theorem 4 that

Corollary 6. k-IGNs can perform both matching-count and containment-count of patterns consisting of at most k nodes.

Local Relational Pooling

Though MPNNs and 2-IGNs are able to aggregate information from multi-hop neighborhoods, we have seen above that they are unable to preserve information such as the matching-counts of nontrivial patterns. To bypass such limitations, we suggest going beyond the strategy of iteratively aggregating information in an equivariant way, which underlies both MPNNs and IGNs. One helpful observation is that, if a pattern is present in the graph, it can always be found in a sufficiently large local neighborhood, or egonet, of some node in the graph (Preciado et al., 2012). An egonet of depth l centered at a node i is the induced subgraph consisting of i and all nodes within distance l from it. Note that any pattern with radius r is a subgraph of some egonet of depth l = r.
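Extracting egonets is simple in practice; below is a minimal sketch of our own using networkx, whose `nx.ego_graph` returns the induced subgraph on all nodes within the given radius of the center (the wrapper name is ours):

```python
import networkx as nx

def egonets(G, depth):
    """Return the depth-`depth` egonet (induced subgraph) around every node of G."""
    return {i: nx.ego_graph(G, i, radius=depth) for i in G.nodes}
```

Any triangle through node i, for instance, lies entirely inside the depth-1 egonet of i, which is why a sufficiently powerful local model applied per egonet can recover pattern counts.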
Hence, by applying a powerful local model to each egonet separately and then aggregating the outputs, we could potentially obtain a model capable of counting patterns. For such a local model, we adopt the Relational Pooling (RP) idea from Murphy et al. (2019). In summary, it creates a powerful permutation-invariant model by symmetrizing a powerful model that is not necessarily permutation-invariant, where the symmetrization is performed by averaging or summing over all permutations of the nodes' ordering. Formally, if B ∈ R^{n×n×d} is a node-ordering-dependent representation of the graph G, such as the adjacency matrix or the B^(0) defined above for 2-IGNs, then define f_RP(G) = Σ_{π∈S_n} f̄(π ∘ B), where f̄ can be some non-permutation-invariant function, S_n is the set of permutations on n nodes, and π ∘ B is B transformed by permuting its first two dimensions according to π. Such functions f_RP are shown to be universal approximators of permutation-invariant functions (Murphy et al., 2019). The summation quickly becomes intractable once n is large, and hence approximation methods have been introduced. In our case, however, since we apply this model to egonets that are usually smaller than the entire graph, the tractability issue is greatly alleviated. Moreover, since egonets are rooted graphs, we can reduce the symmetrization over all permutations in S_n to the subset S^BFS_{i,l} of orderings consistent with a breadth-first search from the center node i. For computational efficiency, every tensor representation B of an egonet is cropped into a fixed-size subtensor C_k(B) = B_{[k],[k],:} ∈ R^{k×k×d}. Then our model over the entire graph G is expressed as f^{l,k}_LRP(G) = Σ_{i∈V} Σ_{π∈S^BFS_{i,l}} f̄(C_k(π ∘ B^[ego]_{i,l})). We call it depth-l size-k Local Relational Pooling (LRP-l-k). If node degrees are upper-bounded by D, the time complexity is O(n · (D!)^{D^l} · k^2), and hence linear in n if D, k and l are fixed.
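A minimal numeric sketch of the egonet-plus-relational-pooling idea follows. The function names are ours; for simplicity it symmetrizes over all orderings of the egonet (the plain RP summation) rather than only the BFS-consistent subset used in the paper, and `f_hat` is a user-supplied, generally non-invariant function:

```python
import itertools
import numpy as np

def egonet_nodes(A, i, l):
    """Nodes of the depth-l egonet centered at node i: i plus every node
    within graph distance l of i (A is a 0/1 adjacency matrix)."""
    nodes, frontier = {i}, {i}
    for _ in range(l):
        nxt = set()
        for j in frontier:
            nxt |= set(np.flatnonzero(A[j]).tolist())
        frontier = nxt - nodes
        nodes |= frontier
    return sorted(nodes)

def local_relational_pooling(A, f_hat, l=1, k=4):
    """Toy depth-l size-k LRP: for each node, symmetrize f_hat over
    orderings of its egonet, cropping the reordered adjacency to the
    top-left k-by-k block, then sum over all centers. Brute-force over
    all orderings -- illustrative only."""
    total = 0.0
    for i in range(A.shape[0]):
        for perm in itertools.permutations(egonet_nodes(A, i, l)):
            P = A[np.ix_(perm, perm)]   # egonet adjacency under this ordering
            C = np.zeros((k, k))
            m = min(k, len(perm))
            C[:m, :m] = P[:m, :m]       # crop/pad to fixed size k
            total += f_hat(C)
    return total
```

Because every ordering of a node's egonet is visited, the result is permutation-invariant even when `f_hat` is not; the BFS restriction in the paper serves only to cut the number of orderings that must be enumerated.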
In the experiments below, we implement a variant of LRP-1-4 designed as (bias terms ignored) f^{1,4}_LRP(G) = W_1 Σ_{i∈V} σ( (Mlp(D_i) / |S^BFS_{i,1}|) Σ_{π∈S^BFS_{i,1}} f*(π ∘ B^[ego]_{i,1}) ), where D_i is the degree of node i, σ is ReLU, Mlp maps from R to R^H with H the hidden dimension, W_1 ∈ R^{1×H}, and ∀j ∈ [H], (f*(X))_j = tanh(⟨W_{2,j}, C_4(X)⟩) ∈ R with W_{2,j} ∈ R^{4×4×d}. The motivation for Mlp(D_i) is to adaptively learn a permutation-invariant aggregation, such as summing or averaging.

Experiments

Tasks. In this section, we verify our theoretical results on two graph-level regression tasks: matching-counting triangles and containment-counting 3-stars, with both patterns unattributed, as illustrated in Figure 3. By Theorem 2 and Corollary 1, MPNNs and 2-IGNs cannot perform matching-count of triangles. Note that since a triangle is a clique, its matching-count and containment-count are equal. We generate the ground-truth counts of triangles in each graph with a counting algorithm proposed by Shervashidze et al. (2009). By Theorem 3 and Corollary 2, MPNNs and 2-IGNs can perform containment-count, though not matching-count, of 3-stars. For its ground-truth count, we compute the number of 3-stars centered at each node as the binomial coefficient (d choose 3), where d is the node's degree, and then sum over all nodes in the graph.

Synthetic datasets. We generate two synthetic datasets of random unattributed graphs. The first is a set of 5000 Erdős-Rényi random graphs, denoted ER(m, p), where m = 10 is the number of nodes in each graph and p = 0.3 is the probability that each edge exists. The second is a set of 5000 random regular graphs (Steger and Wormald, 1999), denoted RG(m, d), where m is the number of nodes in each graph and d is the node degree. We uniformly sample (m, d) from {(10, 6), (15, 6), (20, 5), (30, 5)}. We then randomly delete m edges from each graph in the second dataset.
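The two ground-truth counts described above admit simple closed forms for unattributed graphs, sketched below. The paper uses the algorithm of Shervashidze et al. (2009) for triangles; trace(A^3)/6 is an equivalent check for simple undirected graphs, and the 3-star count is the sum of binomial(d, 3) over node degrees:

```python
import numpy as np
from math import comb

def count_3stars(A):
    """Containment-count of 3-stars: sum over nodes of C(deg, 3),
    as described in the text."""
    degs = A.sum(axis=1)
    return sum(comb(int(d), 3) for d in degs)

def count_triangles(A):
    """Matching-count of triangles (equal to the containment-count,
    since a triangle is a clique): each triangle contributes 6 closed
    walks of length 3, so the count is trace(A^3) / 6."""
    return int(np.trace(np.linalg.matrix_power(A, 3))) // 6
```

For example, the complete graph K4 contains 4 triangles and 4 distinct 3-stars (one centered at each degree-3 node).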
For both datasets, we randomly split them into training, validation and test sets with proportions 30%-20%-50%.

Results. The results on the two tasks are shown in Table 1, measured by the MSE on the test set divided by the variance of the ground-truth counts of the pattern computed over all graphs in the dataset. Firstly, the almost-negligible errors of LRP on all the tasks support our theory that depth-1 LRP is powerful enough for counting triangles and 3-stars, both of which are patterns with radius 1. GIN, 2-IGN and sGNN produce much smaller test error than the variance of the ground-truth counts on the 3-star tasks, consistent with their theoretical power to perform containment-count of stars. Relative to the variance of the ground-truth counts, GIN and 2-IGN have worse top performance on the triangle task than on the 3-star task, also as expected from the theory. Moreover, the experimental results provide interesting insights into average-case performance on the substructure counting tasks, which is beyond what our theory can predict at this point.

Table 1: Performance of different GNNs on matching-counting triangles and containment-counting 3-stars on the two datasets (columns: Erdős-Rényi, Random Regular), measured by test MSE divided by the variance of the ground-truth counts. Shown here are the best and the median performances of each model over five runs. Note that we select the best of four variants for each of GCN, GIN and sGNN, and the better of two variants for 2-IGN. Details of the GNN architectures and raw results can be found in Appendices J and K.

Conclusions. We propose a theoretical framework to study the expressive power of classes of GNNs based on their ability to count substructures. We distinguish two kinds of counting: containment-count (counting subgraphs) and matching-count (counting induced subgraphs).
We prove that neither MPNNs nor 2-IGNs can matching-count any connected structure with 3 or more nodes, while k-IGNs and k-WL can containment-count and matching-count any pattern of size at most k. We also provide an upper bound on the size of "path-shaped" substructures that finite iterations of k-WL can matching-count. To establish these results, we prove an equivalence between approximating graph functions and discriminating graphs. Also, as intermediary results, we prove that MPNNs are no more powerful than 2-WL on attributed graphs, and that 2-IGNs are equivalent to 2-WL in distinguishing non-isomorphic graphs, which partly answers an open problem raised in Maron et al. (2019a). In addition, we perform numerical experiments that support our theoretical results and show that the Local Relational Pooling approach inspired by Murphy et al. (2019) can successfully count certain substructures. In summary, we build the foundation for using substructure counting as an intuitive and relevant measure of the expressive power of GNNs, and our concrete results for existing GNNs motivate the search for more powerful designs. One limitation of our theory is that it only pertains to the expressive power of GNNs and does not speak to optimization or generalization. In addition, our theoretical results are worst-case in nature and cannot predict average-case performance, which is interesting to study as well. Nonetheless, even within this new framework, many interesting questions remain, including better characterizing the ability of general k-WL and k-IGNs to count substructures, as well as that of other architectures such as spectral GNNs (Chen et al., 2019a).
A Additional notations

For two positive integers a and b, we define Mod_a(b) to be a if a divides b, and otherwise the number c such that b ≡ c (mod a). Hence the value ranges over 1 to a as we vary b ∈ N*. For a positive integer c, let [c] denote the set {1, ..., c}. Two k-tuples (i_1, ..., i_k), (j_1, ..., j_k) ∈ V^k are said to be in the same equivalence class if there exists a permutation π on V such that (π(i_1), ..., π(i_k)) = (j_1, ..., j_k). Note that belonging to the same equivalence class is a weaker condition than having the same isomorphism type, as defined in Appendix B, which has to do with what the graphs look like. For any k-tuple s = (i_1, ..., i_k) and w ∈ [k], use I_w(s) to denote the w-th entry of s, namely i_w.

B Isomorphism types of k-tuples in k-WL for attributed graphs

Say G^[1] = (V^[1], E^[1], x^[1], e^[1]), G^[2] = (V^[2], E^[2], x^[2], e^[2]).

a) ∀s = (i_1, ..., i_k), s' = (i'_1, ..., i'_k) ∈ (V^[1])^k, s and s' are said to have the same isomorphism type if
1. ∀α, β ∈ [k], i_α = i_β ⇔ i'_α = i'_β;
2. ∀α ∈ [k], x^[1]_{i_α} = x^[1]_{i'_α};
3. ∀α, β ∈ [k], (i_α, i_β) ∈ E^[1] ⇔ (i'_α, i'_β) ∈ E^[1], and moreover, if either side is true, then e^[1]_{i_α,i_β} = e^[1]_{i'_α,i'_β}.

b) Similarly if both s, s' ∈ (V^[2])^k.

c) ∀s = (i_1, ..., i_k) ∈ (V^[1])^k, s' = (i'_1, ..., i'_k) ∈ (V^[2])^k, s and s' are said to have the same isomorphism type if
1. ∀α, β ∈ [k], i_α = i_β ⇔ i'_α = i'_β;
2. ∀α ∈ [k], x^[1]_{i_α} = x^[2]_{i'_α};
3. ∀α, β ∈ [k], (i_α, i_β) ∈ E^[1] ⇔ (i'_α, i'_β) ∈ E^[2], and moreover, if either side is true, then e^[1]_{i_α,i_β} = e^[2]_{i'_α,i'_β}.

C Proof of Theorem 1 (MPNNs are no more powerful than 2-WL)

Proof. Suppose for contradiction that there exists an MPNN with T_0 layers that can distinguish the two graphs. Let m^(t) and h^(t), m'^(t) and h'^(t) be the messages and hidden states at layer t obtained by applying the MPNN to the two graphs, respectively.
Definẽ h (t) i,j = h (t) i if i = j h (t) i , h (t) j , a i,j , e i,j otherwisẽ h (t) i,j = h (t) i if i = j h (t) i , h (t) j , a i,j , e i,j otherwise, where a i,j = 1 if (i, j) ∈ E [1] and 0 otherwise, e i,j = e [1] i,j is the edge feature of the first graph, and a , e are defined similarly for the second graph. Since the two graphs cannot be distinguished by 2-WL, then for the T 0 th iteration, there is {c (T0) 2 (s) : s ∈ V 2 } = {c (T0) 2 (s) : s ∈ V 2 }, which implies that there exists a permutation on V 2 , which we can call η 0 , such that ∀s ∈ V 2 , there is c (T0) 2 (s) = c (T0) 2 (η 0 (s)). To take advantage of this condition, we introduce the following lemma, which is central to the proof. Lemma 2. ∀t ≤ T 0 , ∀i, j, i , j ∈ V , if c (t) 2 ((i, j)) = c (t) 2 ((i , j )), then 1. i = j ⇔ i = j . 2.h (t) i,j =h (t) i ,j Proof of Lemma 2: First, we state the following simple observation without proof, which is immediate given the update rule of k-WL: Lemma 3. For k-WL, ∀s, s ∈ V k , if for some t 0 , c (t0) k (s) = c (t0) k (s ), then ∀t ∈ [0, t 0 ], c (t) k (s) = c (t) k (s ). For the first condition, assuming c (t) 2 ((i, j)) = c (t) 2 ((i , j )), Lemma 3 then tells us that c (0) 2 ((i, j)) = c (0) 2 ((i , j )). Since the colors in 2-WL are initialized by the isomorphism type of the node pair, it has to be that i = j ⇔ i = j . We will prove the second condition by induction on t. For the base case, t = 0, we want to show that ∀i, j, i , j ∈ V , if c (0) 2 ((i, j)) = c (0) 2 ((i , j )) thenh (0) i,j =h (0) i ,j . If i = j, then c (0) 2 ((i, i)) = c (0) 2 ((i , i )) if and only if x i = x i , which is equivalent to h (0) i = h (0) i , and henceh (0) i =h (0) i . If i = j, 2 ((i, j)) = c (0) 2 ((i , j )) implies that x i = x i ⇒ h (0) i = h (0) i x j = x j ⇒ h (0) j = h (0) j a i,j = a i ,j e i,j = e i ,j which yieldsh (0) i,j =h (0) i ,j . 
Next, to prove the inductive step, assume that for some T ∈ [T 0 ], the statement in Lemma 2 holds for all t ≤ T − 1, and consider ∀i, j, i , j ∈ V such that c (T ) 2 ((i, j)) = c (T ) 2 ((i , j )). By the update rule of 2-WL, this implies that c (T −1) 2 ((i, j)) = c (T −1) 2 ((i , j )) {c (T −1) 2 ((k, j)) : k ∈ V } = {c (T −1) 2 ((k, j )) : k ∈ V } {c (T −1) 2 ((i, k)) : k ∈ V } = {c (T −1) 2 ((i , k)) : k ∈ V }(2) The first condition, thanks to the inductive hypothesis, implies thath (T −1) i,j =h (T −1) i ,j . In particular, if i = j, then we have a i,j = a i ,j e i,j = e i ,j(3) The third condition implies that ∃ a permutation on V , which we can call ξ i,i , such that ∀k ∈ V , c (T −1) 2 ((i, k)) = c (T −1) 2 ((i , ξ i,i (k))) By the inductive hypothesis, there is ∀k ∈ V , h (T −1) i,k =h (T −1) i ,ξ i,i (k) and moreover, ξ i,i (k) = i if and only if k = i. For k = i, we thus have h (T −1) i = h (T −1) i h (T −1) k = h (T −1) ξ i,i (k) a i,k = a i ,ξ i,i (k) e i,k = e i ,ξ i,i (k) Now, looking at the update rule at the T th layer of the MPNN, m (T ) i = k∈N (i) M T (h (T −1) i , h (T −1) k , e i,k ) = k∈V a i,k · M T (h (T −1) i , h (T −1) k , e i,k ) = k∈V a i ,ξ i,i (k) · M T (h (T −1) i , h (T −1) ξ i,i (k) , e i ,ξ i,i (k) ) = k ∈V a i ,k · M T (h (T −1) i , h (T −1) k , e i ,k ) = m (T ) i where between the third and the fourth line we made the substitution k = ξ i,i (k). Therefore, h (T ) i = U t (h (T −1) i , m T i ) = U t (h (T −1) i , m T i ) = h (T ) i By the symmetry between i and j, we can also show that h (T ) j = h (T ) j . Hence, together with 3, we can conclude thath (T ) i,j =h (T ) i ,j , which proves the lemma. Thus, the second result of this lemma tells us that ∀i, j ∈ V 2 ,h (T0) i,j =h (T0) η0(i,j) . Moreover, by the first result, ∃ a permutation on V , which we can call τ 0 , such that ∀i ∈ V , η((i, i)) = (τ 0 (i), τ 0 (i)). 
Combining the two, we have that ∀i ∈ V, h^(T_0)_i = h'^(T_0)_{τ_0(i)}, and hence {h^(T_0)_i : i ∈ V} = {h'^(T_0)_i : i ∈ V}. (4) Therefore, ŷ = ŷ', meaning that the MPNN returns identical outputs on the two graphs, a contradiction.

If G^[P] is not a clique, we pick two distinct nonadjacent nodes 1, 2 ∈ V^[P] and construct G^[1] and G^[2] on the node set [2m] from two disjoint copies of G^[P], as follows. Let E^[1] = {(i, j) : i, j ≤ m, (i, j) ∈ E^[P]} ∪ {(i + m, j + m) : i, j ≤ m, (i, j) ∈ E^[P]} ∪ {(1, 2), (2, 1), (1 + m, 2 + m), (2 + m, 1 + m)}; ∀i ≤ m, x^[1]_i = x^[1]_{i+m} = x^[P]_i; ∀(i, j) ∈ E^[P], e^[1]_{i,j} = e^[1]_{i+m,j+m} = e^[P]_{i,j}, and moreover we can randomly choose a value of edge feature for the added edges, e^[1]_{1,2} = e^[1]_{2,1} = e^[1]_{1+m,2+m} = e^[1]_{2+m,1+m}. Let E^[2] = {(i, j) : i, j ≤ m, (i, j) ∈ E^[P]} ∪ {(i + m, j + m) : i, j ≤ m, (i, j) ∈ E^[P]} ∪ {(1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m)}; ∀i ≤ m, x^[2]_i = x^[2]_{i+m} = x^[P]_i; ∀(i, j) ∈ E^[P], e^[2]_{i,j} = e^[2]_{i+m,j+m} = e^[P]_{i,j}, and moreover we let the added crossed edges carry the same edge-feature value as was chosen for G^[1].

On one hand, by construction, 2-WL is not able to distinguish G^[1] from G^[2]. This is intuitive if we compare the rooted subtrees in the two graphs, as there exists a bijection from V^[1] to V^[2] that preserves the rooted subtree structure. A rigorous proof is given at the end of this section. In addition, we note that this is also a consequence of the direct proof of Corollary 4 given in Appendix I, in which we show that the same pair of graphs cannot be distinguished by 2-IGNs. Since 2-IGNs are no less powerful than 2-WL (Maron et al., 2019b), this implies that 2-WL cannot distinguish them either. On the other hand, G^[1] and G^[2] have different matching-counts of the pattern. G^[1] contains no induced subgraph isomorphic to G^[P]. Intuitively this is obvious; to be rigorous, note that firstly, neither the subgraph induced by the nodes {1, ..., m} nor the subgraph induced by the nodes {1 + m, ..., 2m} is isomorphic to G^[P], and secondly, the subgraph induced by any other set of m nodes is not connected, whereas G^[P] is connected. G^[2], however, has at least two induced subgraphs isomorphic to G^[P]: one induced by the nodes {1, ..., m}, and the other induced by the nodes {1 + m, ..., 2m}.
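A minimal numeric sketch of the two-copies-versus-crossed-edges construction above, for unattributed patterns (the function name is ours; nodes are 0-indexed, and the pattern's nodes 0 and 1 are assumed nonadjacent, e.g. the two leaves of a 2-star). The resulting pair has identical degree sequences yet differs on substructure counts:

```python
import numpy as np

def counterexample_pair(Ap):
    """Given the adjacency matrix Ap (m nodes) of a pattern whose nodes
    0 and 1 are nonadjacent, build the pair (G1, G2): both start as two
    disjoint copies of the pattern; G1 additionally gets the intra-copy
    edges {0,1} and {m, m+1}, while G2 instead gets the crossed edges
    {0, m+1} and {1, m}."""
    m = Ap.shape[0]
    base = np.zeros((2 * m, 2 * m), dtype=int)
    base[:m, :m] = Ap
    base[m:, m:] = Ap
    A1, A2 = base.copy(), base.copy()
    A1[0, 1] = A1[1, 0] = 1              # edge inside copy 1
    A1[m, m + 1] = A1[m + 1, m] = 1      # edge inside copy 2
    A2[0, m + 1] = A2[m + 1, 0] = 1      # crossed edge
    A2[1, m] = A2[m, 1] = 1              # crossed edge
    return A1, A2
```

For the 2-star pattern on 3 nodes, G1 becomes two disjoint triangles while G2 becomes a single 6-cycle: same degree sequence (so many local statistics agree), but triangle counts 2 versus 0.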
If G^[P] is a clique, then we also first construct G^[1], G^[2] from G^[P] as two copies of G^[P]. Then, for G^[1], we pick two distinct nodes 1, 2 ∈ V^[P] and remove the edges (1, 2), (2, 1), (1 + m, 2 + m) and (2 + m, 1 + m) from E^[1], while adding the edges (1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m) with the same edge features. Then G^[1] contains no induced subgraph isomorphic to G^[P], while G^[2] contains two. Note that this pair of graphs is the same as the counterexample pair that would be constructed in the non-clique case for the pattern that is a clique with one edge deleted. Hence 2-WL still cannot distinguish G^[1] from G^[2].

Proof that 2-WL fails to distinguish G^[1] and G^[2]: To show that 2-WL cannot distinguish G^[1] from G^[2], we need to show that if we run 2-WL on the two graphs, then ∀T, {c^(T)((i, j)) : i, j ∈ V} = {c^(T)((i, j)) : i, j ∈ V}, where, abusing notation, c^(T) denotes the coloring in G^[1] in the first multiset and in G^[2] in the second. For this to hold, it is sufficient to find a bijective map η : V^2 → V^2 such that c^(T)((i, j)) = c^(T)(η((i, j))), ∀i, j ∈ V. First, we define the set S = {(1, 2), (2, 1), (1 + m, 2 + m), (2 + m, 1 + m), (1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m)}, which represents the "special" pairs of nodes that capture the difference between G^[1] and G^[2]. Then we can define η : V^2 → V^2 as η((i, j)) = (i, j) if (i, j) ∉ S, and η((i, j)) = (i, Mod_{2m}(j + m)) if (i, j) ∈ S. Note that η is a bijection. It is easy to verify that η is a color-preserving map between node pairs in G^[1] and node pairs in G^[2] at initialization, i.e. c^(0)((i, j)) = c^(0)(η((i, j))), ∀i, j ∈ V. We will prove by induction that it in fact remains such a color-preserving map at every iteration T. The inductive step that we need to prove is:

Lemma 4. For any positive integer t, supposing that c^(t-1)((i, j)) = c^(t-1)(η((i, j))), ∀i, j ∈ V, then we also have c^(t)((i, j)) = c^(t)(η((i, j))), ∀i, j ∈ V.
Proof of Lemma 4: By the update rule of 2-WL, ∀i, j ∈ V , to show that c (t) ((i, j)) = c (t) (η ((i, j))), we need to establish three conditions: c (t−1) ((i, j)) = c (t−1) (η((i, j))) (5) {c (t−1) (s) :s ∈ N 1 ((i, j))} = {c (t−1) (s) :s ∈ N 1 (η((i, j)))} (6) {c (t−1) (s) :s ∈ N 2 ((i, j))} = {c (t−1) (s) :s ∈ N 2 (η((i, j)))} (7) The first condition is already guaranteed by the inductive hypothesis. Now we prove the last two conditions by examining different cases separately below. Case 1 i, j / ∈ {1, 2, 1 + m, 2 + m} Then η((i, j)) = (i, j), and N 1 ((i, j)) ∩ S = ∅, N 2 ((i, j)) ∩ S = ∅. Therefore, η restricted to N 1 ((i, j)) or N 2 ((i, j)) is the identity map, and thus {c (t−1) (s) :s ∈ N 1 ((i, j))} ={c (t−1) (η(s)) :s ∈ N 1 ((i, j))} ={c (t−1) (s) :s ∈ N 1 (η((i, j)))}, thanks to the inductive hypothesis. Similar for the condition (7). Case 2 i ∈ {1, 1 + m}, j / ∈ {1, 2, 1 + m, 2 + m} Then η((i, j)) = (i, j), N 2 ((i, j)) ∩ S = {(i, 2), (i, 2 + m)}, and N 1 ((i, j)) ∩ S = ∅. To show condition (7), note that η is the identity map when restricted to N 2 ((i, j)) \ {(i, 2), (i, 2 + m)}, and hence {c (t−1) (s) :s ∈ N 2 ((i, j)) \ {(i, 2), (i, 2 + m)}} = {c (t−1) (s) :s ∈ N 2 ((i, j)) \ {(i, 2), (i, 2 + m)}} Moreover, η((i, 2)) = (i, 2 + m) and η((i, 2 + m)) = (i, 2). Hence, by the inductive hypothesis, c (t−1) ((i, 2)) = c (t−1) ((i, 2 + m)) and c (t−1) ((i, 2 + m)) = c (t−1) ((i, 2)). Therefore, {c (t−1) (s) :s ∈ N 2 ((i, j))} ={c (t−1) (s) :s ∈ N 2 ((i, j))} ={c (t−1) (s) :s ∈ N 2 (η((i, j)))}, which shows condition (7). Condition (6) is easily seen as η restricted to N 1 ((i, j)) is the identity map. Case 3 j ∈ {1, 1 + m}, i / ∈ {1, 2, 1 + m, 2 + m} There is η((i, j)) = (i, j), N 1 ((i, j)) ∩ S = {(2, j), (2 + m, j)}, and N 2 ((i, j)) ∩ S = ∅. Hence the proof can be carried out analogously to case 2. Case 4 i ∈ {2, 2 + m}, j / ∈ {1, 2, 1 + m, 2 + m} There is η((i, j)) = (i, j), N 2 ((i, j)) ∩ S = {(i, 1), (i, 1 + m)}, and N 1 ((i, j)) ∩ S = ∅. 
Hence the proof can be carried out analogously to case 2. Case 5 j ∈ {2, 2 + m}, i / ∈ {1, 2, 1 + m, 2 + m} There is η((i, j)) = (i, j), N 1 ((i, j)) ∩ S = {(1, j), (1 + m, j)}, and N 2 ((i, j)) ∩ S = ∅. Hence the proof can be carried out analogously to case 2. Case 6 (i, j) ∈ S There is η((i, j)) = (i, Mod 2m (j)), N 1 ((i, j))∩S = {(i, j), (Mod 2m (i), j)}, N 2 ((i, j))∩S = {(i, j), (i, Mod 2m (j))}. Thus, N 1 (η((i, j))) = N 1 ((i, Mod 2m (j))), N 2 (η((i, j))) = N 2 ((i, Mod 2m (j))) = N 2 ((i, j)). Once again, η is the identity map when restricted to N 1 ((i, j)) \ S or N 2 ((i, j)) \ S. Hence, by the inductive hypothesis, there is {c (t−1) (s) :s ∈ N 1 ((i, j))\{(i, j), (Mod 2m (i), j)}} = {c (t−1) (s) :s ∈ N 1 ((i, j))\{(i, j), (Mod 2m (i), j)}} {c (t−1) (s) :s ∈ N 2 ((i, j))\{(i, j), (i, Mod 2m (j))}} = {c (t−1) (s) :s ∈ N 2 ((i, j))\{(i, j), (i, Mod 2m (j))}} Also from the inductive hypothesis, we have c (t−1) ((i, j)) =c (t−1) (η((i, j))) =c (t−1) ((i, Mod 2m (j))),(8)c (t−1) ((i, j)) =c (t−1) ((j, i)) =c (t−1) (η((j, i))) =c (t−1) ((j, Mod 2m (i))) =c (t−1) ((Mod 2m (i), j)),(9)c (t−1) ((i, Mod 2m (j))) =c (t−1) (η((i, Mod 2m (j)))) =c (t−1) ((i, Mod 2m (Mod 2m (j)))) =c (t−1) ((i, j)),(10)c (t−1) ((Mod 2m (i), j)) =c (t−1) ((j, Mod 2m (i))) =c (t−1) (η((j, Mod 2m (i)))) =c (t−1) ((j, Mod 2m (Mod 2m (i)))) =c (t−1) ((j, i)) =c (t−1) ((i, j)),(11) where in (9) and (11), the first and the last equalities are thanks to the symmetry of the coloring between any pair of nodes (i , j ) and its "reversed" version (j , i ), which persists throughout all iterations, as well as the fact that if (i , j ) ∈ S, then (j , i ) ∈ S. 
Therefore, we now have {c (t−1) (s) :s ∈ N 1 ((i, j))} = {c (t−1) (s) :s ∈ N 1 ((i, j))} (12) {c (t−1) (s) :s ∈ N 2 ((i, j))} = {c (t−1) (s) :s ∈ N 2 ((i, j))}(13) Since η((i, j)) = (i, Mod 2m (j)), we have N 1 (η((i, j))) ={(k, Mod 2m (j)) : k ∈ V } ={(k, Mod 2m (j)) : (Mod 2m (k), j) ∈ N 1 ((i, j))} ={(Mod 2m (k), Mod 2m (j)) : (k, j) ∈ N 1 ((i, j))} Thanks to the symmetry of the coloring under the map (i , j ) → (Mod 2m (i ), Mod 2m (j )), we then have {c (t−1) (s) :s ∈ N 1 (η((i, j)))} ={c (t−1) ((Mod 2m (k), Mod 2m (j))) : (k, j) ∈ N 1 ((i, j))} ={c (t−1) ((k, j)) : (k, j) ∈ N 1 ((i, j))} ={c (t−1) (s) :s ∈ N 1 ((i, j))} Therefore, combined with (12), we see that (6) is proved. (7) is a straightforward consequence of (13), since N 2 ((i, j)) = N 2 (η((i, j))). Case 7 i, j ∈ {1, 1 + m} There is η((i, j)) = (i, j), N 2 ((i, j)) ∩ S = {(i, 2), (i, 2 + m)}, and N 1 ((i, j)) ∩ S = {(2, j), (2 + m, j)}. Thus, both (6) and (7) can be proved analogously to how (7) is proved for case 2. Case 8 i, j ∈ {2, 2 + m} There is η((i, j)) = (i, j), N 2 ((i, j)) ∩ S = {(i, 1), (i, 1 + m)}, and N 1 ((i, j)) ∩ S = {(1, j), (1 + m, j)}. Thus, both (6) and (7) can be proved analogously to how (7) is proved for case 2. With conditions (6) and (7) shown for all pairs of (i, j) ∈ V 2 , we know that by the update rules of 2-WL, there is c (t) ((i, j)) = c (t) (η((i, j))), ∀i, j ∈ V . With Lemma 4 justifying the inductive step, we see that for any positive integer T , there is c (T ) ((i, j)) = c (T ) (η((i, j))), ∀i, j ∈ V . Hence, we can conclude that ∀T, {c (T ) ((i, j)) : i, j ∈ V } = {c (T ) ((i, j)) : i, j ∈ V },= {(1, i) : 2 ≤ i ≤ m} ∪ {(i, 1) : 2 ≤ i ≤ m}. Given a graph G, for each of its node j, we define N (j) as the set of its neighbors in the graph. Then the neighborhood centered at j contributes to C C (G, G [P] ) if and only if x j = x [P] 1 and ∃S ⊆ N (j) such that the multiset {(x k , e jk ) : k ∈ S} equals the multiset {(x [P] k , e [P] 1k ) : 2 ≤ k ≤ m}. 
Moreover, the contribution to the number C_C(G, G^[P]) equals the number of all such subsets S ⊆ N(j). Hence, we have the following decomposition: C_C(G, G^[P]) = Σ_{j∈V} f^[P](x_j, {(x_k, e_{jk}) : k ∈ N(j)}), where f^[P] is defined for every 2-tuple consisting of a node feature and a multiset of pairs of node and edge features (i.e., objects of the form (x, M = {(x_α, e_α) : α ∈ K}), where K is a finite set of indices) as f^[P](x, M) = 0 if x ≠ x^[P]_1, and f^[P](x, M) = #^[P]M if x = x^[P]_1, where #^[P]M denotes the number of sub-multisets of M that equal the multiset {(x^[P]_k, e^[P]_{1k}) : 2 ≤ k ≤ m}.

F Proof of Theorem 4 (k-WL is able to count patterns of k or fewer nodes)

Proof. Suppose we run k-WL on two graphs, G^[1] and G^[2]. In k-WL, the colorings of the k-tuples are initialized according to their isomorphism types as defined in Appendix B. Thus, if for some pattern of no more than k nodes, G^[1] and G^[2] have different matching-counts or containment-counts, then there exists an isomorphism type of k-tuples such that G^[1] and G^[2] differ in the number of k-tuples of this type. This implies that {c^(0)_k(s) : s ∈ (V^[1])^k} ≠ {c^(0)_k(s') : s' ∈ (V^[2])^k}, and hence the two graphs can be distinguished at the 0th iteration of k-WL.

G Proof of Theorem 5 (T iterations of k-WL cannot matching-count path patterns of size (k + 1)2^T or more)

Proof. For any integer m ≥ (k + 1)2^T, we will construct two graphs G^[1] = (V^[1] = [2m], E^[1], x^[1], e^[1]) and G^[2] = (V^[2] = [2m], E^[2], x^[2], e^[2]), both with 2m nodes but with different matching-counts of H_m, and show that k-WL cannot distinguish them. Define E_double = {(i, i + 1) : 1 ≤ i < m} ∪ {(i + 1, i) : 1 ≤ i < m} ∪ {(i + m, i + m + 1) : 1 ≤ i < m} ∪ {(i + m + 1, i + m) : 1 ≤ i < m}, which is the edge set of a graph that is exactly two disconnected copies of H_m.
For G^[1], let E^[1] = E_double ∪ {(1, m), (m, 1), (1 + m, 2m), (2m, 1 + m)}; ∀i ≤ m, x^[1]_i = x^[1]_{i+m} = x^[H_m]_i; ∀(i, j) ∈ E^[H_m], e^[1]_{i,j} = e^[1]_{j,i} = e^[1]_{i+m,j+m} = e^[1]_{j+m,i+m} = e^[H_m]_{i,j}, and moreover, we can randomly choose a value of edge feature for e^[1]_{1,m} = e^[1]_{m,1} = e^[1]_{1+m,2m} = e^[1]_{2m,1+m}. For G^[2], let E^[2] = E_double ∪ {(1, 2m), (2m, 1), (m, 1 + m), (1 + m, m)}; ∀i ≤ m, x^[2]_i = x^[2]_{i+m} = x^[H_m]_i; ∀(i, j) ∈ E^[H_m], e^[2]_{i,j} = e^[2]_{j,i} = e^[2]_{i+m,j+m} = e^[2]_{j+m,i+m} = e^[H_m]_{i,j}, and moreover, set e^[2]_{1,2m} = e^[2]_{2m,1} = e^[2]_{m,1+m} = e^[2]_{1+m,m} to the same edge-feature value chosen for G^[1]. In other words, G^[1] consists of two disjoint m-cycles, while G^[2] is a single 2m-cycle; the former contains no induced copy of the path H_m, whereas the latter does, so the two graphs have different matching-counts of H_m.

Can k-WL distinguish the two graphs within T iterations? Let c^(t)_k and c'^(t)_k denote the colorings of k-tuples in G^[1] and G^[2], respectively, obtained after running k-WL on the two graphs simultaneously for t iterations. To show that the answer is negative, we want to prove that {c^(T)_k(s) : s ∈ [2m]^k} = {c'^(T)_k(s) : s ∈ [2m]^k}. (14) To show this, it is sufficient to find a permutation η : [2m]^k → [2m]^k such that for every k-tuple s ∈ [2m]^k, c^(T)_k(s) = c'^(T)_k(η(s)). Before defining such an η, we need the following lemma.

Lemma 5. Let p be a positive integer. If m ≥ (k + 1)p, then ∀s ∈ [2m]^k, ∃i ∈ [m] such that {i, i + 1, ..., i + p − 1} ∩ {Mod_m(j) : j ∈ s} = ∅.

Proof of Lemma 5: We can use a simple counting argument to show this. For u ∈ [k + 1], define A_u = {up, up + 1, ..., (u + 1)p − 1} ∪ {up + m, up + 1 + m, ..., (u + 1)p − 1 + m}. Then |A_u| = 2p, A_u ∩ A_{u'} = ∅ if u ≠ u', and [2m] ⊇ ∪_{u∈[k+1]} A_u, (15) since m ≥ (k + 1)p. Suppose that the claim is not true; then each A_u contains at least one entry of s, and therefore s ⊇ (s ∩ [2m]) ⊇ ∪_{u∈[k+1]} (s ∩ A_u), (16) which contains at least k + 1 distinct nodes, which is contradictory since s has only k entries.

[Figure 4 example: if we consider s = (3, 14, 15), then χ(s) = 4, and thus η(s) = ζ_4(s) = (3, 6, 7); the isomorphism type of s in G^[1] equals the isomorphism type of η(s) in G^[2]. In the end, we will show that c^(T)_k(s) = c'^(T)_k(η(s)).]
With this lemma, we see that ∀s ∈ [2m] k , ∃i ∈ [m] such that ∀j ∈ s, Mod m (j) either < i or ≥ i + 2 ∀s = (i 1 , ..., i k ) ∈ [2m] k , ζ i (s) = (τ i (i 1 ), ..., τ i (i k )).(17) Finally, we define a mapping η from [2m] k → [2m] k as, η(s) = ζ χ(s) (s)(18) The maps χ, τ and η are illustrated in Figure 4. To fulfill the proof, there are two things we need to show about η. First, we want it to be a permutation on [2m] k . To see this, observe that χ(s) = χ(η(s)), and hence ∀s ∈ [2m] k , (η • η)(s) = (ζ χ(η(s)) • ζ χ(s) )(s) = s, since ∀i ∈ [m], τ i • τ i is the identity map on [2m]. Second, we need to show that ∀s ∈ [2m] k , c (T ) k (s) = c (T ) k (η(s)). This will be a consequence of the following lemma. Lemma 6. At iteration t, ∀s ∈ [2m] k , ∀i such that ∀j ∈ s, either Mod m (j) < i or Mod m (j) ≥ i + 2 t , there is c (t) k (s) = c (t) k (ζ i (s))(19) Remark: This statement allows i to depend on s, as will be the case when we apply this lemma to η(s) = ζ χ(s) (s), where we set i to be χ(s). Proof of Lemma 6: Notation-wise, for any k-tuple, s = (i 1 , ..., i k ), and for w ∈ [k], use I w (s) to denote the wth entry of s, i w . The lemma can be shown by using induction on t. Before looking at the base case t = 0, we will first show the inductive step, which is: ∀T , suppose the lemma holds for all t ≤T − 1, then it also holds for t =T . Inductive step: Fix aT and suppose the lemma holds for all t ≤T − 1. Under the condition that ∀j ∈ s, either Mod m (j) < i or Mod m (j) ≥ i + 2T , to show c (T ) k (s) = c (T ) k (ζ i (s)), we need two things to hold: 1. c (T −1) k (s) = c (T −1) k (ζ i (s)) 2. ∀w ∈ [k], {c (T −1) k (s) :s ∈ N w (s)} = {c (T −1) k (s) :s ∈ N w (ζ i (s))} The first condition is a consequence of the inductive hypothesis, as i + 2T > i + 2 (T −1) . For the second condition, it is sufficient to find for all w ∈ [k], a bijective mapping ξ from N w (s) to N w (ζ i (s)) such that ∀s ∈ N w (s), c (T −1) k (s) = c (T −1) k (ξ(s)). 
We then define β(i,s) = Mod m (I w (s)) + 1, if i ≤ Mod m (I w (s)) < i + 2T −1 i, otherwise(21) Now, consider anys ∈ N w (s). Note thats and s differ only in the wth entry of the k-tuple. • If i ≤ Mod m (I w (s)) < i + 2T −1 , then ∀j ∈s, either j ∈ s, in which case either Mod m (j) < i < Mod m (I w (s)) + 1 = β(i,s) or Mod m (j) ≥ i + 2T ≥ Mod m (I w (s)) + 1 + 2T −1 = β(i,s) + 2T −1 , or j = I w (s), in which case Mod m (j) < Mod m (I w (s)) + 1 = β(i,s). • If Mod m (I w (s)) < i or Mod m (I w (s)) ≥ i + 2T −1 , then ∀j ∈s, either j ∈ s, in which case either Mod m (j) < i = β(i,s) or Mod m (j) ≥ i + 2T ≥ β(i,s) + 2T −1 , -or j = I w (s), in which case either Mod m (j) < i = β(i,s) or Mod m (j) ≥ i + 2T −1 ≥ β(i,s) + 2T −1 . Thus, in all cases, there is ∀j ∈s, either Mod m (j) < β(i,s), or Mod m (j) ≥ i + 2T −1 . Hence, by the inductive hypothesis, we have c (T −1) k (s) = c (T −1) k (ζ β(i,s) (s)) . This inspires us to define, for ∀w ∈ [k], ∀s ∈ N w (s), ξ(s) = ζ β(i,s) (s)(22) Additionally, we still need to prove that, firstly, ξ maps N w (s) to N w (ζ i (s)), and secondly, ξ is a bijection. For the first statement, note that ∀s ∈ N w (s), ζ β(i,s) (s) = ζ i (s) because s contains no entry between i and β(i,s), with the latter being less than i+2T . Hence, ifs ∈ N w (s), then ∀w ∈ [k] with w = w, there is I w (s) = I w (s), and therefore I w (ξ(s)) = I w (ζ β(i,s) (s)) = τ β(i,s) (I w (s)) = τ β(i,s) (I w (s)) = I w (ζ β(i,s) (s)) = I w (ζ i (s)), which ultimately implies that ξ(s) ∈ N w (ζ i (s)). For the second statement, note that since I w (ξ(s)) = τ β(i,s) (I w (s)) (by the definition of ζ), there is Mod m (I w (ξ(s))) = Mod m (τ β(i,s) (I w (s))) = Mod m (I w (s)), and therefore β(i, ξ(s)) = β(i,s). Thus, we know that (ξ • ξ)(s) = (ζ β(i,ξ(s)) • ζ β(i,s) )(s) = (ζ β(i,s) • ζ β(i,s) )(s) =s. This implies that ξ is a bijection from N w (s) to N w (ζ i (s)). This concludes the proof of the inductive step. 
Base case: We need to show that ∀s ∈ [2m] k , ∀i * such that ∀j ∈ s, either Mod m (j) < i * or Mod m (j) ≥ i * + 1, there is c (0) k (s) = c (0) k (ζ i * (s))(23) Due to the way in which the colorings of the k-tuples are initialized in k-WL, the statement above is equivalent to showing that s in G [1] and ζ i * (s) in G [2] have the same isomorphism type, for which we need the following to hold. Lemma 7. Say s = (i 1 , ..., i k ), in which case ζ i * (s) = (τ i * (i 1 ), ..., τ i * (i k )). Then 1. ∀i α , i β ∈ s, i α = i β ⇔ τ i * (i α ) = τ i * (i β ) 2. ∀i α ∈ s, x [1] iα = x [2] τ i * (iα) 3. ∀i α , i β ∈ s, (i α , i β ) ∈ E [1] ⇔ (τ i * (i α ), τ i * (i β )) ∈ E [2] , and moreover, if either is true, e [1] iα,i β = e [2] τ i * (iα),τ i * (i β ) Proof of Lemma 7: 1. This is true since τ i * is a permutation on [2m]. 2. This is true because by the construction of the two graphs, ∀i ∈ [2m], x [1] i = x [2] i , and moreover x By the assumption on i * in (23), we know that i α , i β / ∈ {i * , i * + m}. Now we look at 16 different cases separately, which comes from 4 possibilities for each of i α and i β : i α (or i β ) belonging to {1, ..., i * − 1}, {i * + 1, ..., m}, {1 + m, ..., i * − 1 + m}, or {i * + 1 + m, ..., 2m} 2] , and moreover, e [1] i = x [1] i+m if i ≤ m.Case 1 1 ≤ i α , i β < i * Then τ i * (i α ) = i α , τ i * (i β ) = i β . In addition, as Mod m (i α ), Mod m (i β ) = m, there is (i α , i β ) / ∈ S. Thus, if (i α , i β ) ∈ E [1] , then (i α , i β ) ∈ E double ⊂ E [[1] iα,i β = e [Hm] iα,i β = e [2] iα,i β = e [2] τ i * (iα),τ i * (i β ) . Same for the other direction. Case 2 1 + m ≤ i α , i β < i * + m Similar to case 1. 2] , and moreover, e iα,i β = e [Hm] iα,i β = e [2] iα+m,i β +m = e [2] τ i * (iα),τ i * (i β ) . Case 3 i * + 1 ≤ i α , i β ≤ m Then τ i * (i α ) = i α + m, τ i * (i β ) = i β + m. In addition, as Mod m (i α ), Mod m (i β ) = 1, there is (i α , i β ) / ∈ S. 
Thus, if (i_α, i_β) ∈ E^[1], then (i_α, i_β) ∈ E_double, and hence (i_α + m, i_β + m) ∈ E_double ⊂ E^[2], and moreover, e^[1]_{i_α,i_β} = e^[H_m]_{i_α,i_β} = e^[2]_{i_α+m,i_β+m} = e^[2]_{τ_{i*}(i_α),τ_{i*}(i_β)}.

Case 4: i* + 1 + m ≤ i_α, i_β ≤ 2m. Similar to case 3.

Case 5: 1 ≤ i_α < i*, i* + 1 ≤ i_β ≤ m. If i_α ≠ 1 or i_β ≠ m, then since H_m is a path and i_α < i* ≤ i_β − 1, (i_α, i_β) ∉ E^[1] or E^[2]. Now we consider the case where i_α = 1, i_β = m. As 1 < i* < m, by the definition of τ, there is τ_{i*}(1) = 1 and τ_{i*}(m) = 2m. Note that both (1, m) ∈ E^[1] and (1, 2m) ∈ E^[2] are true, and moreover, e^[1]_{1,m} = e^[2]_{1,2m}.

Case 6: 1 ≤ i_β < i*, i* + 1 ≤ i_α ≤ m. Similar to case 5.

Case 7: 1 + m ≤ i_α < i* + m, i* + 1 + m ≤ i_β ≤ 2m. Similar to case 5.

Case 8: 1 + m ≤ i_β < i* + m, i* + 1 + m ≤ i_α ≤ 2m. Similar to case 5.

Case 9: 1 ≤ i_α < i* and 1 + m ≤ i_β < i* + m. Then τ_{i*}(i_α) = i_α, τ_{i*}(i_β) = i_β, and (i_α, i_β) ∉ E^[1] or E^[2].

Case 10: 1 ≤ i_β < i* and 1 + m ≤ i_α < i* + m. Similar to case 9.

Case 11: i* + 1 ≤ i_α ≤ m and i* + 1 + m ≤ i_β ≤ 2m. (i_α, i_β) ∉ E^[1]. τ_{i*}(i_α) = i_α + m, τ_{i*}(i_β) = i_β − m. Hence (τ_{i*}(i_α), τ_{i*}(i_β)) ∉ E^[2] either.

Case 12: i* + 1 ≤ i_β ≤ m and i* + 1 + m ≤ i_α ≤ 2m. Similar to case 11.

Case 13: 1 ≤ i_α < i* and i* + 1 + m ≤ i_β ≤ 2m. (i_α, i_β) ∉ E^[1] obviously. We also have τ_{i*}(i_α) = i_α ∈ [1, i*), τ_{i*}(i_β) = i_β − m ∈ [i* + 1, m], and hence (τ_{i*}(i_α), τ_{i*}(i_β)) ∉ E^[2].

Case 14: 1 ≤ i_β < i* and i* + 1 + m ≤ i_α ≤ 2m. Similar to case 13.

Case 15: 1 + m ≤ i_α < i* + m and i* + 1 ≤ i_β ≤ m. Similar to case 13.

Case 16: 1 + m ≤ i_β < i* + m and i* + 1 ≤ i_α ≤ m. Similar to case 13.

This concludes the proof of Lemma 7. Lemma 7 completes the proof of the base case, and hence the induction argument for Lemma 6.

∀s ∈ [2m]^k, since η(s) = ζ_{χ(s)}(s), and χ(s) satisfies ∀j ∈ s, either Mod_m(j) < χ(s) or Mod_m(j) ≥ χ(s) + 2^T, Lemma 6 implies that at iteration T, we have c_k^(T)(s) = c′_k^(T)(ζ_{χ(s)}(s)) = c′_k^(T)(η(s)).
Since we have shown that η is a permutation on [2m]^k, this lets us conclude that

{c_k^(T)(s) : s ∈ [2m]^k} = {c′_k^(T)(s) : s ∈ [2m]^k},    (24)

and therefore k-WL cannot distinguish between the two graphs in T iterations.

H Proof of Theorem 6 (2-IGNs are no more powerful than 2-WL)

Proof. For simplicity of notation, we assume d_t = 1 in every layer of a 2-IGN. The general case can be proved by adding more subscripts. For 2-WL, we use the definition in Section 3.1 except for omitting the subscript k in c_k^(t). To start, it is straightforward to show (and we will prove it at the end) that the theorem can be deduced from the following lemma:

Lemma 8. Say G^[1] and G^[2] cannot be distinguished by 2-WL. Then ∀t ∈ N, it holds that

∀s, s′ ∈ V^2, if c^(t)(s) = c′^(t)(s′), then B_s^(t) = B′_{s′}^(t).    (25)

This lemma can be shown by induction. To see this, first note that the lemma is equivalent to the statement that ∀T ∈ N, ∀t ≤ T, (25) holds. Next, to show that the induction step holds, we need to prove the following statement: ∀T ∈ N, if ∀t ≤ T − 1, (25) holds, then (25) also holds for t = T. To prove the consequent, we assume that for some s, s′ ∈ V^2, there is c^(T)(s) = c′^(T)(s′). By the update rules of 2-WL, this is equivalent to

c^(T−1)(s) = c′^(T−1)(s′),
{c^(T−1)(s̃) : s̃ ∈ N_1(s)} = {c′^(T−1)(s̃) : s̃ ∈ N_1(s′)},
{c^(T−1)(s̃) : s̃ ∈ N_2(s)} = {c′^(T−1)(s̃) : s̃ ∈ N_2(s′)}.    (26)

Case 1: s = (i, j) ∈ V^2 with i ≠ j. Let's first consider the case where s = (i, j) ∈ V^2 with i ≠ j. In this case, we can also write s′ = (i′, j′) ∈ V^2 with i′ ≠ j′, thanks to Lemma 2. Then, note that V^2 can be written as the union of 9 disjoint sets defined depending on s; in this way, we partition V^2 into 9 different subsets, each consisting of pairs (k, l) that yield a particular equivalence class of the 4-tuple (i, j, k, l).
Concretely, we write V^2 = ∪_{w=1}^{9} A_{s,w}, where we define A_{s,1} = {(i, j)}, A_{s,2} = {(i, i)}, A_{s,3} = {(j, j)}, A_{s,4} = {(i, k) : k ∉ {i, j}}, A_{s,5} = {(k, i) : k ∉ {i, j}}, A_{s,6} = {(j, k) : k ∉ {i, j}}, A_{s,7} = {(k, j) : k ∉ {i, j}}, A_{s,8} = {(k, l) : k ≠ l and {k, l} ∩ {i, j} = ∅}, and A_{s,9} = {(k, k) : k ∉ {i, j}}. Similarly, we can define A_{s′,w} for w ∈ [9], which will also give us V^2 = ∪_{w=1}^{9} A_{s′,w}. Moreover, note that

N_1(s) = ∪_{w=1,3,7} A_{s,w},  N_2(s) = ∪_{w=1,2,4} A_{s,w},
N_1(s′) = ∪_{w=1,3,7} A_{s′,w},  N_2(s′) = ∪_{w=1,2,4} A_{s′,w}.

Before proceeding, we make the following definition to simplify notation:

C_{s,w} = {c^(T−1)(s̃) : s̃ ∈ A_{s,w}},  C′_{s′,w} = {c′^(T−1)(s̃) : s̃ ∈ A_{s′,w}}.

This allows us to rewrite (26) as

C_{s,1} = C′_{s′,1},    (27)
∪_{w=1,3,7} C_{s,w} = ∪_{w=1,3,7} C′_{s′,w},    (28)
∪_{w=1,2,4} C_{s,w} = ∪_{w=1,2,4} C′_{s′,w}.    (29)

Combining (27) and (28), we obtain ∪_{w=3,7} C_{s,w} = ∪_{w=3,7} C′_{s′,w}. (30) Combining (27) and (29), we obtain ∪_{w=2,4} C_{s,w} = ∪_{w=2,4} C′_{s′,w}. (31)

Note that V^2 can also be partitioned into two disjoint subsets: V^2 = (∪_{w=1,4,5,6,7,8} A_{s,w}) ∪ (∪_{w=2,3,9} A_{s,w}), where the first subset represents the edges {(i, j) ∈ V^2 : i ≠ j} and the second subset represents the nodes {(i, i) : i ∈ V}, and similarly for the A_{s′,w}'s. As shown in Lemma 2, pairs of nodes that represent edges cannot share the same color with pairs of nodes that represent nodes in any iteration of 2-WL. Thus, we have

(∪_{w=1,4,5,6,7,8} C_{s,w}) ∩ (∪_{w=2,3,9} C′_{s′,w}) = ∅,    (32)
(∪_{w=1,4,5,6,7,8} C′_{s′,w}) ∩ (∪_{w=2,3,9} C_{s,w}) = ∅.    (33)

Combining (30) with (32) or (33), we get C_{s,3} = C′_{s′,3} (34) and C_{s,7} = C′_{s′,7} (35). Combining (31) with (32) or (33), we get C_{s,2} = C′_{s′,2} (36) and C_{s,4} = C′_{s′,4} (37). Thanks to symmetry between (i, j) and (j, i), as we work with undirected graphs, there is

C_{s,5} = C_{s,4} = C′_{s′,4} = C′_{s′,5},    (38)
C_{s,6} = C_{s,7} = C′_{s′,7} = C′_{s′,6}.    (39)

In addition, since we assume that G^[1] and G^[2] cannot be distinguished by 2-WL, there has to be ∪_{w=1}^{9} C_{s,w} = ∪_{w=1}^{9} C′_{s′,w}. Combining this with (32) or (33), we get

∪_{w=1,4,5,6,7,8} C_{s,w} = ∪_{w=1,4,5,6,7,8} C′_{s′,w},    (40)
∪_{w=2,3,9} C_{s,w} = ∪_{w=2,3,9} C′_{s′,w}.    (41)

Combining (40) with (27), (37), (38), (39) and (35), we get C_{s,8} = C′_{s′,8}. (42) Combining (41) with (36) and (34), we get C_{s,9} = C′_{s′,9}. (43)

Hence, in conclusion, we have that ∀w ∈ [9], C_{s,w} = C′_{s′,w}. (44) By the inductive hypothesis, this implies that

∀w ∈ [9], {B^(T−1)_s̃ : s̃ ∈ A_{s,w}} = {B′^(T−1)_s̃ : s̃ ∈ A_{s′,w}}.    (45)

Let us show how (45) may be leveraged.
First, to prove that B (T ) s = B (T ) s , recall that B (T ) = σ(L (T ) (B (T −1) )) B (T ) = σ(L (T ) (B (T −1) ))(46) Therefore, it is sufficient to show that for all linear equivariant layer L, we have L(B (T −1) ) i,j = L(B (T −1) ) i ,j(47) Also, recall that L(B (T −1) ) i,j = (k,l)∈V 2 T i,j,k,l B k,l + Y i,j L(B (T −1) ) i ,j = (k ,l )∈V 2 T i ,j ,k ,l B k ,l + Y i ,j(48) By the definition of the A s,w 's and A s ,w 's, there is ∀w ∈ [9], ∀(k, l) ∈ A s,w , ∀(k , l ) ∈ A s ,w , we have the 4-tuples (i, j, k, l) ∼ (i , j , k , l ), i.e., ∃ a permutation π on V such that (i, j, k, l) = (π(i ), π(j ), π(k ), π(l )), which implies that T i,j,k,l = T i ,j ,k ,l . Therefore, together with (45), we have the following: L(B (T −1) ) i,j = (k,l)∈V 2 T i,j,k,l B k,l + Y i,j = 9 w=1 (k,l)∈As,w T i,j,k,l B k,l + Y i,j = 9 w=1 (k ,l )∈A s ,w T i ,j ,k ,l B k ,l + Y i ,j =L(B (T −1) ) i ,j(49) and hence B (T ) i,j = B (T ) i j , which concludes the proof for the case that s = (i, j) for i = j. Case 2: s = (i, i) ∈ V 2 Next, consider the case s = (i, i) ∈ V 2 . In this case, s = (i , i ) for some i ∈ V . This time, we write V 2 as the union of 5 disjoint sets that depend on s (or s ): V 2 = 5 w=1 A s,w , where we define A s,1 = {(i, i)}, A s,2 = {(i, j) : j = i}, A s,3 = {(j, i) : j = i}, A s,4 = {(j, k) : j, k = i and j = k}, and A s,5 = {(j, j) : j = i}. Similar for s . We can also define C s,w and C s ,w as above. 
Note that N_1(s) = ∪_{w=1,3} A_{s,w}, N_2(s) = ∪_{w=1,2} A_{s,w}, N_1(s′) = ∪_{w=1,3} A_{s′,w}, N_2(s′) = ∪_{w=1,2} A_{s′,w}. Hence, we can rewrite (26) as

C_{s,1} = C′_{s′,1},    (50)
∪_{w=1,3} C_{s,w} = ∪_{w=1,3} C′_{s′,w},    (51)
∪_{w=1,2} C_{s,w} = ∪_{w=1,2} C′_{s′,w}.    (52)

Combining (50) with (51), we get C_{s,3} = C′_{s′,3}. (53) Combining (50) with (52), we get C_{s,2} = C′_{s′,2}. (54) Moreover, since we can decompose V^2 as V^2 = (∪_{w=1,5} A_{s,w}) ∪ (∪_{w=2,3,4} A_{s,w}) = (∪_{w=1,5} A_{s′,w}) ∪ (∪_{w=2,3,4} A_{s′,w}), with ∪_{w=1,5} A_{s,w} representing the nodes and ∪_{w=2,3,4} A_{s,w} representing the edges, we have by Lemma 2,

(∪_{w=1,5} C_{s,w}) ∩ (∪_{w=2,3,4} C′_{s′,w}) = ∅,    (55)
(∪_{w=1,5} C′_{s′,w}) ∩ (∪_{w=2,3,4} C_{s,w}) = ∅.    (56)

Since G^[1] and G^[2] cannot be distinguished by 2-WL, there is ∪_{w=1}^{5} C_{s,w} = ∪_{w=1}^{5} C′_{s′,w}. Therefore, combining this with (55) or (56), we obtain

∪_{w=1,5} C_{s,w} = ∪_{w=1,5} C′_{s′,w},    (57)
∪_{w=2,3,4} C_{s,w} = ∪_{w=2,3,4} C′_{s′,w}.    (58)

Combining (57) with (50), we get C_{s,5} = C′_{s′,5}. (59) Combining (58) with (54) and (53), we get C_{s,4} = C′_{s′,4}. (60) Hence, in conclusion, we have that ∀w ∈ [5], C_{s,w} = C′_{s′,w}. (61) By the inductive hypothesis, this implies that

∀w ∈ [5], {B^(T−1)_s̃ : s̃ ∈ A_{s,w}} = {B′^(T−1)_s̃ : s̃ ∈ A_{s′,w}}.    (62)

Thus,

L(B^(T−1))_{i,i} = Σ_{(k,l)∈V^2} T_{i,i,k,l} B_{k,l} + Y_{i,i} = Σ_{w=1}^{5} Σ_{(k,l)∈A_{s,w}} T_{i,i,k,l} B_{k,l} + Y_{i,i} = Σ_{w=1}^{5} Σ_{(k′,l′)∈A_{s′,w}} T_{i′,i′,k′,l′} B′_{k′,l′} + Y_{i′,i′} = L(B′^(T−1))_{i′,i′},

and hence B^(T)_{i,i} = B′^(T)_{i′,i′}, which concludes the proof for the case that s = (i, i) for i ∈ V.

Now, suppose we are given any 2-IGN with T layers. Since G^[1] and G^[2] cannot be distinguished by 2-WL, together with Lemma 2, there is {c^(T)((i, j)) : i, j ∈ V, i ≠ j} = {c′^(T)((i′, j′)) : i′, j′ ∈ V, i′ ≠ j′} and {c^(T)((i, i)) : i ∈ V} = {c′^(T)((i′, i′)) : i′ ∈ V}. Hence, by the lemma, we have {B^(T)_{(i,j)} : i, j ∈ V, i ≠ j} = {B′^(T)_{(i′,j′)} : i′, j′ ∈ V, i′ ≠ j′} and {B^(T)_{(i,i)} : i ∈ V} = {B′^(T)_{(i′,i′)} : i′ ∈ V}. Then, since the second-last layer h in the 2-IGN can be written as

h(B) = α Σ_{i,j∈V, i≠j} B_{i,j} + β Σ_{i∈V} B_{i,i},    (63)

there is h(B^(T)) = h(B′^(T)), (64) and finally m ∘ h(B^(T)) = m ∘ h(B′^(T)), (65) which means the 2-IGN yields identical outputs on the two graphs.

I Direct proof of Corollary 4 (2-IGNs are unable to matching-count patterns of 3 or more nodes)

Proof. The same counterexample as in the proof of Theorem 2 given in Appendix D applies here, as we are going to show below. Note that we only need to consider the non-clique case, since the set of counterexample graphs for the non-clique case is a superset of the set of counterexample graphs for the clique case.
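Before turning to the tensor computation, it may help to see the Appendix D counterexample pair instantiated. The sketch below (our helper names; the paper gives no code) uses the 3-node path 1–3–2 as the pattern, so m = 3 with nodes 1 and 2 non-adjacent, and checks that G^[1] and G^[2] have identical degree sequences while containing different numbers of triangles:

```python
# Sketch of the Appendix D counterexample (our helper names, not the paper's code).
# Pattern G^[P]: the path 1-3-2 on m = 3 nodes, so nodes 1 and 2 are non-adjacent.
from itertools import combinations

m = 3
pattern = {(1, 3), (3, 2)}  # undirected edges, stored once

def undirected(edges):
    """Close an edge set under (i, j) -> (j, i)."""
    return {(i, j) for a, b in edges for (i, j) in [(a, b), (b, a)]}

# E_double: two disjoint copies of the pattern, on nodes {1..m} and {m+1..2m}
e_double = undirected(pattern) | undirected({(a + m, b + m) for a, b in pattern})

# G^[1] adds the within-copy edges; G^[2] adds the cross-copy versions instead
g1 = e_double | undirected({(1, 2), (1 + m, 2 + m)})
g2 = e_double | undirected({(1, 2 + m), (1 + m, 2)})

def degrees(edges):
    return sorted(sum(1 for e in edges if e[0] == v) for v in range(1, 2 * m + 1))

def triangles(edges):
    return sum(1 for a, b, c in combinations(range(1, 2 * m + 1), 3)
               if (a, b) in edges and (b, c) in edges and (a, c) in edges)

assert degrees(g1) == degrees(g2)                 # locally indistinguishable
assert (triangles(g1), triangles(g2)) == (2, 0)   # yet different substructure counts
```

Here G^[1] is two triangles while G^[2] is a single 6-cycle, which makes concrete why an architecture that cannot separate the two graphs cannot count such patterns.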
Let B be the input tensor corresponding to G [1] , and B corresponding to G [2] . For simplicity, we assume in the proof below that d 0 , ..., d T = 1. The general case can be proved in the same way but with more subscripts. (In particular, for our counterexamples, (69) can be shown to hold for each of the d 0 feature dimensions.) Define a set S = {(1, 2), (2, 1), (1 + m, 2 + m), (2 + m, 1 + m), (1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m)}, which represents the "special" edges that capture the difference between G [1] and G [2] . We aim to show something like this: ∀t,                                      B (t) i,j = B (t) i,j , ∀(i, j) / ∈ S B (t) 1,2 = B (t) 1+m,2 , B (t) 2,1 = B (t) 2,1+m , B (t) 1+m,2+m = B (t) 1,2+m B (t) 2+m,1+m = B (t) 2+m,1 B (t) 1,2+m = B (t) 1+m,2+m , B (t) 2+m,1 = B (t) 2+m,1+m , B (t) 1+m,2 = B (t) 1,2 B (t) 2,1+m = B (t) 2,1(66) If this is true, then it is not hard to show that the 2-IGN returns identical outputs on B and B , which we will leave to the very end. To represent the different cases above compactly, we define a permutation η 1 on V × V in the following way. First, define the following permutations on V : κ 1 (i) = Mod 2m (1 + m), if i ∈ {1, 1 + m} i, otherwise Next, define the permutation τ 1 on V × V : τ 1 ((i, j)) = (κ 1 (i), κ 1 (j)) and then η 1 as the restriction of τ 1 on the set S ⊂ V × V : η 1 ((i, j)) = τ 1 ((i, j)), if (i, j) ∈ S (i, j), otherwise Thus, (66) can be rewritten as ∀t, B (t) i,j = B (t) η1((i,j))(67) Before trying to prove (67), let's define κ 2 , τ 2 and η 2 analogously: κ 2 (i) = Mod 2m (2 + m), if i ∈ {2, 2 + m} i, otherwise τ 2 ((i, j)) = (κ 2 (i), κ 2 (j)) η 2 ((i, j)) = τ 2 ((i, j)), if (i, j) ∈ S (i, j), otherwise Thus, by symmetry, (67) is equivalent to ∀t, B (t) i,j = B (t) η1((i,j)) = B (t) η2((i,j))(68) Because of the recursive relation (1), we will show (68) by induction on t. 
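The permutations κ_1, τ_1 and η_1 just defined can be sanity-checked in a few lines. Note one assumption: we read the first branch of κ_1 as i ↦ Mod_{2m}(i + m) (i.e., the swap 1 ↔ 1 + m), which is the only reading under which κ_1 is a permutation:

```python
# Sanity check of kappa_1 / tau_1 / eta_1 (assumption: kappa_1 swaps 1 <-> 1+m).
m = 5
S = {(1, 2), (2, 1), (1 + m, 2 + m), (2 + m, 1 + m),
     (1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m)}

def kappa1(i):
    # Mod_{2m}(i + m) on {1, 1+m}; identity elsewhere
    return (i + m - 1) % (2 * m) + 1 if i in (1, 1 + m) else i

def tau1(p):
    return (kappa1(p[0]), kappa1(p[1]))

def eta1(p):
    # tau_1 restricted to the special set S, identity off S
    return tau1(p) if p in S else p

pairs = [(i, j) for i in range(1, 2 * m + 1) for j in range(1, 2 * m + 1)]
assert all(eta1(eta1(p)) == p for p in pairs)          # eta_1 is an involution on V x V
assert eta1((1, 2)) == (1 + m, 2)                      # matches B_{1,2} = B'_{1+m,2} in (66)
assert eta1((2, 1)) == (2, 1 + m)                      # matches B_{2,1} = B'_{2,1+m} in (66)
assert all(eta1(p) == p for p in pairs if p not in S)  # identity off the special set S
```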
For the base case, it can be verified that B i,j = B thanks to the construction of G [1] and G [2] . Moreover, if we define another permutation V × V , ζ 1 : ζ 1 ((i, j)) =          (Mod 2m (i + m), Mod 2m (j + m)), if j ∈ {1, 1 + m} , i / ∈ {2, 2 + m} or i ∈ {1, 1 + m} , j / ∈ {2, 2 + m} (i, j), otherwise then thanks to the symmetry between (i, j) and (i + m, j + m), there is B (0) i,j = B (0) ζ1((i,j)) , B (0) i,j = B (0) ζ1((i,j)) Thus, for the induction to hold, and since σ applies entry-wise, it is sufficient to show that Lemma 9. If B i,j = B ζ1((i,j)) , B i,j = B ζ1((i,j)) (71) B i,j = B η1((i,j)) = B η2((i,j)) ,(72) Proof of Lemma 9: Again, by symmetry between (i, j) and (i + m, j + m), (73) can be easily shown. For (74), because of the symmetry between η 1 and η 2 , we will only prove the first equality. By Maron et al. T i,j,k,l B k,l + Y i,j where crucially, T i,j,k,l depends only on the equivalence class of the 4-tuple (i, j, k, l). We consider eight different cases separately. Case 1 i, j / ∈ {1, 2, 1 + m, 2 + m} There is η 1 ((i, j)) = (i, j), and (i, j, k, l) ∼ (i, j, η 1 ((k, l))), and thus T i,j,k,l = T i,j,η1((k,l)) . Therefore, L(B ) η1((i,j)) =L(B ) i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B k,l + Y i,j = (2m,2m) η1((k,l))=(1,1) T i,j,η1((k,l)) B η1((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,η1((k,l)) B η1((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B η1((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B k,l + Y i,j =B i,j Case 2 i ∈ {1, 1 + m}, j / ∈ {1, 2, 1 + m, 2 + m} There is η 1 ((i, j)) = (i, j), and (i, j, k, l) ∼ (i, j, η 2 ((k, l))), because η 2 only involves permutation between nodes 2 and 2 + m, while i and j / ∈ {2, 2 + m}. Thus, T i,j,k,l = T i,j,η2((k,l)) . 
Therefore, L(B ) η1((i,j)) =L(B ) i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B k,l + Y i,j = (2m,2m) η2((k,l))=(1,1) T i,j,η2((k,l)) B η2((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,η2((k,l)) B η2((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B η2((k,l)) + Y i,j = (2m,2m) (k,l)=(1,1) T i,j,k,l B k,l + Y i,j =B i,j Case 3 j ∈ {1, 1 + m}, i / ∈ {1, 2, 1 + m, 2 + m} Analogous to case 2. Case 4 i ∈ {2, 2 + m}, j / ∈ {1, 2, 1 + m, 2 + m} There is η 1 ((i, j)) = (i, j), and (i, j, k, l) ∼ (i, j, η 1 ((k, l))), because η 1 only involves permutation between nodes 1 and 1 + m, while i and j / ∈ {1, 1 + m}. Thus, T i,j,k,l = T i,j,η1((k,l)) . Therefore, we can apply the same proof as for case 2 here except for changing η 2 's to η 1 's. Case 5 j ∈ {2, 2 + m}, i / ∈ {1, 2, 1 + m, 2 + m} Analogous to case 4. Case 6 (i, j) ∈ S Define one other permutation on V × V , ξ 1 , as ξ 1 ((i, j)) =      (Mod 2m (i + m), j), if Mod m (j) = 1, Mod m (i) = 1 or 2 (i, Mod 2m (j + m)), if Mod m (i) = 1, Mod m (j) = 1 or 2 (i, j), otherwise It can be verified that ξ 1 • τ 1 = η 1 • ζ 1 Moreover, it has the property that if (i, j) ∈ S, then (i, j, k, l) ∼ (i, j, ξ 1 (k, l)) because ξ 1 only involves permutations among nodes not in {1, 2, 1 + m, 2 + m} while i, j ∈ {1, 2, 1 + m, 2 + m}. Thus, we have (i, j, k, l) ∼(κ 1 (i), κ 1 (j), κ 1 (k), κ 1 (l)) =(τ 1 (i, j), τ 1 (k, l)) =(η 1 (i, j), τ 1 (k, l)) ∼(η 1 (i, j), ξ 1 • τ 1 (k, l)) =(η 1 (i, j), η 1 • ζ 1 (k, l)), implying that T i,j,k,l = T η1(i,j),η1•ζ1(k,l) . In addition, as η 1 ((i, j)) ∼ (i, j), there is Y η1((i,j)) = Y i,j . 
Moreover, by (71), B η1•ζ1((k,l)) = B η1((k,l)) = B k,l Therefore, L(B ) η1((i,j)) = (2m,2m) (k,l)=(1,1) T η((i,j)),k,l B k,l + Y η1((i,j)) = (2m,2m) η1•ζ1((k,l))=(1,1) T η1((i,j)),η1•ζ1((k,l)) B η1•ζ1((k,l)) + Y η1((i,j)) = (2m,2m) (k,l)=(1,1) T η1((i,j)),η1•ζ1((k,l)) B η1•ζ1((k,l)) + Y η1((i,j)) = (2m,2m) (k,l)=(1,1) T i,j,k,l B k,l + Y i,j =B i,j Case 7 i, j ∈ {1, 1 + m} There is η 1 (i, j) = (i, j) and (i, j, k, l) ∼ (i, j, η 2 ((k, l))). Thus, T i,j,k,l = T i,j,η2((k,l)) , and the rest of the proof proceeds as for case 2. Case 8 i, j / ∈ {1, 1 + m} There is η 1 (i, j) = (i, j) and (i, j, k, l) ∼ (i, j, η 1 ((k, l))). Thus, T i,j,k,l = T i,j,η1((k,l)) , and the rest of the proof proceeds as for case 4. J Specific GNN architectures In Section 6, we show experiments on synthetic datasets with several related architectures. Here are some explanation for them: • LRP-i-j: Local Relational Pooling with egonet depth i and cropped subtensors of size j, as described in the main text. In our experiments, we take i = 1, j = 4. Hence, the vectorized subtensor (or submatrix, as the graph is unattributed) is of size 4 × 4 = 16. The nonlinear activation functions are chosen between ReLU and tanh by hand. The models are trained using the Adam optimizer Kingma and Ba (2014) For 2-IGNs, as IN is not immediately well-defined, we only train two variants, one with JK and one without. All models are trained for 100 epochs. Learning rates are searched in {1, 0.1, 0.05, 0.01}. We pick the best model with the lowest MSE loss on validation set to generate the results. K Experiment results The variances of the ground truth counts are: 311.1696 for the 3-star task on the Erdős-Renyi dataset, 7.3441 for the triangle task on the Erdős-Renyi dataset, 316.1284 for the 3-star task on the Random Regular dataset, and 9.4249 for the triangle task on the Random Regular dataset. i . Learnable parameters can appear in the functions M t , U t (for all t ≤ T ) and R. Xu et al. 
(2018a) and Morris et al. (2019) k (s) denote the color of s in G [1] assigned at tth iteration, and let c (t) k (s) denote the color it receives in G[2] . c k (s) are updated iteratively as follows. For each Extending the aforementioned results by Xu et al. (2018a); Morris et al. (2019) in a nontrivial way to incorporate edge features, we present the following theorem, to be proved in Appendix C. Figure 2 : 2Illustration of the construction in the proof of Theorem 2 for the pattern from Figure 1 (left). Note that M(G [1] ; G [P] ) = 0 whereas M(G [2] ; G [P] ) = 2. The graphs G [1] and G [2] are not distinguishable by MPNNs, 2-WL, or 2-IGNs. ), and so on. We can in fact show that {c Corollary 3 . 2 - 32IGNs are exactly as powerful as 2-WL. ⊆ S n of permutations compatible with breath-first-search (BFS) to further reduce the complexity, as suggested in Murphy et al. (2019). Concretely, we define G [ego] i,l as the egonet centered at node i of depth l, B [ego] i,l as the corresponding representation and n i,l as the number of nodes in G [ego] Figure 3 : 3Substructures to be counted in the experiments. Left: A triangle. Right: A 3-star. Models. We consider LRP,GIN (Xu et al., 2018a),GCN (Kipf and Welling, 2016), 2-IGN (Maron et al., 2018) and spectral GNN (sGNN)(Chen et al., 2019a), with GIN and GCN belonging to the category of MPNNs. Details of GNN architectures are provided in Appendix J. We use mean squared error (MSE) for regression loss. Each model is trained on 1080ti five times with different random seeds. Fürer , M. (2017). On the combinatorial power of the Weisfeiler-Lehman algorithm. arXiv preprint arXiv:1704.01023. Garg, V. K., Jegelka, S., and Jaakkola, T. (2020). Generalization and representational limits of graph neural networks. Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. (2017). Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263-1272. 
JMLR. org. Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Jiang, C., Coenen, F., and Zito, M. (2010). Finding frequent subgraphs in longitudinal social network data using a weighted graph mining approach. In International Conference on Advanced Data Mining and Applications, pages 405-416. Springer. Jin, W., Barzilay, R., and Jaakkola, T. (2019). Hierarchical graph-to-graph translation for molecules. Jin, W., Barzilay, R., and Jaakkola, T. (2020). Composing molecules with multiple property constraints. arXiv preprint arXiv:2002.03244. Jin, W., Barzilay, R., and Jaakkola, T. S. (2018). Junction tree variational autoencoder for molecular graph generation. CoRR, abs/1802.04364. Keriven, N. and Peyré, G. (2019). Universal invariant and equivariant graph neural networks. arXiv preprint arXiv:1905.04943. Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kipf, T. N. and Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Koyutrk, M., Grama, A., and Szpankowski, W. (2004). An efficient algorithm for detecting frequent subgraphs in biological networks. Bioinformatics, 20(suppl 1):i200-i207. Lemke, T. L. (2003). Review of organic functional groups: introduction to medicinal organic chemistry. Lippincott Williams & Wilkins.Liu, S.,Chandereng, T., and Liang, Y. (2018). N-gram graph, A novel molecule representation. arXiv preprint arXiv:1806.09206.Liu, X., Pan, H., He, M., Song, Y., and Jiang, X. (2019). Neural subgraph isomorphism counting.Loukas, A.(2019). What graph neural networks cannot learn: depth vs width. arXiv preprint arXiv:1907.03199.Maron, H., Ben-Hamu, H., and Lipman, Y. (2019a). Open problems: Approximation power of invariant graph networks. Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. (2019b). 
Provably powerful graph networks. In Advances in Neural Information Processing Systems, pages 2153-2164. Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. (2018). Invariant and equivariant graph networks. Maron, H., Fetaya, E., Segol, N., and Lipman, Y. (2019c). On the universality of invariant networks. arXiv preprint arXiv:1901.09342. Monti, F., Otness, K., and Bronstein, M. M. (2018). Motifnet: a motif-based graph convolutional network for directed graphs. CoRR, abs/1802.01572. Morgan, H. L. (1965). The generation of a unique machine description for chemical structures-a technique developed at chemical abstracts service. Journal of Chemical Documentation, 5(2):107-113. Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. (2019). Weisfeiler and leman go neural: Higher-order graph neural networks. Association for the Advancement of Artificial Intelligence. Murphy, R. L., Srinivasan, B., Rao, V., and Ribeiro, B. (2019). Relational pooling for graph representations. arXiv preprint arXiv:1903.02541. Murray, C. W. and Rees, D. C. (2009). The rise of fragment-based drug discovery. Nature chemistry, 1(3):187. Nowak, A., Villar, S., Bandeira, A. S., and Bruna, J. (2017). A note on learning algorithms for quadratic assignment with graph neural networks. arXiv preprint arXiv:1706.07450. OBoyle, N. M. and Sayle, R. A. (2016). Comparing structural fingerprints using a literature-based similarity benchmark. Journal of cheminformatics, 8(1):1-14. Pope, P., Kolouri, S., Rostrami, M., Martin, C., and Hoffmann, H. (2018). Discovering molecular functional groups using graph convolutional neural networks. arXiv preprint arXiv:1812.00265. Preciado, V. M., Draief, M., and Jadbabaie, A. (2012). Structural analysis of viral spreading processes in social and communication networks using egonets. Preciado, V. M. and Jadbabaie, A. (2010). From local measurements to network spectral properties: Beyond degree distributions. 
In 49th IEEE Conference on Decision and Control (CDC), pages 2686-2691. IEEE. Rahman, S. A., Bashton, M., Holliday, G. L., Schrader, R., and Thornton, J. M. (2009). Small molecule subgraph detector (smsd) toolkit. Journal of cheminformatics, 1(1):12. Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and Monfardini, G. (2008). The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80. Shervashidze, N., Vishwanathan, S., Petri, T., Mehlhorn, K., and Borgwardt, K. (2009). Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, pages 488-495. Steger, A. and Wormald, N. C. (1999). Generating random regular graphs quickly. Combinatorics, Probability and Computing, 8(4):377-396. Stokes, J. M., Yang, K., Swanson, K., Jin, W., Cubillos-Ruiz, A., Donghia, N. M., MacNair, C. R., French, S., Carfrae, L. A., Bloom-Ackerman, Z., et al. (2020). A deep learning approach to antibiotic discovery. Cell, 180(4):688-702. Ulyanov, D., Vedaldi, A., and Lempitsky, V. (2016). Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Weisfeiler, B. and Leman, A. (1968). The reduction of a graph to canonical form and the algebra which appears therein. Nauchno-Technicheskaya Informatsia, 2(9):12-16. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Yu, P. S. (2019). A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596. Xu, K., Hu, W., Leskovec, J., and Jegelka, S. (2018a). How powerful are graph neural networks? arXiv preprint arXiv:1810.00826. Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K.-i., and Jegelka, S. (2018b). Representation learning on graphs with jumping knowledge networks. arXiv preprint arXiv:1806.03536. Yao, W., Bandeira, A. S., and Villar, S. (2019). Experimental performance of graph neural networks on random instances of max-cut. In Wavelets and Sparsity XVIII, volume 11138, page 111380S. 
International Society for Optics and Photonics. Ying, R., Bourgeois, D., You, J., Zitnik, M., and Leskovec, J. (2019). Gnn explainer: A tool for post-hoc explanation of graph neural networks. arXiv preprint arXiv:1903.03894. Ying, R., You, J., Morris, C., Ren, X., Hamilton, W. L., and Leskovec, J. (2018). Hierarchical graph representation learning with differentiable pooling. CoRR, abs/1806.08804. You, J., Liu, B., Ying, Z., Pande, V., and Leskovec, J. (2018a). Graph convolutional policy network for goal-directed molecular graph generation. In Advances in neural information processing systems, pages 6410-6421. You, J., Wu, H., Barrett, C., Ramanujan, R., and Leskovec, J. (2019). G2sat: Learning to generate sat formulas. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems 32, pages 10553-10564. Curran Associates, Inc. You, J., Ying, R., Ren, X., Hamilton, W. L., and Leskovec, J. (2018b). Graphrnn: A deep generative model for graphs. CoRR, abs/1802.08773. Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. (2017). Deep sets. In Advances in neural information processing systems, pages 3391-3401.Zhang, M. andChen, Y. (2018). Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pages 5165-5175.Zhou, J., Cui, G., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C., and Sun, M. (2018). Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434. In k-WL tests, two k-tuples s and s in either (V [1] ) k or (V[2] ) k are assigned the same color at iteration 0 if and only if they have the same isomorphism type.For a reference, seeMaron et al. (2019b). then by the definition of isomorphism types given in Appendix B, c . 
In words, both G[1] and G[2] are constructed based on two copies of G[P] , and the difference is that, G [1] adds the edges {(1, 2), (2, 1), (1 + m, 2 + m), (2 + m, 1 + m)}, whereas G [2] adds the edges {(1, 2 + m), (2 + m, 1), (1 + m, 2), (2, 1 + m)}, all with the same edge feature. : 2 ≤ k ≤ m}.Thanks to Corollary 6 of Xu et al. (2018a) based on Zaheer et al. (2017), we know that f[P] can be expressed by some message-passing function in an MPNN. Thus, together with summation as the readout function, MPNN is able to express C C (G, G[P] ). m . In words, both G[1] and G[2] are constructed based on two copies of H m , and the difference is that, G[1] adds the edges {(1, m), (m, 1), (1+m, 2m), (2m, 1+m)}, whereas G [2] adds the edges {(1, 2m), (2m, 1), (m, 1+m), (1+m, m)}, all with the same edge feature. For the case k = 3, m = 8, T = 1, for example, the constructed graphs are illustrated in Figure 4. Can G [1] and G [2] be distinguished by k-WL? Let c the coloring functions of k-tuples for G [1] and G [2] Figure 4 : 4Illustration of the construction in the proof of Theorem 5 in Appendix G. In this particular case, k = 3, m = 8, T = 1. If we consider s = (1, 12, 8) as an example, where the corresponding nodes are marked by blue squares in G [1] , there is χ(s) = 2, and thus η(s) = ζ 2 (s) = (1, 4, 16), which are marked by blue squares in G [2] 3 . 3Define S = {(1, m), (m, 1), (1 + m, 2m), (2m, 1 + m), (1, 2m), (2m, 1), (m, 1 + m), (1 + m, 2m)}, which is the set of "special" pairs of nodes in which G [1] and G [2] differ. Note that ∀(i α , i β ) ∈ [2m] 2 , (i α , i β ) ∈ S if and only if the sets {Mod m (i α ), Mod m (i β )} = {1, m}. holds. This allows us to carry out an induction in T ∈ N. For the base case t = T = 0, this is true because c (0) and c (0) in WL and B (0) and B (0) in 2-IGN are both initialized in the same way according to the subgraph isomorphism. 
To be precise, c (0) (s) = c (t) (s ) if and only if the subgraph in G [1] induced by the pair of nodes s is isomorphic to the subgraph in G [2] induced by the pair of nodes s , which is also true if and only if B c By the update rules of k-WL, the statement c (T ) (s) = c (T ) (T −1) (s) = c (T −1) (s ) {c (T −1) (s) :s ∈ N 1 (s)} = {c (T −1) (s) :s ∈ N 1 (s )} {c (T −1) (s) :s ∈ N 2 (s)} = {c (T −1) (s) :s ∈ N 2 (s )} (26) A s, 6 = 6{(j, k) : k = i or j}, A s,7 = {(k, j) : k = i or j}, A s,8 = {(k, l) : k = l and {k, l} ∩ {i, j} = ∅}, and A s,9 = {(k, k) : k / ∈ {i, j}}. , (37), (38), (39), (35), we get C s,8 = C s ,8 then L(B) i,j = L(B) ζ1((i,j)) , L(B ) i,j = L(B ) ζ1((i,j)) (73) L(B) i,j = L(B ) η1((i,j)) = L(B ) η2((i,j)) , ( 2018 ) 2018, we can express the linear equivariant layer L by L(B) With the lemma above, (67) can be shown by induction as a consequence. Thus,Maron et al. (2018) show that the space of linear invariant functions on R n×n is two-dimensional, and so for example, the second-last layer h in the 2-IGN can be written as α, β ∈ R. Then since η 1 is a permutation on V × V and also is the identity map when restricted to{(i, i) : i ∈ V }, with learning rate 0.1. The number of hidden dimensions is searched in {1, 8, 16, 64, 128}.• 2-IGN: 2nd-order Invariant Graph Networks proposed by Maron et al. (2018). In our experiments, we take 8 hidden dimensions for invariant layers and 16 hidden dimensions for output multi-layer perceptron. The models are trained using the Adam optimizer with learning rate 0.1. The numbers of hidden dimensions are searched in {(16, 32), (8, 16), (64, 64)}. • GCN: Graph Convolutional Networks proposed by Kipf and Welling (2016). In our experiments, we adopt a 4-layer GCN with 128 hidden dimensions. The models are trained using the Adam optimizer with learning rate 0.01. The number of hidden dimensions is searched in {8, 32, 128}. Depth is searched in {2, 3, 4, 5}. • GIN: Graph Isomorphism Networks proposed by Xu et al. 
(2018a). In our experiments, we adopt a 4-layer GIN with 32 hidden dimensions. The models are trained using the Adam optimizer with learning rate 0.01. The number of hidden dimensions is searched in {8, 16, 32, 128}. • sGNN: Spectral GNN with operators from family {I, A, min(A 2 , 1)}. In our experiments, we adopt a 4-layer sGNN with 128 hidden dimensions. The models are trained using the Adam optimizer with learning rate 0.01. The number of hidden dimensions is searched in {8, 128}. For GCN, GIN and sGNN, we train four variants for each architecture, depending on whether Jumping Knowledge (JK) (Xu et al., 2018b) and Instance Normalization (IN) / Spatial Batch Normalization (Ulyanov et al., 2016; Ioffe and Szegedy, 2015) are included or not. The use of IN in GNNs is seen in Chen et al. (2019a), in which normalization is applied to each dimension of the hidden states of all nodes in each graph. and polynomialIGNs (Maron et al., 2019a). Another interesting future direction is to study the relevance of substructure counting in empirical tasks, following the work ofYing et al. (2019). Finally, we hope our framework can help guide the search for more powerful GNNs by having substructure counting as a criterion.Choma, N., Monti, F., Gerhardt, L., Palczewski, T., Ronaghi, Z., Prabhat, P., Bhimji, W.,Bronstein, M., Klein, S., and Bruna, J. (2018). Graph neural networks for icecube signal classification.In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 386-391. IEEE. Dai, H., Khalil, E. B., Zhang, Y., Dilkina, B., and Song, L. (2017). Learning combinatorial optimization algorithms over graphs. arXiv preprint arXiv: 1704.01665. Defferrard, M., Bresson, X., and Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844-3852. Deshpande, M., Kuramochi, M., and Karypis, G. (2002). 
Automated approaches for classifying structures. Technical report, Minnesota University Minneapolis Department of Computer Science. Ding, M., Zhou, C., Chen, Q., Yang, H., and Tang, J. (2019). Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694-2703, Florence, Italy. Association for Computational Linguistics. Proof. Say G [P] = (V[P] , E[P] , x[P] , e[P] ) is a connected pattern of m nodes, where m > 2, and thusV [P] = [m]. First, if G [P]is not a clique, then by definition, there exists two distinct nodes i, j ∈ V[P] such that i and j are not connected by an edge. Assume without loss of generality that i = 1 and j = 2.D Proof of Theorem 2 (2-WL is unable to matching-count pat- terns of 3 or more nodes) Now, construct two graphs G [1] = (V = [2m], E [1] , x [1] , e [1] ), G [2] = (V = [2m], E [2] , x [2] , e [2] ) both with 2m nodes. For G [1] , let E [1] which implies that the two graphs cannot be distinguished by 2-WL. (See Section 2.1 of Arvind et al. (2018) for a proof for the case where all nodes have identical features.) Proof. Without loss of generality, we represent a star-shaped pattern by G [P] = (V [P] , E [P] , x [P] , e [P] ), where V [P] = [m] (with node 1 representing the center) and E [P]E Proof of Theorem 3 (MPNNs are able to containment-count star-shaped patterns) w }This allows us to rewrite (26) as C s,1 =C s ,1 (27) w=1,3,7 C s,w = w=1,3,7 C s ,w (28) w=1,2,4 C s,w = w=1,2,4 C s ,w As shown in Lemma 2, pairs of nodes that represent edges cannot share the same color with pairs of nodes the represent nodes in any iteration of 2-WL. 
Thus, we have $V^2 = \bigcup_{w=1,4,5,6,7,8} A_{s',w} \cup \bigcup_{w=2,3,9} A_{s',w}$, and

$\left( \bigcup_{w=1,4,5,6,7,8} C_{s,w} \right) \cap \left( \bigcup_{w=2,3,9} C_{s',w} \right) = \emptyset$  (32)

$\left( \bigcup_{w=1,4,5,6,7,8} C_{s',w} \right) \cap \left( \bigcup_{w=2,3,9} C_{s,w} \right) = \emptyset$

Since $G^{[1]}$ and $G^{[2]}$ cannot be distinguished by 2-WL, there has to be $\bigcup_{w=1}^{9} C_{s,w} = \bigcup_{w=1}^{9} C_{s',w}$. Combining this with (32) or (33), we get

$\bigcup_{w=1,4,5,6,7,8} C_{s,w} = \bigcup_{w=1,4,5,6,7,8} C_{s',w}$  (40)

$\bigcup_{w=2,3,9} C_{s,w} = \bigcup_{w=2,3,9} C_{s',w}$

Moreover, since we can decompose $V^2$ as $V^2 = \bigcup_{w=1,5} A_{s,w} \cup \bigcup_{w=2,3,4} A_{s,w} = \bigcup_{w=1,5} A_{s',w} \cup \bigcup_{w=2,3,4} A_{s',w}$, with $\bigcup_{w=1,5} A_{s,w} = \bigcup_{w=1,5} A_{s',w}$ representing the nodes and $\bigcup_{w=2,3,4} A_{s,w} = \bigcup_{w=2,3,4} A_{s',w}$ representing the edges, we have

$\left( \bigcup_{w=1,5} C_{s,w} \right) \cap \left( \bigcup_{w=2,3,4} C_{s',w} \right) = \emptyset$  (55)

$\left( \bigcup_{w=1,5} C_{s',w} \right) \cap \left( \bigcup_{w=2,3,4} C_{s,w} \right) = \emptyset$  (56)

Since $G^{[1]}$ and $G^{[2]}$ cannot be distinguished by 2-WL, there is $\bigcup_{w=1}^{5} C_{s,w} = \bigcup_{w=1}^{5} C_{s',w}$. Therefore, combining this with (55) or (56), we obtain

$\bigcup_{w=1,5} C_{s,w} = \bigcup_{w=1,5} C_{s',w}$  (57)

$\bigcup_{w=2,3,4} C_{s,w} = \bigcup_{w=2,3,4} C_{s',w}$

We define isomorphism types rigorously in Appendix B.

Acknowledgements

We would like to thank Haggai Maron and Jiaxuan You for nice conversations. This work is partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, NSF CHS-1901091, Samsung Electronics, and the Institute for Advanced Study. SV is partly supported by NSF DMS 1913134, EOARD FA9550-18-1-7007 and the Simons Algorithms and Geometry (A&G) Think Tank.

Alon, N., Dao, P., Hajirasouliha, I., Hormozdiari, F., and Sahinalp, S. C. (2008). Biomolecular network motif counting and discovery by color coding. Bioinformatics, 24(13):i241–i249.
Arvind, V., Fuhlbrück, F., Köbler, J., and Verbitsky, O. (2018). On weisfeiler-leman invariance: Subgraph counts and related graph properties. arXiv preprint arXiv:1811.04801.
Babai, L., Erdős, P., and Selkow, S. M. (1980). Random graph isomorphism. SIAM Journal on Computing, 9(3):628–635.
Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. (2017). Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42.
Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. (2013). Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203.
Cai, J.-Y., Fürer, M., and Immerman, N. (1992). An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389–410.
Chen, Z., Li, L., and Bruna, J. (2019a). Supervised community detection with line graph neural networks. International Conference on Learning Representations.
Chen, Z., Villar, S., Chen, L., and Bruna, J. (2019b). On the equivalence between graph isomorphism testing and function approximation with GNNs. In Advances in Neural Information Processing Systems, pages 15868–15876.

Table 2: Test MSE loss for all models with chosen parameters as specified in Appendix J. We run each model five times and picked the best and the median (3rd best) results for Table 1.
Note that each of GCN, GIN and sGNN has four variants while 2-IGN has two variants. The reported rows in Table 1 are bolded here. Dataset: Erdős-Rényi.
[]
[ "Inertial-Only Optimization for Visual-Inertial Initialization", "Inertial-Only Optimization for Visual-Inertial Initialization" ]
[ "Carlos Campos ", "José M M Montiel ", "Juan D Tardós " ]
[]
[]
We formulate for the first time visual-inertial initialization as an optimal estimation problem, in the sense of maximum-a-posteriori (MAP) estimation. This allows us to properly take into account IMU measurement uncertainty, which was neglected in previous methods that either solved sets of algebraic equations, or minimized ad-hoc cost functions using least squares. Our exhaustive initialization tests on EuRoC dataset show that our proposal largely outperforms the best methods in the literature, being able to initialize in less than 4 seconds in almost any point of the trajectory, with a scale error of 5.3% on average. This initialization has been integrated into ORB-SLAM Visual-Inertial boosting its robustness and efficiency while maintaining its excellent accuracy.
10.1109/icra40945.2020.9197334
[ "https://arxiv.org/pdf/2003.05766v1.pdf" ]
212,675,093
2003.05766
95f64a41c036e09ce6defcc78fdd58e040ce810a
Inertial-Only Optimization for Visual-Inertial Initialization

Carlos Campos, José M. M. Montiel, Juan D. Tardós

12 Mar 2020. This paper has been accepted for publication in 2020 International Conference on Robotics and Automation (ICRA).

I. INTRODUCTION

Simultaneous Localization and Mapping (SLAM) techniques allow robots and AR/VR systems to be aware of their environments, while locating themselves in the reconstructed scene. Visual-inertial SLAM with a single monocular camera and a low-cost Inertial Measurement Unit (IMU) sensor offers a small, compact and low-power solution for most applications. IMU sensors measure acceleration and angular velocity, providing robustness against fast motion or challenging environments, and allowing to retrieve the true scale of the environment, which would remain unknown in a pure monocular system. However, to start using them, some parameters need to be estimated in an initialization process. These are scale, gravity direction, initial velocity, and accelerometer and gyroscope biases.
A wrong initialization would lead to poor convergence, as well as inaccurate estimation of all other variables. In addition, a fast initialization is as important as an accurate one, because as long as the IMU is not initialized, visual-inertial SLAM cannot be performed.

Previous works on visual-inertial initialization can be classified into joint and disjoint (or loosely coupled) estimation methods. Joint visual-inertial initialization was pioneered by Martinelli [1], who proposed a closed-form solution to jointly retrieve scale, gravity, accelerometer bias and initial velocity, as well as visual feature depths. This method was built on the assumption that camera poses can be roughly estimated from IMU readings. The method tracks several points in all the images, and builds a system of equations stating that the 3D point coordinates, as seen from any camera pair, should be the same, which is solved by linear least squares. This work was extended by Kaiser et al. [2], building a similar linear algebraic system that is solved using non-linear least squares, to also find gyroscope bias and to take gravity magnitude into account. The capability to find accurate initial solutions in 2 seconds was shown in simulations. Crucially, the original and modified methods ignore IMU noise properties, and minimize the 3D error of points in space, not their reprojection errors, which is the gold standard in feature-based computer vision.

This work was supported in part by the Spanish government under grants PGC2018-096367-B-I00 and DPI2017-91104-EXP, the Aragón government under grant DGA T45-17R, and by Huawei under grant HF2017040003. The authors are with Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Spain. [email protected]; [email protected]; [email protected]
Our previous work [3] shows that this results in large unpredictable errors, which can be corrected by adding two rounds of Visual-Inertial Bundle Adjustment (VI-BA), together with two tests to detect and discard bad initializations. This renders the method usable, obtaining on the public EuRoC dataset [4] joint visual-inertial initializations in 2 seconds with scale error around 5%. However, the method only works in 20% of the trajectory points. Such a low initialization recall can be a problem for AR/VR or drone applications where the system is desired to be launched immediately.

Disjoint visual-inertial initialization is based on the solid assumption that the up-to-scale camera trajectory can be estimated very accurately from pure monocular vision, and then uses this trajectory to estimate the inertial parameters. As modern visual-odometry and visual SLAM systems perform local bundle adjustment and provide trajectories with much higher precision than IMU integration, this trajectory uncertainty can be safely ignored while estimating the inertial parameters. This idea was pioneered by Mur-Artal and Tardós in ORBSLAM-VI [5], and later adopted by Qin et al. in VINS-Mono [6] [7]. In both cases, inertial parameters are found in different steps by solving a set of linear equations using least squares. In [5] a linear system is built by eliminating the velocities for each frame. However, after these algebraic manipulations, the errors to be minimized are meaningless and unrelated to sensor noise properties. In order to obtain accurate estimations, including accelerometer bias, the method requires 15 seconds for initialization. In [7] accelerometer bias is assumed to be zero, requiring only 1-2 seconds to initialize, depending on the motion. In both methods, IMU measurements are manipulated and mixed in the same linear system, where the residuals of all equations are considered with the same weight, ignoring sensor uncertainties.
In addition, the different inertial parameters are solved separately in different steps, not all at once, ignoring the correlations between them. All this leads to an estimation which is not optimal in the sense of maximum-a-posteriori (MAP) estimation.

We propose a novel disjoint visual-inertial initialization method by formulating it as an optimal estimation problem, in the sense of MAP estimation. For this, we build on the excellent work of Forster et al. [8] that allows to preintegrate IMU readings and, taking into account the probabilistic characterization of sensor noises, properly compute the covariances of the preintegrated terms. Assuming that the error of the monocular SLAM trajectory is negligible compared with the IMU errors, we derive a very efficient MAP estimator for inertial-only parameters, and use it to initialize a visual-inertial SLAM system. The main contributions of our work are:
• The formulation of the visual-inertial initialization as an inertial-only optimal estimation problem, in the sense of MAP estimation, taking properly into account the probabilistic model of IMU noises.
• We solve for all inertial parameters at once, in a single step, avoiding the inconsistencies derived from decoupled estimation. This makes all estimations jointly consistent.
• We do not make any assumptions about initial velocity or attitude, which makes our method suitable for any initialization case.
• We do not assume IMU biases to be zero; instead, we code the known information about them as probabilistic priors that are exploited by our MAP estimation.
In the next section we present the theory and in-depth details behind our proposal. Later, we evaluate and compare it against the best examples of joint and disjoint initialization methods, proving to outperform them.

II. MAXIMUM-A-POSTERIORI INITIALIZATION

The gold-standard method for feature-based visual-inertial SLAM is visual-inertial bundle adjustment (VI-BA), which takes properly into account the noise properties of all the sensors, and obtains a maximum-a-posteriori joint estimation of all variables (see [5] for a modern formulation using IMU preintegration on manifold from [8]). The main limitation of VI-BA is that it requires a good seed to converge quickly and avoid getting stuck in local minima, due to its strong non-linear nature. Joint [3] and disjoint [5] initialization methods based on least-squares estimation showed that VI-BA largely improves their initial solutions. Our main goal is going one step further and also using MAP estimation in the initialization, making proper use of sensor noise models. Our novel initialization method is based on the following ideas:
• Despite the non-linear nature of BA, monocular SLAM (or visual odometry) is mature and robust enough to obtain very accurate initial solutions for structure and motion, with the only caveat that their estimations are up-to-scale.
• The uncertainty of the visual SLAM trajectory is much smaller than the IMU uncertainties and can be ignored while obtaining a first solution for the IMU variables. So, we perform inertial-only MAP estimation, taking the up-to-scale visual SLAM trajectory as constant.
• Inspired by the work of [9], we adopt a parametrization that explicitly represents and optimizes the scale factor of the monocular SLAM solution.
• Differently from [5] [7], we jointly optimize all the IMU variables in one step, taking into account the cross-covariances between the preintegrated terms for position, and linear and angular velocities [8].
Our initialization method can be split into three steps:
1) Vision-only MAP estimation: Initialize and run monocular ORB-SLAM [10] for a short period (typically 2 s) using BA to obtain a vision-only MAP estimation up-to-scale.
At the same time, compute IMU preintegrations between keyframes and their covariances [8].
2) Inertial-only MAP estimation: Inertial-only optimization to align the IMU trajectory and the ORB-SLAM trajectory, finding the scale, keyframes' velocities, gravity direction and IMU biases.
3) Visual-inertial MAP estimation: Use the solution from the previous step as seed for a full VI-BA to obtain the joint optimal solution.

After the initialization, we launch ORB-SLAM Visual-Inertial [5], which performs local VI-BA. We have observed that scale estimation accuracy can be further improved after 5-10 seconds by performing a full VI-BA or, with much lower computational cost, repeating the inertial-only optimization. The three initialization steps are further detailed next.

A. Vision-only MAP Estimation

We initialize pure monocular SLAM, using the same procedure as in ORB-SLAM to find the initial motion. Matching of FAST points, using the ORB descriptor, is performed between two initial frames. Fundamental matrix and homography models are found and scored. The one with a higher score is used to find the initial motion and triangulate the features. Once the structure and motion are initialized, we do pure monocular SLAM for 1 or 2 seconds. The only difference from ORB-SLAM is that we enforce keyframe insertion at a higher frequency (4 Hz to 10 Hz). In that way, IMU preintegration between keyframes has low uncertainty, since integration times are very short. After this period, we have an up-to-scale map composed of ten keyframes and hundreds of points, that has been optimized using BA by the ORB-SLAM mapping thread. The up-to-scale keyframe poses are transformed to the body (or IMU) reference using visual-inertial calibration. These body poses are denoted as $\bar{T}_{0:k} = [R, \bar{p}]_{0:k}$, where $R_i \in SO(3)$ is the rotation matrix from the i-th body frame to the world reference, and $\bar{p}_i \in \mathbb{R}^3$ is the up-to-scale position of the i-th body.

B.
Inertial-only MAP Estimation

The goal of this step is to obtain an optimal estimation of the inertial parameters, in the sense of MAP estimation, using the up-to-scale trajectory obtained by vision. As we do not have a good guess of the inertial parameters, using at this point a full VI-BA would be too expensive and prone to get stuck in local minima, as shown in the experiments section. An intermediate solution would be to marginalize out the points to obtain a prior for the trajectory and its (fully dense) covariance matrix, and use it while optimizing the IMU parameters. We opt for a more efficient solution, considering the trajectory as fixed, and perform an inertial-only optimization. The inertial parameters to be found are:

$\mathcal{X}_k = \{s, R_{wg}, b, \bar{v}_{0:k}\}$  (1)

where $s \in \mathbb{R}^+$ is the scale factor of the vision-only solution; $R_{wg} \in SO(3)$ is the gravity direction, parameterized by two angles, such that gravity in the world reference frame is expressed as $g = R_{wg}\, g_I$, with $g_I = (0, 0, G)^T$, being $G$ the magnitude of gravity; $b = (b^a, b^g) \in \mathbb{R}^6$ are the accelerometer and gyroscope biases; and $\bar{v}_{0:k} \in \mathbb{R}^3$ are the up-to-scale body velocities from the first to the last keyframe. We prefer to use up-to-scale velocities $\bar{v}_i$, instead of true ones $v_i = s\bar{v}_i$, since it eases the initialization process. Biases are assumed constant for all involved keyframes, since the initialization period is just 1-2 seconds and random walk would have almost no effect. It is worth noting that this formulation takes into account the gravity magnitude from the beginning, as opposed to [7] and [5], which require a separate step to fix its value. In our case, the only measurements used come from the IMU, and are summarized in the IMU preintegrated terms defined in [8]. We denote by $I_{i,j}$ the preintegration of inertial measurements between the i-th and j-th keyframes, and by $I_{0:k}$ the set of IMU preintegrations between successive keyframes in our initialization window.
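The preintegrated terms $I_{i,j}$ can be illustrated with a minimal, bias-free sketch in the spirit of on-manifold preintegration [8]. The function names (`hat`, `exp_so3`, `preintegrate`) are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def hat(w):
    # Skew-symmetric (cross-product) matrix of a 3-vector.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues' formula: exponential map from so(3) to SO(3).
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, acc, dt):
    """Accumulate Delta R, Delta v, Delta p between two keyframes from
    raw gyroscope/accelerometer samples (biases assumed zero here)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, acc):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ exp_so3(w * dt)
    return dR, dv, dp
```

As a quick sanity check, a non-rotating body with constant specific acceleration a over total time T yields ΔR = I, Δv = aT and Δp = ½aT².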
With the state and measurements defined, we can formulate a MAP estimation problem, where the posterior distribution is:

$p(\mathcal{X}_k | I_{0:k}) \propto p(I_{0:k} | \mathcal{X}_k)\, p(\mathcal{X}_k)$  (2)

where $p(I_{0:k} | \mathcal{X}_k)$ is the likelihood distribution of the IMU measurements given the IMU states, and $p(\mathcal{X}_k)$ the prior for the IMU states. Considering independence of measurements, the likelihood can be factorized as:

$p(I_{0:k} | \mathcal{X}_k) = \prod_{i=1}^{k} p(I_{i-1,i} | s, g_{dir}, b, \bar{v}_{i-1}, \bar{v}_i)$  (3)

To obtain the MAP estimator, we need to find the parameters which maximize the posterior distribution, which is equivalent to minimizing its negative logarithm, thus:

$\mathcal{X}_k^* = \arg\max_{\mathcal{X}_k} p(\mathcal{X}_k | I_{0:k}) = \arg\min_{\mathcal{X}_k} \left( -\log p(\mathcal{X}_k) - \sum_{i=1}^{k} \log p(I_{i-1,i} | s, g_{dir}, b, \bar{v}_{i-1}, \bar{v}_i) \right)$  (4)

Assuming Gaussian error for the IMU preintegration and the prior distribution, the MAP problem is equivalent to:

$\mathcal{X}_k^* = \arg\min_{\mathcal{X}_k} \left( \|r_p\|^2_{\Sigma_p} + \sum_{i=1}^{k} \|r_{I_{i-1,i}}\|^2_{\Sigma_{I_{i-1,i}}} \right)$  (5)

where $r_p$ and $r_{I_{i-1,i}}$ are the residuals of the prior and of the IMU measurements between consecutive keyframes, while $\Sigma_p$ and $\Sigma_{I_{i-1,i}}$ are their covariances. In this optimization, vision reprojection errors do not appear, only inertial residuals. As IMU measurements do not suffer from data association errors, the use of a robust cost function, like the Huber norm, does not make sense, since it would slow down the optimization. Following [11] and [8], we define the inertial residual as:

$r_{I_{i,j}} = [r_{\Delta R_{ij}}, r_{\Delta v_{ij}}, r_{\Delta p_{ij}}]$  (6)

$r_{\Delta R_{ij}} = \mathrm{Log}\left( \Delta R_{ij}(b^g)^T R_i^T R_j \right)$  (7)

$r_{\Delta v_{ij}} = R_i^T \left( s\bar{v}_j - s\bar{v}_i - R_{wg} g_I \Delta t_{ij} \right) - \Delta v_{ij}(b^g, b^a)$  (8)

$r_{\Delta p_{ij}} = R_i^T \left( s\bar{p}_j - s\bar{p}_i - s\bar{v}_i \Delta t_{ij} - \frac{1}{2} R_{wg} g_I \Delta t_{ij}^2 \right) - \Delta p_{ij}(b^g, b^a)$  (9)

where $\Delta R_{ij}(b^g)$, $\Delta v_{ij}(b^g, b^a)$ and $\Delta p_{ij}(b^g, b^a)$ are the preintegrated IMU measurements from the i-th to the j-th keyframe, which only depend on the biases. These terms can be linearly updated as explained in [8], avoiding reintegration at each iteration. $\Delta t_{ij}$ is the time between both keyframes.
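As an illustration, the residuals (7)-(9) can be written directly in code. This is a sketch with our own (hypothetical) names, assuming the preintegrated terms are already evaluated at the current bias estimate; `log_so3` is the SO(3) logarithm map:

```python
import numpy as np

def log_so3(R):
    # Logarithm map from SO(3) to R^3 (axis-angle vector).
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def inertial_residuals(Ri, Rj, vbar_i, vbar_j, pbar_i, pbar_j,
                       dR, dv, dp, s, R_wg, g_I, dt):
    """Residuals (7)-(9): rotation, velocity and position terms.
    vbar/pbar are the up-to-scale body velocities/positions."""
    g = R_wg @ g_I                      # gravity in the world frame
    r_dR = log_so3(dR.T @ Ri.T @ Rj)    # eq. (7)
    r_dv = Ri.T @ (s * vbar_j - s * vbar_i - g * dt) - dv            # eq. (8)
    r_dp = Ri.T @ (s * pbar_j - s * pbar_i - s * vbar_i * dt
                   - 0.5 * g * dt ** 2) - dp                         # eq. (9)
    return r_dR, r_dv, r_dp
```

With preintegrated terms generated from noise-free states, all three residuals vanish, which is a useful unit test for any implementation.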
Log stands for the logarithm map from the Lie group SO(3) to its algebra so(3), isomorphic to $\mathbb{R}^3$. Since we assume that biases can be considered constant during the initialization window, the IMU residuals do not include random walk for the biases. We assume that the residuals follow Gaussian distributions, and their covariances can be computed as proposed in [8]. As we are optimizing on a manifold, we need to define a retraction [8] to update the gravity direction estimation during the optimization:

$R_{wg}^{new} = R_{wg}^{old}\, \mathrm{Exp}(\delta\alpha_g, \delta\beta_g, 0)$  (10)

being Exp(.) the exponential map from so(3) to SO(3). To guarantee that the scale factor remains positive during optimization, we define its update as:

$s^{new} = s^{old} \exp(\delta s)$  (11)

Biases and velocities are updated additively. If we define $\delta g_{dir} = (\delta\alpha_g, \delta\beta_g)$, the inertial parameter updates used during optimization are $(\delta s, \delta g_{dir}, \delta b^g, \delta b^a, \{\delta \bar{v}_i\})$. Derivatives of the IMU residuals w.r.t. these parameters can be found in the appendix. The final optimization problem, represented in figure 1, is implemented and solved using the g2o C++ library [12], using analytic derivatives and the Levenberg-Marquardt algorithm.

Fig. 1: Underlying graph representation of the inertial-only optimization (left) and the first visual-inertial Bundle Adjustment (right). Yellow boxes represent IMU residuals, red boxes stand for reprojection errors, while the purple one represents prior information for the accelerometer bias. Dashed lines point out fixed variables (keyframe poses for the inertial-only optimization).

As is well known in the literature, gravity and accelerometer bias tend to be coupled, being difficult to distinguish in most cases. To avoid that problem, some techniques neglect the accelerometer bias during the initialization, assuming a zero value [7], while others wait for a long time to guarantee that it is observable [5].
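The two manifold updates (10) and (11) can be sketched as follows (the helper names are ours; Exp is the SO(3) exponential map):

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix of a 3-vector.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues' formula: exponential map from so(3) to SO(3).
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def update_gravity_dir(R_wg, d_alpha, d_beta):
    # Eq. (10): retraction with only two degrees of freedom, since a
    # rotation about the gravity axis itself is unobservable.
    return R_wg @ exp_so3(np.array([d_alpha, d_beta, 0.0]))

def update_scale(s, ds):
    # Eq. (11): multiplicative update keeps the scale strictly positive.
    return s * np.exp(ds)
```

Even a large negative step, e.g. ds = -10, leaves the scale positive, while the retraction keeps R_wg a valid rotation matrix.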
Here we adopt a sound and pragmatic approach: we include $b^a$ as a parameter to be optimized, but add a prior residual for it: $r_p = \|b^a\|^2_{\Sigma_p}$. If the motion performed does not contain enough information to estimate the bias, the prior will keep its estimation close to zero. If the motion makes $b^a$ observable, its estimation will converge towards its true value. A prior for $b^g$ is not needed, as it is always well observable from keyframe orientations and gyroscope readings.

Since we have to solve a non-linear optimization problem, we need an initial guess for the inertial parameters. Hence, we initialize the biases to zero, while the gravity direction is initialized along the average of the accelerometer measurements, as accelerations are usually much smaller than gravity. The scale factor needs to be initialized sufficiently close to its true value to guarantee convergence, but we do not have any initial guess. Taking advantage of our very efficient inertial-only optimization (5 ms), we launch the optimization with three initial scale values, corresponding to median scene depths of 1, 4 and 16 meters, keeping the solution that provides the lowest residual as defined in equation 5. Our results show that, using this range of scale values, our method is able to converge in a wide variety of scenes. At the end of the optimization, the frame poses and velocities and the 3D map points are scaled with the scale value found, and are rotated to align the z axis with the estimated gravity direction. IMU preintegration is repeated with the new bias estimations, aiming to reduce future linearization errors.

C. Visual-Inertial MAP Estimation

Inertial-only optimization provides an estimation accurate enough to be used as seed for a first joint visual-inertial Bundle Adjustment, ensuring its convergence. In this optimization, shown also in figure 1, pure inertial parameters like $g_{dir}$ and $s$ do not appear, but they are implicitly included in the keyframe poses.
Compared with [3], this step replaces the BA1&2 steps. In fact, the optimization is exactly the same; it only differs in the initial seed, which previously was computed by solving a linear system, and now is computed by means of a MAP estimator. A similar optimization is also done in the VINS-Mono initialization, before launching VI odometry. In the literature, there are several proposed tests to determine whether an initialization is successful or not. In [3], observability of the optimization problem and consensus between different sets of measurements are checked. In contrast, VINS-Mono checks that the estimated gravity magnitude has an error lower than 10%, and that IMU readings have enough variance. Here, we propose to discard initializations whose mean acceleration is below some threshold (0.5% of gravity). This discards only the worst attempts, with almost constant velocity, which are not observable [1]. We remark that all initialization steps are performed in a parallel thread, without having any effect on the real-time tracking thread. Once the optimization is finished, the system is already initialized, and we switch from visual to visual-inertial SLAM.

III. EXPERIMENTAL RESULTS

To analyze the capability to initialize under different sensor trajectories, we run an exhaustive initialization test. We launch an initialization every 0.5 seconds (one out of 10 frames) in every trajectory of the EuRoC dataset, which results in testing 2248 different initialization trajectories. For comparison, we run the same exhaustive test with the joint initialization method of [3] and the loosely coupled initialization of VINS-Mono [7], using the software provided by the authors. As a baseline, we also try to initialize using only visual-inertial bundle adjustment with the same initial guesses for gravity direction, velocities and biases, and the same three initial values for scale, keeping the solution with the smallest residual.
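The three-seed strategy for the scale factor (median scene depths of 1, 4 and 16 m, keeping the lowest-residual solution) can be sketched as a simple multi-start loop. Here `cost` stands in for the inertial residual of equation 5, and the descent routine is a toy placeholder, not the Levenberg-Marquardt solver the authors use:

```python
import math

def multi_start_scale(cost, seeds=(1.0, 4.0, 16.0), steps=200, lr=0.05):
    """Run a simple descent from several scale seeds and keep the
    solution with the lowest residual (cf. equation 5)."""
    best_s, best_c = None, float("inf")
    for s0 in seeds:
        x = math.log(s0)          # optimize log-scale so s stays positive
        for _ in range(steps):
            eps = 1e-6
            # Numerical derivative of cost w.r.t. the log-scale x.
            g = (cost(math.exp(x + eps)) - cost(math.exp(x - eps))) / (2 * eps)
            x -= lr * g
        s = math.exp(x)
        c = cost(s)
        if c < best_c:
            best_s, best_c = s, c
    return best_s, best_c
```

Optimizing the logarithm of the scale mirrors the multiplicative update of equation (11): the estimate can never leave the positive half-line, whatever step the optimizer takes.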
The performance is measured in terms of the scale error before and after applying full VI-BA. To measure the scale factor, and thus the scale error, we align the initialization and ground-truth trajectories using Horn alignment, such that for an instant $t$, the estimated $\hat{p}(t)$ and ground-truth $p_{GT}(t)$ trajectories are related by:

$\hat{p}(t) = T \oplus p_{GT}(t), \quad T \in \mathrm{Sim}(3)$  (12)

We also report the duration of the initialization trajectory, denoted as $t_{Init}$, as well as the total time, $t_{Tot}$, until a successful initialization is achieved. For all methods, if a bad initialization is detected, a new one is attempted with the next batch of available data. This, together with the time needed for visual initialization, makes $t_{Tot} \geq t_{Init}$.

Results are summarized in table I. The first two blocks compare our method with our previous joint initialization method [3], based on the work of Martinelli [1] and Kaiser et al. [2], improved by VI-BA and two rejection tests. The proposed initialization beats the joint initialization by a wide margin, both in accuracy and in needed time, being able to initialize in less than 4 seconds with a scale error of 5.29%, using trajectories of 2.16 seconds on average.

TABLE I: Results of exhaustive initialization attempts every 0.5 s in the EuRoC dataset. The first two blocks compare our proposal with the best joint initialization method [3] using trajectories of ∼2 seconds ($t_{Init}$), while the last two blocks compare it with the loosely-coupled initialization of VINS-Mono [7] using trajectories of ∼1.3 seconds. (Row blocks: Joint Initialization [3]; Inertial-only Optimization (10 KFs @ 4 Hz); VI BA; VINS-Mono Initialization [7]; Inertial-only Optimization.)

The method of [3] was able to obtain a scale error only slightly worse, but at the expense of a $t_{Tot}$ of 13 seconds, owing to the high rejection rate of the proposed tests. The baseline VI-BA initialization, using the same trajectories and initial guesses, obtains an average scale error of 13.82%, which is even higher than the 11.69% error obtained by just applying our inertial-only optimization, which is also much more efficient (5 ms per run, compared with 133 ms, as shown in table II).

The last blocks compare our method with the loosely-coupled initialization of VINS-Mono [7]. To ease comparison, we have configured our system to run with similar-sized trajectories ($t_{Init}$) of around 1.25 seconds. With these shorter trajectories, our method beats the baseline VI-BA initialization, doubling its accuracy, and more than doubles the accuracy of the VINS-Mono initialization, with a $t_{Tot}$ 0.31 seconds higher. This slightly higher $t_{Tot}$ is the result of the visual initialization used in our system, the one from ORB-SLAM, which in difficult sequences can struggle to succeed. We remark that reducing the scale error by using longer initialization trajectories, i.e. increasing $t_{Init}$, may not be easy for VINS-Mono. Since this system is not prepared to work as a pure visual odometry system, visual and inertial initializations have to be solved simultaneously for the same set of frames, and increasing the time for inertial initialization would also increase the visual initialization time. This entails that points have to be tracked along more frames, which may not be feasible in case of camera rotation or fast motion. For VINS-Mono, there is a sharp contrast between the 22.05% scale error found in our exhaustive initialization tests and the low RMS ATE error reported in [7] (in the range of 0.080-0.320 m) when the whole trajectories are processed.
This may be explained because, when launched from the beginning, the initialization is performed while the drone is taking off, which entails big accelerations, making the inertial parameters more observable, while in our experiment, initialization is performed along the whole sequence, where other motions that give lower observability are present. Moreover, since VINS-Mono marginalizes old states, not fixing them as ORB-SLAM VI does, this initial error can be further reduced as the drone progresses.

In figure 2, we plot the scale factor distribution for every studied method, along the whole EuRoC dataset. Results before visual-inertial BA show that all methods tend to underestimate the true scale. This bias is worse in the VINS-Mono initialization, where there is a high number of initial solutions whose scale is close to zero. In contrast, the bias is much lower for inertial-only optimization at 4 Hz, which uses 2.15 s trajectories, whose mean is close to one. After visual-inertial BA, the bias almost disappears, with all methods having a distribution with mean close to one, but with different variances, VINS-Mono with 1 s trajectories being the worst and our inertial-only optimization with 2 s trajectories the best.

Finally, considering the computing times in table II, inertial-only optimization is extremely efficient, taking around 5 ms per run; rotating and scaling points, frames and velocities takes 11 ms, and full VI-BA requires 132 ms. The inertial-only optimization time is much lower than the time required by the Martinelli-Kaiser closed-form solution, which is around 60 ms [3]. Compared with the baseline VI-BA, which requires three runs with different scales for a total of 400 ms, our complete method only takes 160 ms, and doubles the scale accuracy.

Once verified that our inertial-only optimization performs better than previous initialization methods, we have made a second experiment, which consists in launching ORB-SLAM Visual-Inertial [5] using our new initialization.
As in [3], we perform two visual-inertial bundle adjustments, 5 and 10 seconds after initialization. We check three different sequences of the EuRoC dataset, with different degrees of difficulty. Results in Table III show that ORB-SLAM-VI reaches, in sequences V1_01 and V1_02, similar accuracy levels using the proposed initialization and the original initialization from [5]. In addition, sequence V1_03, which previously could not be processed because the original initialization failed, can now be successfully processed. This is because the new initialization takes just 2 seconds, making it possible to use the IMU immediately, avoiding tracking loss during subsequent fast motions. Our results show that the combination of our initialization method with ORB-SLAM-VI gives a very robust system that is significantly more accurate than VINS-Mono.

IV. CONCLUSIONS AND FUTURE WORK

The proposed initialization method has been shown to be more accurate than the top-performing methods in the literature, with a very low computing time. This confirms that optimal estimation theory is able to make proper use of the probabilistic models of the sensor noises, obtaining more accurate results than solving linear systems of equations or using non-weighted least squares. Full visual-inertial BA is a very non-linear problem, plagued with local minima, which hinders convergence. We have split it into a fully observable up-to-scale visual problem, followed by an inertial-only optimization phase that can be solved very efficiently, producing an initial solution for VI-BA that alleviates the local-minima problem. As future work, we highlight that this inertial-only optimization could be used not only for initialization, but also to refine the scale and other inertial parameters once SLAM is initialized and running. This would have a much lower computational cost than performing a full visual-inertial bundle adjustment, where all visual and inertial parameters are involved.
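The split defended in these conclusions can be condensed into the following pseudocode sketch; every function name here is a placeholder for the corresponding component described in the paper, not an actual API:

```python
def initialize_vi_slam(frames, imu_readings):
    # 1) Fully observable up-to-scale visual problem: run pure
    #    monocular visual SLAM (ORB-SLAM in this paper) for ~2 s.
    poses_up_to_scale, points = run_visual_only_slam(frames)

    # 2) Inertial-only MAP optimization (~5 ms): estimate the scale s,
    #    gravity direction R_wg, IMU biases and keyframe velocities,
    #    keeping the up-to-scale poses fixed.
    s, R_wg, biases, velocities = inertial_only_map(
        poses_up_to_scale, imu_readings)

    # 3) Rotate and scale the map with the estimated s and R_wg (~11 ms).
    poses, points = apply_scale_and_gravity(
        poses_up_to_scale, points, s, R_wg)

    # 4) Full visual-inertial BA (~133 ms), now started near the
    #    optimum, which alleviates the local-minima problem.
    return full_visual_inertial_ba(
        poses, points, velocities, biases, imu_readings)
```

The timing annotations correspond to the per-step figures reported in Table II.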
Moreover, this new initialization can be easily adapted to the stereo-inertial case: it would be enough to remove the scale from the inertial-only optimization.

APPENDIX

Derivatives w.r.t. $\delta b_g$, $\delta b_a$, $\delta v_i$ and $\delta v_j$ are found in, or immediately derived from, [8]. Derivatives for $\delta s$ are:
$$\frac{\partial r_{\Delta R_{ij}}}{\partial \delta s} = 0_{3\times 1} \qquad (13)$$
$$\frac{\partial r_{\Delta v_{ij}}}{\partial \delta s} = R_i^T (v_j - v_i)\, s \exp(\delta s) \qquad (14)$$
$$\frac{\partial r_{\Delta p_{ij}}}{\partial \delta s} = R_i^T \left( p_j - p_i - v_i \Delta t_{ij} \right) s \exp(\delta s) \qquad (15)$$
All these expressions are evaluated at $\delta s = 0$. Derivatives for $\delta g_{\mathrm{dir}}$ are:
$$\frac{\partial r_{\Delta R_{ij}}}{\partial \delta g_{\mathrm{dir}}} = 0_{3\times 2} \qquad (16)$$
$$\frac{\partial r_{\Delta v_{ij}}}{\partial \delta g_{\mathrm{dir}}} = -R_i^T R_{wg} \hat{G}\, \Delta t_{ij} \qquad (17)$$
$$\frac{\partial r_{\Delta p_{ij}}}{\partial \delta g_{\mathrm{dir}}} = -\frac12 R_i^T R_{wg} \hat{G}\, \Delta t_{ij}^2 \qquad (18)$$
where, with $G$ the gravity magnitude,
$$\hat{G} = \begin{pmatrix} 0 & -G \\ G & 0 \\ 0 & 0 \end{pmatrix}. \qquad (19)$$

Fig. 2: Experimental distribution of the scale factor (ratio between estimated and true scales) obtained by the different initialization methods along all sequences of the EuRoC dataset, before and after visual-inertial BA. A total of 2248 initializations have been launched.

TABLE II: Computing time of our method for the exhaustive initialization experiment in sequence V1_02.

Step               | mean (ms) | median (ms) | max (ms)
Inertial-Only      | 3×5.24    | 3×4.92      | 3×6.39
Map update         | 11.18     | 11.25       | 13.98
Visual-Inertial BA | 132.78    | 136.43      | 198.07
Total              | 159.68    | 163.92      | 228.24

TABLE III: Results for ORB-SLAM-VI with the proposed initialization (median values over five executions are shown), compared with results reported for the original ORB-SLAM-VI [5] and for VINS-Mono in [13]. Both SLAM and GT trajectories are aligned, and the Absolute Trajectory Error (ATE) is measured.

          | ORB-SLAM-VI + our initialization | ORB-SLAM-VI [5]             | VINS-Mono [13]
Seq. Name | Scale error (%) | RMSE ATE (m)   | Scale error (%) | RMSE ATE (m) | RMSE ATE (m)
V1_01     | 0.4             | 0.023          | 0.9             | 0.027        | 0.060
V1_02     | 0.3             | 0.026          | 0.8             | 0.024        | 0.090
V1_03     | 1.7             | 0.059          | -               | -            | 0.180

REFERENCES

[1] A. Martinelli, "Closed-form solution of visual-inertial structure from motion," International Journal of Computer Vision, vol. 106, no. 2, pp. 138-152, 2014.
[2] J. Kaiser, A. Martinelli, F. Fontana, and D. Scaramuzza, "Simultaneous state initialization and gyroscope bias calibration in visual inertial aided navigation," IEEE Robotics and Automation Letters, vol. 2, no. 1, pp. 18-25, 2017.
[3] C. Campos, J. M. M. Montiel, and J. D. Tardós, "Fast and robust initialization for visual-inertial SLAM," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 1288-1294.
[4] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, "The EuRoC micro aerial vehicle datasets," The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157-1163, 2016.
[5] R. Mur-Artal and J. D. Tardós, "Visual-inertial monocular SLAM with map reuse," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 796-803, 2017.
[6] T. Qin and S. Shen, "Robust initialization of monocular visual-inertial estimation on aerial robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 4225-4232.
[7] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004-1020, 2018.
[8] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, "IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation," in Robotics: Science and Systems, 2015.
[9] H. Strasdat, J. M. M. Montiel, and A. J. Davison, "Scale drift-aware large scale monocular SLAM," Robotics: Science and Systems VI, vol. 2, 2010.
[10] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015.
[11] T. Lupton and S. Sukkarieh, "Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions," IEEE Transactions on Robotics, vol. 28, no. 1, pp. 61-76, 2012.
[12] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 3607-3613.
[13] T. Qin, J. Pan, S. Cao, and S. Shen, "A general optimization-based framework for local odometry estimation with multiple sensors," arXiv preprint arXiv:1901.03638, 2019.
DISTRIBUTION OF MASS OF HOLOMORPHIC CUSP FORMS

Valentin Blomer, Rizwanur Khan, Matthew Young

arXiv:1203.2573 · doi:10.1215/00127094-2380967

Abstract. We prove an upper bound for the $L^4$-norm and for the $L^2$-norm restricted to the vertical geodesic of a holomorphic Hecke cusp form $f$ of large weight $k$. The method is based on Watson's formula and estimating a mean value of certain $L$-functions of degree 6. Further applications to restriction problems of Siegel modular forms and subconvexity bounds of degree 8 $L$-functions are given.

2010 Mathematics Subject Classification. 11F11, 11F66.
22 Feb 2013

1. Introduction

Suppose $f \in S_k$ is an $L^2$-normalized cuspidal Hecke eigenform of even weight $k$ for the modular group $\Gamma = \mathrm{SL}_2(\mathbb{Z})$. A basic question is to understand the size of $f$ and the distribution of its mass as $k$ becomes large; more precisely, we consider $F(z) = y^{k/2} f(z)$, since $|F(z)|$ is $\Gamma$-invariant. This can be made quantitative in various ways, e.g. by bounding the $L^p$-norm of $F$ for $2 < p \le \infty$. A first guess might be that the mass of $F$ should be nicely distributed on $\Gamma\backslash\mathbb{H}$, so that $F$ has no essential peaks. Indeed, the mass equidistribution conjecture, proved in [HSo], tells us that the measure $|F(z)|^2\,dx\,dy/y^2$ tends to the uniform measure $(3/\pi)\,dx\,dy/y^2$ (in the sense of integration against continuous and compactly supported test functions) as $F$ runs through a sequence of cuspidal Hecke eigenforms with weight $k$ tending to infinity. A closer look, however, reveals that $F$ takes large values high in the cusp at $y = k/(4\pi)$, and for $p = \infty$ we have the essentially best-possible result
$$\|F\|_\infty = k^{\frac14 + o(1)}, \qquad (1.1)$$
see [Xi], which uses Deligne's bound. A variant of this argument shows
$$\|F\|_p \gg k^{\frac14 - \frac{3}{2p} - \varepsilon} \qquad (1.2)$$
(which is non-trivial only for $p > 6$), and in the opposite direction we have the interpolation (convexity) bound
$$\|F\|_p \le \|F\|_2^{2/p}\, \|F\|_\infty^{1 - \frac{2}{p}} \ll k^{\frac14 - \frac{1}{2p} + \varepsilon}. \qquad (1.3)$$
We will give a quick proof of (1.2) in Section 3.
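Since (1.3) is used repeatedly, it may help to record the one-line interpolation behind it (a standard fact, not specific to this paper):

```latex
\|F\|_p^p
  = \int_{\Gamma\backslash\mathbb{H}} |F|^{p-2}\,|F|^2\,\frac{dx\,dy}{y^2}
  \le \|F\|_\infty^{\,p-2}\,\|F\|_2^2,
\qquad\text{hence}\qquad
\|F\|_p \le \|F\|_2^{2/p}\,\|F\|_\infty^{1-\frac{2}{p}}
  \ll k^{\frac14-\frac{1}{2p}+\varepsilon}
```

by (1.1) and $\|F\|_2 \ll 1$.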
In this article we are interested in the $L^4$-norm of $F$ and its connection to $L$-functions. In this case (1.3) becomes $\|F\|_4^4 \ll k^{1/2+\varepsilon}$, and nothing better has been known so far. Our first result constitutes an improvement over this trivial bound.

Theorem 1.1. We have $\|F\|_4^4 \ll k^{1/3+\varepsilon}$.

Theorem 1.1 shows that the measure of the set where $F$ satisfies (1.1) is small. One also immediately obtains an improvement on (1.3) for all $2 < p < \infty$ by interpolation, namely,
$$\|F\|_p \ll \begin{cases} k^{\frac23\left(\frac14 - \frac{1}{2p}\right)+\varepsilon}, & \text{if } 2 \le p \le 4,\\[2pt] k^{\frac14 - \frac{2}{3p}+\varepsilon}, & \text{if } 4 \le p < \infty.\end{cases}$$
One may speculate on what is the true size of the $L^4$-norm.

Conjecture 1.2. On the basis of the conjectures in [CFKRS], one has, with the normalization
$$\int_{\Gamma\backslash\mathbb{H}} |f(z)|^2 y^k \, \frac{3}{\pi}\frac{dx\,dy}{y^2} = 1, \qquad (1.4)$$
that as $k \to \infty$,
$$\int_{\Gamma\backslash\mathbb{H}} |f(z)|^4 y^{2k} \, \frac{3}{\pi}\frac{dx\,dy}{y^2} = 2 + o(1). \qquad (1.5)$$

Note that with the normalization (1.4), Cauchy-Schwarz implies $\|F\|_4 \ge 1$. We remark on the side that for an $\infty$-old form $F$ of weight $k$, i.e. the ($L^2$-normalized) iterated Maaß lift $K_{k-2} \cdots K_2 K_0 f$ of a fixed weight 0 cusp form $f$, Bernstein and Reznikov [BR, Section 2.6] have shown the unconditional bound $\|F\|_4 = O(1)$, of almost the same strength as (1.5), at least for (fixed) co-compact lattices, and together with [R2, Theorem A] the same bound should hold for (fixed) congruence subgroups¹. At first sight, the numerical value in (1.5) is surprising in light of the following variation.

Conjecture 1.3. Suppose $\phi$ is a Hecke-Maaß form for the full modular group with spectral parameter $T$. On the basis of the conjectures in [CFKRS], one has, with the normalization
$$\int_{\Gamma\backslash\mathbb{H}} |\phi(z)|^2 \, \frac{3}{\pi}\frac{dx\,dy}{y^2} = 1,$$
that as $T \to \infty$,
$$\int_{\Gamma\backslash\mathbb{H}} |\phi(z)|^4 \, \frac{3}{\pi}\frac{dx\,dy}{y^2} = 3 + o(1). \qquad (1.6)$$

Conjecture 1.3 has been folklore for a while, see e.g. [KR, p. 989] and the discussion in [Sa1, §4]. Since the fourth moment of a normalized Gaussian random variable is 3, it is consistent with the random wave model of M. Berry [Be], and some numerical evidence is given, for instance, in [HR, HSt].
Based on the usual analogy between large-weight holomorphic cusp forms and Maaß forms, one might have expected the answer of 3 in both conjectures, but as P. Sarnak pointed out to us, Conjecture 1.2 indicates that $f(z) y^{k/2}$ is modelled by a complex Gaussian, for which the normalized fourth moment is 2. One should keep in mind, however, that by (1.2) this analogy certainly ends at the eighth moment, which is no longer bounded. Although (1.5) and (1.6) look very pleasant using probability measure, we nevertheless follow the usual convention in the literature and use $\frac{dx\,dy}{y^2}$, since this aids us in quoting results. One may ask the question of bounding $L^4$-norms in terms of other parameters of automorphic forms. Sarnak and Watson [Sa1, Theorem 3] can show the bound $\|f\|_4 \ll \lambda^\varepsilon$ for a weight 0 Hecke-Maaß cusp form of large eigenvalue $\lambda$, possibly assuming the Ramanujan-Petersson conjecture (see also [Lu2]). For Eisenstein series restricted to fixed compact regions within $\Gamma\backslash\mathbb{H}$ this has been shown by Spinu [Sp]. In the level aspect, a best-possible result on average has been proved in [Bl1]. All these results have Watson's formula [Wa2] as a starting point, which translates the $L^4$-norm into a mean value of certain triple product $L$-functions of degree 8, but they are of very different levels of difficulty. The present case of the weight aspect is the hardest in terms of the size of the conductors of the relevant $L$-functions. Here Watson's formula gives roughly
$$\|F\|_4^4 \approx \frac{1}{k} \sum_{g \in B_{2k}} L(1/2, f \times f \times g), \qquad (1.7)$$
where here and henceforth $B_k$ denotes a Hecke basis of $S_k$. This is a family of about $k$ $L$-functions having conductors of size about $k^6$. The Lindelöf hypothesis would imply $\|F\|_4^4 \ll k^\varepsilon$, but unconditionally a bound of this strength seems to be completely out of reach by present technology.
Using the factorization
$$L(1/2, f \times f \times g) = L(1/2, \mathrm{sym}^2 f \times g)\, L(1/2, g) \qquad (1.8)$$
and the non-negativity of central $L$-values [KZ, La], one can estimate the second factor individually by $k^{1/3+\varepsilon}$, the best known subconvexity bound for this degree 2 $L$-function [Pe], and one is left with an average of degree 6 $L$-functions of conductor $k^4$ in a family of size $k$. Here we are in a position to obtain a best-possible upper bound ("Lindelöf on average"), which is of independent interest. The following result is slightly more general than needed for our applications.

Theorem 1.4. Fix a constant $c > 0$. For $f \in B_k$ and $|\kappa - k| \le c$ we have
$$\frac{12}{2k-1} \sum_{g \in B_{2\kappa}} \frac{L(1/2, \mathrm{sym}^2 f \times g)}{L(1, \mathrm{sym}^2 g)} \ll k^\varepsilon.$$
The implicit constant depends only on $\varepsilon$ and $c$.

This is the main "workhorse" result of the paper that is used in the course of proving Theorems 1.1, 1.6, and 1.7. We will only need the cases $\kappa = k$ even and $\kappa = k-1$ odd, which come up naturally in our period formulae (2.7) and (1.10) below, but the argument works in greater generality as long as $k$ and $\kappa$ are sufficiently close (see below for a more detailed discussion). We note that Theorem 1.4 is trivial in the case $\kappa \ge k$, $\kappa$ odd, and in the case $k < \kappa$, $\kappa$ even, since in these cases the root number of $L(s, \mathrm{sym}^2 f \times g)$ is $-1$. The factorization (1.8), together with a subconvexity bound for $L(1/2, g)$, trivially gives a subconvexity bound for the degree 8 $L$-function on the left-hand side of (1.8). Based on Theorem 1.4 we can get a subconvexity bound for a degree 8 $L$-function in a much less obvious situation. This seems to be the first instance of subconvexity for a triple product $L$-function with three varying factors.

Corollary 1.5. Let $k, l$ be two even positive integers and let $f \in B_k$, $h \in B_l$ be two Hecke eigenforms. Then
$$\frac{12}{2(k+l)-1} \sum_{g \in B_{k+l}} \frac{L(1/2, f \times g \times h)}{L(1, \mathrm{sym}^2 g)} \ll (kl)^{1/6+\varepsilon}.$$
In particular, $L(1/2, f \times g \times h) \ll (k+l)(kl)^{1/6+\varepsilon}$ for each $g \in B_{k+l}$.
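The range in which Corollary 1.5 beats convexity can be checked directly; writing $l = k^\theta$ with $\theta > 0$ and comparing with the convexity bound $((k+l)kl)^{1/2}$:

```latex
(k+l)(kl)^{1/6} < \big((k+l)kl\big)^{1/2}
\iff (k+l)^{1/2} < (kl)^{1/3}
\iff \tfrac12\max(1,\theta) < \tfrac{1+\theta}{3}
\iff \tfrac12 < \theta < 2,
```

i.e. the bound is subconvex precisely for $k^{1/2+\delta} \le l \le k^{2-\delta}$, as stated below.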
The convexity bound in this situation is $((k+l)kl)^{1/2}$, so Corollary 1.5 gives subconvexity in the range $k^{1/2+\delta} \le l \le k^{2-\delta}$. The proof of Theorem 1.4 is based on a careful study of the integral kernel in the GL(3) Voronoi summation formula. It turns out that we roughly need to sum
$$\sum_{n \asymp k^2} \lambda_f(n^2)\, J_{2\kappa-1}(\sqrt{n}), \qquad (1.9)$$
where here and henceforth $\lambda_f$ denotes the Hecke eigenvalues of $f$. The Bessel function comes from Petersson's formula applied to the sum over $g \in B_{2\kappa}$. The key observation is that large parts of the Voronoi kernel are essentially cancelled by the Mellin transform of the Bessel function, and hence the seemingly complicated expression (1.9), with the Bessel function in the transitional region, becomes treatable, cf. Lemma 5.1. It is at this point that we need $k \approx \kappa$ in Theorem 1.4. A somewhat similar phenomenon was (implicitly) the key to success in X. Li's work [Li]. The endgame of the proof features a stationary phase argument. For the purpose of this paper we could get by with an ad hoc argument, but a uniform analysis of oscillating integrals is a recurring theme in analytic number theory, and we felt that a general result in this direction may be welcome in many other situations. We give a weighted stationary phase lemma in Proposition 8.2 below. It gives an asymptotic expansion with arbitrary precision, and it is also applicable in situations with several stationary points that move against each other, or in the case of mildly oscillating weight functions. Theorem 1.4 can be used in many situations, and we proceed to give two applications connected with norms of automorphic forms restricted to certain submanifolds. For a holomorphic cuspidal Hecke eigenform $g \in S_{2k}$ with $k$ odd, let $F_g \in S_{k+1}(\mathrm{Sp}_4(\mathbb{Z}))$ be its Saito-Kurokawa lift (see [EZ]).
Then $F_g$ restricted to the diagonal is a modular form on $(\Gamma\backslash\mathbb{H}) \times (\Gamma\backslash\mathbb{H})$, and we denote by $N(F_g)$ the (square of the) $L^2$-norm of this restricted function, when both $\Gamma\backslash\mathbb{H}$ and $\mathrm{Sp}_4(\mathbb{Z})\backslash\mathbb{H}_2$ are equipped with probability measures. Ichino's formula [Ic] implies
$$N(F_g) = \frac{\pi^2}{15\, L(3/2, g)\, L(1, \mathrm{sym}^2 g)} \cdot \frac{12}{k} \sum_{f \in B_{k+1}} L\!\left(\tfrac12, \mathrm{sym}^2 f \times g\right). \qquad (1.10)$$
It was conjectured in [LY] that $N(F_g) \sim 2$ as $k \to \infty$, and this conjecture was shown on average over both $g \in B_{2k}$ and $K \le k \le 2K$. Here we show that the expected asymptotic formula holds for a much smaller average, only over $g \in B_{2k}$.

Theorem 1.6. We have
$$\frac{12}{2k-1} \sum_{g \in B_{2k}} N(F_g) = 2 + O(k^{-\eta})$$
for some $\eta > 0$.

Dropping all but one term gives the bound $N(F_g) \ll k$, which is slightly better than the strongest individual bound obtained in [LY]. With an amplifier one might even get a small power saving, but we did not investigate this. In this context Theorem 1.4 has a geometric interpretation: the projection of the diagonally restricted $F_g$ onto any $f \times f$ with $f \in B_{k+1}$ is essentially bounded on average over lifts $F_g$. It would be very interesting to prove a lower bound in Theorem 1.4, since this would show that, for a given $f$, it is not the case that the projection of $F_g$ onto $f \times f$ is zero for all $g$. As another application, we let $f \in B_k$ be an $L^2$-normalized cuspidal Hecke eigenform and write as before $F(z) = f(z) y^{k/2}$. We consider the restriction of $F$ to the distinguished vertical geodesic of infinite length:
$$I := \int_0^\infty |F(iy)|^2 \, \frac{dy}{y} = \int_0^\infty f(iy)^2\, y^k \, \frac{dy}{y}. \qquad (1.11)$$
It follows easily from Parseval that this integral can be expressed in terms of $L$-functions (this is a classical observation of Hecke; a quick derivation is given in Section 7):
$$I = \int_{-\infty}^\infty \frac{2^{k-2} |\Gamma(\frac{k}{2} + it)|^2}{\Gamma(k)} \cdot \frac{|L(1/2 + it, f)|^2}{L(1, \mathrm{sym}^2 f)}\, dt \sim \left(\frac{\pi}{2k}\right)^{1/2} \int_{-\infty}^\infty e^{-2t^2/k}\, \frac{|L(1/2 + it, f)|^2}{L(1, \mathrm{sym}^2 f)}\, dt. \qquad (1.12)$$
One can show in various ways that $I \ll k^{1/2+\varepsilon}$, either by using (1.1) or alternatively by a mean value theorem for Dirichlet polynomials, while the Lindelöf hypothesis would predict that $I \ll k^\varepsilon$. The situation is once again in sharp contrast to the non-holomorphic case: the mean value theorem argument applied to $J := \int_0^\infty |\phi(iy)|^2\, dy/y$, for an $L^2$-normalized Hecke-Maaß cusp form $\phi$ with large Laplace eigenvalue $1/4 + T^2$, immediately shows the essentially best-possible bound $J \ll T^\varepsilon$, see [Sa2, p. 6]. We will conclude from Theorem 1.4 the following improvement on the trivial bound in the holomorphic case.

Theorem 1.7. We have $I \ll k^{1/4+\varepsilon}$.

Theorem 1.7 shows that the measure of the set of $y > 0$ where $F(iy)$ satisfies (1.1) is small. Observe that the optimal bound $I \ll k^\varepsilon$ would give, in light of (1.12), an extremely strong subconvexity result, but even Theorem 1.7 in its present form implies an interesting ("Burgess-type") hybrid subconvexity bound. Our approach to proving Theorem 1.7 easily shows the (weaker) result that $I^{1/2} \ll k^{1/8+\varepsilon}\, \|F\|_4$, which is reminiscent of a result of Bourgain [B] comparing the restricted $L^2$-norms along geodesics and the $L^4$-norm of Laplace eigenfunctions on a compact Riemannian manifold. As a by-product of the calculations in Section 3 we will also show the lower bound $I \gg k^{-\varepsilon}$, see Corollary 3.1. As far as we know, this is the first nontrivial geodesic restriction result for holomorphic forms of large weight. Reznikov [R1] initiated a study of restricted $L^2$-norms along various curves for Maaß forms with large eigenvalue. Sarnak [Sa2] mentions that the restricted $L^2$-norm of $F(z)$ along a fixed closed horocycle is $O(k^\varepsilon)$; this horocycle case amounts to bounding the sum of squares of Hecke eigenvalues of $f$ of size $\approx k$, but in a short interval of length $\sqrt{k}$. This is very different from the analysis of $I$.
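The trivial bound $I \ll k^{1/2+\varepsilon}$ invoked above can be traced through (1.12): the Gaussian weight localizes $t$ to $|t| \lesssim \sqrt{k}\log k$, and the mean value theorem for Dirichlet polynomials (the approximate functional equation gives a polynomial of length about $k$) yields

```latex
I \ll k^{-1/2}\int_{|t|\le\sqrt{k}\log k}
      \frac{|L(\tfrac12+it,f)|^2}{L(1,\operatorname{sym}^2 f)}\,dt + O(k^{-100})
  \ll k^{-1/2}\,\big(\sqrt{k}+k\big)\,k^{\varepsilon}
  \ll k^{1/2+\varepsilon},
```

using $L(1,\operatorname{sym}^2 f) \gg k^{-\varepsilon}$. This is a sketch of one of the "various ways" just mentioned, not the argument the paper ultimately uses.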
Our approach to bounding $I$ is specific to the vertical geodesic, because in (1.11) we use the realness of $f$ on the geodesic, which is exploited in (7.2). Returning to the situation of Theorem 1.1, we finally mention that rather than decomposing $f(z)^2$ into a Hecke basis of holomorphic forms, one could instead use a spectral decomposition of $y^k |f(z)|^2$. In place of (1.7), we instead arrive at a mean value of the shape
$$\|F\|_4^4 \approx k^{-1} \sum_{t_j \ll \sqrt{k}} L(1/2, f \times f \times u_j) + k^{-1} \int_{t \ll \sqrt{k}} |L(1/2 + it, f \times f)|^2 \, dt, \qquad (1.13)$$
where only the even Maaß forms occur in the sum. The conductor of this degree 8 $L$-function is $t_j^4 k^4$, and it factorizes as $L(1/2, \mathrm{sym}^2 f \times u_j)\, L(1/2, u_j)$. In this case the degree 2 factor has conductor $t_j^2 \ll k$, while the degree 6 factor has conductor about $t_j^2 k^4 \ll k^5$; this has the effect that if one uses a subconvexity bound on the degree 2 factor, then one is left with estimating a family of about $k$ $L$-functions having conductors of size about $k^5$, which is more difficult. This alternate formulation also gives an independent way to derive Conjecture 1.2, and it does indeed lead to the same constant. Since $f(iy)$ is real for $y > 0$, we may use the decomposition of $f(z)^2$ into holomorphic forms also in the situation of Theorem 1.7, which again works more efficiently than the corresponding decomposition of $y^k|f(z)|^2$ into Maaß forms. On the other hand, the decomposition (1.13) can be used for the following variant of Corollary 1.5: for $f$ and $g$ of weight $k$ and $u_j$ an even Maaß form with spectral parameter $t_j$, one has the bounds
$$L(1/2, f \times g \times u_j) \ll k^{4/3+\varepsilon}, \qquad |L(1/2 + it, f \times g)|^2 \ll k^{4/3+\varepsilon}, \qquad (1.14)$$
provided $t_j, t \ll \sqrt{k}$. The conductors of these $L$-functions are $(kt_j)^4$ and $(kt)^2$ respectively, so these bounds are subconvex for $t_j \gg k^{1/3+\delta}$ and $t \gg k^{1/3+\delta}$, accordingly. So far the results in this section have relied on the theory of $L$-functions. It is also natural to attempt to bound these integrals directly with the Fourier expansion.
With this approach, we will show

Theorem 1.8. Let $f \in S_k$ be a Hecke eigenform and define $F(z) = f(z) y^{k/2}$ as before. Suppose $y_0 > 0$. Then
$$\int_{y_0}^\infty \int_{-1/2}^{1/2} |F(x+iy)|^4 \, \frac{dx\,dy}{y^2} \ll \frac{k^{1/2+\varepsilon}}{y_0^2} + k^{-1/2+\varepsilon}. \qquad (1.15)$$

In particular, this indicates that the bulk of the $L^4$-norm arises from small values of $y$, in contrast to (1.1), where the supremum is attained very high in the cusp. The direct calculations with the Fourier expansions lead to sums of shifted convolution sums, which, when bounded trivially, lead to Theorem 1.8. On the other hand, in certain ranges we can turn this analysis around and bound these new sums via Theorems 1.1 and 1.7. We refer to Section 3, in particular Corollary 3.2, for the precise results on shifted convolution sums, including a connection with Poincaré series. Theorem 1.8 is somewhat reminiscent of [So, Proposition 2], which is a crucial input for quantum unique ergodicity for Maaß forms on the modular surface; in essence it shows that mass (measured in the $L^2$-sense) cannot escape through the cusp. However, the methods in [So], based on the properties of multiplicative functions, are very different from ours.

Acknowledgements. We would like to thank P. Sarnak and the referees for very useful comments.

2. Period and spectral formulae

In this section we compile several useful formulae for later use. In Subsection 2.3 we can already deduce Theorem 1.1 and Corollary 1.5, as well as the bounds (1.14), from Theorem 1.4 (whose proof is deferred to Section 5).

2.1. The Petersson formula. Let
$$E(z, s) = \sum_{\gamma \in \bar\Gamma_\infty \backslash \bar\Gamma} \Im(\gamma z)^s = y^s + \frac{Z(2(1-s))}{Z(2s)}\, y^{1-s} + \frac{2\sqrt{y}}{Z(2s)} \sum_{n \neq 0} \frac{\tau_{s-\frac12}(|n|)}{|n|^{1/2}}\, K_{s-\frac12}(2\pi|n|y)\, e(nx)$$
denote the usual Eisenstein series, where $Z(s) = \zeta(s)\Gamma(s/2)\pi^{-s/2}$ is the completed zeta function, $\tau_\nu(n) = \sum_{ab=n}(a/b)^\nu$, $\bar\Gamma = \mathrm{PSL}_2(\mathbb{Z})$ and $\bar\Gamma_\infty$ is the subgroup of upper triangular matrices in $\bar\Gamma$. Let $g(z) = \sum_{n=1}^\infty \lambda_g(n)(4\pi n)^{(k-1)/2} e(nz) \in S_k$ be a Hecke-normalized cusp form, and write $G(z) = y^{k/2} g(z)$.
Then by unfolding,
$$\int_{\Gamma\backslash\mathbb{H}} |G(z)|^2 E(z, s)\, \frac{dx\,dy}{y^2} = \int_0^\infty \sum_{n=1}^\infty |\lambda_g(n)|^2 (4\pi n)^{k-1} e^{-4\pi n y}\, y^{s+k}\, \frac{dy}{y^2} = \frac{L(s, g \times \bar g)\, \Gamma(s+k-1)}{\zeta(2s)\,(4\pi)^s}$$
in $\Re s > 1$. In particular,
$$\|G\|_2^2 = \frac{\pi}{3}\, \operatorname*{res}_{s=1} \frac{L(s, g \times \bar g)\,\Gamma(s+k-1)}{\zeta(2s)(4\pi)^s} = \frac{L(1, \mathrm{sym}^2 g)\, \Gamma(k)}{12\,\zeta(2)}. \qquad (2.1)$$
Combining this with the Petersson formula [IK, Proposition 14.5], we obtain
$$\frac{\zeta(2)}{(k-1)/12} \sum_{g \in B_k} \frac{\lambda_g(n)\lambda_g(m)}{L(1, \mathrm{sym}^2 g)} = \delta_{n,m} + 2\pi i^{-k} \sum_{c=1}^\infty \frac{S(m,n,c)}{c}\, J_{k-1}\!\left(\frac{4\pi\sqrt{mn}}{c}\right), \qquad (2.2)$$
where we recall that $B_k$ denotes a Hecke basis of $S_k$.

2.2. The Voronoi formula. Let $\psi$ be a smooth function with compact support in $(0, \infty)$ with Mellin transform $\tilde\psi(s)$. Let $f \in S_k$ be a holomorphic Hecke cusp form of weight $k$ and denote by $A(n, m) = A(m, n)$ the Fourier-Whittaker coefficients of the symmetric square lift of $f$, normalized such that $A(1,1) = 1$, see [Go, Sections 6, 7]. Let $c$ be a natural number and $d$ an integer coprime to $c$. Then we have [MS, Theorem 1.18]
$$\sum_{n \ge 1} A(m, n)\, e\!\left(\frac{n\bar d}{c}\right)\psi(n) = c \sum_{\pm} \sum_{n_1 \mid c} \sum_{n_2 \ge 1} \frac{A(n_2, n_1)}{n_2 n_1}\, S\!\left(md, \pm n_2, \frac{c}{n_1}\right) \Psi^{\pm}\!\left(\frac{n_2 n_1^2}{c^3 m}\right), \qquad (2.3)$$
where
$$\Psi^{\pm}(x) = \frac{1}{2\pi^{3/2}} \int_{(1)} (\pi^3 x)^{-s}\, G^{\pm}(s)\, \tilde\psi(-s)\, \frac{ds}{2\pi i} \qquad (2.4)$$
with
$$G^{\pm}(s) = \frac{\Gamma(\frac12(k+1+s))\,\Gamma(\frac12(k+s))}{\Gamma(\frac12(k-s))\,\Gamma(\frac12(k-1-s))} \left( \frac{\Gamma(\frac12(2+s))}{\Gamma(\frac12(1-s))} \mp i\, \frac{\Gamma(\frac12(1+s))}{\Gamma(\frac12(-s))} \right). \qquad (2.5)$$

2.3. Watson's formula. Let $k, l$ be two even positive integers and let $f \in B_k$, $h \in B_l$, $g \in B_{k+l}$ be three Hecke eigenforms. We write $F = f y^{k/2}$, $H = h y^{l/2}$, $G = g y^{(k+l)/2}$. Then Watson's formula [Wa2, Theorem 3], together with the local computations in [Wa2, Section 4.1], shows
$$|\langle FH, G\rangle|^2 = \frac{\Lambda(1/2, f \times \bar g \times h)}{4\,\Lambda(1, \mathrm{sym}^2 f)\,\Lambda(1, \mathrm{sym}^2 \bar g)\,\Lambda(1, \mathrm{sym}^2 h)} = \frac{\pi^3}{2(k+l-1)} \cdot \frac{L(1/2, f \times \bar g \times h)}{L(1, \mathrm{sym}^2 f)\, L(1, \mathrm{sym}^2 \bar g)\, L(1, \mathrm{sym}^2 h)}. \qquad (2.6)$$
Since $f, g, h$ have real Fourier coefficients, we can drop the complex conjugation bars. Applying this with $k = l$ and $f = h$, we obtain
$$\|F\|_4^4 = \langle F^2, F^2 \rangle = \sum_{g \in B_{2k}} |\langle F^2, G\rangle|^2 = \frac{\pi^3}{2(2k-1)\, L(1, \mathrm{sym}^2 f)^2} \sum_{g \in B_{2k}} \frac{L(1/2, g)\, L(1/2, \mathrm{sym}^2 f \times g)}{L(1, \mathrm{sym}^2 g)}. \qquad (2.7)$$
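Granting Theorem 1.4, the identity (2.7) already pins down the exponent in Theorem 1.1. Schematically (non-negativity of the central values lets one pull the subconvexity bound for $L(1/2, g)$ out of the sum, and $L(1,\operatorname{sym}^2 f)^{-2} \ll k^{\varepsilon}$):

```latex
\|F\|_4^4
 \ll \frac{k^{\varepsilon}}{k}\Big(\max_{g\in B_{2k}} L(1/2,g)\Big)
     \sum_{g\in B_{2k}}\frac{L(1/2,\operatorname{sym}^2 f\times g)}{L(1,\operatorname{sym}^2 g)}
 \ll \frac{k^{\varepsilon}}{k}\cdot k^{1/3+\varepsilon}\cdot k^{1+\varepsilon}
 \ll k^{1/3+\varepsilon}.
```

This is the symbolic form of the deduction carried out in prose just below.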
On the other hand, (2.6) implies
$$\langle |F|^2, |H|^2 \rangle = \langle FH, FH \rangle = \sum_{g \in B_{k+l}} |\langle FH, G \rangle|^2 = \frac{\pi^3}{2(k+l-1)\, L(1, \mathrm{sym}^2 f)\, L(1, \mathrm{sym}^2 h)} \sum_{g \in B_{k+l}} \frac{L(1/2, f \times g \times h)}{L(1, \mathrm{sym}^2 g)}. \qquad (2.8)$$
We see that Theorem 1.1 is an easy consequence of Theorem 1.4: in (2.7) we use the non-negativity of $L(1/2, \mathrm{sym}^2 f \times g)$ [JS, La] and of $L(1/2, g)$ [KZ], together with the lower bound $L(1, \mathrm{sym}^2 f) \gg k^{-\varepsilon}$ [HL]. Then Theorem 1.4 and the individual subconvexity bound $L(1/2, g) \ll k^{1/3+\varepsilon}$ [Pe, p. 37] imply Theorem 1.1. The same argument gives Corollary 1.5: since $\langle |F|^2, |H|^2\rangle \le \|F\|_4^2\, \|H\|_4^2$, Theorem 1.1 implies Corollary 1.5. Finally we show how (1.14) follows from the same ideas as above. Suppose that $f$ and $g$ both have weight $k$. For $u_j$ even, Watson's formula gives
$$|\langle F\bar G, u_j\rangle|^2 = \frac{\Lambda(1/2, f \times \bar g \times u_j)}{8\,\Lambda(1, \mathrm{sym}^2 f)\,\Lambda(1, \mathrm{sym}^2 g)\,\Lambda(1, \mathrm{sym}^2 u_j)} = 2 \cdot \frac{L(1/2, f \times \bar g \times u_j)}{L(1, \mathrm{sym}^2 f)\, L(1, \mathrm{sym}^2 g)\, L(1, \mathrm{sym}^2 u_j)}\, G(k, t_j), \qquad G(k, t) := \frac{\pi^3\, |\Gamma(k - \frac12 + it)|^2}{4\, \Gamma(k)^2}.$$
A straightforward computation with Stirling's formula shows $G(k, t) \sim \frac{\pi^3}{4}\, k^{-1} \exp(-t^2/k)$ for $|t| \le k^{2/3}$, and $G(k,t)$ is exponentially small for $|t| > k^{2/3}$. The classical Rankin-Selberg theory computes the projection of $F\bar G$ onto the Eisenstein series, and the formula is
$$\frac{1}{4\pi}\, |\langle F\bar G, E(\cdot, 1/2 + it)\rangle|^2 = \frac{1}{\pi} \cdot \frac{|L(\frac12 + it, f \times \bar g)|^2}{L(1, \mathrm{sym}^2 f)\, L(1, \mathrm{sym}^2 g)\, |\zeta(1 + 2it)|^2}\, G(k, t).$$
As above we deduce $k^{1/3+\varepsilon} \gg \|F\|_4^2\, \|G\|_4^2 \ge \langle F\bar G, F\bar G\rangle$ from Theorem 1.1; spectrally decomposing $F\bar G$ and using the preceding two inner product formulae easily leads to (1.14).

3. The Fourier expansion

In this section we sketch the proof of (1.2) (which is a generalization of the method of [Xi]) and prove Theorem 1.8. Let
$$f(z) = \sum_{n=1}^\infty a_n\, (4\pi n)^{(k-1)/2}\, e(nz) \in S_k \qquad (3.1)$$
be an $L^2$-normalized holomorphic Hecke cusp form of weight $k$. Then
$$|a_1|^2 \asymp \frac{1}{\Gamma(k)\, L(1, \mathrm{sym}^2 f)} = \frac{1}{\Gamma(k/2)^2}\cdot \frac{k^{-1/2+o(1)}}{2^k},$$
by (2.1).
It follows that
$$\|F\|_p \ge \left( \int_1^\infty \int_0^1 \left| f(x+iy)\, y^{k/2} \right|^p dx\, \frac{dy}{y^2} \right)^{1/p} \ge \left( \int_1^\infty \left| \int_0^1 f(x+iy)\, e(-x)\, dx \right|^p y^{kp/2}\, \frac{dy}{y^2} \right)^{1/p}$$
$$= \left( \int_1^\infty \left| a_1 (4\pi)^{(k-1)/2}\, e^{-2\pi y}\, y^{k/2} \right|^p \frac{dy}{y^2} \right)^{1/p} \gg k^{-\varepsilon} \left( \int_1^\infty \left| \frac{e^{-2\pi y}\, (2\pi y)^{k/2}}{\Gamma(k/2)\, k^{1/4}} \right|^p \frac{dy}{y^2} \right)^{1/p}.$$
Let $L := [\frac{k}{4\pi} - \sqrt{k},\, \frac{k}{4\pi} + \sqrt{k}]$. It is well known that $e^{-2\pi y}(2\pi y)^{k/2} \gg \Gamma(k/2)\, k^{1/2}$ for $y \in L$. Hence
$$\|F\|_p \gg k^{-\varepsilon} \left( \int_L k^{p/4}\, \frac{dy}{y^2} \right)^{1/p} \gg k^{\frac14 - \frac{3}{2p} - \varepsilon},$$
as claimed.

Now we prove Theorem 1.8. Let $P(y_0)$ denote the left-hand side of (1.15). Writing out the Fourier expansion and integrating over $x$, we obtain
$$P(y_0) = |a_1|^4 \sum_{m+n = m'+n'} \lambda_f(m)\lambda_f(n)\lambda_f(m')\lambda_f(n')\, (4\pi m)^{\frac{k-1}{2}} (4\pi n)^{\frac{k-1}{2}} (4\pi m')^{\frac{k-1}{2}} (4\pi n')^{\frac{k-1}{2}} \int_{y_0}^\infty y^{2k-1} \exp\!\big(-2\pi y(m+n+m'+n')\big)\, \frac{dy}{y}.$$
Changing variables $y \to y/(2\pi(m+n+m'+n'))$, we recast this as
$$\frac{|a_1|^4}{2\pi} \sum_{m+n=m'+n'} \frac{\lambda_f(m)\lambda_f(n)\lambda_f(m')\lambda_f(n')}{m+n+m'+n'} \left(\frac{\sqrt{mn}}{m+n}\right)^{k-1} \left(\frac{\sqrt{m'n'}}{m'+n'}\right)^{k-1} \Gamma\!\big(2k-1,\, 2\pi y_0 (m+n+m'+n')\big),$$
where $\Gamma(a, z) = \int_z^\infty t^a e^{-t}\, \frac{dt}{t}$ is the incomplete gamma function. Define $Q(a, x) = \Gamma(a,x)/\Gamma(a)$, where $a, x > 0$. This function is well understood asymptotically. All we need here is that $Q(a, x)$ is exponentially small for $x \ge a + \sqrt{a}\log a$, and $Q(a, x) - 1$ is exponentially small for $x \le a - \sqrt{a}\log a$; we always have $Q(a,x) \le 1$. With (2.1), and letting $l = m+n = m'+n'$ be a new variable, we obtain
$$P(y_0) = \frac{2\pi^{5/2}\,\Gamma(k-\frac12)}{(\Gamma(k)\, L(1, \mathrm{sym}^2 f))^2} \sum_l \frac{T_f(l)^2}{l}\, Q(2k-1,\, 4\pi y_0 l),$$
where
$$T_f(l) = \sum_{m+n=l} \lambda_f(m)\lambda_f(n) \left(\frac{2\sqrt{mn}}{m+n}\right)^{k-1}.$$
It turns out that $T_f(l)$ is closely related to the inner product of $f^2$ with the $l$-th holomorphic Poincaré series of weight $2k$; see (3.8) below. Unless $m \sim n$, the weight function in the definition of $T_f(l)$ is exponentially small. Note that
$$\frac{2\sqrt{mn}}{m+n} = 1 - \frac{|m-n|^2}{(m+n)(\sqrt{m}+\sqrt{n})^2} = 1 - \frac{|m-n|^2}{2(m+n)^2} + O\!\left(\frac{|m-n|^4}{(m+n)^4}\right),$$
so that the contribution to $T_f(l)$ from $|m-n| \ge \frac{l}{\sqrt{k}}\log l$ is exponentially small. Then by Deligne's bound,
$$T_f(l) \ll l^\varepsilon \left(1 + \frac{l}{\sqrt{k}}\right). \qquad (3.2)$$
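The two asymptotic facts about $Q(a, x) = \Gamma(a, x)/\Gamma(a)$ used above are easy to confirm numerically. The crude trapezoidal quadrature below is only an illustration (not the method used in the paper); it integrates $\exp((a-1)\ln t - t - \ln\Gamma(a))$ to avoid overflow:

```python
import math

def Q(a, x, n=50000):
    """Normalized upper incomplete gamma Q(a, x) = Gamma(a, x)/Gamma(a),
    computed by trapezoidal quadrature of exp((a-1)*ln(t) - t - lgamma(a))."""
    upper = a + 20.0 * math.sqrt(a)   # integrand is negligible beyond here
    if x >= upper:
        return 0.0
    h = (upper - x) / n
    def g(t):
        return math.exp((a - 1.0) * math.log(t) - t - math.lgamma(a))
    total = 0.5 * (g(x) + g(upper))
    for i in range(1, n):
        total += g(x + i * h)
    return total * h

a = 100.0
d = math.sqrt(a) * math.log(a)        # the threshold sqrt(a)*log(a) ~ 46
print(Q(a, a + d) < 0.01, Q(a, a - d) > 0.99)  # prints: True True
```

Here the peak of the integrand sits at $t \approx a$ with width $\approx \sqrt{a}$, which is exactly why $Q$ transitions from $1$ to $0$ across a window of width $\sqrt{a}\log a$ around $x = a$.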
At this point we can already estimate trivially to obtain $P(y_0) \le k^{1/2+\varepsilon}\, y_0^{-2} + k^{-1/2+\varepsilon}$, as claimed in Theorem 1.8. For convenience we slightly simplify the expression for $P(y_0)$. We first note the simple approximation
$$T_f(l) = S_f(l) + O(l^{1+\varepsilon} k^{-3/2}), \qquad S_f(l) = \sum_{m+n=l} \lambda_f(m)\lambda_f(n) \exp\!\left(-\frac{|m-n|^2\, k}{2l^2}\right). \qquad (3.3)$$
We may replace $Q(2k-1, 4\pi y_0 l)$ by 1 under the assumption $l \le \frac{k}{2\pi y_0}$, obtaining
$$P(y_0) = \frac{2\pi^{5/2}\,\Gamma(k-\frac12)}{(\Gamma(k)\, L(1, \mathrm{sym}^2 f))^2} \sum_{l \le \frac{k}{2\pi y_0}} \frac{S_f(l)^2}{l} + O(k^\varepsilon y_0^{-2} + k^{-1+\varepsilon}). \qquad (3.4)$$
We deduce some additional corollaries from this argument. First we observe that the same argument can be used for the geodesic restriction problem in Theorem 1.7, which we complement by the following result.

Corollary 3.1. With the notation and assumptions of Theorem 1.7, we have
$$R(y_0) := \int_{y_0}^\infty |F(iy)|^2\, \frac{dy}{y} \ll \frac{k^{1/2+\varepsilon}}{y_0} + k^\varepsilon. \qquad (3.5)$$
Furthermore, $R(1) \gg k^{-\varepsilon}$.

Indeed, a direct calculation shows
$$R(y_0) = \int_{y_0}^\infty y^k |f(iy)|^2\, \frac{dy}{y} = \frac{\pi}{2\, L(1, \mathrm{sym}^2 f)} \sum_l \frac{T_f(l)}{l}\, Q(k,\, 2\pi y_0 l)$$
with $T_f(l)$ as in (3.3), and (3.2) immediately implies the upper bound in (3.5). With the same approximations as above, we obtain the slightly nicer expression
$$R(y_0) = \frac{\pi}{2\, L(1, \mathrm{sym}^2 f)} \sum_{l \le \frac{k}{2\pi y_0}} \frac{S_f(l)}{l} + O\!\left(k^\varepsilon \left(\frac{y_0}{k} + \frac{1}{y_0}\right)\right). \qquad (3.6)$$
For a proof of the lower bound in Corollary 3.1, we observe that $R(1) \ge R(y_0)$ for $y_0 \ge 1$, and we choose $2\pi y_0 = k^{1/2+\varepsilon}$. In this case $l \le k^{1/2-\varepsilon}$, and so effectively only the diagonal terms $m = n = l/2$, with $l$ even, persist in (3.3). That is,
$$R(1) \ge \frac{\pi}{2\, L(1, \mathrm{sym}^2 f)} \sum_{2l \le k^{1/2-\varepsilon}} \frac{\lambda_f(l)^2}{2l} + O(k^{-1/2+\varepsilon}).$$
Dropping all but $l = 1$, we obtain the claimed lower bound. The expressions (3.4) and (3.6) can be used to bound on average the shifted convolutions $S_f(l)$ defined in (3.3).

Corollary 3.2. Let $N \ge 1$. With the notation and assumptions as above, we have
$$\sum_{l \le N} \frac{S_f(l)}{l} \ll (Nk)^\varepsilon \left(k^{1/4} + \frac{N}{k}\right), \qquad \sum_{l \le N} \frac{S_f(l)^2}{l} \ll (Nk)^\varepsilon \left(k^{5/6} + \frac{N}{k^{1/6}} + \frac{N^2}{k^{3/2}}\right). \qquad (3.7)$$
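For orientation, here is the arithmetic behind the first bound in (3.7), whose proof is sketched in prose below: choosing $2\pi y_0 = k/N$ in (3.6) and using $R(y_0) \le R(0) = I \ll k^{1/4+\varepsilon}$ from Theorem 1.7,

```latex
\sum_{l\le N}\frac{S_f(l)}{l}
 \ll L(1,\operatorname{sym}^2 f)\,R(y_0)
     + k^{\varepsilon}\Big(\frac{y_0}{k}+\frac{1}{y_0}\Big)
 \ll k^{1/4+\varepsilon} + k^{\varepsilon}\Big(\frac{1}{N}+\frac{N}{k}\Big)
 \ll (Nk)^{\varepsilon}\Big(k^{1/4}+\frac{N}{k}\Big).
```

The second bound follows in the same way from (3.4) and Theorem 1.1.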
The former bound is nontrivial for $N > k^{3/4+\varepsilon}$, while the latter is nontrivial for $N > k^{11/12+\varepsilon}$. This seems to be the first bound of this type in the literature. To prove the first bound in (3.7), we apply (3.6) with $2\pi y_0 = k/N$ and use the obvious inequality $R(y_0) \le R(0) = I$ in combination with Theorem 1.7. For the second bound in (3.7) we apply (3.4) with $2\pi y_0 = k/N$ and use the inequality $P(y_0) \ll (1 + y_0^{-1})P(1)$ ([Iw3, Lemma 2.10]) in combination with Theorem 1.1.

The shifted convolution sum $T_f(l)$ is a natural object and can be interpreted in terms of Poincar\'e series, as we now briefly explain. Let $P_l$ denote the $l$-th holomorphic Poincar\'e series of weight $2k$ for the group $\Gamma = SL_2(\mathbb Z)$ as in [Iw1, Section 3.3], that is,
\[
P_l(z) = \sum_{\gamma\in\Gamma_\infty\backslash\Gamma} j(\gamma,z)^{-2k}\, e(l\gamma z),
\]
and define the normalized function $\mathcal P_l$ via
\[
\mathcal P_l(z) = \Big(\frac{(4\pi l)^{2k-1}}{\Gamma(2k-1)}\Big)^{1/2} P_l(z).
\]
This normalization is natural because by [Iw1, (3.24)], $\langle \mathcal P_l, \mathcal P_l\rangle$ is 1 plus a sum of Kloosterman sums. For a cusp form $g(z)$ of weight $2k$, we have
\[
\langle g, P_l\rangle = \frac{\Gamma(2k-1)}{(4\pi l)^{2k-1}}\,\hat g(l), \qquad\text{where } g(z) = \sum_{l\ge1}\hat g(l)\,e(lz).
\]
Suppose that $f$ of weight $k$ is given by (3.1), and let $g = f^2$. Since
\[
\hat g(l) = a_1^2 \sum_{m+n=l}\lambda_f(m)\lambda_f(n)\,(4\pi\sqrt{mn})^{k-1},
\]
we obtain
\[
(3.8)\qquad T_f(l) = \frac{2^{k-1}\sqrt{4\pi l}}{a_1^2\,\Gamma(2k-1)^{1/2}}\,\langle f^2, \mathcal P_l\rangle = k^{\frac14+o(1)}\,l^{\frac12}\cdot\langle f^2, \mathcal P_l\rangle.
\]

4. Conditional results

Our next aim is to show how Conjecture 1.2 follows from the general recipe of [CFKRS]. The overall approach is analogous to the derivation of Conjecture 1.6 of [LY], which is slightly different in that it averages $L(1/2,\mathrm{sym}^2 f\times g)$ over $f$, while here we average $L(1/2, f\times f\times g) = L(1/2,\mathrm{sym}^2 f\times g)\,L(1/2,g)$ over $g$. We assume some familiarity with [CFKRS]. The forthcoming calculations are purely formal, and only at the end do we arrive at something that makes sense.
Mimicking the approximate functional equation we write formally
\[
L(\tfrac12+\alpha, g) = \sum_l \frac{\lambda_g(l)}{l^{1/2+\alpha}} + X_\alpha \sum_l \frac{\lambda_g(l)}{l^{1/2-\alpha}},
\]
and
\[
L(\tfrac12+\beta, \mathrm{sym}^2 f\times g) = \sum_{m,n}\frac{\lambda_g(n)A(m,n)}{(m^2n)^{1/2+\beta}} + Y_\beta\sum_{m,n}\frac{\lambda_g(n)A(m,n)}{(m^2n)^{1/2-\beta}}
\]
for certain quantities $X_\alpha, Y_\beta$ with $X_0 = Y_0 = 1$. As above, $A(m,n)$ denotes the Fourier–Whittaker coefficients of the symmetric square lift of $f$. Then by (2.7) we have
\[
(4.1)\qquad \|F\|_4^4 = \frac{\pi^3}{2(2k-1)L(1,\mathrm{sym}^2 f)^2}\sum_{g\in B_{2k}}\ \sum_{l,m,n}\frac{\lambda_g(l)\lambda_g(n)A(m,n)}{l^{1/2+\alpha}(m^2n)^{1/2+\beta}} + \dots,
\]
where the dots indicate three more similar terms. The Petersson formula (2.2) expresses this spectral sum as a diagonal term plus a sum of Kloosterman sums. The [CFKRS] recipe instructs us to apply this averaging formula to each of the four terms in (4.1), and to retain only the diagonal term. Thus we obtain from the first term the Dirichlet series $\sum_{m,n} A(m,n)\,m^{-(1+2\beta)}n^{-(1+\alpha+\beta)}$, the dots indicating three similar terms obtained by switching the signs of the $\alpha$'s and $\beta$'s. It follows easily from the Hecke relations that this Dirichlet series is
\[
\sum_{m,n}\frac{A(m,n)}{m^{1+2\beta}\,n^{1+\alpha+\beta}} = \frac{L(\mathrm{sym}^2 f, 1+2\beta)\,L(\mathrm{sym}^2 f, 1+\alpha+\beta)}{\zeta(2+\alpha+3\beta)},
\]
see [Go, Prop. 6.6.3]. At this point we can set all the parameters to 0, giving
\[
\|F\|_4^4 \sim \frac{\pi^3}{6\,\zeta(2)^2} = \frac{6}{\pi}.
\]
Normalizing as in (1.4), we finally arrive at
\[
\int_{\Gamma\backslash\mathbb H} y^{2k}|f(z)|^4\,\frac{3}{\pi}\,\frac{dx\,dy}{y^2} \sim 2.
\]
Next we indicate the changes necessary to derive Conjecture 1.3. Let $\varphi$ be as in Conjecture 1.3, and suppose the $u_j$ form a Hecke–Maaß orthonormal basis for $SL_2(\mathbb Z)$ with spectral parameters $t_j$. Then the spectral decomposition gives
\[
\|\varphi\|_4^4 = \int_{\Gamma\backslash\mathbb H}|\varphi(z)|^4\,\frac{dx\,dy}{y^2} = \sum_j |\langle \varphi^2, u_j\rangle|^2 + (\text{Eisenstein}).
\]
Watson's formula gives for $u_j$ even that
\[
|\langle\varphi^2, u_j\rangle|^2 = \frac{\pi^2}{3}\,H_T(t_j)\,\frac{L(\varphi\times\varphi\times u_j, \tfrac12)}{L(\mathrm{sym}^2\varphi, 1)^2\,L(\mathrm{sym}^2 u_j, 1)},
\qquad
H_T(t) = \frac{\big|\Gamma\big(\tfrac{\frac12+2iT+it}{2}\big)\big|^2\,\big|\Gamma\big(\tfrac{\frac12+2iT-it}{2}\big)\big|^2\,\big|\Gamma\big(\tfrac{\frac12+it}{2}\big)\big|^4}{\big|\Gamma\big(\tfrac{1+2iT}{2}\big)\big|^4\,\big|\Gamma\big(\tfrac{1+2it}{2}\big)\big|^2}.
\]
There is a similar formula for the projection of $\varphi^2$ onto the Eisenstein series that follows much more elementarily from unfolding. As above, we then obtain
\[
\|\varphi\|_4^4 = \frac{3}{\pi} + \frac{\pi^2}{3\,L(\mathrm{sym}^2\varphi,1)^2}\sum_{\substack{j\ge1\\ u_j\ \text{even}}} \frac{H_T(t_j)}{L(1,\mathrm{sym}^2 u_j)}\,L(\tfrac12, u_j)\,L(\tfrac12, \mathrm{sym}^2\varphi\times u_j) + (\text{Eis.}),
\]
taking into account the constant eigenfunction $u_0 = \sqrt{3/\pi}$. Now the Kuznetsov formula plays the role of the Petersson formula.
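The closed-form constants appearing in the derivation above reduce to elementary identities in $\pi$; the following sketch (a sanity check, not from the paper) verifies them, including the renormalization step that turns the fourth moment $6/\pi$ into the value $2$ against the probability measure $\frac{3}{\pi}\frac{dx\,dy}{y^2}$.

```python
# Elementary sanity checks (not from the paper) of the constant evaluations above:
# with zeta(2) = pi^2/6 one has pi^3 / (6 zeta(2)^2) = 6/pi, and renormalizing an
# L^2-normalized form to the probability measure (3/pi) dx dy / y^2 (which rescales
# f by (pi/3)^(1/2), hence the fourth moment by (pi/3)^2) turns 6/pi into 2.
import math

zeta2 = math.pi ** 2 / 6
assert abs(math.pi ** 3 / (6 * zeta2 ** 2) - 6 / math.pi) < 1e-12
assert abs((math.pi / 3) ** 2 * (3 / math.pi) * (6 / math.pi) - 2) < 1e-12
```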
To this end, we recall that the Kuznetsov formula takes the form
\[
2\sum_{\substack{j\ge1\\ u_j\ \text{even}}} \frac{h(t_j)\,\lambda_j(m)\lambda_j(n)}{L(1,\mathrm{sym}^2 u_j)} + (\text{Eis.}) = \frac12\,\delta_{m=n}\int_{-\infty}^{\infty} h(t)\,d^*t + (\text{Kloosterman}), \qquad d^*t = \frac{1}{\pi^2}\,t\tanh(\pi t)\,dt.
\]
We then arrive at the conjecture
\[
\|\varphi\|_4^4 - \frac{3}{\pi} \sim \frac{\pi}{8\zeta(2)}\,\mathcal I, \qquad \mathcal I = \int_{-\infty}^{\infty} H_T(t)\,d^*t.
\]
We next evaluate $\mathcal I$. Stirling's formula gives that
\[
H_T(t) \sim 2\pi\,\Big|T^2 - \frac{t^2}{4}\Big|^{-1/2}\Big(\frac{|t|}{2}\Big)^{-1}\exp\big(-\pi q(t,T)\big),
\]
where $q(t,T) = \big|T+\frac t2\big| + \big|T - \frac t2\big| - 2T$, which is 0 for $|t|\le 2T$ and is $|t| - 2T$ for $|t| > 2T$. Then
\[
\mathcal I \sim \frac{8}{\pi}\int_0^{2T}\Big(T^2-\frac{t^2}{4}\Big)^{-1/2}dt + \frac{8}{\pi}\int_{2T}^{\infty}\Big(\frac{t^2}{4}-T^2\Big)^{-1/2}e^{-\pi(t-2T)}\,dt = 8 + O(T^{-1/2}).
\]
Thus we arrive at the conjecture $\|\varphi\|_4^4 \sim \frac3\pi + \frac6\pi = \frac9\pi$, which after renormalization gives (1.6).

5. A mean value of central $L$-values

This section is devoted to the proof of Theorem 1.4. An inspection of the proof indicates that improving the upper bound into an asymptotic formula with a power saving is, in a vague sense, almost equivalent to a subconvexity bound for $L(1/2,\mathrm{sym}^2 f)$ for $k\to\infty$. Possibly, if one had such an asymptotic formula, then one could instead use an amplifier and thus obtain subconvexity for $L(1/2,\mathrm{sym}^2 f\times g)$. In the following we make constant use of the $\varepsilon$-convention, i.e. the symbol $\varepsilon$ denotes an arbitrarily small positive constant whose value may change from occurrence to occurrence.

We start by expressing $L(1/2,\mathrm{sym}^2 f\times g)$ with $f\in B_k$, $g\in B_{2\kappa}$ by a standard approximate functional equation. The local factor at infinity is given by (combine [Or, Theorem 2] with (1.8))
\[
\Lambda_{k,\kappa}(s) := \begin{cases} (2\pi)^{-3s}\,\Gamma\big(s+k+\kappa-\tfrac32\big)\,\Gamma\big(s+\kappa-\tfrac12\big)\,\Gamma\big(s+\kappa-k+\tfrac12\big), & \kappa\ge k,\\[4pt] (2\pi)^{-3s}\,\Gamma\big(s+k+\kappa-\tfrac32\big)\,\Gamma\big(s+\kappa-\tfrac12\big)\,\Gamma\big(s+k-\kappa-\tfrac12\big), & \kappa < k,\end{cases}
\]
and the root number is $+1$ if and only if one of the following two cases holds: $\kappa\ge k$ and $\kappa$ even, or $\kappa<k$ and $\kappa$ odd. Otherwise the root number is $-1$. In the latter case Theorem 1.4 is trivial, and we assume from now on that the root number is $+1$.
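The evaluation $\mathcal I \sim 8$ above rests on the exact value $\int_0^{2T}(T^2 - t^2/4)^{-1/2}\,dt = \pi$, independently of $T$ (substitute $t = 2T\sin\theta$); this can be confirmed numerically despite the integrable endpoint singularity (illustration, not from the paper).

```python
# Numerical confirmation (illustration, not from the paper) of the arcsine-type
# integral used in the evaluation I ~ 8 above:
#   int_0^{2T} (T^2 - t^2/4)^(-1/2) dt = pi   for every T > 0
# (substitute t = 2 T sin(theta); QUADPACK handles the endpoint singularity).
import math
from scipy.integrate import quad

vals = []
for T in (1.0, 5.0, 40.0):
    val, _ = quad(lambda t: 1.0 / math.sqrt(T * T - t * t / 4.0), 0.0, 2.0 * T)
    vals.append(val)
assert all(abs(v - math.pi) < 1e-5 for v in vals)
```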
In this case we have
\[
(5.1)\qquad L\big(\tfrac12,\mathrm{sym}^2 f\times g\big) = 2\sum_{m,n}\frac{\lambda_g(n)\,A(m,n)}{m\,\sqrt n}\,W(m^2n),
\]
where $W$ is a smooth weight function satisfying
\[
(5.2)\qquad x^jW^{(j)}(x) \ll_{j,A} \Big(1+\frac{x}{k^2}\Big)^{-A}
\]
for any $j, A\ge0$, if $\kappa = k + O(1)$. For instance, we can take
\[
W(x) = \frac{1}{2\pi i}\int_{(1)}\frac{\Lambda_{k,\kappa}(\frac12+s)}{\Lambda_{k,\kappa}(\frac12)}\Big(\cos\frac{\pi s}{10A}\Big)^{-60A}x^{-s}\,\frac{ds}{s}
\]
(cf. e.g. [IK, Section 5.2]). With later applications in mind, we consider the slightly more general quantity
\[
M_f(r) := \frac{12}{2k-1}\sum_{g\in B_{2k}}\frac{\lambda_g(r)\,L(\frac12,\mathrm{sym}^2 f\times g)}{L(1,\mathrm{sym}^2 g)}
\]
for an integer $0 < r < k^{1/10}$. By positivity and Deligne's bound we have
\[
(5.3)\qquad M_f(r) \ll r^{\varepsilon}M_f(1).
\]
We can now apply the Petersson formula (2.2), getting $M_f(r) = M^{(1)}_f(r) + M^{(2)}_f(r)$, where $M^{(1)}_f(r)$ is the diagonal term and $M^{(2)}_f(r)$ is the off-diagonal contribution. We have
\[
(5.4)\qquad M^{(1)}_f(r) = \frac{2}{\zeta(2)}\sum_m \frac{A(m,r)}{\sqrt r\,m}\,W(m^2r) \ll k^{\varepsilon}
\]
by Deligne's bound $A(m,r)\ll(rm)^{\varepsilon}$ (or Iwaniec's method [Iw2]) and (5.2). We proceed to analyze the off-diagonal contribution
\[
(5.5)\qquad M^{(2)}_f(r) = \frac{4\pi i^k}{\zeta(2)}\sum_{n,m,c}\frac{A(m,n)}{\sqrt n\,m}\,W(nm^2)\,\frac{S(n,r,c)}{c}\,J_{2\kappa-1}\Big(\frac{4\pi\sqrt{nr}}{c}\Big).
\]
The multiple sum is absolutely convergent. By (5.2) we can truncate the $n$-sum at $n\le k^{2+\varepsilon}m^{-2}$. We insert smooth partitions of unity for the $n$- and $c$-sums, and are left with bounding
\[
M^{(2)}_f(r,N,C) = \sum_{m,c}\frac{\Omega_1(c/C)}{m\,C\,N^{1/2}}\ \sideset{}{^*}\sum_{d\,(c)}e\Big(\frac{dr}{c}\Big)\sum_n A(m,n)\,e\Big(\frac{\bar d n}{c}\Big)\,\Omega_2\Big(\frac nN\Big)\,J_{2\kappa-1}\Big(\frac{4\pi\sqrt{nr}}{c}\Big)
\]
for
\[
(5.6)\qquad N\le \frac{k^{2+\varepsilon}}{m^2}, \qquad C\le \frac{100\sqrt{Nr}}{k},
\]
the latter truncation coming from the decay properties of the Bessel function near 0. Here $\Omega_1$ and $\Omega_2$ are fixed, smooth, compactly supported weight functions. We remark that (5.6) implies
\[
(5.7)\qquad k^2r^{-1}\le N\le k^{2+\varepsilon}, \qquad cm\ll r^{1/2}k^{\varepsilon}.
\]
We apply the Voronoi formula (2.3) with
\[
(5.8)\qquad \psi(n) = \psi_{N,c,r}(n) = \Omega_2\Big(\frac nN\Big)\,J_{2\kappa-1}\Big(\frac{4\pi\sqrt{nr}}{c}\Big).
\]
We define $\Psi^{\pm}$ as in (2.4). Then the Voronoi formula (2.3) implies
\[
(5.9)\qquad M^{(2)}_f(r,N,C) = \sum_{m,c}\frac{\Omega_1(c/C)\,c}{m\,C\,N^{1/2}}\sum_{n_1|c}\sum_{\pm}\sum_{n_2}\frac{A(n_2,n_1)}{n_1n_2}\ \sideset{}{^*}\sum_{d\,(c)}e\Big(\frac{dr}{c}\Big)\,S(md,\pm n_2, c/n_1)\,\Psi^{\pm}\Big(\frac{n_2n_1^2}{c^3m}\Big).
\]
We need the following two technical lemmas.

Lemma 5.1. With $\psi$ as in (5.8) and under the assumption (5.6) we have
\[
\Psi^{\pm}(x) \ll_{A,\varepsilon} k^{\varepsilon}\Big(\frac{x^{1/2}c}{r^{1/2}} + \frac{xc^2}{r}\Big)\Big(1+\frac{x}{Xk^{\varepsilon}}\Big)^{-A}, \qquad X := \frac{N^{1/2}r^{3/2}}{c^3}\ \ \big(\gg k^{2+o(1)}\big).
\]
In our situation $x\ge 1/(c^3m)$, hence $xc^2/r \ge 1/(mcr)$. Hence (5.7) implies the slightly simpler bound
\[
(5.10)\qquad \Psi^{\pm}(x)\ll_{A,\varepsilon} k^{\varepsilon}\,\frac{xc^2}{r^{1/4}}\Big(1+\frac{x}{Xk^{\varepsilon}}\Big)^{-A}, \qquad x\ge\frac{1}{c^3m}.
\]

Lemma 5.2. We have
\[
\Big|\ \sideset{}{^*}\sum_{d\,(c)}e\Big(\frac{dr}{c}\Big)\,S(md,\pm n_2, c/n_1)\Big| \le \tau(c)\,c\,(c,m),
\]
where $\tau(c)$ denotes the number of divisors of $c$.

Coupling these results with Deligne's bound, it follows by straightforward estimates for (5.9) and (5.7) that $M^{(2)}_f(1,N,C)\ll k^{\varepsilon}$. This concludes the proof of Theorem 1.4. It remains to prove the two lemmas.

Proof of Lemma 5.1. By [GR, 6.561.14] we have
\[
\tilde\psi(-s) = \frac{1}{2\pi i}\int_{(\nu)}\tilde\Omega_2(u)\Big(\frac{2\pi\sqrt r}{c}\Big)^{2s+2u}\frac{\Gamma(\kappa-s-u-\frac12)}{\Gamma(\kappa+s+u+\frac12)}\,N^u\,du,
\]
where $\tilde\Omega_2$ denotes the Mellin transform of $\Omega_2$, which is an entire function that is rapidly decaying on vertical lines. Here and in the following we write $u = \nu+iw$, and as usual $s = \sigma+it$. We conclude
\[
\Psi^{\pm}(x) = \frac{1}{2\pi^{3/2}}\int_{(\sigma)}\int_{(\nu)}\tilde\Omega_2(u)\,\Big(\frac{2\sqrt r}{c}\Big)^{2s+2u}\pi^{s-2u}\,G^{\pm}(s)\,\frac{\Gamma(\kappa-s-u-\frac12)}{\Gamma(\kappa+s+u+\frac12)}\,N^u\,x^{-s}\,\frac{ds}{2\pi i}\,\frac{du}{2\pi i}
\]
with $G^{\pm}$ as in (2.5). A simple version of Stirling's formula shows
\[
(5.11)\qquad G^{\pm}(s) \ll_{\sigma} (k+|t|)^{2\sigma+1}(1+|t|)^{\sigma+\frac12}
\]
and
\[
(5.12)\qquad \frac{\Gamma(\kappa-s-u-\frac12)}{\Gamma(\kappa+s+u+\frac12)} \ll_{\sigma,\nu} (k+|t+w|)^{-2\sigma-2\nu-1}
\]
for any fixed $\sigma,\nu > -1$, and we also recall $\tilde\Omega_2(u)\ll_A (1+|w|)^{-A}$. In particular, the double integral is absolutely convergent for $2\nu > \sigma+\frac32$. We first show that $\Psi^{\pm}$ is rapidly decaying for $x > X$. To this end we shift the two contours to $\Re s = A$ and $\Re u = \frac A2 + \frac34 + \varepsilon$ for some large $A$ and small $\varepsilon>0$. By trivial bounds together with (5.11) and (5.12), we obtain
\[
\Psi^{\pm}(x) \ll_{\varepsilon,A} \frac{(Nr)^{3/4}\,k^{\varepsilon}}{c^{3/2}}\Big(\frac{xc^3}{N^{1/2}r^{3/2}}\Big)^{-A}.
\]
Changing $A$ and $\varepsilon$ if necessary, this is sufficient in the range $x\ge Xk^{\varepsilon}$. Next we investigate the range $x\le Xk^{\varepsilon}$. Here we shift the $s$-contour to $\Re s = -\frac12$. Shifting the $u$-contour to the far right, we see that we can truncate the $s$-integration at
\[
|t|\le T := \frac{N^{1/2}r^{1/2}k^{\varepsilon}}{c} = \frac{Xc^2k^{\varepsilon}}{r}
\]
at the cost of a negligible error.
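The decay in (5.12) can be seen exactly in its simplest case: since $\Gamma(z+1) = z\Gamma(z)$ and $|\Gamma(\bar z)| = |\Gamma(z)|$, the quotient at $\sigma = \nu = 0$, $w = 0$ satisfies $|\Gamma(\kappa-\tfrac12-it)|/|\Gamma(\kappa+\tfrac12+it)| = |\kappa-\tfrac12+it|^{-1}$, matching the exponent $-2\sigma-2\nu-1 = -1$. A quick numeric check (illustration, not from the paper):

```python
# Check (illustration, not from the paper) of the sigma = nu = 0, w = 0 case of
# the Stirling bound (5.12): by Gamma(z+1) = z Gamma(z) and |Gamma(conj(z))| = |Gamma(z)|,
#   |Gamma(kappa - 1/2 - i t)| / |Gamma(kappa + 1/2 + i t)| = 1 / |kappa - 1/2 + i t|,
# i.e. the quotient decays exactly like (kappa + |t|)^(-1).
from scipy.special import gamma  # accepts complex arguments

kappa, t = 50.0, 20.0
ratio = abs(gamma(kappa - 0.5 - 1j * t)) / abs(gamma(kappa + 0.5 + 1j * t))
exact = 1.0 / abs(kappa - 0.5 + 1j * t)
assert abs(ratio / exact - 1.0) < 1e-9
```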
Having done the truncation (in a smooth fashion), we shift the contour back to $\Re u = 0$ and truncate the $u$-integration at $|w|\le k^{\varepsilon}$, again at the cost of a negligible error. Hence we see that
\[
(5.13)\qquad \Psi^{\pm}(x) \ll k^{\varepsilon}\,\frac{x^{1/2}c}{r^{1/2}}\sup_{|w|\le k^{\varepsilon}}\Big|\int_{-\infty}^{\infty}\omega(t)\Big(\frac{4r}{\pi c^2x}\Big)^{it}G^{\pm}\Big(-\frac12+it\Big)\frac{\Gamma(\kappa-it-iw)}{\Gamma(\kappa+it+iw)}\,dt\Big| + O(k^{-10}),
\]
where $\omega$ is a smooth function with $\omega(t)=1$ for $|t|\le T$, $\omega(t)=0$ for $|t|\ge 2T$ and $\omega^{(j)}(t)\ll_j |t|^{-j}$ for all $j\in\mathbb N_0$. We need to show square-root cancellation in the $t$-integral, which follows from the stationary phase method. The argument is greatly simplified by the following observation: by well-known properties of the gamma function we have
\[
G^{\pm}\Big(-\frac12+it\Big) = \mp i\,2^{\frac12-3it}\,\frac{\Gamma(\frac12+it)\exp\big(\pm\frac{i\pi}{4}(1+2it)\big)}{\sqrt\pi}\cdot\frac{\Gamma(k-\frac12+it)}{\Gamma(k-\frac12-it)}.
\]
Hence the $t$-integral in (5.13) contains the term
\[
H_k(t,w) = \frac{\Gamma(k-\frac12+it)}{\Gamma(k-\frac12-it)}\cdot\frac{\Gamma(\kappa-it-iw)}{\Gamma(\kappa+it+iw)},
\]
which is almost constant (for small $w$). We see now the phenomenon mentioned in the introduction: large parts of the Voronoi kernel $G^{\pm}$ are almost cancelled by the Mellin transform of the Bessel function from Petersson's formula, as long as $k\approx\kappa$. We note that Stirling's formula implies
\[
(5.14)\qquad \mp\frac{i\,2^{\frac12-3it}}{\sqrt\pi}\,\Gamma\Big(\frac12+it\Big)\exp\Big(\pm\frac{i\pi}{4}(1+2it)\Big) = \exp\Big(it\log\frac{|t|}{8e}\Big)v^{\pm}(t) + O\big((1+|t|)^{-10}\big)
\]
for a smooth function $v^{\pm}$ satisfying $(v^{\pm})^{(j)}(x)\ll x^{-j}$ for all $j\in\mathbb N_0$. Putting it all together, the integral in (5.13) equals
\[
(5.15)\qquad \int_{-\infty}^{\infty}\omega(t)\,v^{\pm}(t)\,H_k(t,w)\,\exp\Big(it\log\frac{|t|\,r}{2\pi e\,c^2x}\Big)\,dt + O(1).
\]
Since $\frac{d^n}{dz^n}\frac{\Gamma'}{\Gamma}(z)\ll|z|^{-n}$ for $n\ge1$, it is not hard to see that
\[
\frac{\partial^n}{\partial t^n}H_k(t,w) \ll \Big(\frac{1+|w|}{|t|}\Big)^n \ll \Big(\frac{k^{\varepsilon}}{|t|}\Big)^n \qquad\text{if } \kappa = k+O(1).
\]
Now we integrate trivially in (5.15) for $|t|\le k^{\varepsilon}$. There is one stationary point at $|t_0| = 2\pi xc^2/r$. We cut the remaining integral into $O(k^{\varepsilon})$ subintegrals over (smoothed) dyadic intervals of the form $[V_1, 2V_1]$ and assume without loss of generality that $t_0$ is the midpoint of one of the intervals.
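The location of the stationary point follows from $\frac{d}{dt}\big[t\log\frac{tr}{2\pi e c^2x}\big] = \log\frac{tr}{2\pi e c^2x} + 1$, which vanishes exactly at $t = 2\pi c^2x/r$; a quick numeric check with made-up values of $x$, $c$, $r$ (not from the paper):

```python
# Quick check (made-up values, not from the paper) that the phase
# phi(t) = t * log(t * r / (2 pi e c^2 x)) appearing in (5.15) has its
# stationary point at t0 = 2 pi x c^2 / r, since
# phi'(t) = log(t r / (2 pi e c^2 x)) + 1.
import math

x, c, r = 3.7, 5.0, 2.0
t0 = 2.0 * math.pi * x * c * c / r

def phi_prime(t):
    return math.log(t * r / (2.0 * math.pi * math.e * c * c * x)) + 1.0

assert abs(phi_prime(t0)) < 1e-12
# phi''(t) = 1/t > 0, so t0 is a genuine (non-degenerate) stationary point:
second = (phi_prime(t0 + 1e-6) - phi_prime(t0 - 1e-6)) / 2e-6
assert abs(second - 1.0 / t0) < 1e-4
```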
For all regions not containing $t_0$ we apply integration by parts in the form of Lemma 8.1 below, with $X = Y = 1$, $U = V$, $Q = V_1$, $R \asymp k^{-\varepsilon}$, to see that these are negligible. For the region containing $t_0$ we apply Proposition 8.2 with $X = 1$, $Y = Q = xc^2/r$ and $V\asymp Q/k^{\varepsilon}$, so that altogether (5.15) is at most
\[
\ll k^{\varepsilon} + \Big(\frac{xc^2}{r}\Big)^{1/2}.
\]
This completes the proof of Lemma 5.1.

Proof of Lemma 5.2. This is a straightforward computation. Interchanging sums, we find
\[
\sideset{}{^*}\sum_{d\,(c)}e\Big(\frac{dr}{c}\Big)\,S(md,\pm n_2, c/n_1)
= \sideset{}{^*}\sum_{h\,(c/n_1)}e\Big(\frac{\pm n_2\bar h}{c/n_1}\Big)\ \sideset{}{^*}\sum_{d\,(c)}e\Big(\frac dc\big(r+mhn_1\big)\Big)
= \sum_{f|c}f\,\mu\Big(\frac cf\Big)\ \sideset{}{^*}\sum_{\substack{h\,(c/n_1)\\ mhn_1\equiv -r\,(f)}}e\Big(\frac{\pm n_2\bar h}{c/n_1}\Big),
\]
and this is trivially bounded by
\[
\sum_{f|c}f\cdot\frac{c}{n_1}\cdot\frac{(f,mn_1)}{f} \le \tau(c)\,c\,(c,m),
\]
as claimed.

6. Proof of Theorem 1.6

The proof of Theorem 1.6 uses heavily the analysis of the preceding section. For odd $k$ we consider the quantity
\[
(6.1)\qquad S := \frac{12}{2k-1}\sum_{g\in B_{2k}}\frac{\pi^2}{15\,L(\frac32,g)\,L(1,\mathrm{sym}^2 g)}\cdot\frac{12}{k}\sum_{f\in B_{k+1}}L\Big(\frac12,\mathrm{sym}^2 f\times g\Big).
\]
The crucial point is to sum over $g$ first and to postpone the $f$-average to the last possible moment. This different order of summation is the key to improving the result of [LY]. We will apply Theorem 1.4 several times with $k+1$ (which is even) in place of $k$. We recall (5.1) and
\[
\frac{1}{L(\frac32,g)} = \sum_{(r,s)=1}\frac{\mu(r)\,\mu(s)^2\,\lambda_g(r)}{r^{3/2}s^{3}},
\]
and insert both expressions into (6.1). We use the Petersson formula (2.2) for the $g$-sum and obtain
\[
S = \frac{\pi^2}{15}\cdot\frac{12}{k}\sum_{f\in B_{k+1}}\frac{L(1,\mathrm{sym}^2 f)}{L(1,\mathrm{sym}^2 f)}\sum_{(r,s)=1}\frac{\mu(r)\mu(s)^2}{r^{3/2}s^3}\Big(M^{(1)}_f(r)+M^{(2)}_f(r)\Big),
\]
where $M^{(1)}_f(r)$ and $M^{(2)}_f(r)$ were defined in (5.4) and (5.5). We have inserted a redundant fraction in order to ease the application of the Petersson formula later. The Dirichlet series for $L(1,\mathrm{sym}^2 f)$ is not absolutely convergent, but for almost all $f$ we can represent this value by a short Dirichlet polynomial. More precisely, the following holds:

Lemma 6.1.
Given $\delta_1,\delta_2 > 0$, there is $\delta_3 > 0$ such that
\[
(6.2)\qquad L(1,\mathrm{sym}^2 f) = \sum_{d_1,d_2}\frac{\lambda_f(d_1^2)}{d_1d_2^2}\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big) + O(k^{-\delta_3})
\]
for all but $O(k^{\delta_2})$ cusp forms $f\in B_{k+1}$.

Proof. This follows from the zero-density estimate [LW, Theorem 1]: given $0 < \eta < 1/100$, define
\[
R(\eta) := \{s\in\mathbb C \mid \sigma\ge 1-\eta,\ |t|\le 100k^{\eta}\}\cup\{s\in\mathbb C\mid\sigma\ge1\}
\]
and
\[
B^+_{k+1}(\eta) := \{f\in B_{k+1}\mid L(s,\mathrm{sym}^2 f)\ne 0\ \text{for } s\in R(\eta)\}.
\]
Then $\#\big(B_{k+1}\setminus B^+_{k+1}(\eta)\big)\ll k^{31\eta}$ by [LW, (1.11)]. For $f\in B^+_{k+1}(\eta)$ it follows by standard complex analysis (see e.g. [Lu1, Lemma 2]) that $L(s,\mathrm{sym}^2 f)\ll k^{\varepsilon}$ for $s\in R(\eta/2)$. Let $C(\eta)$ denote the boundary of $R(\eta/2)$. Then
\[
L(1,\mathrm{sym}^2 f) = \sum_{d_1,d_2}\frac{\lambda_f(d_1^2)}{d_1d_2^2}\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big) - \int_{C(\eta)}L(s,\mathrm{sym}^2 f)\,\Gamma(s-1)\,k^{\delta_1(s-1)}\,ds
\]
for $f\in B^+_{k+1}(\eta)$, and the integral is $O(k^{-\delta_1\eta/2+\varepsilon})$. The lemma follows with $\delta_3 < \delta_1\delta_2/62$.

By Lemma 6.1 we obtain
\[
(6.3)\qquad S = \frac{\pi^2}{15}\cdot\frac{12}{k}\sum_{f\in B_{k+1}}\frac{1}{L(1,\mathrm{sym}^2 f)}\sum_{d_1,d_2}\frac{\lambda_f(d_1^2)}{d_1d_2^2}\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big)\sum_{(r,s)=1}\frac{\mu(r)\mu(s)^2}{r^{3/2}s^3}\Big(M^{(1)}_f(r)+M^{(2)}_f(r)\Big) + O\big(k^{-\delta_3+\varepsilon}+k^{\delta_2-1+\varepsilon}\big).
\]
The error term comes from two sources: the error in Lemma 6.1, and the bad forms $f$ for which (6.2) does not hold, in which case we estimate trivially using (5.3) and Theorem 1.4. We proceed to estimate the two main terms in (6.3), which we call $S^{(1)}$ and $S^{(2)}$. By the Hecke relations we have
\[
S^{(1)} = \frac{2\pi^2}{15\zeta(2)}\cdot\frac{12}{k}\sum_{f\in B_{k+1}}\frac{1}{L(1,\mathrm{sym}^2 f)}\sum_{\substack{d_1,d_2,a,m_1,m_2,r,s\\ (ar,s)=1}}\ \sum_{h|(m_1^2,r^2)}\frac{\mu(ar)\,\mu(s)^2\,\mu(a)\,\lambda_f(m_1^2r^2/h^2)\,\lambda_f(d_1^2)}{r^2s^3a^3m_1m_2^2\,d_1d_2^2}\,W(rm_1^2m_2^4a^3)\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big).
\]
We are now in a position to apply the Petersson formula a second time. The diagonal term equals
\[
S^{(11)} = \frac{2\pi^2}{15\zeta(2)^2}\sum_{\substack{d_2,a,m_1,m_2,r,s\\ (ra,s)=1}}\ \sum_{h|(m_1^2,r^2)}\frac{\mu(a)\,\mu(s)^2\,\mu(ra)\,h}{r^3m_1^2a^3m_2^2s^3d_2^2}\,W(rm_1^2a^3m_2^4)\exp\Big(-\frac{m_1rd_2^2/h}{k^{\delta_1}}\Big).
\]
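The Hecke relations used above to expand products like $\lambda_f(m_1^2r^2/h^2)\lambda_f(d_1^2)$ are the standard multiplicativity relations for Hecke eigenvalues. They can be illustrated concretely (an illustration, not part of the argument) on the weight-12 discriminant form $\Delta = q\prod_{n\ge1}(1-q^n)^{24}$, whose coefficients $\tau(n)$ are computed below by expanding the product:

```python
# Illustration (not from the paper) of the Hecke relations invoked above, on the
# weight-12 cusp form Delta = q * prod_{n>=1} (1 - q^n)^24 with coefficients tau(n):
#   tau(m) tau(n) = tau(mn)                    for (m, n) = 1,
#   tau(p) tau(p) = tau(p^2) + p^11 tau(1)     (Hecke relation at a prime p).
N = 40
P = [0] * (N + 1)     # coefficients of prod_{n=1}^{N} (1 - q^n)^24, truncated at q^N
P[0] = 1
for n in range(1, N + 1):
    for _ in range(24):
        # multiply P by (1 - q^n) in place; descending d keeps P[d-n] unmodified
        for d in range(N, n - 1, -1):
            P[d] -= P[d - n]
tau = [0] * (N + 2)
for d in range(N + 1):
    tau[d + 1] = P[d]                          # Delta = q * P(q), so tau(n) = P[n-1]

assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
assert tau[6] == tau[2] * tau[3]               # multiplicativity, (2, 3) = 1
assert tau[4] == tau[2] ** 2 - 2 ** 11         # Hecke relation at p = 2
assert tau[12] == tau[3] * tau[4]              # multiplicativity, (3, 4) = 1
```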
By Mellin inversion and a straightforward computation with Euler products we obtain
\[
S^{(11)} = \frac{2\pi^2}{15\zeta(2)^2}\int_{(1)}\int_{(1)}L(u,v)\,\widehat W(u)\,\Gamma(v)\,k^{\delta_1v}\,\frac{du}{2\pi i}\,\frac{dv}{2\pi i},
\]
where
\[
L(u,v) := \zeta(2+4u)\,\zeta(2+2u+v)\,\zeta(2+2v)\prod_p\Big(1+\frac{1}{p^3}-\frac{1}{p^{3+u+v}}-\frac{1}{p^{4+3u+v}}\Big).
\]
We shift the contours to $\Re u = \Re v = -\frac15$, pick up the poles of $\widehat W$ and $\Gamma$ at $u = 0$ and $v = 0$, and obtain
\[
(6.4)\qquad S^{(11)} = \frac{2\pi^2\,\zeta(2)}{15\,\zeta(2)^2}\cdot\frac{\zeta(2)^2}{\zeta(4)} + O\big(k^{-2/5}+k^{-\delta_1/5}\big) = 2 + O\big(k^{-2/5}+k^{-\delta_1/5}\big).
\]
The off-diagonal contribution equals
\[
S^{(12)} = 2\pi i^{-k}\,\frac{2\pi^2}{15\zeta(2)^2}\sum_{\substack{d_1,d_2,a,m_1,m_2,r,s\\ (ar,s)=1}}\ \sum_{h|(m_1^2,r^2)}\sum_c\frac{\mu(ar)\,\mu(s)^2\,\mu(a)}{c\,r^2s^3a^3m_1m_2^2\,d_1d_2^2}\,S\Big(\frac{m_1^2r^2}{h^2},\, d_1^2,\, c\Big)\,W(rm_1^2m_2^4a^3)\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big)\,J_k\Big(\frac{4\pi m_1rd_1}{hc}\Big).
\]
By the rapid decay of the Bessel function near 0 we can truncate the $c$-sum at $c\le 100\,\frac{m_1rd_1}{hk}$. We use the trivial bounds
\[
(6.5)\qquad |S(*,*,c)|\le c, \qquad J_k(x)\ll k^{-1/3}
\]
to see that
\[
(6.6)\qquad S^{(12)}\ll k^{-1/3+\delta_1+\varepsilon}.
\]
Next we turn to the estimation of $S^{(2)}$. Let $0 < \delta_4 < 1/10$. By (5.3) and Theorem 1.4 we can truncate the $r$-sum at $r\le k^{\delta_4}$ at the cost of an error $O(k^{-\delta_4/2+\varepsilon})$. Hence we are left with bounding
\[
S^{(2)}(N,C) := \frac{12}{k}\sum_{f\in B_{k+1}}\frac{1}{L(1,\mathrm{sym}^2 f)}\sum_{d_1,d_2}\frac{\lambda_f(d_1^2)}{d_1d_2^2}\exp\Big(-\frac{d_1d_2^2}{k^{\delta_1}}\Big)\sum_{\substack{r\le k^{\delta_4}\\ (r,s)=1}}\frac{\mu(r)\mu(s)^2}{r^{3/2}s^3}\,M^{(2)}_f(r,N,C)
\]
with $M^{(2)}_f(r,N,C)$ as in (5.9) and $N, C$ as in (5.6). We insert Lemmas 5.1 (in the form of (5.10)) and 5.2 and conclude
\[
S^{(2)}(N,C) \ll \sum_{d_1\le k^{\delta_1+\varepsilon}}\ \sum_{r\le k^{\delta_4}}\ \sum_m\ \sum_{C\le c\le 2C}T(d_1,r,m,c,N) + O(k^{-100}),
\]
where
\[
T(d_1,r,m,c,N) = \sum_{\substack{n_2n_1^2\le k^{\varepsilon}N^{1/2}r^{3/2}m\\ n_1|c}}\frac{n_1\,\tau(c)\,(c,m)}{d_1\,r^{7/4}\,m^2\,N^{1/2}}\,\Big|\frac{12}{k}\sum_{f\in B_{k+1}}\frac{\lambda_f(d_1^2)\,A(n_2,n_1)}{L(1,\mathrm{sym}^2 f)}\Big|
\ll k^{\varepsilon}\sum_{\substack{a^3l_1^2l_2n_1^2n_2\le k^{\varepsilon}N^{1/2}r^{3/2}m\\ al_1n_1|c}}\frac{al_1n_1\,\tau(c)\,(c,m)}{d_1\,r^{7/4}\,m^2\,N^{1/2}}\sum_{h|(n_1^2,n_2^2)}\Big|\frac{12}{k}\sum_{f\in B_{k+1}}\frac{\lambda_f(d_1^2)\,\lambda_f(n_1^2n_2^2/h^2)}{L(1,\mathrm{sym}^2 f)}\Big|.
\]
One last time we apply the Petersson formula.
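Both Bessel-function facts used above — the rapid decay of $J_k(x)$ for $x$ well below $k$ (behind the $c$-sum truncations) and the trivial bound $J_k(x)\ll k^{-1/3}$ of (6.5) — can be observed numerically (an illustration, not from the paper):

```python
# Numerical illustration (not from the paper) of the Bessel facts used above:
# J_k(x) is extremely small for x well below k (justifying the c-sum truncations),
# and sup_x |J_k(x)| is of size about 0.67 * k^(-1/3), consistent with (6.5).
import numpy as np
from scipy.special import jv

k = 100
assert abs(jv(k, k / 2)) < 1e-10          # decay below the turning point x ~ k
x = np.linspace(k, k + 20, 4000)          # the maximum sits near x = k + 0.81 k^(1/3)
peak = float(np.max(np.abs(jv(k, x))))
assert 0.1 < peak < 0.2                   # ~ 0.6749 * k^(-1/3) ~ 0.145 for k = 100
```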
For the off-diagonal term we apply as before only the trivial bounds (6.5), and we truncate the series appropriately using the rapid decay of the Bessel function near 0. Hence
\[
T(d_1,r,m,c,N) \ll \sum_{\substack{a^3l_1^2l_2n_1^2n_2\le k^{\varepsilon}N^{1/2}r^{3/2}m\\ al_1n_1|c}}\frac{al_1n_1\,\tau(c)\,(c,m)}{d_1\,r^{7/4}\,m^2\,N^{1/2}}\sum_{h|(n_1^2,n_2^2)}\Big(\delta_{d_1h=n_1n_2} + O\Big(\frac{d_1n_1n_2}{h\,k^{4/3}}\Big)\Big).
\]
Now it is just a matter of book-keeping, but we can simplify our task by noticing that (5.6) and (5.7) imply that $m$ and $c$, and hence $a, l_1, n_1$, are $O(k^{\delta_4/2+\varepsilon})$, and $h = O(k^{\delta_4+\varepsilon})$. Hence
\[
S^{(2)}(N,C) \ll k^{100(\delta_4+\delta_1)-1}\Big(1 + \sum_{l_2n_2\le k^{1+100\delta_4}}\frac{n_2}{k^{4/3}}\Big) \ll k^{-1/3+O(\delta_4+\delta_1)}.
\]
Combining this with (6.3), (6.4) and (6.6) and choosing $\delta_1,\delta_2,\delta_4$ sufficiently small, the proof is complete.

7. A geodesic restriction problem

In this section we prove Theorem 1.7. For the convenience of the reader, we first indicate a proof of (1.12). By (2.1), an $L^2$-normalized cuspidal Hecke eigenform has the Fourier expansion
\[
(7.1)\qquad f(z) = a_f(1)\sum_{n=1}^{\infty}\lambda_f(n)(4\pi n)^{(k-1)/2}e(nz), \qquad |a_f(1)|^2 = \frac{2\pi^2}{L(1,\mathrm{sym}^2 f)\,\Gamma(k)}.
\]
We compute the Mellin transform of $f(iy)y^{k/2}$:
\[
\int_0^{\infty}f(iy)\,y^{k/2}\,y^s\,\frac{dy}{y} = \frac{a_f(1)\,2^{k/2}}{\sqrt{4\pi}}\,L\Big(\frac12+s, f\Big)\,\frac{\Gamma(s+\frac k2)}{(2\pi)^s}.
\]
By Parseval we obtain
\[
I = \frac{1}{2\pi}\,|a_f(1)|^2\,\frac{2^k}{4\pi}\int_{-\infty}^{\infty}\Big|L\Big(\frac12+it, f\Big)\Big|^2\,\Big|\Gamma\Big(it+\frac k2\Big)\Big|^2\,dt,
\]
and (1.12) follows. We proceed to prove Theorem 1.7. We can spectrally decompose $f^2$ into cusp forms of weight $2k$, getting
\[
(7.2)\qquad I = \sum_{g\in B_{2k}}\langle F^2, G\rangle\int_0^{\infty}g(iy)\,y^k\,\frac{dy}{y} = \sum_{g\in B_{2k}}\langle F^2, G\rangle\,\frac{a_g(1)\,2^k}{\sqrt{4\pi}}\,L\Big(\frac12, g\Big)\Gamma(k),
\]
where $|a_g(1)|^2 = \frac{2\pi^2}{L(1,\mathrm{sym}^2 g)\,\Gamma(2k)}$ is defined as in (7.1) and $G(z) = g(z)y^k$. We insert (2.6) with $f = h$ and use Cauchy–Schwarz together with the bound
\[
\frac{2^k\,\Gamma(k)}{\Gamma(2k)^{1/2}} \ll k^{-1/4}
\]
to conclude (again by positivity)
\[
I \ll k^{-3/4+\varepsilon}\Big(\sum_{g\in B_{2k}}L\Big(\frac12,\mathrm{sym}^2 f\times g\Big)\Big)^{1/2}\Big(\sum_{g\in B_{2k}}L\Big(\frac12,g\Big)^3\Big)^{1/2}.
\]
For both factors on the right-hand side we have best-possible bounds; the former is given in Theorem 1.4, the latter in [Pe, Theorem 3.1.1, p. 36].
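The Parseval step in the proof of (1.12) above is the standard Mellin–Plancherel identity. For the model function $f(y) = e^{-y}y^a$, whose Mellin transform is $\Gamma(a+it)$, it reads $\int_0^\infty |f(y)|^2\,\frac{dy}{y} = \frac{1}{2\pi}\int_{\mathbb R}|\Gamma(a+it)|^2\,dt$; the sketch below (a model computation, not from the paper) checks this numerically.

```python
# Check (model computation, not from the paper) of the Mellin-Parseval identity
# underlying the proof of (1.12): for f(y) = exp(-y) y^a one has
#   int_0^inf |f(y)|^2 dy/y = (1 / 2 pi) int_R |Gamma(a + i t)|^2 dt,
# since int_0^inf f(y) y^{it} dy/y = Gamma(a + i t).
import math
from scipy.integrate import quad
from scipy.special import gamma

a = 2.0
lhs = math.gamma(2 * a) / 2 ** (2 * a)    # int_0^inf exp(-2y) y^{2a-1} dy = Gamma(2a)/2^{2a}
rhs, _ = quad(lambda t: abs(gamma(a + 1j * t)) ** 2, -40, 40)  # tail beyond |t|=40 is negligible
rhs /= 2 * math.pi
assert abs(lhs - rhs) < 1e-6
```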
Remark: We also observe that (7.2) indicates
\[
k^{-1/2}\sum_{g\in B_{2k}}\langle F^2, G\rangle\,\frac{L(\frac12,g)}{L(1,\mathrm{sym}^2 g)} = k^{o(1)},
\]
where each term in the sum is (on Lindelöf) of order $k^{-1/4+o(1)}$. Hence there is some cancellation in this sum, but not square-root cancellation; in other words, the real number $\langle F^2, G\rangle$ seems to have a slight tendency to be positive. In this context we remark that in the case of Maaß forms, Biró [Bi] has given an interesting formula for the triple product itself (not the square of its absolute value) in terms of a triple product over half-integral weight forms.

8. A general stationary phase lemma with smooth weights

The main result of this section evaluates asymptotically fairly arbitrary smooth oscillating integrals. As mentioned in the introduction, this result is more general than needed for the immediate purposes of the present paper. We begin with a preparatory lemma which records conditions under which repeated integration by parts shows that an oscillatory integral is very small. This is similar in spirit to [JM, Lemma 6].

Lemma 8.1. Let $Y\ge1$, $X, Q, U, R > 0$, and suppose that $w$ is a smooth function with support on $[\alpha,\beta]$, satisfying $w^{(j)}(t)\ll_j XU^{-j}$. Suppose $h$ is a smooth function on $[\alpha,\beta]$ such that
\[
(8.1)\qquad |h'(t)|\ge R
\]
for some $R>0$, and
\[
(8.2)\qquad h^{(j)}(t)\ll_j YQ^{-j}, \qquad j = 2, 3, \dots.
\]
Then the integral $I$ defined by $I = \int_{-\infty}^{\infty}w(t)\,e^{ih(t)}\,dt$ satisfies
\[
(8.3)\qquad I \ll_A (\beta-\alpha)\,X\,\big[(QR/\sqrt Y)^{-A} + (RU)^{-A}\big].
\]
This should be interpreted as follows: the integral $I$ is negligible if $RU$ and $QRY^{-1/2}$ are both significantly bigger than 1. The variables $X, Y$ measure the size of $w$ and $h$, the variables $U, Q$ the "flatness" of $w$ and $h$. In practice, $R$, $Y$ and $Q$ are often not independent. A typical case is that (8.2) holds for $j = 1$ as well, and one has $Y/Q\asymp R$. Then $RU$ is big if, roughly speaking, $e^{ih(t)}$ oscillates more than $w$, and $QRY^{-1/2}\asymp Y^{1/2}$ is also big as long as $e^{ih(t)}$ has some oscillation. These are natural conditions away from the stationary point.
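The mechanism of Lemma 8.1 — an oscillatory integral with no stationary point and a smooth compactly supported weight is negligibly small — can be seen on a toy example (not from the paper): take the standard bump $w(t) = e^{-1/(1-t^2)}$ on $(-1,1)$ and the linear phase $h(t) = \lambda t$, so that $|h'| = \lambda$ and all higher derivatives of $h$ vanish.

```python
# Toy illustration (not from the paper) of Lemma 8.1: for the bump function
# w(t) = exp(-1/(1 - t^2)) on (-1, 1) and the linear phase h(t) = lambda * t
# (so |h'| = lambda, and h^(j) = 0 for j >= 2), the integral I = int w e^{ih} dt
# decays faster than any power of lambda.
import math
from scipy.integrate import quad

def w(t):
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def osc_int(lam):
    re, _ = quad(lambda t: w(t) * math.cos(lam * t), -1, 1, limit=400)
    im, _ = quad(lambda t: w(t) * math.sin(lam * t), -1, 1, limit=400)
    return complex(re, im)

i0 = osc_int(0.0)
i200 = osc_int(200.0)
assert abs(i0) > 0.4            # the un-oscillated integral is ~ 0.444
assert abs(i200) < 1e-4         # at lambda = 200 it is already tiny
```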
A nice feature of Lemma 8.1 is that it can quickly show that $I$ is extremely small even if $QR/\sqrt Y$ and $RU$ are tending to infinity rather slowly.

Proof. Define the differential operator
\[
D(f)(t) := -\frac{d}{dt}\Big(\frac{f}{ih'}\Big)(t)
\]
for a smooth function $f$ with compact support, so that
\[
(8.4)\qquad \int_{-\infty}^{\infty}f(t)\,e^{ih(t)}\,dt = \int_{-\infty}^{\infty}D^n(f)(t)\,e^{ih(t)}\,dt
\]
for any $n\in\mathbb N_0$. It is easy to see by induction that
\[
(8.5)\qquad D^n(f)(t) = \sum_{\nu=n}^{2n}\sum_{\mu=0}^{\nu}\frac{f^{(\mu)}(t)}{h'(t)^{\nu}}\sum_{2\gamma_2+\dots+\nu\gamma_{\nu}=\nu-\mu}c_{\nu,\mu,\gamma_2,\dots,\gamma_{\nu}}\,\big(h^{(2)}(t)\big)^{\gamma_2}\cdots\big(h^{(\nu)}(t)\big)^{\gamma_{\nu}}
\]
for certain absolute coefficients $c_{\nu,\mu,\gamma_2,\dots,\gamma_{\nu}}\in\mathbb C$ and any $n\in\mathbb N_0$. Then
\[
(8.6)\qquad |I| \le (\beta-\alpha)\,\|D^n(w)\|_{\infty} \ll (\beta-\alpha)\,X\sum_{\nu=n}^{2n}R^{-\nu}\sum_{\mu=0}^{\nu}U^{-\mu}\,\frac{Y^{(\nu-\mu)/2}}{Q^{\nu-\mu}},
\]
which quickly leads to (8.3).

Proposition 8.2. Let $0<\delta<1/10$, $X, Y, V, V_1, Q > 0$, $Z := Q+X+Y+V_1+1$, and assume that
\[
(8.7)\qquad Y\ge Z^{3\delta}, \qquad V_1\ge V\ge \frac{Q\,Z^{\delta/2}}{Y^{1/2}}.
\]
Suppose that $w$ is a smooth function on $\mathbb R$ with support on an interval $J$ of length $V_1$, satisfying $w^{(j)}(t)\ll_j XV^{-j}$ for all $j\in\mathbb N_0$. Suppose $h$ is a smooth function on $J$ such that there exists a unique point $t_0\in J$ such that $h'(t_0) = 0$, and furthermore
\[
(8.8)\qquad h''(t)\gg YQ^{-2}, \qquad h^{(j)}(t)\ll_j YQ^{-j}, \qquad j = 1,2,3,\dots,\ t\in J.
\]
Then the integral $I$ defined by $I = \int_{-\infty}^{\infty}w(t)\,e^{ih(t)}\,dt$ has an asymptotic expansion of the form
\[
(8.9)\qquad I = \frac{e^{ih(t_0)}}{\sqrt{h''(t_0)}}\sum_{n\le 3\delta^{-1}A}p_n(t_0) + O_{A,\delta}(Z^{-A}), \qquad p_n(t_0) = \frac{\sqrt{2\pi}\,e^{\pi i/4}}{n!}\Big(\frac{i}{2h''(t_0)}\Big)^nG^{(2n)}(t_0),
\]
where $A>0$ is arbitrary, and
\[
(8.10)\qquad G(t) = w(t)\,e^{iH(t)}, \qquad H(t) = h(t) - h(t_0) - \tfrac12h''(t_0)(t-t_0)^2.
\]
Furthermore, each $p_n$ is a rational function in $h'', h''', \dots$, satisfying
\[
(8.11)\qquad \frac{d^j}{dt_0^j}p_n(t_0) \ll_{j,n} X\big(V^{-j}+Q^{-j}\big)\Big(\Big(\frac{V^2Y}{Q^2}\Big)^{-n} + Y^{-n/3}\Big).
\]
The leading term
\[
\frac{\sqrt{2\pi}\,e^{\pi i/4}\,e^{ih(t_0)}}{\sqrt{h''(t_0)}}\,w(t_0) \ll \frac{QX}{Y^{1/2}}
\]
in this asymptotic expansion is well known and can be found in many sources, but it can be difficult to find the full expansion in the literature. It is desirable to have such an expansion even for a (slightly) oscillating weight function $w$ (cf. the end of the proof of Lemma 5.1 for an example), in which case $V$ is a bit smaller than $V_1$. Flexibility of the parameters $V$ and $V_1$ is also useful in situations where one has several stationary points moving towards each other (in which case one splits the range of integration into sufficiently small subintervals). The conditions (8.7) and the bound (8.11) imply automatically that each term in the asymptotic expansion (8.9) is smaller than the preceding term. Observe that the second condition in (8.7) cannot be relaxed much, because if $V_1\ll Q^{1-\varepsilon}/\sqrt Y$ then the trivial bound is smaller than the main term in (8.9).

Corollary 8.3. Assume the conditions of Proposition 8.2. There exists a function $w_0(t)$ supported on the interval $[-1,1]$ such that with any $T\asymp Z^{\varepsilon}(h''(t_0))^{-1/2}$, we have
\[
(8.12)\qquad \int_{-\infty}^{\infty}w(t)\,e^{ih(t)}\,dt = \int_{-\infty}^{\infty}w(t)\,w_0\Big(\frac{t-t_0}{T}\Big)e^{ih(t)}\,dt + O_{A,\varepsilon}(Z^{-A}).
\]
We will derive Corollary 8.3 in the course of the proof of Proposition 8.2. The nice feature here is that the trivial bound applied to the right-hand side of (8.12) is only slightly worse than the main term in Proposition 8.2, but the form of the expression may be easier to handle for further manipulations. For example, one may wish to study a multi-dimensional oscillatory integral by focusing on one variable at a time. If one applies stationary phase in terms of one of the variables, then the stationary point $t_0$ may depend implicitly on the other variables; this may make the further analysis more challenging. The right-hand side of (8.12) has the pleasant feature that $t_0$ only appears in the argument of $w_0$ and not in $h$, whereas it occurs in both the phase and the weight function in (8.9).

Proof. Let $U\le V$ be a parameter satisfying
\[
\frac{YU^2}{Q^2}\ge Z^{\delta}, \qquad \frac{YU^3}{Q^3}\le 1.
\]
This is possible for $0<\delta\le1/10$ by (8.7). Fix a smooth, compactly supported function $w_0$ satisfying $w_0(x) = 1$ for $|x|<1/2$, and consider
\[
I_0 = \int_{-\infty}^{\infty}w(t)\Big(1 - w_0\Big(\frac{t-t_0}{U}\Big)\Big)e^{ih(t)}\,dt.
\]
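The leading term of Proposition 8.2 can be tested against a case with a closed form (a toy case, not from the paper): for $w(t) = e^{-t^2}$ and $h(t) = At^2$ the integral is exactly $\sqrt{\pi/(1-iA)}$, while the leading term $\sqrt{2\pi/h''(0)}\,e^{\pi i/4}\,w(0) = \sqrt{\pi/A}\,e^{\pi i/4}$; the two agree to relative error of size about $1/A$.

```python
# Test (toy case, not from the paper) of the leading term in Proposition 8.2:
# for w(t) = exp(-t^2) and h(t) = A t^2 (stationary point t0 = 0, h''(0) = 2A),
#   I = int_R exp(-t^2) exp(i A t^2) dt = sqrt(pi / (1 - i A))       (exact),
#   leading term = sqrt(2 pi / h''(0)) e^{i pi/4} w(0) = sqrt(pi / A) e^{i pi/4}.
import cmath, math

A = 200.0
exact = cmath.sqrt(math.pi / (1 - 1j * A))
leading = math.sqrt(math.pi / A) * cmath.exp(1j * math.pi / 4)
rel_err = abs(exact - leading) / abs(exact)
assert rel_err < 1e-2            # relative error ~ 1/(2A) here
```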
Notice that with $f(t) = w(t)\big(1-w_0\big(\frac{t-t_0}{U}\big)\big)$, one has
\[
(8.13)\qquad f^{(j)}\ll_j XU^{-j}\ \ (j=1,2,\dots), \qquad |h'(t)| \gg |t-t_0|\min_{|\xi-t_0|\le|t-t_0|}|h''(\xi)| \gg \frac{UY}{Q^2}\quad (t\in\operatorname{supp}(f)).
\]
Then we apply Lemma 8.1 with $\beta-\alpha = V_1$, $R\asymp UY/Q^2$, to obtain
\[
(8.14)\qquad I_0\ll_{A,\delta}Z^{-A},
\]
where $A>0$ is arbitrarily large, since $U^2Y/Q^2\ge Z^{\delta}$. Hence
\[
I = \int_{-\infty}^{\infty}w(t)\,w_0\Big(\frac{t-t_0}{U}\Big)e^{ih(t)}\,dt + O_{A,\delta}(Z^{-A}) =: I_1 + O_{A,\delta}(Z^{-A}),
\]
say. By choosing $U\asymp Z^{\varepsilon}h''(t_0)^{-1/2}$, we obtain Corollary 8.3. Writing a Taylor expansion for $h(t)$ around $t_0$, we have
\[
h(t) = h(t_0) + \frac{h''(t_0)(t-t_0)^2}{2!} + H(t), \qquad\text{where } H(t) = \frac{h'''(t_0)(t-t_0)^3}{3!} + \dots.
\]
Notice that
\[
H'\ll\frac{U^2Y}{Q^3}, \qquad H''\ll\frac{UY}{Q^3}, \qquad H^{(j)} = h^{(j)}\ll YQ^{-j}\ \text{ for } j\ge3.
\]
By (8.7) this implies $H^{(j)}\ll U^{-j}$ for $j = 1,2,\dots$. With this notation we recast $I_1$ as
\[
I_1 = e^{ih(t_0)}\int_{-\infty}^{\infty}g(t)\,e^{ih''(t_0)(t-t_0)^2/2}\,dt, \qquad g(t) = w(t)\,w_0\Big(\frac{t-t_0}{U}\Big)e^{iH(t)}.
\]
Observe that $g^{(j)}\ll XU^{-j}$. This integral can be evaluated in a number of ways and its asymptotic expansion is easily found. One simple way is to write, for a small parameter $\varepsilon$ to be chosen in a moment,
\[
g(t) = \int_{-\infty}^{\infty}\hat g(y)\,e(ty)\,dy = \int_{|y|\le U^{-1}Z^{\varepsilon}}\hat g(y)\,e(ty)\,dy + O_{\varepsilon,A}(Z^{-A}),
\]
reverse the orders of integration, complete the square, and evaluate the Gaussian integral. It becomes
\[
I_1 = \frac{\sqrt{2\pi}\,e^{\pi i/4}\,e^{ih(t_0)}}{\sqrt{h''(t_0)}}\int_{|y|\le U^{-1}Z^{\varepsilon}}\hat g(y)\exp\Big(2\pi iyt_0 - \frac{2\pi^2iy^2}{h''(t_0)}\Big)\,dy + O_{\varepsilon,A}(Z^{-A}).
\]
Next we note that $y^2/h''(t_0)\ll Y^{-1}Q^2U^{-2}Z^{2\varepsilon}\le Z^{2\varepsilon-\delta}$. Now we choose $\varepsilon = \delta/4$, so that the preceding quantity is $O(Z^{-\delta/2})$. Hence by another Taylor development we obtain
\[
I_1 = \frac{\sqrt{2\pi}\,e^{\pi i/4}\,e^{ih(t_0)}}{\sqrt{h''(t_0)}}\sum_{n\le N}\frac{1}{n!}\Big(\frac{-2\pi^2i}{h''(t_0)}\Big)^n\int_{|y|\le U^{-1}Z^{\varepsilon}}y^{2n}\,\hat g(y)\,e(yt_0)\,dy + O_{\delta,N}\big(XZ^{-\frac{\delta N}{2}+\varepsilon}\big) + O_A(Z^{-A})
\]
for any integer $N$. We choose $N = \lfloor 3A\delta^{-1}\rfloor$. Next we extend the integral to the whole real line without making a new error term, and use
\[
\int_{-\infty}^{\infty}y^m\,\hat g(y)\,e(yt_0)\,dy = \Big(\frac{-i}{2\pi}\Big)^m g^{(m)}(t_0),
\]
which gives
\[
I_1 = \frac{e^{ih(t_0)}}{\sqrt{h''(t_0)}}\sum_{n\le3\delta^{-1}A}\frac{\sqrt{2\pi}\,e^{\pi i/4}}{n!}\Big(\frac{i}{2h''(t_0)}\Big)^n g^{(2n)}(t_0) + O_{\delta,A}(Z^{-A}).
\]
This is the desired asymptotic expansion, upon noting that $g^{(m)}(t_0) = G^{(m)}(t_0)$ with $G$ as in (8.10), since $w_0\big(\frac{t-t_0}{U}\big)$ is identically 1 in a neighbourhood of $t_0$. To finish the proof, we show that (8.11) holds. We recall the definition of $H$ in (8.10) and notice that $H^{(j)}(t_0) = 0$ for $j = 0,1,2$, and $H^{(j)}(t_0) = h^{(j)}(t_0)$ for $j\ge3$. Then we see that $G^{(2n)}(t_0)$ is a sum of (scalar multiples of) terms of the form
\[
w^{(\nu_0)}(t_0)\,H^{(\nu_1)}(t_0)\cdots H^{(\nu_l)}(t_0), \qquad \nu_0+\dots+\nu_l = 2n.
\]
Hence we see that
\[
G^{(2n)}(t_0) \ll X\big(V^{-2n} + (Q^3/Y)^{-2n/3}\big),
\]
the two extreme cases being $\nu_0 = 2n$, and $\nu_0 = 0$, $\nu_1 = \nu_2 = \dots = \nu_l = 3$. Then each time we differentiate $G^{(2n)}(t_0)$ with respect to $t_0$ we save either a factor $Q$ or a factor $V$, and so
\[
\frac{d^j}{dt_0^j}G^{(2n)}(t_0) \ll X\big(V^{-j}+Q^{-j}\big)\big(V^{-2n}+(Q^3/Y)^{-2n/3}\big).
\]
By the easily verifiable formula
\[
\frac{d^j}{dx^j}\frac{1}{F(x)} = \sum_{l=0}^{j}(-1)^l\,\frac{j+1}{1+l}\binom{j}{l}\,\frac{\frac{d^j}{dx^j}\big(F(x)^l\big)}{F(x)^{1+l}}
\]
we also have that
\[
\frac{d^j}{dt_0^j}\frac{1}{(h''(t_0))^n} \ll Q^{-j}\Big(\frac{Q^2}{Y}\Big)^n,
\]
and (8.11) follows.

We would like to thank G. Harcos for pointing this out.

References

[BR] J. Bernstein, A. Reznikov, Subconvexity bounds for triple L-functions and representation theory, Annals of Math. 172 (2010), 1679-1718.
[Be] M. Berry, Regular and irregular semiclassical wavefunctions, J. Phys. A 10 (1977), 2083-2091.
[Bi] A. Biró, A relation between triple products of weight 0 and weight 1/2 cusp forms, Isr. J. Math. 182 (2011), 61-101.
[Bl] V. Blomer, On the 4-norm of an automorphic form, to appear in J. Eur. Math. Soc.
[Bo] J. Bourgain, Geodesic restrictions and L^p-estimates for eigenfunctions of Riemannian surfaces, in: Linear and complex analysis, Amer. Math. Soc. Transl. Ser. 2, 226 (2009), 27-35, Amer. Math. Soc., Providence, RI.
[CFKRS] J. Conrey, D. Farmer, J. Keating, M. Rubinstein, N. Snaith, Integral moments of L-functions, Proc. LMS 91 (2005), 33-104.
[EZ] M. Eichler, D. Zagier, The theory of Jacobi forms, Progress in Mathematics 55, Birkhäuser Boston, Inc., Boston, MA, 1985.
[Go] D. Goldfeld, Automorphic forms and L-functions for the group GL(n, R), Cambridge Studies in Advanced Mathematics 99, Cambridge University Press, 2006.
[GR] I. S. Gradshteyn, I. M. Ryzhik, Table of integrals, series, and products, Academic Press Inc., 2000.
[HR] D. Hejhal, B. Rackner, On the topography of Maass waveforms for PSL(2, Z), Exp. Math. 1 (1992), 275-305.
[HS] D. Hejhal, A. Strömbergsson, On quantum chaos and Maass waveforms of CM-type, Found. Phys. 31 (2001), 519-533.
[HL] J. Hoffstein, P. Lockhart, Coefficients of Maass forms and the Siegel zero, with an appendix by D. Goldfeld, J. Hoffstein and D. Lieman, Ann. of Math. (2) 140 (1994), no. 1, 161-181.
[HoS] R. Holowinsky, K. Soundararajan, Mass equidistribution for Hecke eigenforms, Ann. of Math. (2) 172 (2010), 1517-1528.
[Ic] A. Ichino, Pullbacks of Saito-Kurokawa lifts, Invent. Math. 162 (2005), 551-647.
[Iw1] H. Iwaniec, Topics in Classical Automorphic Forms, Grad. Stud. Math. 17, Amer. Math. Soc., 1997.
[Iw2] H. Iwaniec, The spectral growth of automorphic L-functions, J. Reine Angew. Math. 428 (1992), 139-159.
[Iw3] H. Iwaniec, Spectral methods of automorphic forms, Grad. Stud. Math. 53, American Mathematical Society, Providence, RI.
[IK] H. Iwaniec, E. Kowalski, Analytic number theory, AMS Colloquium Publications 53, American Mathematical Society, 2004.
[JS] H. Jacquet, J. Shalika, Exterior square L-functions, in: Automorphic forms, Shimura varieties, and L-functions, Vol. II (Ann Arbor, MI, 1988), 143-226, Perspect. Math. 11, Academic Press, Boston, MA, 1990.
[JM] M. Jutila, Y. Motohashi, Uniform bound for Hecke L-functions, Acta Math. 195 (2005), 61-115.
[KZ] W. Kohnen, D. Zagier, Values of L-series of modular forms at the center of the critical strip, Invent. Math. 64 (1981), 175-198.
[KR] P. Kurlberg, Z. Rudnick, Value distribution for eigenfunctions of desymmetrized quantum maps, IMRN 2001, no. 18, 985-1002.
[La] E. M. Lapid, On the nonnegativity of Rankin-Selberg L-functions at the center of symmetry, Int. Math. Res. Not. 2003, 65-75.
[LW] Y.-K. Lau, J. Wu, A density theorem on automorphic forms and some applications, Trans. Amer. Math. Soc. 358 (2005), 441-472.
[Li] X. Li, Bounds for GL(3) × GL(2) L-functions and GL(3) L-functions, Ann. of Math. 173 (2011), 301-336.
[LY] S.-C. Liu, M. Young, Growth and nonvanishing of restricted Siegel modular forms arising as Saito-Kurokawa lifts, submitted.
[Lu1] W. Luo, Values of symmetric square L-functions at 1, J. Reine Angew. Math. 506 (1999), 215-235.
[Lu2] W. Luo, L^4-norms of the dihedral Maass forms, Int. Math. Res. Not., to appear.
[MS] S. D. Miller, W. Schmid, Automorphic distributions, L-functions, and Voronoi summation for GL(3), Ann. of Math. 164 (2006), 423-488.
[Or] T. Orloff, Special values and mixed weight triple products (with an appendix by Don Blasius), Invent. Math. 90 (1987), 169-188.
[Pe] Z. Peng, Zeros and central values of automorphic L-functions, Princeton PhD thesis, 2001.
[Re1] A. Reznikov, Norms of geodesic restrictions for eigenfunctions on hyperbolic surfaces and representation theory, arXiv:math/0403437, 2004.
[Re2] A. Reznikov, Estimates of triple products of automorphic functions II, arXiv:1202.4766, 2012.
[Sa1] P. Sarnak, Spectra of hyperbolic surfaces, Bull. Amer. Math. Soc. 40 (2003), 441-478.
[Sa2] P. Sarnak, Letter to Andrei Reznikov, June 2008.
[So] K. Soundararajan, Quantum unique ergodicity for SL_2(Z)\H, Annals of Math. 172 (2010), 1529-1538.
[Sp] F. Spinu, The L^4-norm of Eisenstein series, Princeton PhD thesis, 2003.
[Wa] T. Watson, Rankin triple products and quantum chaos, to appear in Annals of Math.
[Xi] H. Xia, On L^∞-norms of holomorphic cusp forms, J. Number Theory 124 (2007), 325-327.

Mathematisches Institut, Georg-August Universität Göttingen, Bunsenstraße 3-5, D-37073 Göttingen, Germany
E-mail address: [email protected]
Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, U.S.A.
E-mail address: [email protected]
E-mail address: [email protected] Institut, Georg-August Universität Göttingen, Bunsenstraße 3-5, D-37073 Göttingen, Germany E-mail address: [email protected], [email protected] Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, U.S.A. E-mail address: [email protected]
[]
[ "Oscillating nematic aerogel in superfluid 3 He", "Oscillating nematic aerogel in superfluid 3 He" ]
[ "V V Dmitriev +1 ", "M S Kutuzov \nMetallurg Engineering Ltd\n11415TallinnEstonia\n", "A A Soldatov ", "E V Surovtsev ", "A N Yudin ", "\n+ P. L.\nKapitza Institute for Physical Problems of RAS\n119334MoscowRussia\n" ]
[ "Metallurg Engineering Ltd\n11415TallinnEstonia", "+ P. L\nKapitza Institute for Physical Problems of RAS\n119334MoscowRussia" ]
[]
We present experiments on nematic aerogel oscillating in superfluid 3 He. This aerogel consists of nearly parallel mullite strands and is attached to a vibrating wire moving along the direction of the strands. Previous nuclear magnetic resonance experiments in 3 He confined in a similar aerogel sample have shown that the superfluid transition of 3 He in aerogel occurs into the polar phase and that the transition temperature (Tca) is only slightly suppressed with respect to the superfluid transition temperature of bulk 3 He. In the present experiments we observed a change in the resonant properties of the vibrating wire at T = Tca and found that below Tca an additional resonance mode is excited, which is coupled to the main resonance.
10.1134/s0021364020240017
[ "https://export.arxiv.org/pdf/2011.13747v1.pdf" ]
227,209,457
2011.13747
a12b5e1deb95a65b308b8d8156cdf910ec23a8b7
Oscillating nematic aerogel in superfluid 3 He 27 Nov 2020 V V Dmitriev +1 M S Kutuzov Metallurg Engineering Ltd 11415 Tallinn Estonia A A Soldatov E V Surovtsev A N Yudin + P. L. Kapitza Institute for Physical Problems of RAS 119334 Moscow Russia Submitted November 30, 2021 We present experiments on nematic aerogel oscillating in superfluid 3 He. This aerogel consists of nearly parallel mullite strands and is attached to a vibrating wire moving along the direction of the strands. Previous nuclear magnetic resonance experiments in 3 He confined in a similar aerogel sample have shown that the superfluid transition of 3 He in aerogel occurs into the polar phase and that the transition temperature (Tca) is only slightly suppressed with respect to the superfluid transition temperature of bulk 3 He. In the present experiments we observed a change in the resonant properties of the vibrating wire at T = Tca and found that below Tca an additional resonance mode is excited, which is coupled to the main resonance. INTRODUCTION Superfluidity of 3 He in aerogel can be investigated using a vibrating wire (VW) resonator immersed in liquid 3 He with an aerogel sample attached to it. In this case the appearance of a superfluid fraction of 3 He in aerogel influences the resonant properties of the VW. Experiments with aerogel attached to the VW have previously been done only with silica aerogel [1,2,3,4], where the superfluid phases (A-like and B-like) have the same order parameters as the A and B phases of bulk 3 He. These experiments have made it possible to estimate the temperature dependence of the superfluid fraction in the A-like and B-like phases, as well as to detect an influence of the superfluid flow on the texture of the order parameter in the A-like phase. In this Letter, we present results of experiments with 3 He in the so-called nematic aerogel using a VW resonator.
The nematic aerogel is a highly porous structure consisting of strands with almost parallel orientation [5]. It has been established that in this case the superfluid transition occurs into a new phase (the polar phase) that exists neither in bulk 3 He nor in 3 He in silica aerogel [6]. The polar phase becomes favorable due to the essentially anisotropic scattering of 3 He quasiparticles inside the aerogel [7,8,9,10]. This phase has a superfluid gap with a line of zeroes in the plane perpendicular to a specific direction [11,12,13]. In nematic aerogel this direction coincides with the average direction of the strands, along which the mean free path of 3 He quasiparticles is maximal [7]. 1) e-mail: [email protected] SAMPLE AND METHODS We used a sample of mullite nematic aerogel which has the form of a cuboid with a size along the strands of ≈ 2.6 mm and characteristic transverse sizes of ∼ 3 × 3 mm. It was cut from a larger piece of the original sample synthesized by Metallurg Engineering Ltd so that it has perfectly flat ends (the planes where the strands begin and end): irregularities are about 100 nm. The sample consists of nearly parallel mullite strands with diameters of ≤ 14 nm (estimated from scanning transmission electron microscope images) and has an overall density of ≈ 150 mg/cm 3 . If we assume that the density of mullite is 3.1 g/cm 3 , then the porosity of the sample is 95.2% and the average distance between the strands is 60 nm. A similar mullite sample (which was cut from the same original piece and was placed in a separate cell of the same experimental chamber) was used in nuclear magnetic resonance (NMR) experiments in 3 He [14,15], where it was found that the superfluid transition of 3 He in this sample actually occurs into the polar phase and that the superfluid transition temperature (T ca ) is only slightly suppressed with respect to the transition temperature (T c ) of bulk 3 He.
It was also found that on further cooling a second-order transition into the polar-distorted A phase (the PdA phase) occurs, and that the effective mean free paths of 3 He quasiparticles in the directions parallel and transverse to the aerogel strands in the limit of zero temperature are ≈ 900 nm and ≈ 235 nm correspondingly. The present sample was glued using a small amount of Stycast-1266 epoxy resin to a 240 µm NbTi wire, bent into the shape of an arch with a total height of 10 mm and a distance between the legs of 8 mm, as shown in Fig. 1. The stycast was left to thicken until it was almost set before it was applied to the aerogel. Necessary temperatures were obtained by a nuclear demagnetization cryostat and measured by a quartz tuning fork calibrated by Leggett frequency measurements in bulk 3 He-B and 3 He-A in separate NMR experiments. To stabilize the polar phase in nematic aerogel [16], the samples were preplated with 2.5 atomic layers of 4 He. The measurement procedure for the aerogel resonator is the same as in the case of a conventional wire resonator [17]. The mechanical flapping resonance of the wire is excited by the Lorentz force on an alternating current with amplitude I 0 (from 0.05 mA to 0.5 mA in our experiments) passing through the wire in a steady magnetic field. In liquid 3 He the maximum velocity of the wire at such currents in the used range of temperatures did not exceed 0.1 mm/s. The motion of the wire generates a Faraday voltage which is amplified by a room-temperature 1:30 step-up transformer and measured with a lock-in amplifier. The in-phase (dispersion) and quadrature (absorption) signals are jointly fitted to Lorentz curves in order to extract the resonance frequency f a and the full width at half-maximum of the absorption signal (the resonance width) ∆f a . For our resonator, the resonance frequency in vacuum (f 0 ) is 752 Hz.
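As a rough illustration of this fitting step, the sketch below (hypothetical Python, not the authors' analysis code; all function names and parameter values are invented) generates the quadrature (absorption) and in-phase (dispersion) Lorentz line shapes and reads off the resonance frequency f a and the full width at half-maximum ∆f a numerically:

```python
def lorentz_absorption(f, f_res, width, amp):
    # quadrature (absorption) component: peak at f_res, FWHM = width
    h = width / 2.0
    return amp * h * h / ((f - f_res) ** 2 + h * h)

def lorentz_dispersion(f, f_res, width, amp):
    # in-phase (dispersion) component of the same resonance
    h = width / 2.0
    return amp * h * (f - f_res) / ((f - f_res) ** 2 + h * h)

def peak_and_fwhm(freqs, absorption):
    # read off the resonance frequency and the full width at half-maximum
    peak = max(absorption)
    f_peak = freqs[absorption.index(peak)]
    above = [f for f, a in zip(freqs, absorption) if a >= peak / 2.0]
    return f_peak, above[-1] - above[0]

# synthetic absorption line: f_a = 752 Hz, width = 10 Hz (invented values)
freqs = [700.0 + 0.01 * i for i in range(10001)]
absorption = [lorentz_absorption(f, 752.0, 10.0, 1.0) for f in freqs]
f_a, df_a = peak_and_fwhm(freqs, absorption)
```

In a real measurement the two quadratures would be fitted jointly by least squares rather than read off a noiseless curve; the sketch only shows the line shapes involved.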
THEORETICAL MODEL In 3 He the resonance frequency of the VW resonator is inversely proportional to the square root of the effective mass (M) which is oscillating. We use a simple model in which we neglect effects of 3 He flow around the wire. We also do not consider effects due to a finite mean free path of 3 He quasiparticles in bulk 3 He, because our measurements have been carried out at T > 0.6 T c , where these effects can be neglected. Then M has five contributions [3,18,19,20]: (i) the mass of the oscillating part of the wire and the mass of the empty aerogel, whose sum (m 0 ) defines f 0 ; (ii) the mass of the normal-fluid fraction entrained in the aerogel ($m_n = \rho_n^a V$); (iii) the effective mass of the superfluid flow ($m_{sf}$); (iv) the effective mass of the normal component potential backflow ($m_{nf} = \alpha \rho_n V$); and (v) the effective mass $m_v$ which is carried by the body due to viscosity of the normal component of liquid 3 He. Here V is the volume of 3 He in aerogel, $\rho_n^a$ and $\rho_n$ are the densities of the normal components of 3 He in aerogel and in bulk 3 He, and α ∼ 1 is a geometrical factor (α = 0.5 for a sphere and α = 1 for a cylinder oscillating along the direction normal to its axis). Then the expected resonant frequency equals $$f_a^2 = f_0^2 \frac{m_0}{m_0 + m_n + m_{sf} + m_{nf} + m_v}, \qquad (1)$$ The effective mass of the superfluid flow for an ellipsoidal sample moving along a principal axis can be found using an analogy with a dielectric sample in an electric field [19]: $$m_{sf} = \frac{\alpha V (\rho_s - \rho_s^a)^2}{\rho_s + \alpha \rho_s^a}, \qquad (2)$$ where $\rho_s^a$ and $\rho_s$ are the densities of the superfluid components of 3 He in aerogel and in bulk 3 He. It should be noted that Eq. (2) is obtained in the case of isotropic superfluid density tensors of the phases in the problem. The latter is valid only for the B phase of superfluid 3 He. The superfluid density tensor of the polar phase (or the PdA phase) is anisotropic.
As a result, the mass of the superfluid flow depends on the angle between the intrinsic anisotropy axis of the aerogel and the direction of motion of the sample. In our case this angle is zero and $\rho_s^a$ is the superfluid density of the polar phase (or the PdA phase) along the strands. The mass $m_v$ is related to the inertial part of the viscous drag force and can be estimated as $\rho_n S \delta$, where δ is the viscous penetration depth and S is the surface area of the body. For a sphere with diameter D the exact result is $$m_v = \frac{3D^2}{4}\sqrt{\frac{\pi \rho_n \eta}{f}}, \qquad (3)$$ where η is the shear viscosity and f is the frequency of oscillations [21]. The dissipative part of the viscous drag force determines the resonance width. If δ is much less than D/2, then in resonance $m_v \approx \rho_n V \Delta f_a / f_a$. Therefore, for the case of a general shape of the sample, the resonant frequency at T > T ca is expected to be given by $$f_a^2 = f_0^2 \frac{m_0}{m_0 + \rho V (1 + \alpha) + \beta \rho_n V \Delta f_a / f_a}, \qquad (4)$$ where β is a geometrical factor (β = 1 for a sphere) and ρ is the density of 3 He. The resonant frequency $f_n$ in normal 3 He in the limit of η → 0 (that is, $\Delta f_a \to 0$) satisfies the following condition: $$\frac{1}{f_n^2} - \frac{1}{f_0^2} = \frac{(1 + \alpha)V}{m_0 f_0^2}\,\rho. \qquad (5)$$ If $\Delta f_a \ll f_a$, then at T > T ca $$f_a = f_n - \frac{1}{2}\,\beta \Delta f_a. \qquad (6)$$ In the above reasoning, we have assumed that the normal and superfluid components of 3 He are separately incompressible. As is shown in Ref. [22], in superfluid 3 He the compressibility affects the Stokes parameters (C and C ′ in the notations of Ref. [22]), which affects $m_v$ (due to the change of C) and $\Delta f_a$ (due to the change of C ′ ). Fortunately, these changes are not large and C varies almost linearly with C ′ [22]. Therefore, in the first approximation, in superfluid 3 He $m_v$ should also be proportional to $\Delta f_a$.
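To make the model above concrete, here is a small numerical sketch (hypothetical Python; all parameter values are invented, merely of the right order of magnitude, not the measured ones). Since $f_a$ appears on both sides of Eq. (4), it is solved by fixed-point iteration; with $\Delta f_a = 0$ the result reduces to the ideal-fluid frequency $f_n$ of Eq. (5):

```python
import math

def resonant_frequency(f0, m0, rho, rho_n, V, alpha, beta, dfa, n_iter=200):
    # Eq. (4): f_a^2 = f0^2 * m0 / (m0 + rho*V*(1+alpha) + beta*rho_n*V*dfa/f_a)
    # f_a appears on both sides, so iterate to a fixed point starting from f0.
    fa = f0
    for _ in range(n_iter):
        fa = f0 * math.sqrt(m0 / (m0 + rho * V * (1 + alpha)
                                  + beta * rho_n * V * dfa / fa))
    return fa

# invented numbers, roughly of the right magnitude (cgs units):
# m0 = 5.1 mg wire + aerogel, V = 0.023 cm^3 of 3He, rho = rho_n = 0.11 g/cm^3
fa = resonant_frequency(f0=752.0, m0=5.1e-3, rho=0.11, rho_n=0.11,
                        V=0.023, alpha=0.5, beta=1.0, dfa=40.0)
```

The iteration converges quickly because the width-dependent term is a small correction to the total effective mass.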
EXPERIMENTS IN NORMAL 3 HE In our experiments δ is substantially less than the characteristic sizes of the aerogel sample: even near T c at P = 29.3 bar, δ ≈ 0.3 mm, so the observed resonance properties of our VW at T > T c are well described by the theoretical model. In Fig. 2 we show by open symbols the dependencies of the resonant frequency on the resonance width measured at different pressures at T > T c . It is seen that these dependencies agree with Eq. (6). The solid lines in Fig. 2 are the best linear fits to the data, which allow us to determine f n and β. We obtain that at all pressures β is close to 1, that is, to the value expected for a sphere. The obtained values of f n also agree with Eq. (5) (see the inset to Fig. 2): the slope of the line in the inset is 12.7 × 10 −6 cm 3 s 2 /g, while the value of the slope calculated from Eq. (5) (with α = 0.5 and an estimated value of m 0 ≈ 5.1 mg) is 11.6 × 10 −6 cm 3 s 2 /g. We note that the dependence of f a on ∆f a remains the same also at T ca < T < T c (filled symbols in Fig. 2). It means that the influence of the compressibility of the normal and superfluid components is not essential. Our experience with bare VW resonators shows that this dependence follows Eq. (6) down to T ∼ 0.6 T c . EXPERIMENTS IN SUPERFLUID 3 HE In Fig. 3 we show temperature dependencies of the resonance frequency and width measured at 29.3 bar. On cooling in normal 3 He the resonance width increases and the frequency decreases due to the Fermi-liquid behavior of the viscosity (∝ 1/T 2 ), corresponding to the increase of m v . Then a rapid decrease of the width (a rise of the frequency) is observed, indicating the superfluid transition in bulk 3 He at T = T c . On further cooling, a second resonance appears (filled triangles in Fig. 3), accompanied by a spike in the width of the main resonance. This additional resonance mode appears just below the superfluid transition temperature of 3 He in the sample used in the NMR experiments [14,15].
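The normal-state calibration described above — fitting the measured f a against ∆f a with Eq. (6) to extract f n and β — can be sketched as follows (hypothetical Python with synthetic data; the numbers are invented, not the measured ones):

```python
def fit_fn_and_beta(widths, freqs):
    # least-squares fit of Eq. (6): f_a = f_n - 0.5 * beta * dfa
    n = len(widths)
    mean_w = sum(widths) / n
    mean_f = sum(freqs) / n
    slope = sum((w - mean_w) * (f - mean_f) for w, f in zip(widths, freqs)) \
            / sum((w - mean_w) ** 2 for w in widths)
    f_n = mean_f - slope * mean_w   # intercept: frequency at zero width
    beta = -2.0 * slope             # Eq. (6): slope = -beta/2
    return f_n, beta

# synthetic "measurements" obeying Eq. (6) with f_n = 620 Hz, beta = 0.96
widths = [20.0, 35.0, 50.0, 65.0, 80.0]
freqs = [620.0 - 0.5 * 0.96 * w for w in widths]
f_n, beta = fit_fn_and_beta(widths, freqs)
```

With noisy data the same fit applies; only the recovered f n and β acquire error bars.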
Therefore, we conclude that this resonance is due to the superfluid transition of 3 He into the polar phase in the oscillating sample, which occurs at T ca ≈ 0.989 T c . Although we have not been able to observe a clear resonance peak at frequencies lower than 470 Hz, we assume that on cooling from T = T ca the frequency of the second mode rapidly grows from 0 and slightly below T ca becomes close to the frequency of the main resonance, resulting in an interaction (repulsion) between these modes. This is illustrated by Fig. 4(a), where we show the evolution of the VW absorption signal during a very slow passage through T ca . It is seen that just below T ca there are two resonance peaks. The temperature dependence of the resonant frequencies near T ca is shown in Fig. 4(b), which demonstrates the repulsion of the two resonance modes. For clarity, below T ca we continue to call the mode with the smaller frequency the main resonance. As seen from Fig. 5, on cooling the resonance frequency of the other (second) mode (f a2 ) increases up to about 1600 Hz at T = 0.75 T c . Similar behavior of the second mode was observed at lower pressures. We suppose that the second mode is an analog of the so-called slow sound mode observed previously in silica aerogel immersed in superfluid helium [23,24,25]. The point is that in aerogel the normal fluid component is clamped to the matrix, since δ exceeds the characteristic separation of the strands. However, the skeleton of aerogel is elastic and the normal component can move together with the strands. Therefore, the superfluid component and the combined normal fluid and aerogel matrix can move in opposite directions, resulting in a second-sound-like mode [23] whose resonant frequency grows from 0 on cooling from T ca . In superfluid 3 He in silica aerogel such a resonance mode was observed in low-frequency sound measurements [24,25].
We are dealing with a highly anisotropic aerogel which is soft in the direction normal to the strands but rigid in the direction along the strands. Therefore, in our case the slow mode should correspond to periodic deformations of the sample in the direction normal to the strands. We note that we detect motions of the wire, but we can excite and detect the slow mode in aerogel even if its resonance frequency is far from the original VW mechanical resonance. It means that even well below T ca this second resonance is strong enough to affect the wire oscillations. If we neglect corrections due to the fact that our sample is not an ellipsoid, then the results of measurements of the frequency of the main resonance mode can be used to estimate the superfluid fraction $\rho_s^a/\rho$ of 3 He inside the aerogel. For this purpose, at T < T ca we can subtract the contribution of $m_v$ to $f_a$, using the dependence $f_a = f_a(\Delta f_a)$ measured at T > T c . If we denote the result of this subtraction as $\tilde{f}_a$, then using Eq. (1) without $m_v$ we obtain the following equation, which allows us to estimate $\rho_s^a/\rho$: $$\frac{\rho_s \rho_s^a (1 + \alpha)}{\rho(\rho_s + \alpha \rho_s^a)} = 1 - \frac{(f_0/\tilde{f}_a)^2 - 1}{(f_0/f_n)^2 - 1}. \qquad (7)$$ However, the existence of the second resonance makes this estimation impossible. The point is that even well below T ca the interaction between the two resonant modes seems to be essential and their frequencies remain coupled. Fig. 6 illustrates the influence of the second mode on the frequency of the main resonance. If we exclude data points just below T ca (where the frequencies of the resonance modes are too close to each other), then there is a jump-like decrease in the main resonance frequency due to the appearance of the second mode. At T = 0.975 T c the frequency of the main resonance is ≈ 500 Hz, and it is 15 Hz smaller than at T = T ca , despite the fact that the frequency of the second mode is already much higher (≈ 1100 Hz).
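Setting that complication aside for a moment, Eq. (7) is linear in $\rho_s^a$ and can be inverted in closed form: writing its right-hand side as R, one gets $\rho_s^a = R\rho\rho_s/(\rho_s(1+\alpha) - R\rho\alpha)$. A hypothetical Python sketch of this inversion (the parameter values in any use would be measured quantities; none are taken from the paper here):

```python
def superfluid_fraction(f0, fn, fa_tilde, rho, rho_s, alpha):
    # Eq. (7): rho_s*rho_s_a*(1+alpha)/(rho*(rho_s+alpha*rho_s_a)) = R,
    # with R = 1 - ((f0/fa_tilde)^2 - 1)/((f0/fn)^2 - 1).
    # The relation is linear in rho_s_a, so it inverts in closed form.
    R = 1.0 - ((f0 / fa_tilde) ** 2 - 1.0) / ((f0 / fn) ** 2 - 1.0)
    rho_s_a = R * rho * rho_s / (rho_s * (1.0 + alpha) - R * rho * alpha)
    return rho_s_a / rho
```

In the normal state ($\tilde{f}_a = f_n$) R vanishes and the estimated fraction is zero, as it should be.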
On further cooling, the frequency of the second mode continues to change, and we cannot distinguish the contributions to f a from ρ a s and from the influence of the second mode. Unfortunately, the theoretical model of the slow mode in 3 He in aerogel described in Refs. [23,24] is not applicable to our strongly anisotropic sample, and further development of the theory is necessary for the treatment of our results. It is worth noting that the second-order transition from the polar phase into the PdA phase should not influence the slope of the temperature dependence of ρ a s measured along the aerogel strands [26]. As follows from the NMR experiments [14,15], the transition into the PdA phase in our sample should occur at T = 0.95 T c (at 29.3 bar) and at T = 0.90 T c (at 15.4 bar), and at these temperatures we see no specific features in the dependencies shown in Figs. 3 and 5. We note that Eq. (7) at ρ s = ρ differs from the equation used in Refs. [1,2,3,4] for the determination of ρ a s /ρ of 3 He in silica aerogel. Using Eq. (7) we obtain that the values of ρ a s /ρ are about 1.3-2 times smaller (depending on temperature and α) than those reported in Refs. [1,2,3,4]. CONCLUSIONS Using the aerogel wire technique, earlier used to investigate superfluidity of 3 He in isotropic silica aerogels, we have observed a superfluid transition of 3 He in nematic aerogel accompanied by the appearance of a second (slow sound) mode inside the aerogel sample. Resonance frequencies and widths of both the main and the slow sound modes were measured in a wide range of temperatures. We think that a proper theoretical model of the slow mode in nematic aerogel might allow us to estimate the superfluid density fraction inside our sample. Our results are promising for experiments on searching for the beta phase in nematic aerogel [26,27], a new superfluid phase of 3 He that should appear in a strong magnetic field right below T ca .
The beta phase must exist in a narrow temperature region close to T ca (proportional to the value of the magnetic field), and on cooling from the beta phase a transition to the distorted beta phase (which is continuously transformed into the polar phase on further cooling) should be observed as a kink on a plot of superfluid fraction versus temperature [26]. The latter can be seen in the resonant frequencies in VW experiments. In the present experiments the maximal magnetic field which we were able to apply is 1650 Oe. In this field the range of existence of the beta phase is expected to be very small (about 0.005 T c ) [26]. Unfortunately, this range of temperatures is nearly the same as the range wherein the interaction of the two observed resonance modes is rather strong and the frequency of the slow sound mode resonance is rapidly changing. This, together with experimental errors in the determination of resonance frequencies, has prevented us from detecting any clear kink on the temperature dependencies of f a and f a2 . We are grateful to V.I. Marchenko for useful discussions. Fig. 1. Signal measurement circuit of a VW immersed in liquid 3 He in an external steady magnetic field H. The strands of nematic aerogel glued to the wire are oriented along the oscillations. The glue was applied only after it had thickened, to prevent the aerogel from soaking it up. The wire is mounted in one of the cells of our experimental chamber. The experiments were carried out at pressures of 7.1, 15.4, and 29.3 bar and in magnetic fields of 305-1650 Oe. Fig. 2. The resonant frequency versus the resonance width measured at 29.3 bar (circles), at 15.4 bar (triangles), and at 7.1 bar (squares). Open symbols correspond to measurements in normal 3 He (T > Tc), filled symbols have been obtained at Tca < T < Tc. Solid lines are fits to the data at T > Tc using Eq. (6) (squares: β = 1.022, triangles: β = 0.973, circles: β = 0.962). I0 = 0.25 mA, H = 1650 Oe. Inset: The dependence of fn on ρ determined from the linear fits shown in the main panel.
Solid line is the best fit according to Eq. (5). Fig. 3. Temperature dependencies of the resonance width of the main resonance (open circles) and of the frequencies of the main (filled circles) and the second (filled triangles) resonances. P = 29.3 bar, I0 = 0.25 mA, H = 1650 Oe. Arrows mark Tca, Tc, and the AB transition in bulk 3 He at T = TAB. Fig. 4. (a) Temperature evolution of the VW absorption signal on slow warming from T ≈ 0.985 Tc to T ≈ 0.992 Tc. For better view, the absorption lines are successively shifted upward with increasing temperature. Thick (red) and thin (blue) lines correspond to T > Tca and T < Tca respectively. (b) Two branches of the wire resonance versus temperature near Tca obtained by fitting the lines in panel (a) with a sum of two Lorentz peaks. P = 29.3 bar, Tca ≈ 0.989 Tc, I0 = 0.25 mA, H = 1650 Oe. Fig. 5. The resonance frequency and the resonance width (inset) of the slow sound mode in nematic aerogel versus temperature measured at P = 29.3 bar (circles, Tca ≈ 0.989 Tc), P = 15.4 bar (squares, Tca ≈ 0.985 Tc), and P = 7.1 bar (triangles, Tca ≈ 0.97 Tc). The given superfluid transition temperatures are nearly the same as measured in NMR experiments [14] with the similar sample. I0 = 0.25 mA, H = 1650 Oe. Fig. 6. The frequency of the main mode versus the resonance width measured at P = 15.4 bar from 0.83 Tc to 1.7 Tc. Open triangles correspond to measurements in normal 3 He (T > Tc), filled triangles are the data in the range of Tca < T < Tc, filled circles are the data in the range of 0.83 Tc < T < Tca. Tca ≈ 0.985 Tc. The data points in the range of 0.975 Tc < T < Tca, where the frequencies and intensities of the resonance modes are close to each other, are not shown. FUNDING This work was supported by the Russian Science Foundation (project no. 18-12-00384).
REFERENCES
[1] P. Brussaard, S.N. Fisher, A.M. Guénault, A.J. Hale, and G.R. Pickett, J. Low Temp. Phys. 121, 555 (2000).
[2] P. Brussaard, S.N. Fisher, A.M. Guénault, A.J. Hale, N. Mulders, and G.R. Pickett, Phys. Rev. Lett. 86, 4580 (2001).
[3] D.I. Bradley, S.N. Fisher, A.M. Guénault, R.P. Haley, N. Mulders, S. O'Sullivan, G.R. Pickett, J. Roberts, and V. Tsepelin, Phys. Rev. Lett. 98, 075302 (2007).
[4] D.I. Bradley, S.N. Fisher, A.M. Guénault, R.P. Haley, G.R. Pickett, J. Roberts, S. O'Sullivan, and V. Tsepelin, J. Low Temp. Phys. 150, 445 (2008).
[5] V.E. Asadchikov, R.Sh. Askhadullin, V.V. Volkov, V.V. Dmitriev, N.K. Kitaeva, P.N. Martynov, A.A. Osipov, A.A. Senin, A.A. Soldatov, D.I. Chekrygina, and A.N. Yudin, JETP Lett. 101, 556 (2015).
[6] V.V. Dmitriev, A.A. Senin, A.A. Soldatov, and A.N. Yudin, Phys. Rev. Lett. 115, 165304 (2015).
[7] K. Aoyama and R. Ikeda, Phys. Rev. B 73, 060504 (2006).
[8] I.A. Fomin, J. Exp. Theor. Phys. 118, 765 (2014).
[9] R. Ikeda, Phys. Rev. B 91, 174515 (2015).
[10] I.A. Fomin, J. Exp. Theor. Phys. 127, 933 (2018).
[11] D. Vollhardt and P. Wölfle, The Superfluid Phases of Helium 3 (Taylor & Francis, London, 1990).
[12] V.B. Eltsov, T. Kamppinen, J. Rysti, and G.E. Volovik, arXiv:1908.01645.
[13] S. Autti, J.T. Mäkinen, J. Rysti, G.E. Volovik, V.V. Zavjalov, and V.B. Eltsov, Phys. Rev. Research 2, 033013 (2020).
[14] V.V. Dmitriev, M.S. Kutuzov, A.A. Soldatov, and A.N. Yudin, JETP Lett. 110, 734 (2019).
[15] V.V. Dmitriev, A.A. Soldatov, and A.N. Yudin, J. Exp. Theor. Phys. 131, 2 (2020).
[16] V.V. Dmitriev, A.A. Soldatov, and A.N. Yudin, Phys. Rev. Lett. 120, 075301 (2018).
[17] D.C. Carless, H.E. Hall, and J.R. Hook, J. Low Temp. Phys. 50, 583 (1983).
[18] J.T. Tough, W.D. McCormick, and J.G. Dash, Rev. Sci. Inst. 35, 1345 (1964).
[19] C. Gabay, P.E. Wolf, and L. Puech, Physica B 284-288, 97 (2000).
[20] R. Blaauwgeers, M. Blazkova, M. Clovecko, V.B. Eltsov, R. de Graaf, J. Hosio, M. Krusius, D. Schmoranzer, W. Schoepe, L. Skrbek, P. Skyba, R.E. Solntsev, and D.E. Zmeev, J. Low Temp. Phys. 146, 537 (2007).
[21] L.D. Landau and E.M. Lifshitz, Fluid Mechanics (Pergamon, Oxford, 1987).
[22] D.C. Carless, H.E. Hall, and J.R. Hook, J. Low Temp. Phys. 50, 605 (1983).
[23] M.J. McKenna, T. Slawecki, and J.D. Maynard, Phys. Rev. Lett. 66, 1878 (1991).
[24] A. Golov, D.A. Geller, and J.M. Parpia, Phys. Rev. Lett. 82, 3492 (1999).
[25] E. Nazaretski, D.M. Lee, and J.M. Parpia, Phys. Rev. B 71, 144506 (2005).
[26] E.V. Surovtsev, J. Exp. Theor. Phys. 129, 1055 (2019).
[27] E.V. Surovtsev, J. Exp. Theor. Phys. 128, 477 (2019).
[]
[ "NoisywikiHow: A Benchmark for Learning with Real-world Noisy Labels in Natural Language Processing", "NoisywikiHow: A Benchmark for Learning with Real-world Noisy Labels in Natural Language Processing" ]
[ "Tingting Wu [email protected] \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n", "Xiao Ding [email protected] \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n", "Minji Tang [email protected] \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n", "Hao Zhang \nFaculty of Computing\nHarbin Institute of Technology\nChina\n", "Bing Qin \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n", "Ting Liu [email protected] \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n" ]
[ "Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina", "Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina", "Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina", "Faculty of Computing\nHarbin Institute of Technology\nChina", "Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina", "Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina" ]
[]
Large-scale datasets in the real world inevitably involve label noise. Deep models can gradually overfit noisy labels and thus degrade model generalization. To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance. Due to the lack of suitable datasets, previous studies have frequently employed synthetic label noise to mimic real-world label noise. However, synthetic noise is not instance-dependent, making this approximation not always effective in practice. Recent research has proposed benchmarks for learning with real-world noisy labels. However, the noise sources within may be single or fuzzy, making benchmarks different from data with heterogeneous label noises in the real world. To tackle these issues, we contribute NoisywikiHow, the largest NLP benchmark built with minimal supervision. Specifically, inspired by human cognition, we explicitly construct multiple sources of label noise to imitate human errors throughout the annotation, replicating real-world noise, whose corruption is affected by both ground-truth labels and instances. Moreover, we provide a variety of noise levels to support controlled experiments on noisy data, enabling us to evaluate LNL methods systematically and comprehensively. After that, we conduct extensive multi-dimensional experiments on a broad range of LNL methods, obtaining new and intriguing findings.
10.48550/arxiv.2305.10709
[ "https://export.arxiv.org/pdf/2305.10709v1.pdf" ]
258,762,705
2305.10709
35225d37ec210becb5af7c07a4b2af715f5b7e6c
NoisywikiHow: A Benchmark for Learning with Real-world Noisy Labels in Natural Language Processing

Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu
Harbin Institute of Technology, China

Large-scale datasets in the real world inevitably involve label noise. Deep models can gradually overfit noisy labels and thus degrade model generalization. To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance. Due to the lack of suitable datasets, previous studies have frequently employed synthetic label noise to mimic real-world label noise. However, synthetic noise is not instance-dependent, making this approximation not always effective in practice. Recent research has proposed benchmarks for learning with real-world noisy labels. However, the noise sources within may be single or fuzzy, making benchmarks different from data with heterogeneous label noises in the real world. To tackle these issues, we contribute NoisywikiHow, the largest NLP benchmark built with minimal supervision.
Specifically, inspired by human cognition, we explicitly construct multiple sources of label noise to imitate human errors throughout the annotation, replicating real-world noise, whose corruption is affected by both ground-truth labels and instances. Moreover, we provide a variety of noise levels to support controlled experiments on noisy data, enabling us to evaluate LNL methods systematically and comprehensively. After that, we conduct extensive multi-dimensional experiments on a broad range of LNL methods, obtaining new and intriguing findings.

1 Introduction

Large-scale labeled data has become indispensable in the notable success of deep neural networks (DNNs) in various domains and tasks (Russakovsky et al., 2015; Wang et al., 2019). Due to imperfect sources like crowd-sourcing and web crawling (Xiao et al., 2015; Zhang et al., 2017b; Lee et al., 2018), datasets frequently include real-world label noise (Chen et al., 2021), which may induce model overfitting to noisy labels and hurt the generalization of deep models (Zhang et al., 2017a; Wu et al., 2022a,b). To alleviate this issue, learning with noisy labels (LNL) methods for robustly training deep models have been studied extensively. Due to the lack of appropriate benchmarks, previous research often studied synthetic label noise to simulate real-world label noise (Zhang et al., 2018; Lukasik et al., 2020). As a general and realistic noise, real-world noise may have several noise sources (i.e., be heterogeneous) (Northcutt et al., 2021) and be instance-dependent (i.e., P(ỹ|y, x), where the probability of an instance being assigned to the incorrect label ỹ depends on the original ground-truth label y and data x) (Han et al., 2021; Song et al., 2022). However, synthetic noise is generated from an artificial distribution and is thus instance-independent (i.e., P(ỹ|y)), which may not always work well in practice.
Recently, various benchmarks for learning with real-world noisy labels have been proposed across fields like computer vision (CV) (Li et al., 2017), audio signal processing (ASP) (Gemmeke et al., 2017), and natural language processing (NLP) (Hedderich et al., 2021). To fully evaluate robust learning methods with real-world label noise, benchmarks should be as close to real-world scenarios as possible. Meanwhile, controlled experiments are encouraged to verify whether LNL methods can remain effective over a wide range of noise levels (Jiang et al., 2020). Nevertheless, the noise levels in most datasets are fixed and unknown, resulting in uncontrolled label noise (Fonseca et al., 2019a; Song et al., 2019). Moreover, the noise therein often comes from the same or ambiguous sources (Li et al., 2017; Jiang et al., 2020), which conflicts with the heterogeneous characteristics of real-world noise. These problems prevent a better understanding of LNL methods. To bridge this gap, we present NoisywikiHow, a new NLP benchmark for evaluating LNL methods focusing on the intention identification task. Intention identification promotes numerous downstream natural language understanding tasks, from commonsense reasoning (Sap et al., 2019) to dialogue systems (Pepe et al., 2022). Additionally, the complexity of the task (a total of 158 categories) facilitates a deeper investigation of the efficacy of LNL approaches. The task form is shown in Table 1. To make the benchmark more representative of real-world scenarios, we propose a practical assumption: real-world label noise in a dataset is mainly induced by human errors, regardless of whether the dataset's construction is automated or crowd-sourced. Existing psychological and cognitive evidence further supports our hypothesis. It shows that different annotators have different preferences and biases (Beigman and Klebanov, 2009; Burghardt et al., 2018), which means human labeling errors typically result from multiple noise sources.
Furthermore, humans may make random labeling errors due to random attention slips. But they are more likely to produce label noise when labeling hard cases (Klebanov et al., 2008) (i.e., noise is instance-dependent), such as instance (c) in Table 1. Motivated by this human cognition, we first collect data from the wikiHow website, which contains a collection of professionally edited how-to guideline articles, providing a vast quantity of clean scripts and corresponding categories for free to help achieve controlled experiments and ensure benchmark quality. After that, we explicitly inject a variety of noise sources into clean data to replicate human annotation errors, thus introducing real-world label noise into the benchmark. Notably, training samples in our benchmark exhibit a long-tailed class distribution, which is in line with the facts, i.e., data in real-world applications is heavily imbalanced (Van Horn et al., 2018; Liu et al., 2019b). Besides, we achieve minimal human supervision by using a series of automated labeling procedures, saving lots of time and human effort. To evaluate NoisywikiHow, we carry out extensive experimentation across various model architectures and noise sources, execute plentiful LNL methods on our benchmark, compare the more realistic real-world noise with the extensively studied synthetic noise, and investigate a case study and long-tailed distribution characteristics.

2 Related Work

Datasets with real-world noisy labels. In early studies of the LNL problem, due to a lack of appropriate benchmarks, synthetic noise was often used to reflect noise in the real world and assess the effectiveness of methods (Han et al., 2018b; Zhang et al., 2018). However, unlike real-world noise, synthetic noise follows an idealized artificial distribution, which leads to inaccurate approximations and inadequate evaluations. Recent studies have proposed numerous datasets with real-world noisy labels.
Table 2 depicts a comparison of existing real-world noisy datasets for evaluating LNL methods in CV, ASP, and NLP. As shown in Table 2, most datasets fail to perform controlled experiments on real-world label noise and cannot be used to study DNNs across different noise levels (Fonseca et al., 2019a,b). A few benchmarks with controlled label noise, such as NoisyNER (Hedderich et al., 2021) and Red MiniImageNet (Jiang et al., 2020), were produced. However, the noise source in their datasets may be vague. Furthermore, NoisyNER focuses on the named entity recognition task in NLP. Though seven noisy label sets are provided, it is challenging to determine the precise noise level of each label set because a sentence-level instance has numerous word-level labels. Besides, Red MiniImageNet relies heavily on careful human annotation and follows a balanced class distribution, which diverges from real-world application scenarios. In this paper, we publish NoisywikiHow to solve the above limitations. As shown in Table 2, to the best of our knowledge, NoisywikiHow is the largest NLP benchmark for assessing LNL methods.

Intention identification. Intention identification is critical to many applications (Huang et al., 2016; Sap et al., 2019). Therefore, ensuring task reliability is essential. Some previous work formulates intention identification as an event process typing task. Given a sequence of events, the model is designed to understand the overall goal of the event process in terms of an action and an object (Chen et al., 2020; Pepe et al., 2022). In other studies, intention identification is modeled as a sentence classification task (Zhang et al., 2020a,b). When given a procedural event, the system predicts its intention in a 4-choose-1 multiple-choice format. However, none of these studies deal with task reliability. By building NoisywikiHow, we make a preliminary exploration of task reliability (i.e., model performance under label noise). Following Zhang et al.
(2020b), we model intention identification as a sentence classification task. The difference is that our benchmark (including 158 labels) is analogous to the retrieval task in a more practical and challenging way.

3 NoisywikiHow Dataset

3.1 Data Collection

We construct NoisywikiHow by crawling how-to articles from the wikiHow website. Detailed crawling strategies and related statistics are in Appendix A.1. We define the input as a procedural event, i.e., the header of a paragraph in a wikiHow article (e.g., Talk about food differently in Table 1), and the output as the intention of the event, namely the category of this article (e.g., Losing Weight in Table 1). Note that categories present a hierarchy (e.g., Health Nutrition and Food Health Weight Management Losing Weight), and we select the category with the finest granularity as the label.

3.2 Data Cleaning

Similar to Jiang et al. (2020) and Hedderich et al. (2021), we realize controlled label noise by injecting various amounts of noise into clean data. However, the data collection process introduces a lot of low-quality or irrelevant data. As a result, we develop a data cleaning procedure to remove bad data and facilitate the target task from two aspects: (1) input filtering and (2) label filtering. Regarding input filtering, we first devise four automatic filters and execute them sequentially to remove low-quality or ambiguous data.

• Sample Length Filter intends to retain instances with more informative and complete semantic information by filtering excessively short or long data.

• Format Normalization is to standardize instances (e.g., unifying the description of "Click Defragment Your Hard Drive." and "Click Defragment your hard drive."), ensuring the effectiveness of subsequent strategies.

• Deduplication tries to eliminate redundant or ambiguous data (e.g., a procedural event corresponds to multiple intents).
• TF-IDF Filter attempts to preclude overly uninformative instances by calculating the TF-IDF for each token.

After that, we receive high-quality data D_h, which follows a long-tailed class distribution with limited data on tail classes, resembling the distribution in Fig. 1. We create a Sample Size Filter to exclude the categories with too few samples (≤ 300), ensuring an appropriate split of training, validation, and test sets. We observe that the labels have two types, i.e., concepts defined as nominal phrases (e.g., Nutrition and Food Health), and event mentions defined as nominal or verbal phrases that refer to events (e.g., Losing Weight) (Min et al., 2020; Yu et al., 2021). Therefore, label filtering is required to retain only events, ensuring the effectiveness of intention identification. Specifically, each category is annotated by three graduate students from the NLP field and is regarded as an event if more than two annotators agree. Human annotators are asked to label 736 categories and achieve a high agreement (Fleiss-κ = 0.84) (Fleiss, 1971). After data cleaning, we obtain clean data D involving 89,143 instances in 158 classes. Due to the limited space, complete filtering strategies and more details are in Appendix A.2.

3.3 Label Noise Injection

To create a benchmark of real-world noisy labels, we introduce various sources of controlled label noise into the clean data. Prior to this, we assume that human error is the primary cause of real-world label noise in a dataset. Psychological and cognitive findings further corroborate the rationality of this assumption. They demonstrate that: (1) apparent differences between annotators result from different preferences and biases (Reidsma and op den Akker, 2008; Beigman and Klebanov, 2009; Burghardt et al., 2018), suggesting that human errors are heterogeneous; (2) label noise from humans regularly affects hard cases (Klebanov et al., 2008; Klebanov and Beigman, 2009), proving that noise is instance-dependent.
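The TF-IDF Filter above keeps only events that contain article keywords (per Appendix A.2, tokens whose TF-IDF is in the top 10% within an article). A minimal stdlib sketch, assuming tokenized events grouped by article; function and variable names are illustrative, not from the authors' code:

```python
import math
from collections import Counter

def article_keywords(articles, top_frac=0.10):
    """articles: {article_id: [event_tokens, ...]}. Treat each article as one
    document; return, per article, the tokens whose TF-IDF ranks in the top
    `top_frac` (at least one keyword per article)."""
    docs = {a: [tok for event in events for tok in event]
            for a, events in articles.items()}
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))                      # document frequency
    n_docs = len(docs)
    keywords = {}
    for a, toks in docs.items():
        tf = Counter(toks)
        score = {t: (tf[t] / len(toks)) * math.log(n_docs / df[t]) for t in tf}
        ranked = sorted(score, key=score.get, reverse=True)
        k = max(1, int(len(ranked) * top_frac))
        keywords[a] = set(ranked[:k])
    return keywords

def passes_tfidf_filter(event_tokens, kw):
    """Keep an event only if it contains at least one article keyword."""
    return any(tok in kw for tok in event_tokens)
```

Tokens shared by every article get a zero IDF and are never selected, which is the intended effect: generic tokens cannot make an event "informative".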
Heterogeneous noise sources. Based on the above preliminaries, we simulate various mistakes committed by annotators to produce real-world noise containing heterogeneous noise sources. Specifically, human errors are often induced by ambiguity, insufficient annotator expertise, and random attention slips (Beigman and Klebanov, 2009; Hollenstein et al., 2016). Motivated by this, we develop three noise sources as follows:

• Sub-categories (SC) under the same category (e.g., Starting a Business and Running a Business) tend to have higher semantic similarities and can be easily confused. SC depicts the noise caused by labeling ambiguous instances.

• Intents beyond the commonsense categories (BCC) are hard to identify (e.g., Dog Grooming), readily inducing noisy labels. BCC portrays a scenario annotated by a human lacking expert knowledge.

• Considering the long-tailed distribution, even a few labeling errors on tail classes can seriously affect learning of these categories. Therefore, achieving robust training on tail classes is critical. We concentrate on intents under the tail categories (TC), which describe the noise generated by humans randomly shifting their attention.

Then, we design a simple mapping from noise sources to classes to facilitate the subsequent injection of noise from different sources and categories. Specifically, each class is associated with a noise source, and classes under various noise sources do not overlap. This mapping can cover all categories during noise injection and determine the potential noise source for each class. Finally, we divide the 158 categories into 68, 36, and 54 to correspond to the sources SC, BCC, and TC, respectively. More details about the mapping can be found in Appendix A.3.

Injecting instance-dependent label noise. Since each noise source contains a set of categories, each of which may involve hard cases, instance-dependent label noise exists in each noise source.
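The source-to-class mapping described above (each class tied to exactly one noise source, sources disjoint) can be sketched as follows. The precedence used when a class is listed under several candidate sources is an assumption for illustration; the paper only requires that the final sources do not overlap and cover all categories:

```python
def build_noise_source_map(tail, beyond_commonsense, subcategory):
    """Assign every class to exactly one of the sources TC, BCC, or SC.
    Precedence TC > BCC > SC is an illustrative assumption."""
    mapping = {}
    for c in tail:
        mapping[c] = "TC"
    for c in beyond_commonsense:
        mapping.setdefault(c, "BCC")   # only if not already a tail class
    for c in subcategory:
        mapping.setdefault(c, "SC")
    return mapping
```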
Note that real-world label noise always comes from an open rather than a finite category set (Wang et al., 2018). We therefore enable label noise to derive from categories other than the current label set. However, this operation changes the number of labels and impacts the target classification task. To solve this problem, when injecting label noise into an instance (x, y), we leave the label y (output) unchanged like Jiang et al. (2020) but replace the procedural event x (input) with the one (x̃) under the other category (ỹ), which may not be in the existing 158 classes. Moreover, NoisywikiHow supports five noise levels (i.e., 0%, 10%, 20%, 40%, and 60%). Like Li et al. (2017) and Saxena et al. (2019), we assume that, given a specified noise level t, t is uniform across noise sources. For example, t = 10% represents that each source has roughly 10% label noise. We further identify hard cases and inject instance-dependent noise for each noise source. Intuitively, when we mislabel an instance from (x, y) into (x̃, y), if (x, y) is a hard case, the semantic representations of events x and x̃ should be very similar. As a result, for any (x, y), we can assess its difficulty by finding an (x̃, ỹ) whose x̃ has the maximum semantic similarity with x. To identify (x̃, ỹ), we take the following steps:

(1) Determine D_n: the candidate set of (x̃, ỹ). To avoid introducing bad data or duplicate data after noise injection, we construct D_n as follows:

• For the sources BCC and TC, D_n = D_h − D.

• For the source SC, let D_s be the sample set of all other sub-categories except y under the same category, and D_n = (D_h − D) ∩ D_s.

(2) Locate x̃ in D_n. Following Zhang et al.
(2020b), we map each event to a vector representation by taking the average of the BART embeddings (Lewis et al., 2020) of the verbs. x̃ thus can be calculated as:

$\tilde{x} = \arg\max_{x'} \operatorname{cosine}(v_x, v_{x'}), \quad (x', y') \in D_n, \qquad (1)$

where $v_{(\cdot)}$ is the vector representation of an event, and cosine(·) denotes the cosine similarity of two vectors. For any (x, y), its difficulty can be obtained by calculating a score s_x:

$s_x = \operatorname{cosine}(v_x, v_{\tilde{x}}). \qquad (2)$

The larger the s_x, the harder the instance (x, y). We inject noise into the training set D_tr ⊂ D. Given a specified noise level t (e.g., 10%), all instances in D_tr are arranged in decreasing order of s_x, with the top t of the samples in each source considered hard cases. We inject instance-dependent noise by replacing x for each hard case with x̃.

4 Experiments

We first present the general settings for experiments (Section 4.1). Further, we systematically evaluate our benchmark with varied model architectures (Section 4.2) and noise sources (Section 4.3). Also, we assess a broad range of LNL methods on NoisywikiHow (Section 4.4) and compare real-world noise with synthetic noise (Section 4.5). Finally, we conduct a case study (Section 4.6). In addition, we discuss the long-tailed distribution characteristics of NoisywikiHow in Appendix B.2.

4.1 Experimental settings

On our benchmark, all methods are trained on the noisy training sets and evaluated on the same clean validation set to verify whether these approaches can resist label noise during training and achieve good generalization on the noise-free data. Before adding label noise, we randomly split out 15,800 instances from clean data and then equally divide them into two sets: a validation set and a test set. The remaining 73,343 instances serve as the training set, which follows a typical long-tailed class distribution and is analogous to heavily imbalanced data in real-world applications, as shown in Fig. 1. The statistics of NoisywikiHow are shown in Table 3.
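Eqs. (1)-(2) and the top-t replacement rule can be sketched end to end. Here `embed` stands in for the averaged BART verb embeddings, assumed precomputed; all names are illustrative, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def inject_instance_dependent_noise(train, candidates, embed, t):
    """train: [(x, y), ...]; candidates: D_n as [(x', y'), ...];
    embed(event) -> vector. Score each instance by Eq. (2), then replace the
    input x of the top-t hardest cases with its nearest candidate x~ (Eq. (1)),
    keeping the label y unchanged."""
    scored = []
    for x, y in train:
        v_x = embed(x)
        x_tilde, s_x = max(((xc, cosine(v_x, embed(xc))) for xc, _ in candidates),
                           key=lambda pair: pair[1])
        scored.append((s_x, x, y, x_tilde))
    scored.sort(key=lambda row: row[0], reverse=True)   # hardest first
    k = int(len(scored) * t)
    noisy = [(x_tilde, y) for _, _, y, x_tilde in scored[:k]]
    clean = [(x, y) for _, x, y, _ in scored[k:]]
    return noisy + clean
```

In the benchmark this is applied per noise source, so that roughly t of the samples in each source become hard-case replacements.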
We cast intention identification as a classification problem. We exploit the cross-entropy loss for training models and use Top-1 accuracy and Top-5 accuracy as the evaluation metrics.

4.2 Comparison of Model Architectures

We evaluate several state-of-the-art pre-trained language models, including RoBERTa, XLNet, ALBERT (Lan et al., 2020), T5 (Raffel et al., 2020), and BART (Lewis et al., 2020). We finetune each model for 10 epochs with batch size 64, learning rate 3e-5. These hyperparameters remain unchanged in subsequent experiments unless indicated otherwise. In this paper, we conduct all experiments utilizing the base-sized version of the pre-trained language models. Besides, due to long output sequences in partial categories, we adopt beam search (Sutskever et al., 2014) in T5, with a beam width of 5 and a length penalty of α = 1.0.

Results: As shown in Table 4, the Top-1 accuracies of SOTA pre-trained language models on our benchmark are generally not high, and an increase in noise levels can lead to considerable performance degradation for a given model, demonstrating the challenge of the NoisywikiHow dataset. In Table 4, different architectures are representative of diverse capacities. For example, RoBERTa and XLNet consistently outperform ALBERT under different noise levels. In addition, we observe that BART achieves the best performance among these SOTA models under a majority of noise levels, regardless of Top-1 or Top-5 classification accuracy. This is mainly because a better denoising objective (i.e., masked language modeling) is used during pre-training of BART. In pre-training, BART gains better denoising ability by corrupting text with an arbitrary noise function (thus making the noise more flexible) and learning to reconstruct the original text. In the following, we use the BART model as the base model.

4.3 Effects of Distinct Noise Sources

We further explore the characteristics of different noise sources. To this end, we pick the same model (i.e., the base model) and separately validate the performances on individual noise sources under the same noise level.
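The Top-1/Top-5 accuracy used throughout can be computed as below; a minimal sketch in which the per-class scores would, in practice, come from the finetuned model's logits:

```python
def topk_accuracy(all_scores, labels, k=1):
    """all_scores: one list of per-class scores per example;
    labels: gold class indices. Returns the fraction of examples whose
    gold label is among the k highest-scoring classes."""
    hits = 0
    for scores, y in zip(all_scores, labels):
        topk = sorted(range(len(scores)),
                      key=lambda i: scores[i], reverse=True)[:k]
        hits += y in topk
    return hits / len(labels)
```

With 158 classes, Top-5 gives a more forgiving view of whether the model at least ranks the true intention highly.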
For convenience, we denote noise-free data by correct samples and data with label noise by incorrect samples.

Results: Table 5 shows the results of the base model under four different noise sources with 10% label noise. As shown in Table 5, there exists an evident gap between the results under noise source TC and those in other conditions. The label noise from noise source TC is more difficult to mitigate than others at the same noise level, mainly due to the limited data on tail categories. When all noisy labels are derived from TC, fewer correct samples are left, leading to inadequate model training and degradation of model performance. It indicates that resisting label noise from different sources may have varying difficulty levels, although the noises in these sources are all real-world label noise. Additional details are provided in Appendix B.1.

4.4 Effectiveness of LNL Methods

We evaluate a broad range of LNL baselines, including the base model, Mixup, Data Parameter, and the following: (4) SR, which introduces the sparse regularization strategy, making any loss robust to noisy labels conforming to the specified assumption; (5) Co-teaching (Han et al., 2018b), which combats noisy labels by training two networks, and each network aims to teach the other one with clean data, i.e., the instances with small loss; (6) CNLCU (Xia et al., 2022), which considers the uncertainty of loss estimation to refine correct sample selection; (7) SEAL (Chen et al., 2021), which provides instance-dependent label correction to resist real-world noise. Complete experimental results and unique hyperparameters for each noise level for each baseline are in Tables 8 and 9 in the Appendix.

Results: As Fig. 2(a) shows, Mixup outperforms the base model with limited performance improvement. It is because Mixup fails to consider the specialty of real-world label noise and improves generalization with a generic regularization-based method. The performance of Data Parameter is comparable to or slightly better than the base model under different noise levels.
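Co-teaching, CNLCU, and Data Parameter all lean on the small-loss criterion: samples with low training loss are treated as probably clean. A minimal sketch of that selection step (the function name and `forget_rate` parameter are illustrative):

```python
def small_loss_selection(losses, forget_rate):
    """Select the (1 - forget_rate) fraction of samples with the smallest
    loss, treating them as (probably) clean -- the criterion used by
    Co-teaching-style methods. Returns the indices of selected samples."""
    n_keep = int(len(losses) * (1.0 - forget_rate))
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:n_keep])
```

As Fig. 3(a) shows, this separation assumes the loss distributions of correct and incorrect samples are distinguishable, which fails when they overlap closely under real-world noise.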
Although Data Parameter models the situation that instances within a class have different difficulty levels, it assumes small-loss training samples are correct samples and splits correct and incorrect samples via a loss-based separation. However, as shown in Fig. 3(a), loss distributions of correct and incorrect data overlap closely under real-world label noise, so Data Parameter has no advantage under real-world label noise. Similarly, Co-teaching and CNLCU fulfill sample selection following the same assumptions. They perform worse than the base model, with the exception of individual noise levels. It implies that Co-teaching and CNLCU are inapplicable to the heterogeneous and instance-dependent label noise. SR precedes the base model only at certain noise levels. This is because SR guarantees noise tolerance if and only if the label noise satisfies the instance-independent condition, which is inconsistent with noise in the real world. Hence, the validity of SR is not ensured on our benchmark. SEAL consistently outperforms the base model by a large margin on all noise levels, as SEAL provides instance-dependent label correction to combat real-world noise. However, during the correction, SEAL retrains the classifier using the averaged soft labels, introducing excessive computational overhead.

4.5 Real-world Noise vs. Synthetic Noise

Aside from the real-world label noise, synthetic label noise is one of the most widely studied label noises (Patrini et al., 2017; Wang et al., 2018; Reeve and Kabán, 2019). Unlike real-world noise, which is widespread in real applications, synthetic noise does not occur naturally but is generated from artificial distributions. We further examine the differences between the two label noises. In this paper, synthetic label noise is implemented with symmetric label noise (Han et al., 2018a; Charoenphakdee et al., 2019) (the most common synthetic noise), assuming each label has the same probability of flipping to any other class.
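Symmetric label noise at level t can be generated as below: with probability t a label is flipped to a uniformly chosen different class, independent of the instance (a sketch; names are illustrative):

```python
import random

def symmetric_noise(labels, num_classes, noise_level, rng=random.Random(0)):
    """Flip each label with probability `noise_level` to a uniformly chosen
    *different* class -- standard symmetric (instance-independent) noise."""
    noisy = []
    for y in labels:
        if rng.random() < noise_level:
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy
```

Note the flip probability depends only on y (in fact not even on y here), never on the input x, which is exactly the instance-independent P(ỹ|y) assumption contrasted with real-world noise.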
We build the dataset of controlled synthetic label noise by injecting a series of synthetic label noises into clean data in a controlled manner (i.e., 10%, 20%, 40%, and 60% noise levels). We pick the same baselines as in Section 4.4. More details are in Tables 10 and 11 in the Appendix.

Results: As shown in Fig. 2(b), SEAL and Mixup consistently outperform the base model, showing their advantages in combating synthetic label noise. Unlike the real-world label noise, SR is effective for the synthetic label noise and achieves improvement over the base model regardless of the noise levels, since the synthetic label noise meets the instance-independent condition. Besides, Co-teaching, Data Parameter, and CNLCU improve the base model by an apparent margin under the synthetic label noise. In this case, as shown in Fig. 3(b), the loss distributions of correct and incorrect samples can be well split, allowing loss-based separation to work well. We discover that few LNL methods can effectively resist both real-world and synthetic noises simultaneously, highlighting the imperative of benchmark construction. Many LNL approaches can mitigate the synthetic but not real-world label noise. It is because synthetic noise is generated from artificial distributions to approximate real-world noise. The mislabeled probability is independent of each instance under synthetic noise but dependent on distinct instances under real-world noise, which makes modeling the latter more complex. Thus, our benchmark contributes to a more systematic and comprehensive assessment of LNL methods. Further, since most LNL method evaluation datasets focus on CV and ASP, our NLP benchmark facilitates the modal integrity of the existing datasets. We also contrast the performance of the base model trained for 20 epochs under real-world label noise and synthetic label noise. In Fig. 4, as the running epochs and noise levels increase, the test accuracy curve with the real-world noise (Fig.
4(a)) is much flatter than that with the synthetic noise (Fig. 4(b)) at the same noise level (e.g., with 40% and 60% noise). It demonstrates that the model generalizes much better under real-world noise than synthetic noise of the same noise level.

4.6 Case Study

We construct a benchmark encompassing real-world noise involving multiple noise sources with minimal human supervision, which is analogous to human errors during annotation. To observe the dataset more clearly and intuitively, we randomly select five incorrect instances (i.e., samples with noisy labels) across multiple noise sources. As indicated in Table 6, we find it difficult to determine whether a sample contains noise. On the other hand, for any sample, the noisy label and the respective ground-truth label are overly similar, making it challenging to distinguish one from another.

5 Conclusion

In this paper, we study the problem of learning with noisy labels and establish an NLP benchmark called NoisywikiHow with minimal human supervision, which contains more than 89K procedural events with heterogeneous and controlled real-world label noise. Experimental results reveal several new findings. (1) Some widely accepted LNL methods are not always impactful, especially with real-world label noise. (2) Different noise sources may have varying difficulties resisting label noise, although they are all from real-world noise. (3) Few LNL methods can effectively combat real-world noise and synthetic noise at the same time. (4) The model trained under the real-world label noise has better generalization performance.

Limitations

In this paper, we simplify intention identification into a sentence classification task, i.e., exploiting a specific procedural event in an event process to predict the intention of the whole event process. A more realistic way to model this task is to enter the entire event process rather than a single event. We will go into more detail about this type of task in future work.
Ethics Statement

This work presents NoisywikiHow, a free and open dataset for the research community to study learning with noisy labels. Since the data in NoisywikiHow is constructed based on the wikiHow website, which is free and open for academic usage, there is no privacy issue. We declare that all information in this paper has been obtained and presented following the ACL Ethics Policy. As required by these rules and conduct, we have fully cited and referenced all material and results that are not original to this work.

A Details of Dataset Construction

A.1 Crawling Strategy

According to wikiHow's crawler rules, we use the crawling platform Scrapy (Kouzis-Loukas, 2016) to crawl all the articles in the 19 top-level categories (e.g., Arts and Entertainment, Computers and Electronics, etc.) of the latest wikiHow website, with a total of 100,623 pages (how-to articles), including 1,407,306 samples in 3,334 categories, as shown in Table 7.

A.2 Filtering Strategies

In the main paper, we apply a collection of filters to ensure low-quality instance removal, better dataset division, and task effectiveness. The details of each filter are as follows:

Sample Length Filter: We remove instances with overly short or long event descriptions or with icon information, as too-short events may be less informative and too-long depictions may exceed the length restriction of the pre-trained language model. Icons in events present rich text starting with "smallUrl" without specific semantic information and may interfere with the understanding of procedural events.

Format Normalization: We observe that some identical event descriptions would be slightly different in distinct articles (e.g., "Click Defragment Your Hard Drive." and "Click Defragment your hard drive."). Prior to the deduplication procedure, we devise format standardization operations.
The manipulations include standardizing varied languages and symbols with Unidecode, stopword exclusion and lemmatization with spaCy (Honnibal and Montani, 2017), and word segmentation and POS tagging with the "en_core_web_sm" model in spaCy, retaining only events that contain verbs.

Deduplication: We first apply inter-class deduplication to remove instances labeled with multiple categories, then filter out repeated samples to achieve in-class deduplication. After deduplication, each procedural event corresponds to a unique event intent (i.e., category).

TF-IDF Filter: We exploit the TF-IDF filter to prevent events from being overly uninformative with respect to their event intent and to guarantee that instances are representative. Specifically, each wikiHow article is treated as a document. We calculate the TF-IDF for each token and preserve only the events containing keywords, where keywords are tokens whose TF-IDF values fall in the top 10% in decreasing order. Each article retains a minimum of 3 and a maximum of 10 keywords.

Label Filter: We filter labels by manual annotation to retain only categories that describe events. For human labeling, we recruited three graduate students from the NLP field, who were trained for two hours on the annotation strategy before the labeling process. Specifically, we use Min et al. (2020)'s and Yu et al. (2021)'s definitions of event mention (i.e., an event with surrounding context) as guidelines for annotating events. In addition, categories exhibit a hierarchical structure: descriptions of upper-level categories are relatively general and vague (e.g., Cleaning), while finer-grained categories carry more specific intentions (e.g., Kitchen Cleaning, Cleaning Metals). Accordingly, we label the category with the finest granularity as an event, except in two cases.
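The TF-IDF filter described above can be sketched in plain Python. This is a minimal illustration under simplifying assumptions (whitespace tokenization instead of the paper's spaCy pipeline; the function names `top_keywords` and `filter_events` are ours, not the authors'):

```python
import math
from collections import Counter

def top_keywords(articles, top_frac=0.10, min_kw=3, max_kw=10):
    """For each article (a list of event strings), return its keyword set:
    tokens whose TF-IDF score is in the top `top_frac`, clamped to keep
    between `min_kw` and `max_kw` keywords per article."""
    docs = [[tok.lower() for ev in art for tok in ev.split()] for art in articles]
    n_docs = len(docs)
    df = Counter()                      # document frequency of each token
    for doc in docs:
        df.update(set(doc))
    keywords = []
    for doc in docs:
        tf = Counter(doc)
        scores = {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}
        ranked = sorted(scores, key=scores.get, reverse=True)
        k = min(max_kw, max(min_kw, int(len(ranked) * top_frac)))
        keywords.append(set(ranked[:k]))
    return keywords

def filter_events(article, article_keywords):
    """Keep only the events that contain at least one article keyword."""
    return [ev for ev in article
            if any(tok.lower() in article_keywords for tok in ev.split())]
```

Events that share no keyword with their article are discarded, which is the filter's way of dropping uninformative steps.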
• If a candidate category has a broad intent meaning (e.g., Selling in Finance and Business → Managing Your Money → Making Money → Selling), it is not considered an event.

• If two candidate categories are difficult to distinguish semantically, the category with the larger sample size is designated as the event. For example, in the hierarchy (Hobbies and Crafts → Crafts → Needlework → Knitting and Crochet → Crochet → Crochet Stitches), we label Crochet (with 1,263 samples) as an event rather than Crochet Stitches (with 445 samples).

This annotation strategy balances definite event intent against sample size. The sample sizes and class counts retained after data cleaning are provided in Table 7.

A.3 Mapping from Noise Sources to Classes

In the main paper, we briefly present the correspondence between noise sources and task categories. In particular, we first define 54 tail categories, each containing no more than 400 samples. Following that, we draw on the discussion of commonsense knowledge in Liu and Singh (2004)5 and use it as a guideline for labeling categories beyond commonsense, defining 45 such categories overall.

5 See Section 1.1 for more details.

B Experiments Details

For each experimental dimension, we tune the hyperparameters of every baseline across the different noise levels. Optimal hyperparameters are obtained with the popular hyperparameter optimization tool Hyperopt (Bergstra et al., 2013).

B.1 Effects of Distinct Noise Sources

We examine the base model's performance under four different noise sources, and Fig. 5 additionally compares typical LNL methods under these sources. Among the long-tailed baselines we also evaluate Decoupling (Kang et al., 2020), which decouples the learning procedure (with three variants: Decoupling-NCM, Decoupling-cRT, and Decoupling-LWS) to understand how long-tailed recognition ability is achieved. Complete experimental results of the long-tailed learning methods are shown in Table 12, and the corresponding optimal hyperparameter settings in Table 13.
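The class-to-noise-source mapping described above (TC: tail categories with at most 400 samples; BCC: categories beyond commonsense, excluding overlaps with TC; SC: the remaining classes) can be sketched as a simple partition. The function name and signature are our own illustration, not the paper's code:

```python
def partition_noise_sources(class_sizes, beyond_commonsense, tail_max=400):
    """Partition task classes into the three noise sources.
    TC: tail categories (at most `tail_max` samples); BCC: categories labeled
    beyond commonsense, excluding any class already assigned to TC;
    SC: all remaining classes (sub-categories)."""
    tc = {c for c, n in class_sizes.items() if n <= tail_max}
    bcc = set(beyond_commonsense) - tc
    sc = set(class_sizes) - tc - bcc
    return sc, bcc, tc
```

Removing the TC overlap from BCC first ensures the three label sets are disjoint, mirroring the 54/36/68 split of the 158 classes.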
Results: We compare the relative performance boosts that the baselines report in their original papers with those obtained on NoisywikiHow. In Fig. 6, all baselines evaluated on CV datasets address the long-tailed problem properly and achieve a significant test accuracy boost (7.37%-20.9%) in the original papers. However, as shown in Fig. 7, the performance improvements across the varied noise levels on our NLP benchmark are limited, with some methods not exceeding the base model (-0.07%-2.56%). These results indicate that the effectiveness of long-tailed learning methods needs to be examined on datasets of different modalities. Moreover, although the base model degrades as the noise level increases, the effectiveness of each long-tailed learning method is not significantly affected by the variation in noise level. The main reason is that the test accuracy we report is the best peak accuracy, which produces an effect similar to early stopping and thus prevents the model from overfitting label noise.

Optimal hyperparameter settings under real-world label noise (cf. Table 9):
Data Parameter (Saxena et al., 2019): lr_inst_param=0.2, wd_inst_param=0.0
SR (Zhou et al., 2021): τ = 0.05, λ0 = 0, epochs=20
Co-teaching (Han et al., 2018b): Tk = 8, τ = the noise level
CNLCU (Xia et al., 2022): Tk = 8, τmin = 0.3, fixed-length time intervals=5
SEAL (Chen et al., 2021): number of iterations=4

Optimal hyperparameter settings under synthetic label noise (cf. Table 11):
Data Parameter (Saxena et al., 2019): lr_inst_param=0.2, wd_inst_param=0.0
SR (Zhou et al., 2021): τ = 0.5, λ0 = 0, epochs=20
Co-teaching (Han et al., 2018b): Tk = 3, τ = the noise level
CNLCU (Xia et al., 2022): Tk = 3, τmin = 0.1, fixed-length time intervals=5
SEAL (Chen et al., 2021): number of iterations=4

Optimal hyperparameter settings of the long-tailed learning methods (cf. Table 13):
BBN (Zhou et al., 2020): -
LDAM (Cao et al., 2019): C=0.2, s=10; C=0.5, s=10; C=0.7, s=7; C=0.8, s=7; C=0.8, s=10 (one setting per noise level)
Decoupling-NCM (Kang et al., 2020): -
Decoupling-cRT (Kang et al., 2020): epoch'=5, num_samples_cls=4
Decoupling-LWS (Kang et al., 2020): epoch'=5, num_samples_cls=4

Figure 7: Performance improvements over the base model under different long-tailed learning methods on NoisywikiHow.
Given a method (e.g., BBN) and a noise level, the column height reflects the performance when only using the base model, and the length of the pink line on top of the column represents the performance boost from adopting the method.

Table 1: Instances (a)-(d) depict examples of our task. Input: a procedural event. Output: a plausible intention toward that event. ("... cultural and ethnic foods in your plan." (d) "Talk about food differently.")

Figure 1: Number of events per category in the training set of NoisywikiHow.

Figure 2: Test accuracy (Top-1) of representative LNL methods trained with controlled label noise. (a) Real-world label noise. (b) Synthetic label noise.

Figure 3: Training loss distributions of correct and incorrect samples at the 4th epoch with 40% label noise.

Table 2: Comparison between our benchmark and other datasets. Noise levels (%): 0, 10, 20, 40, 60.

Table 3: Overview of NoisywikiHow with multiple noise sources and controlled label noise, where SC, BCC, and TC denote the noise sources from sub-categories, categories beyond the commonsense, and tail categories, respectively.

Noise Source  Classes  Train   Val    Test   Total
SC            68       39,674  3,400  3,400  46,474
BCC           36       20,413  1,800  1,800  24,013
TC            54       13,256  2,700  2,700  18,656
Total         158      73,343  7,900  7,900  89,143

Architectures

Baselines: We first evaluate the performance of different model architectures under varying levels of real-world label noise. We use seven state-of-the-art (SOTA) pre-trained language models: BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019a), GPT2 (Radford et al., 2019), ALBERT (Lan et al., 2020), T5 (Raffel et al., 2020), and BART (Lewis et al., 2020).

Table 4: Top-1 (Top-5) classification accuracy (%) of pre-trained language models on the NoisywikiHow test set under different levels of real-world label noise. Top-1 results are in bold.

Method    0%            10%           20%           40%           60%
BERT      60.29 (83.53) 58.86 (83.82) 57.42 (82.57) 52.91 (79.84) 48.20 (75.37)
XLNet     59.77 (85.24) 60.23 (85.90) 58.25 (84.29) 53.74 (81.73) 50.23 (79.44)
RoBERTa   60.59 (85.10) 59.65 (84.16) 57.77 (83.77) 54.18 (81.56) 50.85 (78.87)
GPT2      59.84 (85.39) 58.35 (84.90) 57.00 (83.94) 52.71 (80.81) 48.25 (78.08)
ALBERT    55.13 (80.80) 56.21 (82.15) 53.68 (80.52) 49.93 (78.44) 44.81 (74.41)
T5        58.35 (83.63) 56.87 (83.03) 56.19 (82.20) 52.29 (79.94) 47.47 (77.39)
BART      61.72 (86.90) 60.28 (85.92) 58.94 (84.67) 54.57 (82.38) 49.75 (78.84)

Table 5: Test accuracy (%) of the base model under distinct noise sources with 10% label noise, where SC+BCC+TC denotes the default NoisywikiHow with a mixture of noise sources.

Different LNL Methods

Baselines: We perform an extensive evaluation of the existing LNL methods on our benchmark. Seven representative baselines are involved for comparison: (1) Base model, which fine-tunes the BART model with no extra LNL method; (2) Mixup (Zhang et al., 2018), which mitigates the memorization of noisy labels by DNNs through regularization, i.e., by introducing a data-agnostic data augmentation routine; (3) Data Parameter (Saxena et al., 2019), which equips learnable parameters to help DNNs generalize better by learning from easier instances first; (4) SR (Zhou et al., 2021).
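The Mixup augmentation step can be sketched as follows. For text, the convex combination is typically applied to continuous representations (e.g., sentence embeddings); this helper is an illustrative sketch, with `mixup_pair` our own name, rather than Zhang et al. (2018)'s exact implementation:

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=1.0, rng=random):
    """Mix two training examples: x_i are feature vectors (e.g., sentence
    embeddings) and y_i are one-hot label vectors. Returns the convex
    combination with mixing weight lam ~ Beta(alpha, alpha)."""
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

In training, the loss is computed against the mixed target y (equivalently, lam * loss(y1) + (1 - lam) * loss(y2)), which discourages the network from memorizing any single (possibly noisy) hard label.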
Figure 4: Test accuracy of the base model trained with controlled label noise. (a) Real-world label noise. (b) Synthetic label noise.

Table 6: Five incorrect instances from three different noise sources in the NoisywikiHow dataset.

Noise Source  Incorrect sample                            Noisy label               Ground-truth label (unobservable)
SC            (a) Rinse off the paste using warm water.   Coloring Hair             Making Skin Look Lighter
BCC           (b) Mow your lawn and the leaves.           Lawn Care                 Cleaning up Garden
BCC           (c) Avoid over-fertilizing your tree.       Growing Trees and Shrubs  Growing Fruit
TC            (d) Give yourself a span of time to mourn.  Domestic Violence         Rebuilding Life After Divorce
TC            (e) Place the bananas on a wire rack.       Steaming Food             Food Preservation Techniques

Eyal Beigman and Beata Beigman Klebanov. 2009. Learning with annotation noise. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 280-287.

James Bergstra, Daniel Yamins, and David Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International Conference on Machine Learning, pages 115-123. PMLR.

Keith Burghardt, Tad Hogg, and Kristina Lerman. 2018. Quantifying the impact of cognitive biases in question-answering systems. In Twelfth International AAAI Conference on Web and Social Media.

Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. 2019. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in Neural Information Processing Systems, 32.

Nontawat Charoenphakdee, Jongyeong Lee, and Masashi Sugiyama. 2019.
On symmetric losses for learning from corrupted labels. In International Conference on Machine Learning, pages 961-970. PMLR.

Muhao Chen, Hongming Zhang, Haoyu Wang, and Dan Roth. 2020. What are you trying to do? Semantic typing of event processes. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 531-542.

Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, and Pheng-Ann Heng. 2021. Beyond class-conditional assumption: A primary attempt to combat instance-dependent label noise. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11442-11450.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics.

Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.

Eduardo Fonseca, Manoj Plakal, Daniel PW Ellis, Frederic Font, Xavier Favory, and Xavier Serra. 2019a. Learning sound event classifiers from web audio with noisy labels. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 21-25. IEEE.

Eduardo Fonseca, Manoj Plakal, Frederic Font, Daniel PW Ellis, and Xavier Serra. 2019b. Audio tagging with noisy labels and minimal supervision. arXiv preprint arXiv:1906.02975.

Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, and Masashi Sugiyama. 2022. Sample selection with uncertainty of losses for learning with noisy labels. In International Conference on Learning Representations.

Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. 2015. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2691-2699.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G.
Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Neural Information Processing Systems.

Xiaodong Yu, Wenpeng Yin, Nitish Gupta, and Dan Roth. 2021. Event linking: Grounding event mentions to Wikipedia. arXiv preprint arXiv:2112.07888.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017a. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.

Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020a. Intent detection with WikiHow. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 328-333.

Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020b. Reasoning about goals, steps, and temporal ordering with WikiHow. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4630-4639.

Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017b. Position-aware attention and supervised data improve slot filling. In Conference on Empirical Methods in Natural Language Processing.

Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. 2020. BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9719-9728.

Xiong Zhou, Xianming Liu, Chenyang Wang, Deming Zhai, Junjun Jiang, and Xiangyang Ji. 2021. Learning with noisy labels via sparse regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 72-81.
Table 7: Statistics of the valid sample sizes and classes retained after each operation.

Figure 5: Test accuracy (%) of typical LNL methods under distinct noise sources with 10% label noise.

Specifically, we ask three annotators to label the 158 categories as commonsense or not, reaching high agreement (Fleiss-κ = 0.88). To ensure that the label sets under different noise sources do not overlap, we remove 9 categories that also appear among the tail categories, eventually obtaining 36 categories beyond commonsense. Lastly, the remaining 68 of the 158 classes are designated as the noise source SC. Fig. 5 further compares the efficacy of typical LNL methods under the various noise sources: regardless of the method employed, all are less effective at reducing the effect of the noise source TC, further confirming our point of view.

Figure 6: Performance improvements under different long-tailed learning methods in the original papers.

B.2 Long-tailed Distribution Properties

Baselines: Our training set follows a typical long-tailed class distribution akin to that in the real world. However, DNNs can be readily biased towards dominant classes with massive training data, triggering poor model performance on tail classes with limited data. This problem has inspired a large body of long-tailed learning research.
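One standard resampling remedy for this bias is class-balanced sampling, where each example is drawn with probability inversely proportional to its class size. Below is a minimal sketch, our own illustration rather than the implementation of any specific baseline:

```python
import random
from collections import Counter

def class_balanced_sample(labels, n_draws, seed=0):
    """Draw `n_draws` example indices with replacement, weighting each example
    by the inverse of its class frequency so that all classes are drawn
    roughly equally often regardless of their size."""
    rng = random.Random(seed)
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]
    return rng.choices(range(len(labels)), weights=weights, k=n_draws)
```

With a 900-example head class and a 100-example tail class, each class receives total weight 1.0, so both are drawn about half of the time.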
To fully explore the characteristics of the NoisywikiHow dataset, we select five long-tailed learning methods from three classical categories as baselines: (1) BBN (Zhou et al., 2020), which applies a resampling strategy to draw more tail-class samples and thereby improve tail-class performance; (2) LDAM (Cao et al., 2019), which rebalances classes by designing an effective loss and training schedule; (3) Decoupling (Kang et al., 2020), with its three variants Decoupling-NCM, Decoupling-cRT, and Decoupling-LWS.

(Test accuracy improvements reported in the original papers, as plotted in Fig. 6: BBN 7.37, LDAM 6.67, Decoupling-NCM 14.6, Decoupling-cRT 20.9, Decoupling-LWS 20.5.)

Table 8: Top-1 (Top-5) classification accuracy (%) of representative LNL methods on the NoisywikiHow test set under different noise levels. Top-1 results are in bold.

Table 9: Optimal hyperparameter settings for different levels of controlled real-world label noise on NoisywikiHow (e.g., Mixup (Zhang et al., 2018): α = 1; Data Parameter (Saxena et al., 2019): ...).

Table 10: Test accuracy (%) of representative LNL methods with controlled synthetic label noise.

Method          10%           20%           40%           60%
Base model      59.83 (85.66) 58.89 (84.97) 55.66 (82.03) 51.49 (78.29)
Mixup           61.61 (86.40) 59.78 (85.57) 57.01 (83.20) 51.80 (78.97)
Data Parameter  60.94 (85.74) 59.56 (85.39) 55.81 (82.41) 51.69 (78.73)
SR              60.25 (82.35) 59.51 (81.54) 56.80 (79.71) 51.90 (77.38)
Co-teaching     60.86 (86.06) 59.97 (85.16) 56.92 (82.99) 52.95 (79.85)
CNLCU           60.46 (85.84) 59.49 (84.92) 57.16 (83.05) 52.51 (78.28)
SEAL            62.69 (87.66) 61.41 (86.99) 58.97 (84.77) 54.66 (80.92)

Table 11: Optimal hyperparameter settings for different levels of controlled synthetic label noise on NoisywikiHow (e.g., Mixup (Zhang et al., 2018): α = 1; Data Parameter (Saxena et al., 2019): ...).

Table 12: Test accuracy (%) of long-tailed learning methods on NoisywikiHow under different noise levels.

Method          0%            10%           20%           40%           60%
BBN             63.11 (87.06) 62.03 (86.79) 60.03 (85.73) 55.59 (83.68) 50.22 (80.47)
LDAM            64.25 (86.82) 62.71 (86.19) 60.69 (85.18) 56.29 (82.53) 50.79 (79.52)
Decoupling-NCM  62.54 (85.59) 60.85 (85.61) 58.94 (84.71) 54.86 (82.58) 50.09 (79.76)
Decoupling-cRT  62.89 (86.16) 61.86 (86.53) 59.99 (85.41) 55.80 (83.29) 51.82 (81.40)
Decoupling-LWS  61.87 (85.75) 60.42 (85.96) 58.61 (84.63) 54.30 (82.20) 49.61 (79.50)

Table 13: Optimal hyperparameter settings of the long-tailed learning methods for different levels of controlled real-world label noise on NoisywikiHow.

(Performance improvements over the base model plotted in Fig. 7, at noise levels 0/10/20/40/60%: BBN 1.17/1.57/1.41/1.86/1.26; LDAM 2.31/2.25/2.07/2.56/1.83; Decoupling-NCM 0.6/0.39/0.32/1.13/1.13; Decoupling-cRT 0.95/1.4/1.37/2.07/2.86; Decoupling-LWS -0.07/-0.04/-0.01/0.57/0.65.)

https://www.wikihow.com

Synthetic noise and the various noise sources under real-world noise correspond to different noisy training sets.

https://www.wikihow.com/robots.txt

Acknowledgements

We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the Technological Innovation "2030 Megaproject" - New Generation Artificial Intelligence of China (2020AAA0106501) and the National Natural Science Foundation of China (U22B2059, 62176079, 62106061).

Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events.
In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 776-780. IEEE.

Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, and Masashi Sugiyama. 2018a. Masking: A new perspective of noisy supervision. Advances in Neural Information Processing Systems, 31.

Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W Tsang, James T Kwok, and Masashi Sugiyama. 2021. A survey of label-noise representation learning: Past, present and future.

Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W. Tsang, and Masashi Sugiyama. 2018b. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Neural Information Processing Systems.

Michael A Hedderich, Dawei Zhu, and Dietrich Klakow. 2021. Analysing the noise model error for realistic noisy label data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7675-7684.

Nora Hollenstein, Nathan Schneider, and Bonnie Webber. 2016. Inconsistency detection in semantic annotation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3986-3990.

Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear, 7(1):411-420.

Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 258-268.

Lu Jiang, Di Huang, Mason Liu, and Weilong Yang. 2020. Beyond synthetic noise: Deep learning on controlled noisy labels. In International Conference on Machine Learning, pages 4804-4815. PMLR.

Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. 2020. Decoupling representation and classifier for long-tailed recognition. In Eighth International Conference on Learning Representations (ICLR).

Beata Beigman Klebanov and Eyal Beigman. 2009. Squibs: From annotator agreement to noise models. Computational Linguistics, 35(4):495-503.

Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing disagreements. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 2-7.

Dimitrios Kouzis-Loukas. 2016. Learning Scrapy. Packt Publishing Ltd.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.

Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. 2018. CleanNet: Transfer learning for scalable image classifier training with label noise. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5447-5456.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.

Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. 2017. WebVision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862.

Hugo Liu and Push Singh. 2004. ConceptNet: a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211-226.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. RoBERTa: A robustly optimized BERT pretraining approach. arXiv: Computation and Language.

Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. 2019b. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537-2546.

Michal Lukasik, Srinadh Bhojanapalli, Aditya Menon, and Sanjiv Kumar. 2020. Does label smoothing mitigate label noise? In International Conference on Machine Learning, pages 6448-6458. PMLR.

Bonan Min, Yee Seng Chan, and Lingjun Zhao. 2020. Towards few-shot event mention retrieval: An evaluation framework and a siamese network approach. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1747-1752.

Curtis G Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabilize machine learning benchmarks. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).

Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. 2017. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944-1952.

Sveva Pepe, Edoardo Barba, Rexhina Blloshmi, and Roberto Navigli. 2022. STEPS: Semantic typing of event processes with a sequence-to-sequence approach.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.

Henry Reeve and Ata Kabán. 2019. Fast rates for a kNN classifier robust to unknown asymmetric label noise. In International Conference on Machine Learning, pages 5401-5409. PMLR.

Dennis Reidsma and Rieks op den Akker. 2008. Exploiting 'subjective' annotations. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 8-16.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252.

Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035.

Shreyas Saxena, Oncel Tuzel, and Dennis DeCoste. 2019. Data parameters: A new family of parameters for learning a differentiable curriculum. Advances in Neural Information Processing Systems, 32.

Hwanjun Song, Minseok Kim, and Jae-Gil Lee. 2019. SELFIE: Refurbishing unclean samples for robust deep learning. In International Conference on Machine Learning, pages 5907-5915. PMLR.

Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. 2022. Learning from noisy labels with deep neural networks: A survey. IEEE Transactions on Neural Networks and Learning Systems.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.

Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. 2018. The iNaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8769-8778.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.

Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
the IEEE conference on computer vision and pattern recognitionYisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. 2018. It- erative learning with open-set noisy labels. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pages 8688-8696. Stgn: an implicit regularization method for learning with noisy labels in natural language processing. Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. the 2022 Conference on Empirical Methods in Natural Language ProcessingTingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, and Ting Liu. 2022a. Stgn: an implicit regu- larization method for learning with noisy labels in natural language processing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, page 7587-7598. Discrimloss: A universal loss for hard samples and incorrect samples discrimination. Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu, arXiv:2208.09884arXiv preprintTingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, and Ting Liu. 2022b. Dis- crimloss: A universal loss for hard samples and incorrect samples discrimination. arXiv preprint arXiv:2208.09884.
Preference or Intent? Double Disentangled Collaborative Filtering

Chao Wang, Hengshu Zhu, Dazhong Shen, Wei Wu, Hui Xiong

Affiliations: Career Science Lab, BOSS Zhipin, China; School of Computer Science, University of Science and Technology of China, China; The Hong Kong University of Science and Technology (Guangzhou), China; HKUST Fok Ying Tung Research Institute, China

arXiv: 2305.11084 · DOI: 10.48550/arxiv.2305.11084
Keywords: Recommender systems; Collaborative filtering; User intent

ABSTRACT

People usually have different intents for choosing items, while their preferences under the same intent may also differ. In traditional collaborative filtering approaches, both intent and preference factors are usually entangled in the modeling process, which significantly limits the robustness and interpretability of recommendation performance. For example, low-rating items are always treated as negative feedback, while they could actually provide positive information about user intent. To this end, in this paper, we propose a two-fold representation learning approach, namely Double Disentangled Collaborative Filtering (DDCF), for personalized recommendation. The first-level disentanglement separates the influence factors of intent and preference, while the second-level disentanglement builds independent sparse preference representations under individual intents with limited computational complexity. Specifically, we employ two variational autoencoder networks, an intent recognition network and a preference decomposition network, to learn the intent and preference factors, respectively. In this way, low-rating items are treated as positive samples for modeling intents but as negative samples for modeling preferences. Finally, extensive experiments on three real-world datasets with four evaluation metrics clearly validate the effectiveness and interpretability of DDCF.
INTRODUCTION

As one of the most popular techniques for building personalized recommender systems, collaborative filtering (CF) aims to model user behaviors by learning latent representations from historical interaction records, such as implicit feedback or an explicit rating matrix. In real-world recommender systems, users usually have different intents over certain item groups, while their preference for a specific item may also differ under distinct intents. Marketing studies have shown that purchase intent should be distinguished from product preference, and a person may indicate a preference without any intent of buying [18, 32]. However, in traditional CF approaches, the intent and preference factors are usually entangled as integrated factors in the rating modeling process [30], which significantly limits the robustness and accuracy of recommendation results. A piece of evidence is that, when transforming explicit feedback into implicit form, using all of the feedback as positive samples for training can sometimes lead to better recommendation performance than only treating high ratings as positive samples. This is counterintuitive, since low-rating interactions are usually viewed as negative in the literature. Therefore, we argue that such low-rating feedback is not completely "negative", since it can still provide positive guidance from the intent perspective.
In this paper, we denote a user intent as one's interest in a group of items sharing similar properties, while the user preference represents one's evaluation of a specific item. For instance, Alice may choose to watch a comedy A rather than a tragic film B to celebrate a holiday, although she would actually give a higher rating to film B than to A. Consequently, neglecting user intents leads to several limitations: 1) it is prone to inappropriately analyzing the underlying preferences and thus producing suboptimal representations for both users and items; 2) since real-world data often include noisy or ambiguous interaction records, the latent representations may become obscure and entangled when fitting the integrated intents; 3) the representations usually lack interpretability without consideration of intents.

However, it is very challenging to discriminate the different influence factors from massive user behaviors. First, the intents of users are implicit, without tagged labels; we usually have to infer them based only on the user-item rating matrix. In the literature, although there have been some efforts to capture bias factors in the rating modeling process, such as popularity bias, conformity bias, selection bias, and so on [8, 26, 31, 34, 36, 42, 48], the task of disentangling intent and preference is still underexplored. Recently, Ma et al. [21] and Wang et al. [41] began to study users' intents over item groups in implicit feedback. However, they did not clearly disentangle the intent and preference factors, and can only handle quite coarse-grained intents due to unacceptable computational complexity with respect to the number of intents. Besides, the disentanglement of user preference representations under different intents is also vital for producing high-quality representations, and has still been largely neglected in the literature.
To this end, in this paper, we propose a novel CF approach for personalized recommendation, namely Double Disentangled Collaborative Filtering (DDCF), which performs two-fold disentanglement in the rating modeling process for explicit feedback and thus produces more robust and interpretable latent representations. Formally, we define one intent channel as a probability distribution over all items, and one user's intent distribution as a probability distribution over all intent channels. Hence, the items with high probabilities in one intent channel can be viewed as a group. Accordingly, our model separates the traditional integrated latent space into two new spaces, representing the intent and preference factors, respectively. Each of the two is learned with a variational autoencoder (VAE) style network, namely the intent recognition network and the preference decomposition network. We first learn the distributions over all intent channels for both users and items from the implicit-form input. Then, the observed user ratings are partitioned into each intent based on the learned user and item intent distributions. Thus, we can obtain different latent preference representations from those tailored ratings for different concepts via the preference decomposition network. For handling fine-grained intent channels, we propose to adopt independent sparse representations for user preferences through a sampling technique, so that the computational complexity does not grow with the number of intent channels. Besides, we design a novel disentangled contrastive learning mechanism for learning preference representations. Meanwhile, we also discuss some new recommendation strategies based on the intent representations. To summarize, the main contributions of this work are:

• We discuss and emphasize that low-rating feedback is not completely "negative".
In our approach, the low-rating items are treated as positive samples for modeling intents and as negative samples for modeling preferences.
• We propose a novel double disentangled collaborative filtering approach, DDCF. Specifically, the first-level disentanglement is designed for separating the influence factors of intent and preference, while the second-level disentanglement is performed to construct independent sparse representations for different intents.
• We propose a novel disentangled contrastive learning mechanism to improve the quality of the disentangled embeddings of user preference under different intent channels.
• We validate the effectiveness and interpretability of our approach DDCF on three real-world datasets against a number of state-of-the-art baselines.

PRELIMINARY AND RELATED WORKS

In this section, we first discuss and emphasize the limitations of the representation learning process in conventional latent factor models (LFMs) with respect to the modeling of the user-item rating matrix. Then, we introduce some related works on improving the representation learning process in LFMs.

Rating Modeling Paradigm

Collaborative filtering (CF) [24, 25] methods have been widely applied in many web-based services like Google, Netflix, and Amazon [7]. Among them, latent factor model (LFM) based CF methods have drawn much attention and achieved great success in recommender systems due to their superior recommendation performance and high extensibility [2, 27, 37]. Generally speaking, the user-item rating matrix can be considered a low-rank matrix with a vast majority of unobserved entries. Suppose there are M users and N items in the data. The main idea of the latent factor model is to factorize the low-rank rating matrix R ∈ R^{M×N} into two latent representations U ∈ R^{M×d} and V ∈ R^{N×d} in a shared low-dimensional space with dimension d, representing the latent user and item factors, respectively.
Further, we use u_i ∈ R^d and v_j ∈ R^d to denote the latent vectors for the i-th user and the j-th item, respectively. As shown in Figure 1, u_i implies the i-th user's preferences while v_j represents the j-th item's properties. When u_i and v_j get closer in the latent space, the i-th user is more likely to prefer the j-th item. Let f(·) denote the embedding function and ID_i, ID_j be the user's and item's IDs. Here, we summarize the representation learning process of conventional LFMs as follows:

    u_i = f(ID_i),    v_j = f(ID_j).    (1)

After obtaining the latent representations of both users and items, the next step is to reconstruct the user-item interactions through the rating modeling process. The most widely used rating prediction paradigm is the inner product of the latent user vector and item vector [16, 46]:

    r̂_ij = u_i^T v_j,    (2)

where r̂_ij denotes the predicted rating of the i-th user on the j-th item. Equation 2 implies that we measure the user-item preference by distance estimation between the two latent vectors in the shared latent space, as shown in Figure 1. However, Equation 2 fails to consider and disentangle the intent factor in user behaviors. Thus, the learned latent representations lack interpretability and may miss some important information.

Entangled Data Analysis

In this subsection, we leverage four representative CF methods for implicit feedback, REL [31], PDA [48], DMF [46], and Multi-VAE [20], to perform an interesting comparison between two settings. Since many public real-world recommendation datasets are provided in the form of explicit feedback, researchers usually choose to transform the graded ratings into binary data for studying implicit feedback [11, 14, 28, 45]. Accordingly, we have to distinguish the positive and negative feedback from the original ratings. In the literature, the most common treatment is to set a threshold [11, 14, 28, 45]. In this way, the items with ratings lower than the given threshold are viewed as negative feedback.
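The prediction paradigm of Equations 1 and 2 can be sketched in a few lines of numpy; the sizes M, N, d and all values are illustrative, and the "embedding function" is reduced to a simple row lookup by ID:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 4, 6, 3          # users, items, latent dimension (illustrative sizes)

# Latent factor matrices U (users) and V (items), as in Equation 1;
# here the embedding function f is just a row lookup by ID.
U = rng.normal(size=(M, d))
V = rng.normal(size=(N, d))

def predict(i, j):
    """Predicted rating of user i on item j: the inner product of Equation 2."""
    return U[i] @ V[j]

# The full predicted rating matrix is simply U V^T.
R_hat = U @ V.T
assert np.isclose(R_hat[1, 2], predict(1, 2))
```

Computing `U @ V.T` once recovers every r̂_ij at the same time, which is why the inner-product paradigm scales well in practice.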
However, an important question arises: should the items with low ratings be viewed as totally negative samples? In our experiment, for the first setting, we transformed all the explicit feedback into positive feedback in the training sets, while for the second setting only the high ratings were transformed into positive feedback. For the test set, only high ratings were viewed as positive feedback in both settings. Thus, the two settings share the same test set but different training sets. To perform a fair comparison, we fixed all the hyper-parameters to be the same in the two settings.

We present the experimental results of the two settings on the three datasets in Table 1. Surprisingly, we can observe from Table 1 that the performances in the first setting are better than in the second setting. These results clearly demonstrate that the conventional "negative feedback" (items with low ratings) is actually not totally negative. In fact, these interactions also provide the intent information behind users' decision-making processes. Without the information of low-rating items, we cannot gain a full understanding of all of a user's intents, and thus the CF models perform worse at covering user interests. A piece of evidence is that the difference on the recall metric is larger than on the precision metric.

It is worth noting that user intent is quite different from the well-known data bias problem in recommender systems. Intents come from personalized user inherent interests, while data bias means that the distribution from which the training data are collected differs from the ideal test data distribution [6]. To avoid the influence of data bias, we leveraged two debiasing approaches, REL and PDA. REL addresses the Missing-Not-At-Random problem and PDA addresses conformity/popularity bias. These biases could otherwise be suspected of driving the performance differences between the two settings.
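The two binarization settings compared above can be reproduced with a few lines; the toy rating matrix and the threshold value of 4 are assumptions chosen purely for illustration:

```python
import numpy as np

# Toy explicit rating matrix (0 = missing), ratings on a 1-5 scale.
R = np.array([[5, 0, 2, 0],
              [0, 3, 4, 1],
              [2, 0, 0, 5]])

# Setting 1: every observed interaction counts as positive feedback.
X_all = (R > 0).astype(int)

# Setting 2: only high ratings (>= threshold) count as positive feedback.
threshold = 4  # illustrative choice
X_high = (R >= threshold).astype(int)

# Setting 1 keeps the low-rating interactions that, as argued above,
# still carry intent information.
assert X_all.sum() > X_high.sum()
```

Here setting 1 retains 7 positive entries versus 3 in setting 2; the extra entries are exactly the low-rating interactions whose intent signal the paper argues should not be discarded.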
However, we can observe from Table 1 that these two debiasing approaches still show significant differences between the two settings, demonstrating that the differences do not come from the data bias problem. To summarize, while a low rating may imply low user preference for the item itself, it can also indicate potential user intent, i.e., interest in the groups the item belongs to. Therefore, our motivation is to disentangle intent and preference in the user decision process.

Related Works

Since exploring user choices based purely on the rating matrix is the central task of LFMs, there have been many efforts to improve the rating modeling process in the literature. Some researchers pointed out that the straightforward rating modeling process is insufficient for depicting complex real-world situations. For example, Wang et al. [38] and Wang et al. [39] argued that the exposure status in real-world situations significantly influences the user-item interaction probabilities. Thus, they analyzed the exposure problem for missing data by estimating user exposure towards items, and then used it to guide the rating modeling process. Besides, Abdollahpouri et al. [1] and Yang et al. [47] discussed the popularity factor in real-world data, namely that users are more likely to interact with popular items. Many other works have focused on different biases in the rating modeling process [5, 8, 13, 26, 31, 34, 36, 42, 48]. Despite these efforts, the disentanglement of intent and preference still remains largely unexplored, which restricts further understanding of the latent representations. Recently, Ma et al. [21] and Wang et al. [41] attempted to construct an independent representation for each intent in implicit feedback. Li et al. [19] aimed to mine latent user intents in sequential data. Li et al. [19] and Zhao et al. [49] leverage graph modeling to study the user intents behind user feedback.
Differently, Wang et al. [40] tried to model intents with a knowledge graph rather than interactions. However, these methods cannot explicitly learn disentangled intent and preference representations. They are mostly designed for implicit feedback and are incapable of differentiating the influence of intent and preference. Besides, they are mainly suitable for handling coarse-grained intents due to their at least linear computational complexity in the number of intents. In this paper, in order to obtain more robust and interpretable user and item representations, we focus on a two-fold disentanglement for explicit feedback and propose a double disentangled collaborative filtering approach that concurrently considers both users' intents and preferences.

METHODOLOGY

In this section, we introduce the technical details of our proposed Double Disentangled Collaborative Filtering (DDCF) approach. We begin with the notations and a solution overview.

Solution Overview

An explicit rating matrix R ∈ R^{M×N} is composed of historical ratings with real numbers, with the missing entries denoted as 0. Since ratings may have various ranges in real-world applications, we suppose without loss of generality that all observed ratings are larger than 0. Let r_i denote the i-th user's rating records on all items. We use the binary matrix X ∈ R^{M×N} to denote the implicit form of R, that is, x_ij = 1 if r_ij > 0 and x_ij = 0 if r_ij = 0. Thus, all user-item interactions are treated equally in the binary matrix, while the rating matrix contains more information about users' individual preferences. Similarly, the i-th user's binary records are represented by x_i. As shown in Figure 2, DDCF performs double disentangled representation learning to better model the user-item interactions. Along this line, we learn two types of latent representations for both users and items, namely the intent representation and the preference representation.
Specifically, we first feed x_i into the intent recognition network to model the intent representation, which captures the user's and items' distributions over the intent channels. Formally, we define one intent channel as a probability distribution over the items. Then, based on the obtained intent vectors, we decompose the user's rating record into tailored inputs y_i^k ∈ R^N according to the different intent channels. To control the complexity, we use a sampling technique and only learn independent preference representations z_i^k ∈ R^d for the top-T intent channels through the preference decomposition network. To better mine the user preference under different intent channels, we design a disentangled contrastive learning mechanism, which shares the network parameters Θ with the preference decomposition network. Finally, the rating prediction is performed by jointly considering the predicted ratings under the intent channels.

Intent Recognition Modeling

User behavior may be driven by multiple intent channels. Taking movie recommendation as an example, the appeal to users may come from movies' categories, directors, actors, plots, and many other fine-grained concepts. Hence, distinct intent channels make distinct contributions to the final user decisions. The first step of DDCF is to model the users' and items' distributions over all the intent channels. Suppose there are K intent channels in the data. Inspired by classic topic modeling algorithms [3], we assume every user binary record x_i is generated from a mixture of intent channels β = (β_1, ..., β_K) ∈ R^{K×N}. Each intent channel β_k ∈ R^N can be viewed as a probability distribution over the entire item set. A higher probability value in β_k implies that the corresponding item is more likely to belong to this channel, and for each channel the items' probability values sum to 1.
Then the user intent representation θ_i ∈ R^K for the i-th user is defined as the proportion distribution over all the channels, which obeys a Dirichlet distribution: θ_i ∼ Dirichlet(α). Here α ∈ R^K is the learned parameter, and the k-th dimension of θ_i represents the proportion of the k-th intent channel. Suppose there are N_i positive items in the user binary record x_i. Then x_i can be drawn from the Multinomial distribution: x_i ∼ Mult(N_i, β^T θ_i). Under this assumption, the marginal likelihood of the user binary record is given as follows:

    p(x_i | α, β) = ∫ p(x_i | θ_i, β) p(θ_i | α) dθ_i.    (3)

Note that the Dirichlet prior is significant for obtaining interpretable and sparse intent representations [35]. However, it is incapable of directly taking gradients through the sampling process. Therefore, we utilize a Laplace approximation to the softmax basis of the Dirichlet prior [22] and then perform the reparameterization trick for the Normal distribution. Specifically, let θ_i = σ(h_i / τ), where σ(·) is the softmax function and τ is a temperature parameter. Each dimension of h_i is related to a specific intent channel, in full accord with θ_i. Then h_i can be given as a multivariate Normal with mean μ and covariance Σ. Note that the covariance matrix is approximately diagonal for large K, since its off-diagonal elements are suppressed with O(1/K). Hence, the Laplace approximation is given by:

    μ_k = log α_k − (1/K) Σ_{k'=1}^{K} log α_{k'},
    Σ_kk = (1/α_k)(1 − 2/K) + (1/K²) Σ_{k'=1}^{K} 1/α_{k'}.    (4)

Since the latent vector h_i is the softmax basis of the Dirichlet prior, we are able to approximate the simplex basis by the logistic normal distribution p(θ_i | α) ≈ LN(θ_i | μ, Σ) [17]. According to Equation 3, the generation network is defined as a linear layer x̂_i = β^T θ_i. Here β should in principle be constrained so that each channel is a valid distribution over the items, which can easily be realized by applying a softmax transformation to β.
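The Laplace approximation of Equation 4 can be computed directly from α; a minimal numpy sketch, where the symmetric Dirichlet parameter value 0.2 is an illustrative assumption:

```python
import numpy as np

def laplace_approx(alpha):
    """Mean and diagonal covariance of the softmax-basis Laplace
    approximation to a Dirichlet(alpha) prior (Equation 4)."""
    K = alpha.shape[0]
    mu = np.log(alpha) - np.log(alpha).mean()
    sigma = (1.0 / alpha) * (1.0 - 2.0 / K) + (1.0 / K**2) * np.sum(1.0 / alpha)
    return mu, sigma

alpha = np.full(5, 0.2)            # symmetric sparse prior, illustrative value
mu, sigma = laplace_approx(alpha)

# For a symmetric alpha the mean is zero and all variances are equal.
assert np.allclose(mu, 0.0)
assert np.allclose(sigma, sigma[0])
```

Because the approximation lives in the (unconstrained) softmax basis, sampling h ∼ N(μ, diag Σ) and applying the softmax recovers approximately Dirichlet-distributed θ, which is what makes the reparameterization trick applicable.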
To infer the model, we employ the Neural Variational Inference (NVI) approach due to its success in embedding competency [15, 33]. Specifically, we construct an inference network parameterized by Φ for efficient posterior inference of h_i:

    q(h_i) = q(h_i | x_i) = N(μ_Φ(x_i), Σ_Φ(x_i)).    (5)

Here the variational distribution obeys a multivariate Normal distribution owing to the Laplace approximation. The prior distribution p(h) is also given in the form of Equation 4, where every α_k is set to a predefined value. We then implement the inference network as an L-layer MLP. As shown in Figure 2, the network input is x_i. The first layer can be seen as an embedding layer, since the input is in 0/1 form. Each layer contains a linear transformation and an activation function except for the last layer, which produces the two outputs μ_Φ(x_i) and Σ_Φ(x_i). Accordingly, by leveraging the reparameterization trick [29], we can easily obtain h_i ∼ q(h_i) as h_i = μ_Φ(x_i) + ε ⊙ Σ_Φ(x_i), where ε ∼ N(0, I). Finally, we can write the evidence lower bound (ELBO) as

    −L_1 = E_q[log p(x_i | h_i)] − KL(q(h_i | x_i) || p(h_i)).    (6)

The former term in Equation 6 is the reconstruction loss. Following VAE [15], we employ Monte Carlo sampling to estimate the expectation E_q[log p(x_i | h_i)]. Furthermore, to encourage independence among the different dimensions of each representation, we follow β-VAE [12] and add a penalty parameter to the KL term. Such treatment helps the representation ignore noise in the input and focus more on the key information [4].

As for obtaining the item intent representation η ∈ R^{N×K}, we can naturally take the user representation θ_i as the prior probability for the interacted items, i.e., j ∈ {j | x_ij = 1}. Then we employ an MLP g(·) as the posterior inference network:

    η_j ∼ Mult(1, σ(g(e_j) / τ)).    (7)

Here e_j is the j-th row of the weight matrix in the first layer of the network, so that the initial embeddings of the items are shared between the two inference networks.
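The reparameterized sample and the Gaussian KL term used in the ELBO of Equation 6 can be sketched as follows; all shapes and parameter values are illustrative, and for simplicity the sketch works with diagonal covariances passed as variance vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, std):
    """h = mu + eps * std with eps ~ N(0, I), so gradients could flow
    through mu and std in an autodiff framework."""
    eps = rng.standard_normal(mu.shape)
    return mu + eps * std

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) ), summed over dims."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

mu, var = np.zeros(4), np.ones(4)
h = reparameterize(mu, np.sqrt(var))

# KL between identical Gaussians is zero.
assert np.isclose(kl_diag_gaussians(mu, var, mu, var), 0.0)
```

In a β-VAE-style objective, the returned KL value would simply be multiplied by the penalty parameter before being added to the reconstruction loss.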
Consequently, we use the KL divergence to measure the distance between the prior and posterior probabilities as follows:

    L_2 = Σ_i Σ_{j ∈ {j | x_ij = 1}} Σ_k σ(g(e_j)/τ)_k log( σ(g(e_j)/τ)_k / θ_ik ).    (8)

Preference Decomposition Modeling

In this subsection, for constructing disentangled preference representations, we further decompose the user preferences into the corresponding intent channels. The user-item rating r_ij can be viewed as the composite outcome of the i-th user's preference under the different intent channels. Therefore, we first decompose the user ratings into each channel. Specifically, the input of the preference decomposition network is defined as y_i^k = ℓ_2(r_i ⊙ β_k), where ⊙ denotes the element-wise product, β_k ∈ R^N is the k-th row of matrix β, and ℓ_2(·) performs L2 normalization to rescale the inputs. In this way, the modified rating order among the items may be quite different from the observed order. For example, item A is rated 5 points while item B is rated 3 points; however, under a channel which is strongly related to item B but not to item A, the modified ratings may be 0.4 and 0.7 for items A and B, respectively.

Different from traditional LFMs, the disentangled preference representation z_i ∈ R^{d'}, d' = Kd, is composed of the independent representations z_i^k ∈ R^d, as shown in Figure 1. Specifically, each subdivided representation z_i^k encodes the preference under the k-th intent channel in the latent preference space. However, when the number of channels becomes too large, the total dimension of z_i becomes unacceptable. To avoid this problem, considering that the top channels have the biggest influence on users' decision-making processes, we first obtain the sampled intent distribution θ_i for the i-th user. Then we only pick the top-T channels from θ_i for constructing preference representations, where T is a very small number. Along this line, the obtained user representation matrix is sparse, since only T·d entries in each row are non-zero, as shown in Figure 1, where T << K.
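The channel-wise input decomposition and the top-T channel selection described above can be sketched as follows; the sizes N, K, T and the randomly drawn ratings, channels, and intent distribution are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 6, 4, 2                      # items, intent channels, kept channels

r = rng.integers(0, 6, size=N).astype(float)     # one user's rating record
beta = rng.random(size=(K, N))
beta /= beta.sum(axis=1, keepdims=True)          # each channel: distribution over items
theta = rng.dirichlet(np.full(K, 0.5))           # user's intent distribution

def l2_normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Tailored input for channel k: element-wise reweighting of the ratings,
# then L2 normalization (the y_i^k of the text).
y = np.stack([l2_normalize(r * beta[k]) for k in range(K)])

# Keep only the top-T intent channels for this user (sparse representation).
top = np.argsort(theta)[-T:]
y_sparse = y[top]
assert y_sparse.shape == (T, N)
```

Because only T of the K channel inputs survive, the downstream preference network's cost scales with T rather than K, which is what keeps fine-grained intent channels affordable.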
Note that the selected intent channels change with different sampling results, which avoids falling into a local optimum. For convenience, we let R'_u ∈ R^{K'×m} and u'_u ∈ R^{K'×d} denote the processed input and the user representation for the u-th user under the K' selected intent channels. We can now introduce the preference decomposition network in detail, which is also based on the VAE architecture, as shown in Figure 2. Firstly, the encoder f_θ(·) transforms the modified inputs to produce independent representations for each user. Here we also select an MLP as the network architecture. Each layer consists of a linear transformation and an activation function, except for the last layer, which generates two outputs, μ_θ(R'_u) ∈ R^{K'×d} and Σ_θ(R'_u) ∈ R^{K'×d×d}. The prior p(u'_u) is chosen as the standard normal distribution N(0, I). Accordingly, u'_u can be drawn from the variational distribution as follows:

q_θ(u'_u) = q_θ(u'_u | R'_u) = N(μ_θ(R'_u), Σ_θ(R'_u)). (9)

The decoder aims to restore the input from the corresponding processed representation u'_u. Following the LFM, we let v_j ∈ R^d denote the latent property representation of the j-th item. Hence, the decoder is defined by the inner product of the user and item representations, x̂_uj^k = u_u^k · v_j. To gather the predicted ratings under different intent channels, a simple but effective manner is the weighted average. Specifically, we first choose the top-K' largest dimensions of z_u to form the processed weight vector z'_u, and then normalize it so that the sum of all its dimensions equals 1. Consequently, we have

x̂_uj = Σ_k (z'_uk / Σ_{k'} z'_uk') x̂_uj^k. (10)

Note that all the weights and biases in both the encoder and the decoder share the same parameters among different intent channels, so that the number of trainable parameters is equal to that of a common autoencoder network. Also, the computational complexity of the entire network is limited, since K' is very small.
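The top-K' selection and weighted average of Equation 10 can be sketched as below. The variable names (z_u, U, v_j, topk) are assumptions for illustration only.

```python
# Illustrative NumPy sketch of Equation 10: per-channel predicted ratings
# combined by a weighted average over the top-K' intent channels.
import numpy as np

def predict_rating(z_u, U, v_j, topk=2):
    """z_u: (K,) intent distribution; U: (K, d) per-channel user vectors;
    v_j: (d,) item vector. Returns the aggregated predicted rating."""
    idx = np.argsort(z_u)[::-1][:topk]   # indices of the top-K' channels
    w = z_u[idx] / z_u[idx].sum()        # renormalized channel weights
    per_channel = U[idx] @ v_j           # predicted rating under each kept channel
    return float(w @ per_channel)

z_u = np.array([0.5, 0.1, 0.3, 0.1])            # sampled intent distribution
U = np.arange(12, dtype=float).reshape(4, 3)    # K = 4 channels, d = 3
v_j = np.array([1.0, 0.0, 1.0])
r_hat = predict_rating(z_u, U, v_j, topk=2)
```

Here channels 0 and 2 are kept (weights 0.625 and 0.375), so the result is 0.625·2 + 0.375·14 = 6.5.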
Finally, we can give the ELBO of the preference decomposition network as follows:

−L_3 = E_q[log p(R'_u | u'_u)] − KL(q_θ(u'_u | R'_u) || p(u'_u)). (11)

Disentangled Contrastive Learning

To better study the consistency between the original feedback and different views, in this subsection we design a disentangled contrastive learning mechanism. Firstly, we leverage the encoder f_θ(·) to transform the original input into the user embedding h_u. Then, following Wu et al. [43], we apply data augmentation tricks, node and edge dropout, to the modified inputs to produce augmented inputs R̃_u for each user. Again, we leverage the encoder f_θ(·) to transform R̃_u into the disentangled user representation ũ_u. Note that the network f_θ(·) shares its parameters with the encoder in the preference decomposition modeling network. The augmented user inputs R̃_u can be viewed as different views of the original user feedback under different intent channels. Conventional contrastive learning methods, such as InfoNCE [9], usually treat two views generated from the same sample as positive pairs. However, in DDCF, the user representations under different intent channels should be distinct. Instead, we treat h_u and each of the disentangled representations ũ_u^k as a positive pair, while h_u and the other users' representations ũ_{u'}^k, u' ≠ u, are treated as negative pairs, enforcing divergence among different users. Formally, we can maximize the agreement of positive pairs and minimize that of negative pairs with the following contrastive loss:

L_4 = Σ_u Σ_k − log [exp(cos(h_u, ũ_u^k)/τ) / Σ_{u' ≠ u} exp(cos(h_u, ũ_{u'}^k)/τ)]. (12)

Here the cosine function cos(·) measures the similarity between two vectors, and τ is the temperature parameter.

Optimization

In this subsection, we introduce in detail how to optimize the four losses given above. The optimization process can be divided into two stages, a pre-training stage and a unified learning stage.
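The InfoNCE-style loss of Equation 12 can be sketched as follows. This is a simplified, assumed implementation: the denominator sums only over other users' representations, as in the equation, and all shapes and names are illustrative.

```python
# Rough sketch of the disentangled contrastive loss (Equation 12).
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(H, U_aug, tau=0.2):
    """H: (n, d) original user embeddings; U_aug: (n, K', d) augmented
    per-channel representations. Returns the mean loss over (user, channel)."""
    n, K, _ = U_aug.shape
    loss = 0.0
    for i in range(n):
        for k in range(K):
            pos = np.exp(cos(H[i], U_aug[i, k]) / tau)            # positive pair
            neg = sum(np.exp(cos(H[i], U_aug[j, k]) / tau)        # other users
                      for j in range(n) if j != i)
            loss += -np.log(pos / neg)
    return loss / (n * K)

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 4))                         # 3 users, d = 4
U_aug = H[:, None, :] + 0.01 * rng.standard_normal((3, 2, 4))  # K' = 2 views
loss = contrastive_loss(H, U_aug)
```

In a real training loop the augmented views would come from node/edge dropout rather than additive noise; the noise here only keeps the sketch self-contained.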
Firstly, since the preference decomposition modeling process is built on the intent recognition modeling process, the learned disentangled preference representations would be meaningless if the intent distribution were randomized. Therefore, in order to avoid converging to a local optimum prematurely, we first pre-train the intent recognition networks with the two loss functions L_1 and L_2. Note that in Equation 8 we leverage the KL divergence between each user's intent representation and the interacted items' intent representations. Hence, the loss L_2 has a tendency to make all the intent channels have similar probabilities, which obviously runs in the opposite direction to our motivation of disentanglement. Here we introduce the stop-gradient strategy to prevent this problem. Specifically, we can stop the gradient of the user intent distribution in Equation 8 during backpropagation, since we mainly aim to learn the item intent representations s_j with the help of the user intent representations. Let sg(·) denote the stop-gradient treatment. We have:

L_2 = Σ_u Σ_{j ∈ {j | x_uj = 1}} sg(softmax(z_u/τ)) log [sg(softmax(z_u/τ)) / s_j]. (13)

In this way, the pre-training loss function can be given by:

L_p = L_1 + λ_2 L_2. (14)

After the pre-training, we can jointly optimize the intent recognition and preference decomposition modeling processes. Finally, the unified loss of our DDCF model is given by:

L = L_1 + λ_2 L_2 + λ_3 L_3 + λ_4 L_4. (15)

Recommendation

The disentangled representations enable novel and flexible recommendation strategies that satisfy the specific needs of users in practical scenarios. First, as in the training process, we can integrate users' intents by employing the weighted average with users' intent distributions. In this way, the obtained recommendation results reflect comprehensive user interests.
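The two-stage optimization above can be outlined as below. The stop-gradient placement (on the user intent distribution, following the stated aim of learning the item intents) and all loss values and λ weights are placeholders, not the paper's actual configuration.

```python
# Conceptual sketch of the pre-training loss (Equation 14) and the unified
# loss (Equation 15), with stop-gradient emulated as a constant copy.
import numpy as np

def stop_gradient(x):
    # In an autodiff framework this would be x.detach() (PyTorch) or
    # tf.stop_gradient(x); with plain NumPy a copy marks "no gradient here".
    return np.array(x, copy=True)

def pretrain_loss(L1, L2, lam2=1.0):
    return L1 + lam2 * L2                      # Equation 14

def unified_loss(L1, L2, L3, L4, lam2=1.0, lam3=1.0, lam4=0.001):
    return L1 + lam2 * L2 + lam3 * L3 + lam4 * L4   # Equation 15

p_u = np.array([0.6, 0.4])
p_u_const = stop_gradient(p_u)                 # used inside L_2 as in Equation 13
L_pre = pretrain_loss(L1=0.8, L2=0.3)
L = unified_loss(L1=0.8, L2=0.3, L3=0.5, L4=2.0)
```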
Besides, we can rank the movies under one specific intent channel only, or assign a user-specified intent distribution in place of the predicted one to satisfy users' dynamic intents. We can then simply rank the movies by the predicted ratings, or design a new scoring scheme that considers both similarities and preferences. Moreover, when a user wants to find movies similar to one specific movie, we can compute the similarity between the other movies' intent distributions and this movie's distribution to find similar movies.

EXPERIMENTS

In this section, we first provide detailed information on the three datasets and the evaluation protocols used in the experiments 1. Next, we introduce the baseline methods and our experimental settings. Then, we report the recommendation performance of our proposed DDCF model compared to the state-of-the-art baselines. We also discuss the influence of contrastive learning and hyperparameters, and validate the learned intents. Finally, we present case studies to show the interpretability of DDCF.

Experimental Settings

Datasets. We conducted our experiments on three real-world datasets, i.e., MovieLens 2, Amovie 3, and Yahoo 4. MovieLens-20m is a widely used movie recommendation dataset. Amovie is a dataset consisting of product ratings collected from Amazon Movies and TV. Yahoo [23] contains the ratings for songs from Yahoo! Music. All the rating data in MovieLens, Amovie, and Yahoo are on a 5-star scale. For validation, following Wu et al. [45], we preprocessed the data to differentiate positive and negative feedback depending on whether a rating is at least 4. Besides, in order to ensure adequate observed feedback for evaluating the recommendation algorithms, we filtered out users with fewer than 10 observed items. After preprocessing, MovieLens contains 20,623 users and 12,975 items with 2,980,083 observed entries.
Amovie contains 11,838 users and 9,107 items with 166,363 observed entries. Yahoo contains 13,847 users and 1,000 items with 350,174 observed entries.

Evaluation metrics. To construct the training set, we randomly sampled 60% of the observed items for each user. Then, we sampled 10% of the observed items of each user for validation, and the remaining data was used for testing. We randomly split each dataset five times and report all results as average values. We employed four widely used evaluation metrics, i.e., P@N, R@N, MAP@N, and NDCG@N [45]. For each user, P (Precision) @N measures the ratio of correct predictions among the top-N items to N, and R (Recall) @N measures the ratio of correct predictions among the top-N items to all positive items. Furthermore, MAP (Mean Average Precision) @N and NDCG (Normalized Discounted Cumulative Gain) @N also consider the ranking of the correct predictions among the top-N items. The results of the four metrics are averaged over all users.

Baselines. In the experiments, we compare our proposed approach with various state-of-the-art explicit-feedback-based methods, debiased CF methods, and disentangled CF methods:
• PMF: Probabilistic Matrix Factorization [25] is a classic pointwise rating prediction method.
• Primal-CR++: Primal-CR++ [44] is a state-of-the-art pairwise CF approach for explicit feedback.
• DGCF: Disentangled Graph Collaborative Filtering is a state-of-the-art disentangled CF method for implicit feedback, which exploits user-item relationships through graphs.
• DMF: Deep Matrix Factorization [46] is a state-of-the-art pointwise neural approach for explicit feedback.
• MACR, CPR: MACR [42] and CPR [36] are two state-of-the-art debiasing approaches. LightGCN [10] is used as the basic model for the implementation of these two methods.
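One common variant of each of the four ranking metrics can be sketched as below; the paper may normalize slightly differently (e.g. the AP denominator), so treat this as a hedged reference implementation rather than the exact evaluation code.

```python
# Hedged reference implementations of P@N, R@N, AP@N, and NDCG@N.
import numpy as np

def precision_at(ranked, relevant, n):
    return sum(i in relevant for i in ranked[:n]) / n

def recall_at(ranked, relevant, n):
    return sum(i in relevant for i in ranked[:n]) / len(relevant)

def ap_at(ranked, relevant, n):
    hits, score = 0, 0.0
    for k, item in enumerate(ranked[:n], start=1):
        if item in relevant:
            hits += 1
            score += hits / k              # precision at each hit position
    return score / min(len(relevant), n)   # one common normalization choice

def ndcg_at(ranked, relevant, n):
    dcg = sum(1.0 / np.log2(k + 1)
              for k, item in enumerate(ranked[:n], start=1) if item in relevant)
    idcg = sum(1.0 / np.log2(k + 1)
               for k in range(1, min(len(relevant), n) + 1))
    return dcg / idcg

ranked = ["a", "b", "c", "d", "e"]     # toy ranked list for one user
relevant = {"a", "c", "f"}             # toy ground-truth positives
p5 = precision_at(ranked, relevant, 5)
```

For this toy user, P@5 = 2/5, R@5 = 2/3, and the hits at ranks 1 and 3 give AP@5 = (1 + 2/3)/3; MAP@N and NDCG@N are then averaged over all users.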
Overall Recommendation Performance

We present the overall recommendation performance on the three datasets in Table 2 under two settings, i.e., N = 5 and N = 10. We can observe from Table 2 that DDCF outperforms all the baseline methods on every dataset, owing to the learned disentangled intent and preference representations. Specifically, DDCF outperforms the best baseline by a relative boost of 5.85%, 7.16%, 4.46%, and 8.04% on P@5, R@5, MAP@5, and NDCG@5 in MovieLens, 9.21%, 10.68%, 7.73%, and 8.75% in Amovie, and 5.69%, 3.86%, 6.97%, and 5.94% in Yahoo, respectively. Hence, the results clearly demonstrate the effectiveness of our proposed approach. DGCF is designed for intent disentanglement in implicit feedback. Since it does not exploit the information of graded ratings, it cannot outperform the state-of-the-art recommenders for explicit feedback.

Ablation Study

In this subsection, we evaluate two variants of DDCF. First, DDCF-n is the variant that adopts only positive feedback for the intent recognition networks. Second, DDCF-s is the variant without the disentangled contrastive learning mechanism. Comparing the results in Table 3, we find that DDCF consistently outperforms DDCF-n in all situations, which demonstrates again that low-rating items are not totally negative.

Investigations on Hyper-parameters

In this paper, we factorize the rating matrix into the product of user and item latent representations under different intent channels in a low-rank space. Consequently, the number of dimensions d is quite vital for the performance. If d is too small, the latent space has very weak representation ability to fit real-world data. On the contrary, if d is too large, the model complexity becomes too large and may face the over-fitting problem. In this subsection, we keep the entire dimension d' = 200 unchanged and vary the number of sampled intents K' and the dimension d to train our model DDCF.
The results for the precision metric are presented in Figure 3 (see more in the Appendix). We can observe that the performance of DDCF is not good when d = 10. With a larger value of d, the performance tends to be much better; DDCF achieves the best results when d = 67, and with still larger d the performance begins to decrease. Here, we also discuss the warm-up trick in our model. The posterior collapse problem can greatly influence model training: when the KL divergence vanishes, the reconstruction results become irrelevant to the input records. This is unacceptable, since we aim to provide personalized recommendations. To solve the problem, we use the warm-up trick for the KL divergences. Specifically, we add a penalty parameter to each KL term in the loss. The initial value of the penalty parameter is set to 0; as the number of training batches grows, the penalty parameter increases linearly and finally reaches its maximum value at the T-th batch. Here, we tune the value of T for the loss in Equation 9 to control the influence of the KL divergence. We present the performance in terms of P@5, R@5, and MAP@5 with T = 1, 10, 100, 1000, 10000 on the Yahoo dataset in Figure 4 as an example, fixing the maximum value of the penalty parameter at 1. From Figure 4 we can observe that when T is small, the warm-up trick has almost no impact on model training, so the performance of DDCF is quite bad. With larger T, DDCF can largely avoid the posterior collapse problem and achieve good results. However, when T is too large, it takes too many batches to reach the maximum value of the penalty parameter, which reduces the influence of the KL divergence in the loss; thus the model converges to sub-optimal results.

Validation on Intent Channels

In this subsection, we aim to verify the quality of the disentangled intent channels.
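The linear KL warm-up schedule described above can be sketched in a few lines; the function name and the batch values are illustrative.

```python
# Linear KL warm-up: the penalty starts at 0 and grows linearly with the
# batch index until it reaches its maximum value at batch T.
def kl_penalty(batch, T, max_value=1.0):
    return min(max_value, max_value * batch / T)

schedule = [kl_penalty(b, T=1000) for b in (0, 500, 1000, 5000)]
```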
To validate the intent channels, one direct way is to compare our obtained intent channels with explicit item genres. Considering that the concepts in the two cases are not the same, the correlation would not be too strong. However, some correlation should still exist, since item genre is one of the important reasons for users to interact with items. Here we use the MovieLens-1m dataset for validation, which contains 18 explicit movie genres. Since we cannot determine which intent corresponds to which explicit genre, it is intractable to directly calculate the relevance between an intent and a genre. Instead, we compute the successful co-occurrence rate for item pairs. Specifically, we first count the item pairs that occur together in any of our generated intent channels. If such a pair also co-occurs in any genre in the real data, we denote it as a successful co-occurrence pair. Hence, the successful co-occurrence rate is defined as the ratio of the number of successful co-occurrence pairs to all pairs. We find that the successful co-occurrence rate is 47.6% for our method DDCF on MovieLens-1m. For comparison, the successful co-occurrence rate of MacridVAE is 38.8%, and the average successful co-occurrence rate of a random grouping is 33.0%, both significantly lower than DDCF. These results demonstrate that DDCF automatically learns information about movie genres from historical user-item interactions without using the ground-truth genre data.

Case Study

In this subsection, we aim to provide interpretable insights into the learned intent channels. We set the number of intent channels K = 200 and trained DDCF on MovieLens. Table 4 shows a real user case study, where we list the top-3 intent channels of one user. For each channel, we present the titles of the top movies. Thus, we can easily speculate about the user's interests with the help of the intent channels.
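The successful co-occurrence rate described above can be computed as follows. This is an assumed implementation with toy data, not the evaluation script used in the paper.

```python
# Fraction of item pairs grouped together by a learned intent channel that
# also share at least one ground-truth genre.
from itertools import combinations

def cooccurrence_rate(channels, genres):
    """channels: list of item sets per intent channel;
    genres: list of item sets per ground-truth genre."""
    total, success = 0, 0
    for ch in channels:
        for a, b in combinations(sorted(ch), 2):
            total += 1
            if any(a in g and b in g for g in genres):
                success += 1            # pair also co-occurs in some genre
    return success / total

channels = [{"m1", "m2", "m3"}]         # one learned channel (toy data)
genres = [{"m1", "m2"}, {"m3", "m4"}]   # ground-truth genre groupings
rate = cooccurrence_rate(channels, genres)
```

Here only the pair (m1, m2) shares a genre out of the three channel pairs, giving a rate of 1/3.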
First, it can be observed from Table 4 that Channels 1 and 3 are both comprised of action movies. If the channels were coarse-grained, these two channels might be merged into one category. Thanks to our proposed independent sparse representations, DDCF can handle fine-grained intent channels: movies in Channel 1 tend to contain more exciting and criminal elements, while movies in Channel 3 tend to be more scary and fictional. Besides, the movies in Channel 2 are mainly animations and comedies, which reveals another type of interest of this user. Consequently, DDCF can explicitly present users' intents, which is beneficial for constructing user profiles. Moreover, a conventional LFM tends to consider user preferences synthetically and hence cannot differentiate the recommendation results. With DDCF, we can separately recommend movies with different intents according to users' dynamic requirements. For example, when the user wants to watch comedies, we can reduce the weights of Channels 1 and 3 while increasing the weight of Channel 2 in the rating prediction process.

CONCLUSION

In this paper, we proposed a two-fold representation learning approach, namely Double Disentangled Collaborative Filtering (DDCF), for improving the robustness and interpretability of recommender systems. A unique perspective of DDCF is that low-rating items can be partially used as positive feedback for recognizing intents, rather than always being viewed as negative feedback as in traditional approaches. Specifically, the first-level disentanglement separates the influence factors of intent and preference, while the second-level disentanglement constructs independent sparse preference representations under different intents with limited computational complexity. Moreover, we designed a contrastive learning mechanism for the disentangled representations.
Finally, we conducted extensive experiments on three real-world datasets, which validated both the effectiveness and the interpretability of DDCF.

Figure 1: The rating modeling processes in conventional LFM and DDCF, respectively.

Figure 2: The network architectures of DDCF.

The latter term in Equation 6 is the KL divergence, which can be given in analytical form.

• SQL-Rank, Deep-SQL: Stochastic Queuing Listwise Ranking [45] is a state-of-the-art listwise CF approach. Deep-SQL is the implementation of SQL-Rank with neural networks like DMF, which produces better results.
• MacridVAE: MacridVAE is a state-of-the-art disentangled representation learning method based on user behaviors.
• Multi-VAE+: Variational Autoencoders for Collaborative Filtering (Multi-VAE) [20] is a state-of-the-art autoencoder-based approach. We build Multi-VAE+ by leveraging the normalized cross-entropy loss [46] in place of the original cross-entropy loss for the reconstruction process, so that Multi-VAE+ can handle explicit feedback.

For all the above baselines, we used grid search to carefully tune the corresponding parameters, such as the number of dimensions and the regularization parameters. Besides, for SQL-Rank, we chose the ratio of subsampled unobserved items to positive items as 3:1 by grid search. For DMF, Deep-SQL, Multi-VAE+, DDCF-n, and DDCF, we use the same 2-layer MLP as the encoder architecture for a fair comparison. For MACR and CPR, we use the 3-layer LightGCN as the basic model. For our DDCF and DDCF-n models, we tuned the number of dimensions d in [50, 100, 150, 200, 250, 300] and the number of sampled intent channels K' in [2, 3, ..., 10]. We set the prior parameter as 1 and tuned the penalty parameter in [0.2, 0.3, ..., 1.4, 1.5]. Moreover, to prevent posterior collapse, we also adopt the warm-up trick to adjust the penalty parameters over epochs. We set τ = 0.4, β = 0.2 and λ_2 = 1, λ_3 = 1, λ_4 = 0.001.
Figure 3: The performance of P@5 with different values of dimension d on the three datasets.

Figure 4: The performance of DDCF with different values of T on the three metrics.

Table 1: Performances with/without negative items.

Dataset    Method                  P@10    R@10    MAP@10
MovieLens  REL (Setting 1)         0.4016  0.1018  0.2922
MovieLens  REL (Setting 2)         0.3829  0.0906  0.2760
MovieLens  PDA (Setting 1)         0.4064  0.1012  0.2965
MovieLens  PDA (Setting 2)         0.3916  0.0936  0.2836
MovieLens  DMF (Setting 1)         0.4302  0.1205  0.3112
MovieLens  DMF (Setting 2)         0.4282  0.1104  0.3109
MovieLens  Multi-VAE (Setting 1)   0.4596  0.1232  0.3504
MovieLens  Multi-VAE (Setting 2)   0.4533  0.1126  0.3494
Amovie     REL (Setting 1)         0.0532  0.0433  0.0251
Amovie     REL (Setting 2)         0.0444  0.0353  0.0205
Amovie     PDA (Setting 1)         0.0588  0.0475  0.0290
Amovie     PDA (Setting 2)         0.0505  0.0391  0.0242
Amovie     DMF (Setting 1)         0.0733  0.0593  0.0532
Amovie     DMF (Setting 2)         0.0677  0.0545  0.0342
Amovie     Multi-VAE (Setting 1)   0.0765  0.0643  0.0396
Amovie     Multi-VAE (Setting 2)   0.0714  0.0584  0.0365
Yahoo      REL (Setting 1)         0.1070  0.3106  0.0536
Yahoo      REL (Setting 2)         0.0998  0.2789  0.0497
Yahoo      PDA (Setting 1)         0.1038  0.3168  0.0567
Yahoo      PDA (Setting 2)         0.0970  0.2869  0.0514
Yahoo      DMF (Setting 1)         0.1113  0.3321  0.0592
Yahoo      DMF (Setting 2)         0.1075  0.3050  0.0567
Yahoo      Multi-VAE (Setting 1)   0.1140  0.3369  0.0622
Yahoo      Multi-VAE (Setting 2)   0.1092  0.3121  0.0590

[Figure 2 architecture diagram: panels for User Intent Recognition, Item Intent Recognition, and Preference Decomposition Modeling.]
[Figure 2 diagram, continued: Data Augmentation and Disentangled Contrastive Learning panels.]

Table 2: The overall recommendation performances of different approaches.

MovieLens
Method       P@5     P@10    R@5     R@10    MAP@5   MAP@10  NDCG@5  NDCG@10
PMF          0.1863  0.1650  0.0336  0.0558  0.1318  0.0995  0.1955  0.1843
Primal-CR++  0.2872  0.2587  0.0582  0.0934  0.2250  0.1804  0.3035  0.2907
DGCF         0.3318  0.2991  0.0722  0.1211  0.2591  0.2081  0.3487  0.3371
SQL-Rank     0.3405  0.3034  0.0750  0.1224  0.2678  0.2140  0.3585  0.3437
MACR         0.3571  0.3164  0.0851  0.1403  0.2840  0.2257  0.3895  0.3703
CPR          0.3437  0.3087  0.0909  0.1510  0.2635  0.2097  0.3825  0.3711
DMF          0.4033  0.3589  0.1019  0.1656  0.3270  0.2645  0.4253  0.4114
Deep-SQL     0.4141  0.3717  0.1030  0.1711  0.3330  0.2725  0.4336  0.4222
MacridVAE    0.3828  0.3420  0.1031  0.1697  0.3002  0.2401  0.4050  0.3952
Multi-VAE+   0.4128  0.3651  0.1076  0.1711  0.3385  0.2722  0.4386  0.4229
DDCF         0.4304  0.3801  0.1153  0.1819  0.3536  0.2847  0.4576  0.4415

Amovie
Method       P@5     P@10    R@5     R@10    MAP@5   MAP@10  NDCG@5  NDCG@10
PMF          0.0098  0.0091  0.0054  0.0091  0.0050  0.0033  0.0101  0.0099
Primal-CR++  0.0247  0.0212  0.0077  0.0130  0.0143  0.0094  0.0262  0.0256
DGCF         0.0683  0.0588  0.0287  0.0469  0.0420  0.0291  0.0734  0.0744
SQL-Rank     0.0719  0.0639  0.0289  0.0491  0.0452  0.0320  0.0784  0.0801
MACR         0.0788  0.0690  0.0317  0.0543  0.0494  0.0346  0.0845  0.0860
CPR          0.0827  0.0734  0.0362  0.0615  0.0520  0.0369  0.0897  0.0929
DMF          0.0810  0.0699  0.0353  0.0581  0.0512  0.0354  0.0887  0.0891
Deep-SQL     0.0851  0.0726  0.0374  0.0604  0.0549  0.0379  0.0891  0.0907
MacridVAE    0.0923  0.0773  0.0412  0.0655  0.0608  0.0414  0.1017  0.1025
Multi-VAE+   0.0916  0.0779  0.0404  0.0644  0.0594  0.0411  0.1002  0.1015
DDCF         0.1008  0.0847  0.0456  0.0716  0.0655  0.0452  0.1106  0.1115

Yahoo
Method       P@5     P@10    R@5     R@10    MAP@5   MAP@10  NDCG@5  NDCG@10
PMF          0.0608  0.0492  0.0816  0.1275  0.0360  0.0228  0.0809  0.0947
Primal-CR++  0.0861  0.0668  0.1306  0.1945  0.0521  0.0324  0.1291  0.1493
DGCF         0.1291  0.1008  0.1914  0.2852  0.0837  0.0534  0.1948  0.2239
SQL-Rank     0.1316  0.1012  0.2043  0.3040  0.0838  0.0524  0.2016  0.2333
MACR         0.1358  0.1033  0.2252  0.3292  0.0887  0.0549  0.2193  0.2530
CPR          0.1418  0.1101  0.2190  0.3254  0.0926  0.0587  0.2173  0.2517
DMF          0.1450  0.1125  0.2227  0.3349  0.0946  0.0598  0.2246  0.2565
Deep-SQL     0.1546  0.1187  0.2434  0.3574  0.1033  0.0650  0.2407  0.2773
MacridVAE    0.1453  0.1107  0.2290  0.3332  0.0954  0.0597  0.2271  0.2603
Multi-VAE+   0.1537  0.1161  0.2389  0.3442  0.1027  0.0643  0.2402  0.2728
DDCF         0.1634  0.1234  0.2528  0.3649  0.1105  0.0695  0.2550  0.2884

Moreover, for MacridVAE, the preference and intent factors are still entangled in the rating modeling process; therefore, DDCF can outperform MacridVAE. MACR and CPR are two debiasing approaches: MACR is designed for popularity bias and CPR for exposure bias. However, the two approaches are incapable of disentangling user intent and thus perform worse than DDCF. Besides, we can find that the matrix factorization based approaches, i.e., PMF, Primal-CR++, and SQL-Rank, usually perform worse than the deep learning based approaches, which demonstrates the effectiveness of neural networks. PMF and Primal-CR++ both perform poorly on Amovie, possibly because Amovie is extremely sparse. Moreover, we conducted a significance test (p-value = 0.05) to validate that the improvements of DDCF over the strongest baseline are statistically significant on all three datasets.

Table 3: Ablation experiments.

Low-rating items can thus provide useful information from the intent perspective. With only positive feedback for intent modeling, DDCF-n cannot learn the intents well. As for DDCF-s, it always performs worse than DDCF, since it cannot leverage the auxiliary supervision of positive pairs among different views.
With contrastive learning, DDCF can learn the intents and preferences better.

Dataset    Method   P@10    R@10    MAP@10  NDCG@10
MovieLens  DDCF-n   0.3680  0.1743  0.2736  0.4263
MovieLens  DDCF-s   0.3754  0.1794  0.2802  0.4361
MovieLens  DDCF     0.3801  0.1819  0.2847  0.4415
Amovie     DDCF-n   0.0802  0.0669  0.0421  0.1033
Amovie     DDCF-s   0.0826  0.0687  0.0443  0.1069
Amovie     DDCF     0.0847  0.0716  0.0452  0.1115
Yahoo      DDCF-n   0.1187  0.3510  0.0665  0.2797
Yahoo      DDCF-s   0.1217  0.3593  0.0683  0.2857
Yahoo      DDCF     0.1234  0.3649  0.0695  0.2884

Table 4: Case studies on user intents. (We present the top-3 intent channels for a user.)

Intent Channel 1: Payback, Scream 3, Lethal Weapon 3, Gone in 60 Seconds, Wild Wild West, Mission: Impossible 2
Intent Channel 2: A Bug's Life, Toy Story 2, Austin Powers: The Spy Who Shagged Me, Doctor Dolittle, Lethal Weapon
Intent Channel 3: Terminator 2: Judgment Day, The Matrix, The Terminator, Alien, The World Is Not Enough

Preference or Intent? Double Disentangled Collaborative Filtering. Conference acronym 'XX, June 03-05, 2018, Woodstock, NY.

1 All the code will be publicly available after the paper is accepted.
2 https://grouplens.org/datasets/movielens/
3 http://jmcauley.ucsd.edu/data/amazon/
4 https://webscope.sandbox.yahoo.com/catalog.php?datatype=r

REFERENCES

Himan Abdollahpouri, Robin Burke, and Bamshad Mobasher. 2017. Controlling popularity bias in learning-to-rank recommendation. In Proceedings of the Eleventh ACM Conference on Recommender Systems. 42-46.
Oren Barkan and Noam Koenigstein. 2016. Item2vec: neural item embedding for collaborative filtering. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 1-6.
David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3 (2003), 993-1022.
Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. 2018. Understanding disentangling in β-VAE. arXiv preprint arXiv:1804.03599 (2018).
Robin Burke, Nasim Sonboli, and Aldo Ordonez-Gauger. 2018. Balanced neighborhoods for multi-sided fairness in recommendation. In Conference on Fairness, Accountability and Transparency. PMLR, 202-214.
Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2020. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems (2020).
Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. 2011. The Yahoo! Music dataset and KDD-Cup'11. In Proceedings of the 2011 International Conference on KDD Cup 2011, Vol. 18. JMLR.org, 3-18.
Bora Edizel, Francesco Bonchi, Sara Hajian, André Panisson, and Tamir Tassa. 2020. FaiRecSys: mitigating algorithmic bias in recommender systems. International Journal of Data Science and Analytics 9, 2 (2020), 197-213.
Michael Gutmann and Aapo Hyvärinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 297-304.
Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 639-648.
Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web. 173-182.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2016. beta-VAE: Learning basic visual concepts with a constrained variational framework. (2016).
Katja Hofmann, Anne Schuth, Alejandro Bellogin, and Maarten de Rijke. 2014. Effects of position bias on click-based recommender evaluation. In European Conference on Information Retrieval. Springer, 624-630.
Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In ICDM, Vol. 8. Citeseer, 263-272.
Diederik P Kingma and Max Welling. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013).
Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30-37.
John D Lafferty and David M Blei. 2006. Correlated topic models. In Advances in Neural Information Processing Systems. 147-154.
E Laird Landon Jr. 1974. Self concept, ideal self concept, and consumer purchase intentions. Journal of Consumer Research 1, 2 (1974), 44-51.
Haoyang Li, Xin Wang, Ziwei Zhang, Jianxin Ma, Peng Cui, and Wenwu Zhu. 2021. Intention-aware sequential recommendation with structured intent transition. IEEE Transactions on Knowledge and Data Engineering 34, 11 (2021), 5403-5414.
Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. 2018. Variational autoencoders for collaborative filtering. In WWW. International World Wide Web Conferences Steering Committee, 689-698.
Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, and Wenwu Zhu. 2019. Learning disentangled representations for recommendation. In Advances in Neural Information Processing Systems.
David JC MacKay. 1998. Choice of basis for Laplace approximation. Machine Learning 33, 1 (1998), 77-86.
Benjamin M Marlin and Richard S Zemel. 2009. Collaborative prediction and ranking with non-random missing data. In Proceedings of the Third ACM Conference on Recommender Systems. ACM, 5-12.
Prem Melville and Vikas Sindhwani. 2010. Recommender systems. Encyclopedia of Machine Learning 1 (2010), 829-838.
Andriy Mnih and Ruslan R Salakhutdinov. 2008. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems. 1257-1264.
Orestis Papakyriakopoulos, Juan Carlos Medina Serrano, and Simon Hegelich. 2020. Political communication on social media: A tale of hyperactive users and bias in recommender systems. Online Social Networks and Media 15 (2020), 100058.
Nikhil Rao, Hsiang-Fu Yu, Pradeep Ravikumar, and Inderjit S Dhillon. 2015. Collaborative filtering with graph information: Consistency and scalable methods. In NIPS. Citeseer.
Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 452-461.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 (2014).
Francesco Ricci, Lior Rokach, and Bracha Shapira. 2011. Introduction to recommender systems handbook. In Recommender Systems Handbook. Springer, 1-35.
Yuta Saito, Suguru Yaginuma, Yuta Nishino, Hayato Sakata, and Kazuhide Nakata. 2020. Unbiased recommender learning from missing-not-at-random implicit feedback. In Proceedings of the 13th International Conference on Web Search and Data Mining. 501-509.
M Joseph Sirgy. 2015. The self-concept in relation to product preference and purchase intention. In Marketing Horizons: A 1980's Perspective. Springer, 350-354.
Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. arXiv preprint arXiv:1703.01488 (2017).
Anneke C Timmermans, H de Boer, MPC Amsing, and MPC van der Werf. Track recommendation bias: Gender, migration background and SES bias over a 20-year period in the Dutch context. British Educational Research Journal.
44Anneke C Timmermans, H De Boer, HTA Amsing, and MPC Van Der Werf. 2018. Track recommendation bias: Gender, migration background and SES bias over a 20-year period in the Dutch context. British Educational Research Journal 44, 5 (2018), 847-874. Rethinking LDA: Why priors matter. M Hanna, Wallach, M David, Andrew Mimno, Mccallum, Advances in neural information processing systems. Hanna M Wallach, David M Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Advances in neural information processing systems. 1973-1981. Cross pairwise ranking for unbiased item recommendation. Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, Ruiming Tang, Proceedings of the ACM Web Conference 2022. the ACM Web Conference 2022Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, and Ruiming Tang. 2022. Cross pairwise ranking for unbiased item recommendation. In Proceedings of the ACM Web Conference 2022. 2370-2378. Collaborative recurrent autoencoder: recommend while learning to fill in the blanks. Hao Wang, Dit-Yan Shi Xingjian, Yeung, Advances in Neural Information Processing Systems. Hao Wang, SHI Xingjian, and Dit-Yan Yeung. 2016. Collaborative recurrent autoencoder: recommend while learning to fill in the blanks. In Advances in Neural Information Processing Systems. 415-423. Modeling dynamic missingness of implicit feedback for recommendation. Menghan Wang, Mingming Gong, Xiaolin Zheng, Kun Zhang, Advances in neural information processing systems. 316669Menghan Wang, Mingming Gong, Xiaolin Zheng, and Kun Zhang. 2018. Modeling dynamic missingness of implicit feedback for recommendation. Advances in neural information processing systems 31 (2018), 6669. Collaborative filtering with social exposure: A modular approach to social recommendation. Menghan Wang, Xiaolin Zheng, Yang Yang, Kun Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence32Menghan Wang, Xiaolin Zheng, Yang Yang, and Kun Zhang. 
2018. Collaborative filtering with social exposure: A modular approach to social recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. Xiangnan He, and Tat-Seng Chua. 2021. Learning intents behind interactions with knowledge graph for recommendation. Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Proceedings of the Web Conference 2021. the Web Conference 2021Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He, and Tat-Seng Chua. 2021. Learning intents behind interactions with knowledge graph for recommendation. In Proceedings of the Web Conference 2021. 878-887. Disentangled Graph Collaborative Filtering. Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalXiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, and Tat-Seng Chua. 2020. Disentangled Graph Collaborative Filtering. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1001-1010. Model-agnostic counterfactual reasoning for eliminating popularity bias in recommender system. Tianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, Jinfeng Yi, Xiangnan He, Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. the 27th ACM SIGKDD Conference on Knowledge Discovery & Data MiningTianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, Jinfeng Yi, and Xiangnan He. 2021. Model-agnostic counterfactual reasoning for eliminating popularity bias in recommender system. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 1791-1800. Self-supervised graph learning for recommendation. 
Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie, Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. the 44th international ACM SIGIR conference on research and development in information retrievalJiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised graph learning for recommendation. In Proceed- ings of the 44th international ACM SIGIR conference on research and development in information retrieval. 726-735. Large-scale collaborative ranking in near-linear time. Liwei Wu, Cho-Jui Hsieh, James Sharpnack, Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningACMLiwei Wu, Cho-Jui Hsieh, and James Sharpnack. 2017. Large-scale collaborative ranking in near-linear time. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 515-524. SQL-Rank: A Listwise Approach to Collaborative Ranking. Liwei Wu, Cho-Jui Hsieh, James Sharpnack, Proceedings of the 35th International Conference on Machine Learning, ser. the 35th International Conference on Machine Learning, ser80Liwei Wu, Cho-Jui Hsieh, and James Sharpnack. 2018. SQL-Rank: A Listwise Approach to Collaborative Ranking. In Proceedings of the 35th International Con- ference on Machine Learning, ser, Vol. 80. 5315-5324. Deep Matrix Factorization Models for Recommender Systems. Xinyu Hong-Jian Xue, Jianbing Dai, Shujian Zhang, Jiajun Huang, Chen, IJCAI. Hong-Jian Xue, Xinyu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen. 2017. Deep Matrix Factorization Models for Recommender Systems.. In IJCAI. 3203-3209. Unbiased offline recommender evaluation for missing-not-atrandom implicit feedback. 
Longqi Yang, Yin Cui, Yuan Xuan, Chenyang Wang, Serge Belongie, Deborah Estrin, Proceedings of the 12th ACM Conference on Recommender Systems. the 12th ACM Conference on Recommender SystemsLongqi Yang, Yin Cui, Yuan Xuan, Chenyang Wang, Serge Belongie, and Debo- rah Estrin. 2018. Unbiased offline recommender evaluation for missing-not-at- random implicit feedback. In Proceedings of the 12th ACM Conference on Recom- mender Systems. 279-287. Chonggang Song, Guohui Ling, and Yongdong Zhang. 2021. Causal intervention for leveraging popularity bias in recommendation. Yang Zhang, Fuli Feng, Xiangnan He, Tianxin Wei, Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. the 44th International ACM SIGIR Conference on Research and Development in Information RetrievalYang Zhang, Fuli Feng, Xiangnan He, Tianxin Wei, Chonggang Song, Guohui Ling, and Yongdong Zhang. 2021. Causal intervention for leveraging popularity bias in recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 11-20. Multi-view intent disentangle graph networks for bundle recommendation. Sen Zhao, Wei Wei, Ding Zou, Xianling Mao, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence36Sen Zhao, Wei Wei, Ding Zou, and Xianling Mao. 2022. Multi-view intent disen- tangle graph networks for bundle recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 4379-4387.
[]
[ "MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks" ]
[ "Raffaele Galliera [email protected] \nDepartment of Intelligent Systems & Robotics\n‡ US Army Research Laboratory (ARL), Adelphi, MD, USA\nFlorida Institute for Human & Machine Cognition (IHMC)\nThe University of West Florida (UWF), Pensacola, FL, USA\n", "Alessandro Morelli [email protected] \nDepartment of Intelligent Systems & Robotics\n‡ US Army Research Laboratory (ARL), Adelphi, MD, USA\nFlorida Institute for Human & Machine Cognition (IHMC)\nThe University of West Florida (UWF), Pensacola, FL, USA\n", "Roberto Fronteddu [email protected] \nDepartment of Intelligent Systems & Robotics\n‡ US Army Research Laboratory (ARL), Adelphi, MD, USA\nFlorida Institute for Human & Machine Cognition (IHMC)\nThe University of West Florida (UWF), Pensacola, FL, USA\n", "Niranjan Suri [email protected] \nDepartment of Intelligent Systems & Robotics\n‡ US Army Research Laboratory (ARL), Adelphi, MD, USA\nFlorida Institute for Human & Machine Cognition (IHMC)\nThe University of West Florida (UWF), Pensacola, FL, USA\n" ]
[ "Department of Intelligent Systems & Robotics\n‡ US Army Research Laboratory (ARL), Adelphi, MD, USA\nFlorida Institute for Human & Machine Cognition (IHMC)\nThe University of West Florida (UWF), Pensacola, FL, USA" ]
[]
Fast and efficient transport protocols are the foundation of an increasingly distributed world. The burden of continuously delivering improved communication performance to support next-generation applications and services, combined with the increasing heterogeneity of systems and network technologies, has promoted the design of Congestion Control (CC) algorithms that perform well under specific environments. The challenge of designing a generic CC algorithm that can adapt to a broad range of scenarios is still an open research question. To tackle this challenge, we propose to apply a novel Reinforcement Learning (RL) approach. Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return and models the learning process as an infinite-horizon task. We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch that researchers have encountered when applying RL to CC. We evaluated our solution on the task of file transfer and compared it to TCP Cubic. While further research is required, results have shown that MARLIN can achieve comparable results to TCP with little hyperparameter tuning, in a task significantly different from its training setting. Therefore, we believe that our work represents a promising first step towards building CC algorithms based on the maximum entropy RL framework.
10.48550/arxiv.2302.01301
[ "https://export.arxiv.org/pdf/2302.01301v1.pdf" ]
256,503,534
2302.01301
bdc49f4e619f9bc43b614ce6aedd73a105fec3dd
MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks

2 Feb 2023

Raffaele Galliera [email protected], Alessandro Morelli [email protected], Roberto Fronteddu [email protected], Niranjan Suri [email protected]

Department of Intelligent Systems & Robotics, The University of West Florida (UWF), Pensacola, FL, USA; Florida Institute for Human & Machine Cognition (IHMC), Pensacola, FL, USA; ‡ US Army Research Laboratory (ARL), Adelphi, MD, USA

Index Terms: Computer Networks, Communications Protocol, Machine Learning, Congestion Control, Reinforcement Learning, Soft Actor-Critic

Abstract: Fast and efficient transport protocols are the foundation of an increasingly distributed world. The burden of continuously delivering improved communication performance to support next-generation applications and services, combined with the increasing heterogeneity of systems and network technologies, has promoted the design of Congestion Control (CC) algorithms that perform well under specific environments. The challenge of designing a generic CC algorithm that can adapt to a broad range of scenarios is still an open research question. To tackle this challenge, we propose to apply a novel Reinforcement Learning (RL) approach.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return and models the learning process as an infinite-horizon task. We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch that researchers have encountered when applying RL to CC. We evaluated our solution on the task of file transfer and compared it to TCP Cubic. While further research is required, results have shown that MARLIN can achieve comparable results to TCP with little hyperparameter tuning, in a task significantly different from its training setting. Therefore, we believe that our work represents a promising first step towards building CC algorithms based on the maximum entropy RL framework.

I. INTRODUCTION

Network communications are the backbone of an ever-increasingly distributed digital world. Each day, technologies such as software, platform, and infrastructure as a service over Cloud/Edge/Fog computing systems, the Internet of Things, 5G networks, space communications and networks, peer-to-peer networks, blockchain, and ad-hoc and mesh networks enable countless organizations and people to run their business and perform tasks that have become part of our daily routine with just a tap on a screen. The different characteristics of those technologies, combined with the requirements of very diverse services built on top of them, present unique challenges for network protocol designers and researchers. Transport protocols that can achieve high network resource utilization while keeping congestion, i.e., end-to-end queueing delays, low in the presence of ever-changing network conditions and a high churn of connections are the key enabling technology that underlies modern distributed systems. Within transport protocol design, this dual goal (high channel utilization and low congestion) is the objective of Congestion Control (CC) algorithms.
The critical impact of CC on the performance of distributed applications and services has led researchers and engineers to design many variants, each one with different target scenarios and trade-offs in terms of aggressiveness, responsiveness to loss and congestion, fairness, and friendliness [1,2,3]. However, the challenge of designing a generic CC algorithm that can provide near-optimal performance in a broad range of network and traffic scenarios is still open.

Recent advances in computational capacity, made possible by novel CPU, GPU, and Application-Specific Integrated Circuit (ASIC) architectures, along with the availability of low-cost, yet powerful, versions of such hardware accelerators, have led to an explosion of successful applications of Machine Learning (ML) technology in several domains, coming to meet the requirements of resource-constrained environments such as the edge [4]. This has spurred renewed research interest in the development of ML models, Reinforcement Learning (RL) agents, algorithms, and applications that can face the unique challenges of the real world [5,6,7,8]. The difficulty of designing an efficient generic CC algorithm, in contrast with the relative ease of collecting data on communication performance and the small size of the action space, which typically focuses solely on adjusting the size of the Congestion Window (CWND), has prompted researchers to investigate the use of ML for CC optimization [9,10]. While many studies have already shown promising results, they still underperform when compared to existing TCP CC algorithms in many scenarios. Reasons may include: difficulties in training RL agents and/or ML models using real networks and hardware; low fidelity of simulation and emulation environments; sub-optimal design of the environment representation (state), action space, and/or reward function; and delayed action effects on the state due to communication delays. Additional research is needed to address these problems.
This paper introduces our initial work on MARLIN, an RL agent for CC. Our approach is based on Soft Actor-Critic (SAC) [11], an off-policy, entropy-regularized RL algorithm that has seen successful application in numerous real-world problems, especially robotics [7]. We describe the design and architecture of MARLIN and discuss how we trained the agent, the hyperparameters used, and the main assumptions and choices we made during its design phase, comparing our work with other RL-based approaches to CC. Furthermore, we discuss the future directions of this research, of which this paper is just the first step. Finally, we present preliminary experimental results obtained in a real networking scenario, where we investigate the current performance of MARLIN and compare it to the Transmission Control Protocol (TCP) in handling a file transfer task.

II. DESIGNING A REINFORCEMENT LEARNING AGENT FOR CONGESTION CONTROL

MARLIN is an RL agent for CC, trained in a real network and using continuous actions to update the CWND. For the purpose of this research, we did not consider fairness towards competing MARLIN flows and/or other protocols as one of the optimization objectives of MARLIN, which we leave as a future research question. Within MARLIN, learning is based on SAC [11], which trains a stochastic, off-policy, entropy-regularized agent. The agent interfaces with a custom transport protocol, presented in Section III, via Remote Procedure Calls (RPCs) in a non-blocking, bi-directional fashion, as shown in Figure 1. The frequency of action taking depends on the latest Smoothed Round-Trip Time (SRTT) estimated for the path. The reward function is shaped as a strictly negative reward to intrinsically encode the principle that every step taken to transfer the information is one step too many. Although it is generally desirable that the agent complete its tasks as quickly as possible, the RL setting proposed here does not aim for terminal states.
It is undesirable for the agent to overfit its trajectories to a fixed amount of bytes to transfer, or to a fixed episode length, such that it will try to find a terminal state at any cost within a certain number of timesteps. Instead, we aim at designing RL agents able to work on "crowded" links and achieve high channel utilization independently of the volume of data transferred during training. For this reason, training is based on infinite-horizon tasks, never encountering a terminal state and following the Partial Episode Bootstrapping (PEB) principles for infinite-horizon training presented in [12].

A. Congestion Control as Markov Decision Process

Traditional CC algorithms implemented within transport protocols such as TCP take actions, e.g., updating their CWND, in response to changes in the environment they sense, with the goal of achieving maximum throughput and minimum congestion. Changes may include, for example, packet loss, duplicate packets, or variations in the measured Round-Trip Time (RTT). Such a sequential decision-making setting can be naturally posed as a Markov Decision Process (MDP) [13]. The MDP framework formalizes with $\langle S, A, p, R \rangle$ the sequential agent-environment interaction as the exchange of three different signals at every time step $t$ of the process. The RL "learning from interaction" fundamentals [14] stem from this abstraction: the action $a_t \in A(s_t)$ taken by the agent at a certain step $t$ after sensing the state $s_t \in S$ leads to a consequent response of the environment in the form of a reward signal $r_{t+1} \in R \subset \mathbb{R}$ and a transition to a new state $s_{t+1}$. This process follows the environment dynamics described by the probability function $p(s_{t+1}, r_{t+1} \mid s_t, a_t) : S \times R \times S \times A \to [0, 1]$.
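The agent-environment loop formalized here can be sketched in a few lines. The toy environment below is purely illustrative (its dynamics are invented, not MARLIN's); it only mirrors the signal exchange of the MDP, the strictly negative reward, and the infinite-horizon setting in which `done` is never true.

```python
class ToyCongestionEnv:
    """Illustrative stand-in for a CC environment: the agent observes a
    state, emits a CWND gain in [-1, 1], and receives a strictly negative
    reward. All dynamics below are invented for illustration only."""

    CWND_CAP_KB = 50.0  # cap used in the study described in the text

    def reset(self):
        self.cwnd_kb = 4.0        # protocol default initial CWND
        return (100.0, 0.03)      # toy state: (SRTT in ms, loss rate)

    def step(self, action):
        # a_t scales the CWND (1 doubles it, -0.5 halves it), capped.
        self.cwnd_kb = min(self.cwnd_kb * (1.0 + action), self.CWND_CAP_KB)
        # Toy transition p(s_{t+1}, r_{t+1} | s_t, a_t): queueing delay
        # grows once the window exceeds what the toy link can absorb.
        srtt_ms = 100.0 + max(0.0, self.cwnd_kb - 25.0)
        reward = -25.0 / (25.0 + min(self.cwnd_kb, 25.0))  # in (-1, -0.5]
        done = False              # infinite horizon: never terminal
        return (srtt_ms, 0.03), reward, done
```

A stochastic policy such as SAC's would sample `action` from its current distribution at each step; the loop itself is independent of the learner.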
Such a formulation of the transition dynamics also defines the Markov property, which requires the state to include all the relevant information about past interactions that would impact the future, as each possible value of $r_t \in R$ and $s_t \in S$ depends solely on the previous state and action $s_{t-1}, a_{t-1}$.

Communication networks are a subtle environment for an RL agent. Packets in modern networks are often transmitted a few milliseconds apart, sometimes microseconds. At the same time, running Stochastic Gradient Descent (SGD) to update the model during training, or even simply computing a new action, can take a significantly greater amount of time. As a consequence, the environment might have already changed significantly by the time the action is taken, potentially making it obsolete or even wrong. This has been identified as one of the main reasons why it is hard to maintain the performance achieved in a simulated environment after moving to a real-world deployment [15]. Furthermore, actions do not instantly affect the agent's perception, as the effects of changing the CWND require some time before they have an impact on the network and that impact is propagated back to the source. As new packets are transmitted at a different rate after an action has been taken, they first need to reach their destination, then the receiver needs to generate and transmit a new ACKnowledgment (ACK) message for those packets, and finally that ACK needs to arrive back at the source before any information about the impact of the action taken can be inferred. These events will take, at the very least, one RTT to complete. This raises a fundamental question for anybody who wants to apply RL to the problem of CC: when should the next action be taken? While other similar approaches, like [16,17], gather statistics during a fixed amount of time to compute the state and take the next action, MARLIN implements a different heuristic algorithm.
Every time the transport protocol makes new network statistics available to MARLIN, if at least the last reported SRTT has elapsed since the time the last action was taken, the data collected up to that time is refined, the reward $r_t$ with regard to action $a_{t-1}$ is assigned, a new state $s_t$ is fed to the agent, and a new action $a_t$ is taken. Waiting at least one SRTT between actions prevents a new action from being taken before the impact of the previous one can be sensed by the agent.

B. Training Environment Setup

A key design point of MARLIN is to be trained and evaluated on real networks. Utilizing real components allowed us to understand the challenges that the CC domain brings to RL, avoiding the risk of a failed "sim-to-real" transfer, which could be caused by inaccurate emulated/simulated patterns and weak assumptions, at the cost of a slower training time. Relying on real hardware and a real protocol implementation intrinsically diversifies the training experience thanks to the stochasticity of the environment, and it prevents the agent from overfitting on artificial patterns that would not be found in the real world. To this end, we built the network shown in Fig. 2, which comprises two sources of traffic and two receivers, two WiFi Access Points (APs), two network switches, and one router connected in a dumbbell topology. Traffic sources are connected to the WiFi AP located at one end of the dumbbell, while traffic receivers are connected to the AP at the other end. The "Background Traffic Source", on the left, is responsible for generating background traffic and sending it to the "Background Traffic Receiver" node, on the right. The "Sender Application" transmits traffic to the "Receiver Application" via a custom transport protocol implementation that uses MARLIN as its CC algorithm.
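The SRTT-gated action-timing heuristic described above can be sketched as a small stateful check run whenever the protocol reports fresh statistics. The class and names below are ours, not the MARLIN implementation.

```python
class ActionTimer:
    """Gate action-taking on the latest SRTT: a new action is computed
    only if at least one SRTT has elapsed since the previous action, so
    the agent can sense the effect of that action before acting again.
    Sketch of the heuristic described in the text."""

    def __init__(self):
        self.last_action_time_ms = None

    def should_act(self, now_ms, srtt_ms):
        # First report: no previous action to wait for, act immediately.
        if self.last_action_time_ms is None:
            self.last_action_time_ms = now_ms
            return True
        # Otherwise require at least one SRTT since the last action.
        if now_ms - self.last_action_time_ms >= srtt_ms:
            self.last_action_time_ms = now_ms
            return True
        return False
```

With an SRTT of 100 ms, statistics arriving 50 ms after the last action are only accumulated into the observation window; the next action is taken on the first report at or after the 100 ms mark.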
The need for implementation flexibility led us to integrate MARLIN with Mockets, a protocol originally designed for communication environments characterized by limited bandwidth, typically found in tactical and wireless sensor networks. Further details regarding Mockets are presented in Section III. To avoid running into situations where MARLIN could underperform due to the protocol implementation, for this first iteration we shaped the network scenario around the typical Mockets use case. We set a Smart Queue policy on the router to limit the maximum amount of traffic flowing between the two subnetworks to 250 KB/s; we also used TC NetEm (Traffic Control Network Emulator, TC-NETEM) on the "Sender Application" node to introduce 100 ms of latency and 3% random packet loss that only affect the traffic generated by the application. Note that the router's manual reports that it effectively limits the actual rate to 95% of the specified value when Smart Queue is enabled [18]. The generated background traffic patterns can be divided into elephant flows, i.e., long-lived data transfers that represent a large percentage of the total traffic (imagine large file transfers or video streaming), and mice flows, i.e., short-lived data transfers at low throughput (e-mails or RPC calls). Roughly 87% of the background traffic in our testbed is made up of elephant flows, consisting of four different flows that alternate over the link every two seconds, and the rest of two mice flows, which continuously generate very short-lived (on the order of milliseconds) traffic bursts at intervals that follow a Poisson distribution. The Background Traffic Source generates traffic using the Multi-Generator Network Test Tool (MGEN) [19]. Elephant flows consist of two UDP communications transferring data at 100 KB/s, one UDP communication transferring data at 50 KB/s, and one TCP connection producing 200 KB/s of traffic.
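As a quick sanity check of the "roughly 87%" elephant share, averaging the four elephant-flow rates over their 2 s slots and adding the roughly 17 KB/s of continuous mice traffic reported in this section reproduces the stated figure. The computation below is ours, using only the per-flow rates given in the text.

```python
# Each elephant flow occupies one 2 s slot of an 8 s cycle, so the
# long-run average elephant load is the mean of the four rates (KB/s).
elephant_rates_kbps = [100, 100, 50, 200]
avg_elephant = sum(elephant_rates_kbps) / len(elephant_rates_kbps)  # 112.5

mice_kbps = 17  # average mice traffic reported in the text (KB/s)
total_kbps = avg_elephant + mice_kbps

elephant_share = avg_elephant / total_kbps  # ~0.869, i.e., roughly 87%
```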
One TCP and one UDP mice flow introduce an extra 17 KB of traffic on average over the link each second. Each elephant flow is assigned to a different 2-second temporal slot, and the pattern repeats every 8 seconds. Mice flows start with the first elephant flow and continue until traffic generation is stopped.

C. Agent Design

The interface between MARLIN and the transport protocol, shown in Figure 1, is implemented using gRPC. The gRPC middleware sitting between the transport protocol and MARLIN takes care of delivering observation and reward pairs to the agent and actions back to the protocol. At a given step t, the agent receives network statistics from the transport protocol via the gRPC middleware. Statistics are then processed and stacked with the previous 10 observations to form the state s_t. The agent can take actions that increase, keep constant, or decrease the transport protocol's CWND by a chosen factor. Ideally, the agent should learn to maximize the volume of data transmitted in the minimum amount of time, while remaining watchful of growing queueing delays, which would manifest as an increase in the measured RTT. MARLIN is implemented on top of the RL Baselines3 Zoo (RL-Zoo3) [20] framework, which follows best practices for using Stable-Baselines3 (SB3) [21], a PyTorch [22]-based library that implements state-of-the-art RL algorithms following the OpenAI Gym interface [23].

1) State: Table I describes the state encoded in MARLIN. 14 features are gathered during the time frame that follows the action taken at step t-1. MARLIN then augments the state space with 7 statistics, i.e., last, mean, standard deviation, minimum, maximum, Exponential Moving Average (EMA), and difference from the previous state s_{t-1}, computed for each of the 14 features. The previous N states are also stacked together to form a history of past observations, in an attempt to adhere to the Markov property.
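The per-feature statistics described above can be sketched as follows. The EMA smoothing factor is an assumption (the text does not specify it), and the function name is our own; only the set of seven statistics and the 10 × 14 × 7 state size come from the text.

```python
from statistics import fmean, pstdev

def feature_stats(samples, prev_last, ema_alpha=0.3):
    """Compute the 7 per-feature statistics described above.

    `samples` are the raw readings of one feature collected since the
    last action; `prev_last` is that feature's last value in state
    s_{t-1}; `ema_alpha` is an assumed smoothing factor.
    """
    ema = samples[0]
    for v in samples[1:]:
        ema = ema_alpha * v + (1 - ema_alpha) * ema
    return [
        samples[-1],              # last
        fmean(samples),           # mean
        pstdev(samples),          # standard deviation
        min(samples),             # minimum
        max(samples),             # maximum
        ema,                      # exponential moving average
        samples[-1] - prev_last,  # difference from previous state
    ]

# 14 raw features, 7 statistics each, stacked over the last 10 states:
STATE_SIZE = 10 * 14 * 7  # = 980, matching the text
```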
The final state served to the agent then has N × |Features| × |Stats| features, for a total of 980. N is considered a hyperparameter of the problem; Table II lists all hyperparameters used for MARLIN. For the choice of N, we followed the empirical considerations made in [24] and chose a history length of 10 after some preliminary trials and evaluations. During the training process, observations are processed and normalized through a moving average using the VecNormalize environment wrapper provided by SB3.

2) Actions: MARLIN takes continuous actions in the range [-1, 1], which represent the percentage gain of the CWND size. For example, if the action chosen by the agent is 1, the CWND is doubled; a value of 0 means no change in the CWND; -0.5 reduces the window size by 50%. The initial CWND size is set to 4 KB at the beginning of the episode, as per the transport protocol implementation default. For the purpose of this study, we capped the CWND size at 50 KB, a value that, if reached, would yield double the throughput that the network can accommodate in the setup we used for training and evaluation. This choice only impacts the very first phase of training, when the agent takes random steps to prime the model. After this phase, which in MARLIN is set to last 10K steps, our experiments have shown that the agent correctly never fills the CWND to 50 KB.

3) Reward: We designed a reward function that gives higher rewards to the agent the closer it gets to fully utilizing the available bandwidth:

$$ r_t = -\frac{\mathit{target}_t}{\mathit{target}_t + \mathit{acked\_kilobytes}_t^{\mathrm{cumulative}}} \quad (1) $$

where target_t represents the amount of data the agent should have delivered up to step t since the beginning of the episode in order to fully utilize the link, and acked_kilobytes_t^cumulative represents the number of kilobytes acknowledged by the receiver up to step t.
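The action mapping in 2) above can be sketched as a single update rule. This is a minimal sketch; the constants are taken from the text, the function name is ours, and the lower-bound handling (what happens as the action approaches -1) is not specified in the text.

```python
CWND_INIT_KB = 4.0   # protocol default at episode start
CWND_CAP_KB = 50.0   # cap used in this study

def apply_action(cwnd_kb, action):
    """Map a continuous action in [-1, 1] to a CWND update.

    action = 1 doubles the window, 0 leaves it unchanged, and -0.5
    shrinks it by 50%, as described above.
    """
    action = max(-1.0, min(1.0, action))
    new_cwnd = cwnd_kb * (1.0 + action)
    # Clamp to the study's cap; lower-bound behavior is an assumption.
    return min(max(new_cwnd, 0.0), CWND_CAP_KB)
```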
A strictly negative reward function pushes the agent to accumulate as little penalty as possible; the penalty shrinks the closer the agent is, at each step, to having utilized the link to the best of its possibilities. The careful reader might notice that such a reward encourages the agent to accumulate acknowledged bytes regardless of the explicit impact on the RTT, with the risk of privileging actions that produce more acknowledged bytes in the immediate future at the cost of undesired queuing delays. To prevent this risk, we consider a second formulation of the reward function that introduces an RTT-based penalty coefficient:

$$ r_t = -\frac{\mathit{target}_t}{\mathit{target}_t + \mathit{acked\_kilobytes}_t^{\mathrm{cumulative}} \cdot (1 - \mathit{penalties})} \quad (2) $$

The penalties term depends on the difference between the current RTT and the minimum EMA RTT, and is defined as follows:

$$ \mathit{penalties} = \begin{cases} \alpha \, \dfrac{rtt_{\mathit{diff}}}{rtt_{\mathit{ema\_min}}}, & \text{if } \dfrac{rtt_{\mathit{diff}}}{rtt_{\mathit{ema\_min}}} < 1 \\ 0.99, & \text{otherwise} \end{cases} \quad (3) $$

In Eq. 3, α depends on the magnitude of the ratio between rtt_diff and rtt_ema_min and is defined as follows:

$$ \alpha = \begin{cases} 1, & \text{if } \left|\dfrac{rtt_{\mathit{diff}}}{rtt_{\mathit{ema\_min}}}\right| > 0.6 \\ 0.5, & \text{if } 0.1 < \left|\dfrac{rtt_{\mathit{diff}}}{rtt_{\mathit{ema\_min}}}\right| \le 0.6 \\ 0.3, & \text{if } 0.05 < \left|\dfrac{rtt_{\mathit{diff}}}{rtt_{\mathit{ema\_min}}}\right| \le 0.1 \\ 0.1, & \text{otherwise} \end{cases} \quad (4) $$

It is worth noticing that, when rtt_diff < 0, the penalties term becomes negative, thus rewarding the agent when RTT improvements are detected.

4) RL Algorithm: MARLIN adopts SAC [11], an off-policy actor-critic algorithm based on the maximum entropy RL framework. SAC augments the maximum reward objective, the foundation of RL settings, with an entropy maximization term. This term acts as a trade-off between exploration and exploitation, so that the agent aims to maximize its return while also acting as randomly as possible. In circumstances where multiple actions seem equally attractive, i.e., in the case of equal or close Q-values, the learned policy is encouraged to assign equal probability mass to those actions.
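The reward of Eqs. 1-4 can be sketched as follows. This is a sketch under one stated assumption: we take the (1 - penalties) factor to scale the acknowledged kilobytes inside the denominator, which is consistent with the text's claim that a negative penalties term (an RTT improvement) increases the reward, while a positive one lowers it.

```python
def alpha_coeff(ratio):
    """Eq. (4): piecewise alpha based on |rtt_diff / rtt_ema_min|."""
    m = abs(ratio)
    if m > 0.6:
        return 1.0
    if m > 0.1:
        return 0.5
    if m > 0.05:
        return 0.3
    return 0.1

def penalties_term(rtt_diff, rtt_ema_min):
    """Eq. (3): RTT-based penalty; negative when the RTT improves."""
    ratio = rtt_diff / rtt_ema_min
    if ratio < 1:
        return alpha_coeff(ratio) * ratio
    return 0.99

def reward(target, acked_cumulative, rtt_diff=None, rtt_ema_min=None):
    """Eqs. (1)-(2): strictly negative reward, -0.5 at full utilization.

    Without RTT arguments this is Eq. (1); with them, the penalty
    coefficient of Eq. (2) scales the acknowledged kilobytes
    (our reading of the formulation, as noted above).
    """
    if rtt_diff is None:
        return -target / (target + acked_cumulative)
    p = penalties_term(rtt_diff, rtt_ema_min)
    return -target / (target + acked_cumulative * (1.0 - p))
```

At full utilization (acked equals target) the per-step reward is -0.5; a rising RTT makes it more negative, while a falling RTT pushes it toward zero.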
In practice, the entropy (or temperature) term prompts the agent to discard unsuitable trajectories in favor of more promising ones, and it also improves learning speed. The entropy term can be either fixed or optimized/learned as further steps are taken. However, the optimal entropy coefficient varies depending on a series of factors, such as the nature of the task or even the current policy. As a consequence, it is usually considered good practice to avoid fixed values, preferring instead to update the term at the same time the actor, critic, and target networks are optimized [7].

D. Future Directions

The work described above is the first step of a multi-year research project. We have already identified future steps that follow naturally from our present work. While we describe them sequentially, in reality these steps are closely intertwined and we will likely address them in subsequent iterations. We think that MARLIN would benefit from a more expressive reward function. We envision problem and reward formulations that truncate unpromising trajectories that have drifted too far from the optimum; we believe this could significantly speed up convergence. Furthermore, we suspect that MARLIN's current state might contain redundant information as well as features of low relevance to the problem. To address this, we plan to investigate smaller and more refined state representations, with the twofold goal of lowering complexity and improving convergence. We plan to train this new agent in diversified networking scenarios, which can capture different traffic patterns and network technologies, to assess its degree of generalizability. Finally, thorough, automated hyperparameter tuning would further enhance MARLIN's performance and complete a first cycle of improvements. Further advancements will require an evolution of the agent's design congruent with specific problems that afflict CC.
The literature has shown that ML models can accurately distinguish between packet loss attributable to congestion and packet loss caused by channel errors. We plan to integrate a similar classifier within MARLIN and investigate the feasibility of an analogous approach to identify variations in end-to-end latency caused by changes in the path to the destination. The fundamental building block of CC algorithms is how and when to change the CWND size. MARLIN currently actively controls the "how", leaving the "when" to a heuristic. We will investigate learning-based approaches to bring this decision into MARLIN as well, with the goal of turning it into a more comprehensive and reactive system, able to make rational decisions at its own tempo. Up until now, we have shaped the problem of CC from the perspective of a single RL agent. Nonetheless, the need for CC algorithms in transport protocols originated from the lack of coordination in a multi-agent system, where single entities were acting in a completely self-centered manner. Therefore, we expect that the next step ahead in learning-based CC will come from the application of advances in Multi-Agent Reinforcement Learning (MARL) algorithms that can optimize cooperative and/or competitive agent behavior.

III. THE MOCKETS TRANSPORT PROTOCOL

One of the main design choices we faced was the transport protocol we would use to train and evaluate MARLIN. The decision fell on a custom transport protocol because it is much simpler to integrate with MARLIN than TCP. Additionally, a custom protocol enables greater flexibility in retrieving the information required to encode the agent state. For this study, we integrated MARLIN with Mockets, a message-based communication middleware implemented on top of UDP, following a school of thought similar to the one that drove the design of the QUIC protocol [25] (for additional details on Mockets, the reader can refer to [26]).
Mockets implements a very aggressive CC algorithm whose purpose is to fully utilize the available bandwidth in the presence of variable communication latency and elevated packet loss. While it presents several advantages over TCP and other transport protocols in degraded environments [27], Mockets' existing CC fails to share link capacity with other communication flows going through the same links and cannot adapt quickly to changes in the available bandwidth. Therefore, Mockets could benefit significantly from MARLIN, which could make it a viable choice for traditional network scenarios as well. To have an accurate representation of the state of the environment within MARLIN, we improved the RTT estimation in Mockets. To do so, the sender keeps track of the transmission time of each packet. When an ACK arrives, the sender computes the RTT by calculating the difference between the current time and the transmission time of the last packet received by the receiver before generating the ACK message, and then subtracting the ACK processing time from it. To make this calculation possible, the receiver appends the identifier of the last received packet (a strictly increasing unsigned integer) and the processing time (in microseconds) to all ACK messages.

IV. EXPERIMENTAL RESULTS

A. Hardware Involved

The testbed described in Figure 2 is implemented using Commercial Off-The-Shelf (COTS) devices. The agent is trained on a workstation equipped with an Intel i7-8700 CPU, 32 GB of DDR4 RAM, an NVIDIA GeForce RTX 2060 GPU, and an Intel Wireless-AC 9560 Network Interface Card (NIC). The "Receiver Application" resides on a 4 GB Raspberry Pi 4 Model B (RPi4B). The two APs used are a TP-Link EAP245 and a Netgear WAC505, connected to two Netgear GS108E switches.

B. Training

The agent is trained for 1M steps on an infinite-horizon task with partial episodes lasting 200 steps.
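The sender-side RTT computation described in Section III can be sketched as follows. The class and method names are illustrative, not the actual Mockets API; only the mechanism (per-packet send timestamps, and ACKs carrying the last received packet id plus processing time in microseconds) comes from the text.

```python
import time

class RttEstimator:
    """Sender-side RTT estimation as described above (sketch)."""

    def __init__(self):
        self.tx_times = {}  # packet id -> transmission timestamp (s)

    def on_send(self, packet_id, now=None):
        """Record the transmission time of each outgoing packet."""
        self.tx_times[packet_id] = time.monotonic() if now is None else now

    def on_ack(self, last_packet_id, processing_us, now=None):
        """Compute the RTT when an ACK arrives.

        RTT = (now - tx time of the last packet the receiver saw
        before generating the ACK) - the receiver's processing time.
        """
        now = time.monotonic() if now is None else now
        tx = self.tx_times[last_packet_id]
        return (now - tx) - processing_us / 1e6
```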
When the last step of a partial episode is encountered, a truncated signal is emitted and PEB is performed, reminding the agent that additional steps and rewards would actually be available thereafter [12]. Due to time constraints, the optimization steps involved (for the actor, critic, target networks, and learned entropy coefficient) happen at the end of every partial episode. Taking training steps between two actions on a real network is a luxury we cannot afford, as we observed it would introduce delays on the order of 800-1000 ms. Training at the end of an episode allows us to avoid this effect, limiting the overhead to the pipeline alone, which we measured at around 25 to 30 ms. At the beginning of training, in order to increase exploration, actions drawn from a uniform random distribution are taken for 10K steps. Subsequently, SAC starts its normal exploration-exploitation strategy, led by its entropy coefficient. We trained three different models for MARLIN. For the first one, we used the reward function in Eq. 1 (5a). For the second model, we used the reward function in Eq. 2 (5b). In both cases, the background traffic pattern was fixed, with elephant flows following this order: [100 UDP, 200 TCP, 100 UDP, 50 UDP]. For the last model (5c), we changed the background traffic to use all the permutations presented in Section II-B. The mean training reward shows a promising increasing trend in each of the three settings, as shown in Figure 4. The performance shown during training is straightforward to interpret by taking as a reference our ideal target, which, in this case, would result in a mean training reward of -100.

C. Results

Following training, we tested all three MARLIN models by transferring a 3 MB file on the same network with the background traffic pattern [100 UDP, 200 TCP, 100 UDP, 50 UDP].
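The -100 ideal target can be checked back-of-the-envelope from the reward in Eq. 1 and the 200-step partial episodes; this is our reading, since the derivation is not spelled out above.

```python
# At full link utilization, acked_kilobytes == target at every step,
# so Eq. 1 yields r = -target / (target + target) = -0.5 per step.
# A 200-step partial episode then accumulates 200 * -0.5 = -100,
# matching the ideal mean training reward quoted above.
steps = 200
per_step_reward = -1.0 / (1.0 + 1.0)  # acked == target
episode_return = steps * per_step_reward
```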
It is important to note that file transfer is a significantly different problem from the one the agent faces during training: here, the agent is evaluated on completing a task longer than the partial episodes seen during training, which are limited to 200 steps, and a faster completion coincides with better results. We repeated the experiment 100 times for each of the three trained agents. Results are then compared to the performance of the Secure Copy Protocol (SCP), which in our system uses TCP CUBIC as the transport protocol, on the same file transfer task. Figure 5 shows the optimal behavior, computed from a-priori knowledge of the background traffic, along with the batch of results obtained from the trained agents. The average, best, and worst performance of TCP CUBIC are also shown under the same conditions, after repeating the transfer 100 times as well. Figure 3 presents the variation of the CWND size during MARLIN's best run across all three experiments. The color of each segment represents the RTT of the communication measured during a period of one SRTT immediately following the action that took the CWND size to the value represented by the rightmost end of the segment. Runs with significant performance degradation were aborted after 80 seconds, with 5a reporting 4 aborted experiments, 5b reporting 20, and 5c reporting 2. TCP CUBIC completes the file transfer in 45.03 s on average, with its fastest transfer achieved in 33.97 s and a worst performance of 70.78 s. Excluding aborted runs, MARLIN reached the target in 46.1 s on average in 5a, 50.31 s in 5b, and 46.9 s in 5c. Each agent's best performance was faster than TCP CUBIC's best. In 5a, 48% of the experiments achieve a performance equal to or better than the average accomplished by TCP CUBIC, 31% in 5b, and 47% in 5c. The fastest transfer was completed by 5b in 24.84 s, 27% faster than the fastest SCP transfer and 45% faster than its average.
For comparison, the available bandwidth on the link allows file transfers to be completed in 25 s (see Figure 5); faster transfers can still occur if the traffic injected by MARLIN causes other flows to slow down.

D. Discussion

Despite the increased decision-making complexity brought by continuous actions in a problem as intricate as CC in a real network, and a training budget of 1M steps, which is modest compared to other RL-based CC agents, results are promising and already comparable to TCP CUBIC in our environment. We believe that part of this is due to the sample efficiency of SAC. Another reason can be found in the shaping of the CC problem as an infinite-horizon task with strictly negative rewards, which enables the agent to exploit its acquired experience in tasks longer than the partial episodes seen during training and with different goals. Permuting the order of the background traffic patterns during training did not degrade performance during evaluation. In fact, evaluation runs exhibited lower variance, a better fastest transfer, and similar average transfer time compared to the experiments shown in 5a. Moreover, fewer experiments had to be aborted. These results suggest that MARLIN's robustness might be further improved by feeding data from diverse scenarios to the algorithm. Figure 3 shows the CWND size during one of the evaluation runs. Note that, most of the time, following an increased RTT reading the agent has learned to respond by reducing the CWND size or decreasing its growth speed in the next step. This behavior is compatible with many RTT-based CC algorithms. Finally, although most trajectories obtained during evaluation express promising and valuable results (Figure 5), they also present a significant degree of instability, which warrants additional research. This behavior is particularly evident in 5b, where the model is trained using the reward function in Eq. 2.
This model produced several transfers significantly faster than those in 5a and 5c, as well as those obtained with SCP; nonetheless, it proved considerably slower on average.

V. RELATED WORK

Research efforts on the design and optimization of TCP CC have historically been very different from the approach discussed in this paper. Traditional CC algorithms aim at achieving full bottleneck link utilization by applying diverse heuristic strategies that increase the CWND size until a congestion signal emerges, such as a packet loss, e.g., NewReno [28] and CUBIC [29] (the default CC algorithm in modern Linux kernels and recent Windows operating systems), or an increase in latency, e.g., Vegas [30] and, more recently, BBR [31]. These approaches have been shown to work well under specific network conditions, but underperform or experience other types of issues in other scenarios [32, 33, 34]. Due to the exceptional degree of heterogeneity, complexity, and intrinsic dynamism of networking environments, and following the reinvigorated interest in ML that has captivated the scientific community over the last two decades, several recent efforts have focused on learning-based transport protocol optimization techniques. Some approaches focus on addressing very specific problems of transport protocols, such as accurately identifying congestion events, but do not aim at replacing CC algorithms. For instance, researchers have successfully built ML-based models that can distinguish between losses caused by congestion and losses due to channel errors in wireless networks [35, 36], or losses caused by medium contention in optical burst switching networks [37]. While we expect that our approach would benefit from such solutions, as discussed above, the final goal of MARLIN is to train an agent that can take on the tasks of CC algorithms efficiently in a number of different network scenarios.
Other, more ambitious studies seek to apply ML models and RL agents directly to CC, by learning policies that can optimally adjust the sending rate [38], the CWND [39], or other parameters that tune the CC algorithm [40]. Comprehensive surveys of applications of ML to CC are given in [9, 10]. Common problems that emerge from the reviewed studies include parameter selection, computational complexity, high memory consumption, low training efficiency, and compatibility and fairness with respect to existing CC heuristics. The possibility of training an RL agent in a completely autonomous manner makes the case for applying RL to CC optimization very compelling. One of the main challenges is the blocking nature of widely used RL libraries such as OpenAI Gym, designed for RL problems like the ATARI games [41], which can be paused while waiting for the next action. Researchers have circumvented this obstacle by training agents in simulated environments and achieved very promising results, such as with DRL-CC [42] and Aurora [43]. However, reproducing those results in real-world networks has proved to be a challenge. Solutions that block while the agent computes the next action introduce significant delays, which are detrimental to the overall communication performance, especially in fast networks. In contrast, we designed MARLIN to integrate asynchronously with Mockets, which avoids blocking the transport protocol while waiting for the agent to take the next action. Other researchers have also proposed the use of non-blocking agents to optimize CC. MVFST-RL [16] is a non-blocking agent based on IMPALA [44], built on a C++ implementation of the QUIC transport protocol and using Pantheon [39] for network emulation. Communication between the agent and the system works in a similar fashion to Park [17], a platform for experimenting with RL agents on computer system problems based on RPCs. MARLIN follows a similar philosophy to avoid the drawbacks of blocking systems.
However, to the best of our knowledge, MARLIN is the only work that uses strictly negative rewards and investigates the use of an off-policy, entropy-regularized RL algorithm such as SAC, with a continuous action space, trained on a real network in which real background traffic flows compete for bandwidth access. Additionally, we trained MARLIN in an infinite-horizon setting and evaluated the model on a common real-world problem such as transferring a file over a shared link.

VI. CONCLUSION

This work has shown how effective policies can be obtained by training an RL agent based on an off-policy, entropy-regularized algorithm such as SAC. MARLIN shapes the CC problem as a strictly negatively rewarded task acting through continuous actions in a real network with competing dynamic background traffic. We have also presented future research directions that we plan to pursue. These include training in more heterogeneous environments, exploring MARL settings, investigating more expressive reward functions, and designing an agent able to autonomously decide when to take the next action.

Figure 1: Agent-Protocol interface. Communication protocol and agent communicate through a gRPC server.

Figure 2: A dumbbell network is used for training and testing. Background and agent traffic are transmitted from one sub-network to the other over the bottleneck link. Senders and receivers are all connected to a WiFi Access Point.

Figure 3: The rolling averaged CWND size is plotted against the time from the beginning of the episode, while segment color represents the RTT observed between two actions. This plot was obtained from the best run of the second MARLIN model, i.e., 5b, which completed the transfer in 24.84 s.

A 2015 Apple MacBook Pro with macOS Catalina v10.15.6 generates background traffic, which is sent to the receiving application running on a Dell Latitude 7000 laptop (E7470). Both the desktop machine and the E7470 run Ubuntu 20.04.5 LTS.
All NIC-related optimizations, e.g., TCP Segmentation Offload (TSO) or Large Receive Offload (LRO), are enabled by default on all systems. A Ubiquiti Networks EdgeRouter X with EdgeOS routes packets between the two subnetworks.

Figure 4: Mean training reward over the last 100 partial episodes on the three training runs.

Figure 5: Evaluation of each model on 100 testing experiments. Panels include the agent trained on a single traffic pattern with RTT penalties and the agent trained on every permutation of the traffic flows.

Table I: Features composing the state of the environment. The horizon of the observation is also augmented with the previous 10 observations.

Table II: Hyperparameters used in MARLIN.

REFERENCES

[1] Josip Lorincz, Zvonimir Klarin, and Julije Ožegović. "A Comprehensive Overview of TCP Congestion Control in 5G Networks: Research Challenges and Future Perspectives". Sensors 21.13 (2021). DOI: 10.3390/s21134510.
[2] Shiyao Ma, Jingjie Jiang, Wei Wang, and Bo-chen Li. "Fairness of Congestion-Based Congestion Control: Experimental Evaluation and Analysis". arXiv: Networking and Internet Architecture (2017).
[3] J. Widmer, R. Denda, and M. Mauve. "A survey on TCP-friendly congestion control". IEEE Network 15.3 (2001), pp. 28-37. DOI: 10.1109/65.923938.
[4] Raffaele Galliera and Niranjan Suri. "Object Detection at the Edge: Off-the-shelf Deep Learning Capable Devices and Accelerators". Procedia Computer Science 205 (2022), pp. 239-248. DOI: 10.1016/j.procs.2022.09.025.
[5] Nikita Rudin, David Hoeller, Philipp Reist, and Marco Hutter. "Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning". 2021. arXiv: 2109.11978.
[6] OpenAI: Marcin Andrychowicz et al. "Learning dexterous in-hand manipulation". The International Journal of Robotics Research 39.1 (2020), pp. 3-20. DOI: 10.1177/0278364919887447.
[7] Tuomas Haarnoja et al. "Soft Actor-Critic Algorithms and Applications". 2018. arXiv: 1812.05905.
[8] Tobias Jacob, Raffaele Galliera, Muddasar Ali, and Sikha Bagui. "Marine Vessel Tracking using a Monocular Camera". In: Proceedings of the 2nd International Conference on Deep Learning Theory and Applications (DeLTA). SciTePress, 2021, pp. 17-28. DOI: 10.5220/0010516000170028.
[9] Wenting Wei, Huaxi Gu, and Baochun Li. "Congestion Control: A Renaissance with Machine Learning". IEEE Network 35.4 (2021), pp. 262-269. DOI: 10.1109/MNET.011.2000603.
[10] Huiling Jiang et al. "When machine learning meets congestion control: A survey and comparison". Computer Networks (2021). DOI: 10.1016/j.comnet.2021.108033.
[11] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor". CoRR abs/1801.01290 (2018).
[12] Fabio Pardo, Arash Tavakoli, Vitaly Levdik, and Petar Kormushev. "Time Limits in Reinforcement Learning". 2017. arXiv: 1712.00378.
[13] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. 1st ed. John Wiley & Sons, 1994. ISBN: 0471619779.
[14] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, 2018. ISBN: 0262039249.
[15] Lei Zhang et al. "Reinforcement Learning Based Congestion Control in a Real Environment". In: 2020 29th International Conference on Computer Communications and Networks (ICCCN), pp. 1-9. DOI: 10.1109/ICCCN49398.2020.9209750.
[16] Viswanath Sivakumar et al. "MVFST-RL: An Asynchronous RL Framework for Congestion Control with Delayed Actions". 2019. arXiv: 1910.04054.
[17] Hongzi Mao et al. "Park: An Open Platform for Learning-Augmented Computer Systems". In: Advances in Neural Information Processing Systems. Ed. by H. Wallach et al.
[18] Ubiquiti. EdgeOS User Guide. URL: https://dl.ubnt.com/guides/edgemax/EdgeOS_UG.pdf.
[19] Naval Research Laboratory (NRL) PROTocol Engineering Advanced Networking (PROTEAN) Research Group. Multi-Generator (MGEN) Network Test Tool. URL: https://www.nrl.navy.mil/Our-Work/Areas-of-Research/Information- 2021.
[20] Antonin Raffin. RL Baselines3 Zoo. https://github.com/DLR-RM/rl-baselines3-zoo. 2020.
[21] Antonin Raffin et al. "Stable-Baselines3: Reliable Reinforcement Learning Implementations". Journal of Machine Learning Research 22.268 (2021), pp. 1-8. URL: http://jmlr.org/papers/v22/20-1364.html.
[22] Adam Paszke et al. "PyTorch: An Imperative Style, High-Performance Deep Learning Library". In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Curran Associates Inc., 2019.
[23] Greg Brockman et al. "OpenAI Gym". 2016. arXiv: 1606.01540.
[24] Nathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, and Aviv Tamar. "A Deep Reinforcement Learning Perspective on Internet Congestion Control". In: Proceedings of the 36th International Conference on Machine Learning. PMLR 97, 2019, pp. 3050-3059. URL: https://proceedings.mlr.press/v97/jay19a.html.
[25] Jana Iyengar and Martin Thomson. QUIC: A UDP-Based Multiplexed and Secure Transport. RFC 9000. IETF, May 2021. URL: https://datatracker.ietf.org/doc/rfc9000/.
[26] Erika Benvegnù, Niranjan Suri, Mauro Tortonesi, and Tomás Esterrich. "Seamless network migration using the Mockets communications middleware". In: MILCOM 2010, pp. 2298-2303. DOI: 10.1109/MILCOM.2010.5680364.
[27] Alessandro Morelli, Michel Provosty, Roberto Fronteddu, and Niranjan Suri. "Performance Evaluation of Transport Protocols in Tactical Network Environments". In: MILCOM 2019, pp. 30-36. DOI: 10.1109/MILCOM47813.2019.9021047.
[28] Andrei Gurtov, Tom Henderson, Sally Floyd, and Yoshifumi Nishida. The NewReno Modification to TCP's Fast Recovery Algorithm. RFC 6582. Apr. 2012. DOI: 10.17487/RFC6582.
[29] Sangtae Ha, Injong Rhee, and Lisong Xu. "CUBIC: A New TCP-Friendly High-Speed TCP Variant". SIGOPS Operating Systems Review 42.5 (2008), pp. 64-74. DOI: 10.1145/1400097.1400105.
[30] Lawrence S. Brakmo, Sean W. O'Malley, and Larry L. Peterson. "TCP Vegas: New Techniques for Congestion Detection and Avoidance". In: SIGCOMM 1994.
[31] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson. "BBR: Congestion-Based Congestion Control". ACM Queue 14, September-October (2016), pp. 20-53.
[32] Philipp Bruhn, Mirja Kuehlewind, and Maciej Muehleisen. "Performance and Improvements of TCP CUBIC in Low-Delay Cellular Networks". In: 2022 IFIP Networking Conference, pp. 1-9. DOI: 10.23919/IFIPNetworking55013.2022.9829781.
[33] Marko Šošić and Vladimir Stojanović. "Resolving poor TCP performance on high-speed long distance links: Overview and comparison of BIC, CUBIC and Hybla". In: 2013 IEEE 11th International Symposium on Intelligent Systems and Informatics (SISY), pp. 325-330. DOI: 10.1109/SISY.2013.6662595.
[34] Phuong Ha, Minh Vu, Tuan-Anh Le, and Lisong Xu. "TCP BBR in Cloud Networks: Challenges, Analysis, and Solutions". In: 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), pp. 943-953. DOI: 10.1109/ICDCS51616.2021.00094.
[35] "Machine-Learning based Loss Discrimination Algorithm for Wireless TCP Congestion Control".
Kimoon Han, Jae Yong Lee, Byung Chul Kim, 10.23919/ELINFOCOM.2019.87063822019 International Conference on Electronics, Information, and Communication (ICEIC). Kimoon Han, Jae Yong Lee, and Byung Chul Kim. "Machine-Learning based Loss Discrimination Algo- rithm for Wireless TCP Congestion Control". In: 2019 International Conference on Electronics, Information, and Communication (ICEIC). 2019, pp. 1-2. DOI: 10.23919/ELINFOCOM.2019.8706382. A machine learning approach to improve congestion control over wireless computer networks. P Geurts, I El Khayat, G Leduc, 10.1109/ICDM.2004.10063Fourth IEEE International Conference on Data Mining (ICDM'04. P. Geurts, I. El Khayat, and G. Leduc. "A machine learning approach to improve congestion control over wireless computer networks". In: Fourth IEEE Interna- tional Conference on Data Mining (ICDM'04). 2004, pp. 383-386. DOI: 10.1109/ICDM.2004.10063. Loss classification in optical burst switching networks using machine learning techniques: improving the performance of TCP. A Jayaraj, T Venkatesh, C Siva Ram, Murthy, 10.1109/JSACOCN.2008.033508IEEE Journal on Selected Areas in Communications. 26A. Jayaraj, T. Venkatesh, and C. Siva Ram Murthy. "Loss classification in optical burst switching networks using machine learning techniques: improving the per- formance of TCP". In: IEEE Journal on Selected Ar- eas in Communications 26.6 (2008), pp. 45-54. DOI: 10.1109/JSACOCN.2008.033508. PCC: Re-Architecting Congestion Control for Consistent High Performance. Mo Dong, Qingxi Li, Doron Zarchy, P Godfrey, Michael Schapira, Proceedings of the 12th USENIX Conference on Networked Systems Design and Implementation. NSDI'15. the 12th USENIX Conference on Networked Systems Design and Implementation. NSDI'15Oakland, CAUSENIX Association9781931971218Mo Dong, Qingxi Li, Doron Zarchy, P. Brighten God- frey, and Michael Schapira. "PCC: Re-Architecting Con- gestion Control for Consistent High Performance". 
In: Proceedings of the 12th USENIX Conference on Net- worked Systems Design and Implementation. NSDI'15. Oakland, CA: USENIX Association, 2015, pp. 395-408. ISBN: 9781931971218. Pantheon: the training ground for Internet congestion-control research. Y Francis, Yan, ISBN: 978-1-939133-01-42018 USENIX Annual Technical Conference (USENIX ATC 18). Boston, MA: USENIX AssociationFrancis Y. Yan et al. "Pantheon: the training ground for Internet congestion-control research". In: 2018 USENIX Annual Technical Conference (USENIX ATC 18). Boston, MA: USENIX Association, July 2018, pp. 731-743. ISBN: 978-1-939133-01-4. URL: https://www.usenix.org/conference/atc18/presentation/yan-francis. TCP Ex Machina: Computer-Generated Congestion Control. Keith Winstein, Hari Balakrishnan, 10.1145/2534169.2486020SIGCOMM Comput. 43Keith Winstein and Hari Balakrishnan. "TCP Ex Machina: Computer-Generated Congestion Control". In: SIGCOMM Comput. Commun. Rev. 43.4 (Aug. 2013), pp. 123-134. ISSN: 0146- 4833. DOI: 10.1145/2534169.2486020. URL: https://doi.org/10.1145/2534169.2486020. Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, arXiv:1312.5602Volodymyr Mnih et al. "Playing Atari with Deep Rein- forcement Learning". In: CoRR abs/1312.5602 (2013). arXiv: 1312.5602. URL: http://arxiv.org/abs/1312.5602. Experience-Driven Congestion Control: When Multi-Path TCP Meets Deep Reinforcement Learning. Zhiyuan Xu, Jian Tang, Chengxiang Yin, Yanzhi Wang, Guoliang Xue, 10.1109/JSAC.2019.2904358IEEE Journal on Selected Areas in Communications. 37Zhiyuan Xu, Jian Tang, Chengxiang Yin, Yanzhi Wang, and Guoliang Xue. "Experience-Driven Congestion Control: When Multi-Path TCP Meets Deep Reinforce- ment Learning". In: IEEE Journal on Selected Areas in Communications 37.6 (2019), pp. 1325-1336. DOI: 10.1109/JSAC.2019.2904358. A Deep Reinforcement Learning Perspective on Internet Congestion Control. 
Nathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, Aviv Tamar, Proceedings of the 36th International Conference on Machine Learning. Kamalika Chaudhuri and Ruslan Salakhutdinovthe 36th International Conference on Machine Learning97Proceedings of Machine Learning Research. PMLRNathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, and Aviv Tamar. "A Deep Reinforcement Learning Perspective on Internet Congestion Control". In: Proceedings of the 36th International Conference on Machine Learning. Ed. by Kamalika Chaudhuri and Ruslan Salakhutdinov. Vol. 97. Proceedings of Machine Learning Research. PMLR, Sept. 2019, pp. 3050-3059. URL: https://proceedings.mlr.press/v97/jay19a.html. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. Lasse Espeholt, arXiv:1802.01561Lasse Espeholt et al. "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Ar- chitectures". In: CoRR abs/1802.01561 (2018). arXiv: 1802.01561. URL: http://arxiv.org/abs/1802.01561.
[ "https://github.com/DLR-RM/rl-baselines3-zoo." ]
[ "Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization" ]
[ "Philipp Seeberger [email protected] \nTechnische Hochschule Nürnberg Georg Simon Ohm\n\n", "Korbinian Riedhammer [email protected] \nTechnische Hochschule Nürnberg Georg Simon Ohm\n\n" ]
[ "Technische Hochschule Nürnberg Georg Simon Ohm\n", "Technische Hochschule Nürnberg Georg Simon Ohm\n" ]
[]
The CrisisFACTS Track aims to tackle challenges such as multi-stream fact-finding in the domain of event tracking; participants' systems extract important facts from several disaster-related events while incorporating the temporal order. We propose a combination of retrieval, reranking, and the well-known Integer Linear Programming (ILP) and Maximal Marginal Relevance (MMR) frameworks. In the former two modules, we explore various methods including an entity-based baseline, pre-trained and fine-tuned Question Answering systems, and ColBERT. We then use the latter module as an extractive summarization component by taking diversity and novelty criteria into account. The automatic scoring runs show strong results across the evaluation setups but also reveal shortcomings and challenges.
10.48550/arxiv.2302.01148
[ "https://export.arxiv.org/pdf/2302.01148v1.pdf" ]
256,504,009
2302.01148
1d845e4100719481a2b2eb4e866c4da281f2a026
Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization

Philipp Seeberger [email protected] and Korbinian Riedhammer [email protected], Technische Hochschule Nürnberg Georg Simon Ohm

Abstract: The CrisisFACTS Track aims to tackle challenges such as multi-stream fact-finding in the domain of event tracking; participants' systems extract important facts from several disaster-related events while incorporating the temporal order. We propose a combination of retrieval, reranking, and the well-known Integer Linear Programming (ILP) and Maximal Marginal Relevance (MMR) frameworks. In the former two modules, we explore various methods including an entity-based baseline, pre-trained and fine-tuned Question Answering systems, and ColBERT. We then use the latter module as an extractive summarization component by taking diversity and novelty criteria into account. The automatic scoring runs show strong results across the evaluation setups but also reveal shortcomings and challenges.

Introduction

Natural and human-made disasters can result in significant loss of life, property, and environment if situational awareness is insufficient due to a lack of critical information in an ongoing emergency event. Today's information ecosystem offers new opportunities and directions for emergency response by integrating various online information sources (Buntain et al., 2021; Kruspe et al., 2021). Additional information sources such as social media and microblogging platforms can immediately provide details about current developments (Sakaki et al., 2010; Reuter et al., 2018). This leads to a multi-stream setting in which traditional sources are complemented with a variety of recently emerged online sources.
Previous research efforts acknowledged this setting as a promising venue, as shown by the evolving tasks over several decades (Allan et al., 1998; Aslam et al., 2015; Sequiera et al., 2018; Buntain et al., 2021). However, the high-velocity nature of content generation and the inherent properties of different information sources (Kaufhold, 2021) and events (Seeberger and Riedhammer, 2022) face present models with new challenges. Those provide relevant event-related results to the user but are still ill-suited to multi-stream fact-finding and summarization needs.

The novel CrisisFACTS Track aims to tackle these issues and challenges the community to develop systems more suited for factoid extraction over time. Overall, the task asks participants' systems to extract a query-focused list of facts from crisis-related datasets, including Twitter, Reddit, Web News, and Facebook as data sources. Each of these extracted lists of facts is based on an event-day pair and shipped with importance scores which serve as a basis for downstream summarization. In fact, this can be considered as an extension of previous tasks in the area of Information Retrieval (IR) and summarization.

The recent incorporation of pre-trained language models such as BERT (Devlin et al., 2019) has significantly improved ad-hoc ranking (Lin et al., 2022) and summarization (Ma et al., 2022) results. In particular, BERT-based cross-encoders achieve notable improvements over classical retrieval and dual-encoder approaches but have infeasible computational costs (Khattab and Zaharia, 2020). To mitigate this issue, deep neural ranking models are typically deployed as second-stage rerankers, whereby the first stage often represents an efficient retriever to create a subset of candidate documents (MacAvaney et al., 2022).
Similarly, modern summarization methods rely on cross-encoders to fetch relevant documents, paragraphs, or sentences for both extractive summarization (Xu and Lapata, 2020; Ahuja et al., 2022) or as preliminary selection for abstractive summarization (Xu and Lapata, 2021). Finally, the resulting pool of ranked documents can be further refined through approaches such as MMR (Carbonell and Goldstein, 1998), ILP (McDonald, 2007), and TextRank (Mihalcea and Tarau, 2004).

Figure 1: Overview of our proposed framework. All queries and documents for each event and time period are separately processed by the following three major components: (1) Retrieve, (2) Rerank, and (3) Summarize. The symbols E1 to E4 represent the concepts w.r.t. the ILP formulation. For final scoring, the selected S_sel and past summary S_past documents are used in terms of redundancy penalization.

In this work, we explore various information retrieval and reranking pipelines ranging from pre-trained to fine-tuned state-of-the-art models. Complementary, we propose to subsequently process the list of facts in an extractive summarization setup by leveraging a combination of the well-known ILP and MMR frameworks. In this way, we aim to overcome issues related to diversity and redundancy.

Approach

As illustrated in Figure 1, our proposed framework first retrieves and reranks a set of documents D = {d_1, ..., d_N} based on an information request Q = {q_1, ..., q_M}, where each query q_i typically consists of a short text or list of indicative terms. The number of documents[1] and queries are given as N and M, respectively. Let C = {C^(q_1), ..., C^(q_M)} denote the resulting set of query-related clusters, with C^(q_i) = {d_1^(q_i), ..., d_k^(q_i)} consisting of the top k candidates ranked by relevance.
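The three-stage flow of Figure 1 can be sketched as follows. This is our own minimal illustration, not the authors' released code: the toy term-overlap retriever and the generic `scorer` argument stand in for the BM25 retriever and the neural rerankers described below, and the Stage 3 placeholder simply pools the clusters instead of running ILP/MMR.

```python
# Minimal sketch of the three-stage pipeline (names and the toy term-overlap
# retriever are ours; the paper uses BM25, neural rerankers, and ILP/MMR).

def retrieve(queries, documents, k1):
    """Stage 1: build one candidate cluster per query (top-k1 documents)."""
    clusters = {}
    for q in queries:
        q_terms = set(q.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_terms & set(d.lower().split())),
                        reverse=True)
        clusters[q] = ranked[:k1]
    return clusters

def rerank(clusters, scorer, k2):
    """Stage 2: rescore each cluster with a stronger scorer, keep top-k2."""
    return {q: sorted(docs, key=lambda d: scorer(q, d), reverse=True)[:k2]
            for q, docs in clusters.items()}

def summarize(clusters, budget):
    """Stage 3 placeholder: pool all clusters and cap the summary length
    (the paper selects documents via ILP and rescores them with MMR)."""
    pool = []
    for docs in clusters.values():
        for d in docs:
            if d not in pool:
                pool.append(d)
    return pool[:budget]
```

Swapping in a real first-stage retriever or cross-encoder scorer only changes the ranking functions, not this overall structure.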
Then, a summarization component further selects L candidates from the cluster pool ∪_{i=1}^{M} C^(q_i) to create a summary S. Following the track design, each summary S_t is created w.r.t. a time period p_t ∈ {p_1, ..., p_T} with T as the number of time periods. Within the scope of CrisisFACTS, the time period p_i corresponds to one day. In the following, we detail each individual component used for our submissions.

[1] Throughout this work, we use the term document interchangeably with the CrisisFACTS stream items. These stream items are rather sentences or short posts than long documents.

Stage 1: Retrieve

In the first stage, we employ lexical retrieval approaches to mitigate the infeasible computational costs of deep neural models such as BERT-based cross-encoders. Hence, we first retrieve the top k^(1) candidates for each query q ∈ Q from a set of documents D using a list of indicative terms. For each query q, the reduced set of k^(1) candidates is then subsequently processed by the given reranking stage.

For retrieval, we adopt the well-known BM25 model (Robertson and Zaragoza, 2009), but one can easily replace it with more sophisticated methods. In preliminary experiments, we empirically found that the number of candidates is relatively low, limiting the subsequent reranking components. To address this problem, one can implement query expansion (Amati and Van Rijsbergen, 2002), document expansion (Nogueira et al., 2019), or adaptive reranking (MacAvaney et al., 2022) methods. In this work, we integrate query expansion in order to grow the candidate pool and overcome the recall limitation.

Stage 2: Rerank

Classical retrieval approaches may not be sufficient to capture the semantics expressed in the query and its relation to the text contents. Therefore, each query-related cluster C^(q) of the previous retrieval stage is reranked and the top k^(2) candidates are selected, resulting in a new set C̃^(q).
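For reference, the BM25 ranking function used in Stage 1 can be sketched self-contained. The paper uses PyTerrier's implementation; the parameter values k1 and b below are common defaults, not values taken from the paper.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs_tokens, k1=1.2, b=0.75):
    """Score each tokenized document against the query with BM25
    (Robertson and Zaragoza, 2009); k1 and b are common defaults."""
    n = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n
    # document frequency of each term
    df = Counter(t for d in docs_tokens for t in set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

Sorting the documents by these scores and cutting at k^(1) yields the per-query candidate clusters.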
Here, we exploit supervision signals from other existing datasets by using pre-trained deep neural models. Furthermore, we include an entity- and keyword-based baseline in order to assess the performance gain due to the models pre-trained on large text corpora. In the following, we detail the considered reranking models:

BoE. As baseline, we adopt the Bag-of-Entities representation for document ranking introduced by Xiong et al. (2016), which relies on the semantic information achieved by entity linking systems. We implement a simplified version by constructing Bag-of-Entities vectors for each document based on entity-types and extend them with Bag-of-Keywords vectors. The final model scores a document by summing over the frequency of expected query entity-types and keywords present.

QA. Similar to Xu and Lapata (2020), we employ Question Answering (QA) systems to leverage distant supervision signals related to best answer selection. While QA approaches support both sentence and span selection, we rely only on sentence-level selection, which suits better the queries and stream items provided by the organizers. That is, we concatenate the query q and a candidate document d into a sequence [CLS] q [SEP] d and predict the relevance score with a BERT-based cross-encoder. In this work, we consider a pre-trained and a fine-tuned QA version, whereby the fine-tuned system is adapted to the crisis domain.

ColBERT. This model follows a contextualized late interaction approach which makes use of both a first-stage approximate nearest neighbor search and a reranking stage to calculate ranking scores (Khattab and Zaharia, 2020). In particular, ColBERT supports the reranking mechanism to produce more precise scores based on a candidate pool but can also be used for end-to-end retrieval. We follow the end-to-end retrieval approach. In this way, we consider an approach without the limitations related to classical retrieval models such as BM25.
Note that this alternative skips the first retrieval stage by directly retrieving the set of documents D to obtain the top k^(2) candidates for each query q.

Stage 3: Summarize

Selection. Finding a diverse set of facts with less redundancy is crucial for summarization tasks. However, without any post-processing, reranked candidates still suffer in terms of diversity and redundancy. To tackle this problem, we use an additional selection step formalized as ILP. We follow the concept-based model (Gillick and Favre, 2009; Riedhammer et al., 2010), where concepts can be facts, events, or information units. In this problem setup, the objective function is maximized over the weighted sum of the concepts present in the selection, subject to a length constraint. Finally, we obtain an extractive summary S_t = {d_1, ..., d_l}, where |S_t| is limited by l ≤ L with L as the maximum number of documents.

Scoring. The well-known MMR algorithm greedily selects documents by trading off query-based relevancy and redundancy to the previously selected documents, until a summary length constraint is met. However, this constraint can be relaxed to rerank a summary S_t in order to increase the diversity in the top documents. Formally, we define the final score of a document i as in Equation (1), where Rel_i is the relevance score of document i and Red_ij is the redundancy penalty for having both documents i and j in the summary S_sel as well as past summaries S_past = ∪_{i=1}^{t−1} S_i. However, a single retrieved document might contain multiple scores due to multiple matched queries. We argue that a document that covers multiple queries expresses more relevant information content for the summary. Formally, we denote the relevance score as Rel_i = |Q^(i)| · score_i, where score_i is the mean score of document i weighted by the number of matched queries Q^(i) ⊆ Q.
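The final scoring of Equation (1), with the query-weighted relevance Rel_i = |Q^(i)| · score_i, can be sketched as below. The function names are ours; the TF-IDF/cosine redundancy mirrors the paper's experimental setup for Red_ij.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build sparse TF-IDF vectors, used here only for the redundancy term."""
    n = len(texts)
    tokenized = [t.lower().split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(weight * v.get(w, 0.0) for w, weight in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_score(mean_score, n_matched_queries, vec_i, selected_vecs, lam=0.8):
    """score_i = lam * Rel_i - (1 - lam) * max_j Red_ij, where
    Rel_i = |Q_i| * mean_score rewards documents matching several queries."""
    rel = n_matched_queries * mean_score
    red = max((cosine(vec_i, v) for v in selected_vecs), default=0.0)
    return lam * rel - (1 - lam) * red
```

A document identical to something already selected (or in a past summary) receives the maximal redundancy penalty and drops in the ranking, which is exactly the diversification effect intended above.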
score_i = λ · Rel_i − (1 − λ) · max_{j ∈ S_sel ∪ S_past} Red_ij    (1)

Experiments

In this section, we detail the experimental setup and discuss the results for our submitted runs. Throughout all experiments, we mainly consider the sources Twitter, Reddit, and Web News, and ignore Facebook due to the limited access to the post contents.

Preprocessing. We normalize all tweets in order to represent the text content similar to the other online sources. Specifically, all retweet-indicating prefixes, user mentions, emoticons, emojis, and URLs are removed. Furthermore, we remove any hashtag symbols and split the text into their corresponding words using WordSegment.[2]

Crisis-QA. Since the first CrisisFACTS Track does not provide any annotations w.r.t. the task, we decided to create a synthetic version that reflects the query-focused sentence selection. We leverage the DocEE dataset (Tong et al., 2022), a recently published benchmark for document-level event extraction. We extract a subset of 6818 documents which only covers crisis-related events and their corresponding event arguments.[3] First, we manually create coarse-grained questions for each event argument. Second, the dataset is augmented with a T5-BASE question generation model[4] for obtaining fine-grained questions. Last, we synthesize question-sentence pairs based on the argument position and label these pairs as a binary relevance classification task. For model validation, we use the published dataset splits.

[3] We checked for an overlap between the DocEE and CrisisFACTS events. In fact, some of the events are part of the DocEE dataset and thus we removed the corresponding documents prior to our experiments.
[4] https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap

Experimental Setup

Retrieve. For the first stage, we use the BM25 model with default settings of the PyTerrier library (Macdonald and Tonellotto, 2020) and extend it with the Bo1 query expansion (Amati and Van Rijsbergen, 2002) component.
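The tweet normalization described in the preprocessing step can be sketched with a few regular expressions. This is our approximation of the prose description: emoticon/emoji removal and the WordSegment-based hashtag word-splitting are omitted here.

```python
import re

def normalize_tweet(text):
    """Approximate the described tweet cleanup: drop retweet prefixes,
    user mentions, URLs, and hashtag symbols, then collapse whitespace.
    (Emoticons/emojis and hashtag word-splitting are not handled here.)"""
    text = re.sub(r"^RT\s+", "", text)          # retweet-indicating prefix
    text = re.sub(r"@\w+:?", "", text)          # user mentions
    text = re.sub(r"https?://\S+", "", text)    # URLs
    text = text.replace("#", "")                # hashtag symbols
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace
```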
For each query, we concatenate the query text and indicative terms, retrieve the top k^(1) = 100 candidates, and drop exact duplicates. The majority of duplicates appear in the tweet documents, which is mostly related to retweets.

Rerank. The BoE model is based on a manually curated set of entity-types that mostly fits the expected information needs w.r.t. each query. For example, queries about missing people typically cover numbers and locations, respectively. The indicative terms provided by the organizers are used for the keywords. The QA-ASNQ system is based on RoBERTa-BASE pre-trained on the ASNQ dataset (Garg et al., 2020) without any further adjustments. Similarly, we employ the ColBERT v2 version (Santhanam et al., 2022), which is trained on the MS MARCO Passage Ranking task. In terms of QA-Crisis, we follow the adaptation step of Garg et al. (2020) by fine-tuning the QA-ASNQ model on the domain-specific Crisis-QA dataset. This results in an adapted version of the QA system. Although the synthesized dataset relies on a broad range of labeled event arguments, we still observe a significant proportion of false negatives within the question-sentence pairs. Hence, we use the model QA-Crisis-0 in a first step to denoise the dataset with an upper threshold of 0.1 and then train a new model QA-Crisis-1 in a second step, which is in line with previous work such as RocketQA (Qu et al., 2021). We use the Transformers library (Wolf et al., 2020) for the QA models and the official implementation of ColBERT, and select the top k^(2) = 25 candidates for each query.

Summarize. To enable a fair comparison among the different retrieval and reranking components, we re-use the selection and scoring procedure for each run. Specifically, inspired by information extraction (Martinez-Rodriguez et al., 2020), we extract entities[5] as concepts, entity frequency as weights, and set L = 150 for the ILP formulation.
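The concept-based selection (entity concepts, frequency weights, budget L) is solved as an exact ILP in the paper; a greedy approximation conveys the same objective, maximizing covered concept weight under a length constraint. Function and variable names here are ours, not from the paper.

```python
def greedy_concept_selection(doc_concepts, concept_weights, budget):
    """Greedy approximation of the concept-coverage ILP: repeatedly pick the
    document adding the most uncovered concept weight, up to `budget` docs."""
    covered, summary = set(), []
    while len(summary) < budget:
        best, best_gain = None, 0.0
        for i, concepts in enumerate(doc_concepts):
            if i in summary:
                continue
            gain = sum(concept_weights.get(c, 0.0) for c in concepts - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining document adds new concept weight
            break
        summary.append(best)
        covered |= doc_concepts[best]
    return summary
```

Because the objective is a coverage function, such a greedy pass is a standard stand-in when an exact ILP solver is unavailable.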
For MMR, we select λ = 0.8 and calculate the redundancy Red_ij based on TF-IDF features and cosine similarity.

Results

In Table 1, we present the overall performance of our pipeline setups. Since this is the first installment of the CrisisFACTS Track, we mainly limit the analysis to our submission runs. However, we provide the reader a comparison of our models to the medians and top results for the summarization task (Table 2).

Overall. The QA models outperform the baseline BoE and ColBERT in almost all evaluation settings. These results reflect the findings of previous text retrieval work, which report higher performance for cross-encoder architectures. Interestingly, the fine-tuned QA model decreases the performance in two summarization setups and in terms of comprehensiveness. We assume that the adapted QA is biased towards the entities of the Crisis-QA dataset. This might result in higher scores for only a subset of facts. Furthermore, we are aware of concerns about potential data overlap due to the time intersection between the CrisisFACTS and Crisis-QA events. However, the performance increase appears only for the NIST reference summaries, and we therefore leave the analysis for future work.

Summarization. In-depth analysis in Table 2 found that the pre-trained QA model achieves top results for a variety of events and reference summaries. When compared to the BoE baseline, the performance increase differs among the events, metrics, and reference summaries. However, only three performance measures are below the TREC medians, which suggests strong results for the overall pipeline. Nevertheless, in contrast to automatic summarization evaluation, manual matching reveals high variance for different days within the same event.

Matching. If we plot the comprehensiveness evolution along the number of days (Figure 2), we see that the performance decreases to a large extent across a variety of events.
Since this trend holds for all models, we hypothesize that this is due to at least two factors. First, the retrieval and reranking stages of the pipeline setup do not consider diversity for each query and might cut off rare facts in favor of facts with higher relevance, spread along the timeline of the event. Second, the diversification in the selection stage w.r.t. past summaries still displays a challenging task. For example, specific sentences only differ by a single number (e.g. burned acres) and might unintentionally penalize new facts by unsophisticated similarity measures.

Conclusion

In this work, we have investigated the combination of deep neural reranking and global unsupervised extraction for a multi-query focused summarization task. Our experiments demonstrated the strength of cross-encoders with QA based on distant supervision. However, we identified shortcomings and challenges in the face of temporal aspects, which underlines the downstream summarization as a critical component. We believe there is much room for improvement, especially by integrating more sophisticated extractive approaches, abstractive summarization techniques, or even joint optimization.

Figure 2: Comprehensiveness trend for all events. The QA-ASNQ system is displayed in bold, while the min-max region of all models is highlighted.

                      Summarization                         Matching
System                ICS        NIST       Wiki       Compr.   Redund.
ColBERT               .050/.450  .139/.546  .031/.542  .189     .201
BM25 → BoE            .047/.436  .142/.560  .030/.533  .185     .176
BM25 → QA-ASNQ        .051/.448  .147/.563  .036/.565  .213     .226
BM25 → QA-Crisis      .046/.443  .147/.564  .034/.545  .210     .226
TREC best             .058/.459  .147/.564  .036/.565  .217     .125

Table 1: Overall results of our automatic submission runs. We report the Rouge-2/BERTScore for summarization and the comprehensiveness and redundancy ratio for matching as defined in Appendix A.
The top results across our proposed systems are in bold.

Event  ICS        NIST       Wiki
001    .116/.522  .273/.560  .013/.540
002    .066/.561  .050/.563  .043/.579
003    .053/.516  .238/.611  .021/.593
004    .061/.480  .171/.585  .060/.582
005    -          .136/.544  .032/.526
006    .057/.506  .048/.533  .019/.580
007    .040/.494  .104/.524  .057/.554
008    .012/.501  .154/.583  .044/.562

Table 2: Rouge-2/BERTScore results for each event w.r.t. the QA-ASNQ system. The TREC best results across all submissions are in bold. Results below the TREC median are in grey, while results below our baseline BoE are additionally marked.

[2] https://grantjenks.com/docs/wordsegment
[5] https://stanfordnlp.github.io/stanza/ner.html

Acknowledgments

The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany in the project ISAKI (project number 13N15572).

A Matching Metrics

The submitted stream items for a system and event-day pair are ordered by the importance scores and formed to a summary S by a rank cut-off k. Based on a manual matching of the stream items against a fact list F, the comprehensiveness is calculated in terms of the matched facts, where f ∈ F is a unique fact, M(f, S) is the set of stream items matching fact f, and R(f) is the gain assigned to the fact f. Similarly, the redundancy ratio is measured for a system and event-day pair from the same quantities. All runs are macro-averaged across days within an event, and then across the eight events.

References

Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, and Greg Durrett. 2022. ASPECTNEWS: Aspect-Oriented Summarization of News Documents. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6494-6506, Dublin, Ireland. ACL.
James Allan, Jaime G. Carbonell, George Doddington, Jonathan Yamron, and Yiming Yang. 1998. Topic Detection and Tracking Pilot Study Final Report. Carnegie Mellon University.

Gianni Amati and Cornelis Joost Van Rijsbergen. 2002. Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM Transactions on Information Systems, 20(4):357-389.

Javed A. Aslam, Fernando Diaz, Matthew Ekstrand-Abueg, Richard McCreadie, Virgil Pavlu, and Tetsuya Sakai. 2015. TREC 2015 Temporal Summarization Track Overview. In Proceedings of The Twenty-Fourth Text REtrieval Conference, TREC 2015, Gaithersburg, Maryland, USA, volume 500-319 of NIST Special Publication. National Institute of Standards and Technology (NIST).

Cody L. Buntain, Richard McCreadie, and Ian Soboroff. 2021. Incident Streams 2020: TREC-IS in the Time of COVID-19. In ISCRAM 2021: 18th International Conference on Information Systems for Crisis Response and Management.
Incident Streams 2020: TREC-IS in the Time of COVID-19. In ISCRAM 2021: 18th Interna- tional Conference on Information Systems for Crisis Response and Management. The use of MMR, diversity-based reranking for reordering documents and producing summaries. Jaime Carbonell, Jade Goldstein, 10.1145/290941.291025Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval -SIGIR '98. the 21st annual international ACM SIGIR conference on Research and development in information retrieval -SIGIR '98Melbourne, AustraliaACMJaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering doc- uments and producing summaries. In Proceedings of the 21st annual international ACM SIGIR confer- ence on Research and development in information retrieval -SIGIR '98, pages 335-336, Melbourne, Australia. ACM. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolisJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Min- nesota. ACL. TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. Siddhant Garg, Thuy Vu, Alessandro Moschitti, 10.1609/aaai.v34i05.6282Proceedings of the AAAI Conference on Artificial Intelligence. 
the AAAI Conference on Artificial Intelligence34Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. Proceedings of the AAAI Conference on Artificial In- telligence, 34(05):7780-7788. A Scalable Global Model for Summarization. Dan Gillick, Benoit Favre, Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09. the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09Boulder, ColoradoDan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Nat- ural Language Processing, ILP '09, pages 10-18, USA. ACL. Event-place: Boulder, Colorado. Information Refinement Technologies for Crisis Informatics: User Expectations and Design Principles for Social Media and Mobile Apps. Marc-André Kaufhold, 10.1007/978-3-658-33341-6SpringerFachmedien Wiesbaden; WiesbadenMarc-André Kaufhold. 2021. Information Refinement Technologies for Crisis Informatics: User Expecta- tions and Design Principles for Social Media and Mobile Apps. Springer Fachmedien Wiesbaden, Wiesbaden. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. Omar Khattab, Matei Zaharia, 10.1145/3397271.3401075Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalACMOmar Khattab and Matei Zaharia. 2020. ColBERT: Ef- ficient and Effective Passage Search via Contextu- alized Late Interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39-48, Virtual Event China. ACM. Review article: Detection of actionable tweets in crisis events. 
A Kruspe, J Kersten, F Klan, 10.5194/nhess-21-1825-2021Natural Hazards and Earth System Sciences. 216A. Kruspe, J. Kersten, and F. Klan. 2021. Review article: Detection of actionable tweets in crisis events. Natural Hazards and Earth System Sciences, 21(6):1825-1845. Pretrained Transformers for Text Ranking: BERT and Beyond. Synthesis Lectures on Human Language Technologies. Jimmy Lin, Rodrigo Nogueira, Andrew Yates, 10.1007/978-3-031-02181-7Springer International PublishingChamJimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2022. Pretrained Transformers for Text Ranking: BERT and Beyond. Synthesis Lectures on Human Language Technologies. Springer International Pub- lishing, Cham. Multidocument summarization via deep learning techniques: A survey. Congbo Ma, Wei Emma Zhang, Mingyu Guo, Hu Wang, Quan Z Sheng, 10.1145/3529754ACM Comput. Surv. 555Congbo Ma, Wei Emma Zhang, Mingyu Guo, Hu Wang, and Quan Z. Sheng. 2022. Multi- document summarization via deep learning tech- niques: A survey. ACM Comput. Surv., 55(5). Adaptive re-ranking with a corpus graph. Sean Macavaney, Nicola Tonellotto, Craig Macdonald, 10.1145/3511808.3557231Proceedings of the 31st ACM International Conference on Information and Knowledge Management, CIKM '22. the 31st ACM International Conference on Information and Knowledge Management, CIKM '22New York, NY, USAACMSean MacAvaney, Nicola Tonellotto, and Craig Mac- donald. 2022. Adaptive re-ranking with a corpus graph. In Proceedings of the 31st ACM Interna- tional Conference on Information and Knowledge Management, CIKM '22, page 1491-1500, New York, NY, USA. ACM. Declarative Experimentation in Information Retrieval Using PyTerrier. Craig Macdonald, Nicola Tonellotto, 10.1145/3409256.3409829Proceedings of the 2020 ACM SI-GIR on International Conference on Theory of Information Retrieval, ICTIR '20. 
the 2020 ACM SI-GIR on International Conference on Theory of Information Retrieval, ICTIR '20New York, NY, USA; NorwayACMEvent-place: Virtual EventCraig Macdonald and Nicola Tonellotto. 2020. Declar- ative Experimentation in Information Retrieval Us- ing PyTerrier. In Proceedings of the 2020 ACM SI- GIR on International Conference on Theory of Infor- mation Retrieval, ICTIR '20, pages 161-168, New York, NY, USA. ACM. Event-place: Virtual Event, Norway. Information extraction meets the Semantic Web: A survey. Semantic Web. L Jose, Aidan Martinez-Rodriguez, Ivan Hogan, Lopez-Arevalo, 10.3233/SW-180333Publisher: IOS Press11Jose L. Martinez-Rodriguez, Aidan Hogan, and Ivan Lopez-Arevalo. 2020. Information extraction meets the Semantic Web: A survey. Semantic Web, 11(2):255-335. Publisher: IOS Press. A Study of Global Inference Algorithms in Multi-Document Summarization. Ryan Mcdonald, Proceedings of the 29th European Conference on IR Research, ECIR'07. the 29th European Conference on IR Research, ECIR'07Berlin, Heidelberg; Rome, ItalySpringer-VerlagRyan McDonald. 2007. A Study of Global Inference Algorithms in Multi-Document Summarization. In Proceedings of the 29th European Conference on IR Research, ECIR'07, pages 557-564, Berlin, Heidel- berg. Springer-Verlag. Event-place: Rome, Italy. TextRank: Bringing Order into Text. Rada Mihalcea, Paul Tarau, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. the 2004 Conference on Empirical Methods in Natural Language ProcessingBarcelona, Spain. ACLRada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. ACL. Rodrigo Nogueira, Wei Yang, Jimmy Lin, Kyunghyun Cho, 10.48550/ARXIV.1904.08375Document Expansion by Query Prediction. Publisher: arXiv Version Number. Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. 
Document Expansion by Query Prediction. Publisher: arXiv Version Num- ber: 2. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering. Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang, 10.18653/v1/2021.naacl-main.466Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. ACLYingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. ACL. Social Media in Crisis Management: An Evaluation and Analysis of Crisis Informatics Research. Christian Reuter, Amanda Lee Hughes, Marc-André Kaufhold, 10.1080/10447318.2018.1427832International Journal of Human-Computer Interaction. 344Christian Reuter, Amanda Lee Hughes, and Marc- André Kaufhold. 2018. Social Media in Crisis Man- agement: An Evaluation and Analysis of Crisis In- formatics Research. International Journal of Hu- man-Computer Interaction, 34(4):280-294. Long story short -Global unsupervised models for keyphrase based meeting summarization. Korbinian Riedhammer, Dilek Benoit Favre, Hakkani-Tür, 10.1016/j.specom.2010.06.002Speech Communication. 5210Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tür. 2010. Long story short -Global unsu- pervised models for keyphrase based meeting sum- marization. Speech Communication, 52(10):801- 815. The Probabilistic Relevance Framework: BM25 and Beyond. 
Stephen Robertson, Hugo Zaragoza, ; Hanover, M A Publisher, 10.1561/1500000019Found. Trends Inf. Retr. 34Now Publishers IncStephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Be- yond. Found. Trends Inf. Retr., 3(4):333-389. Place: Hanover, MA, USA Publisher: Now Publishers Inc. Earthquake shakes Twitter users: real-time event detection by social sensors. Takeshi Sakaki, Makoto Okazaki, Yutaka Matsuo, 10.1145/1772690.1772777Proceedings of the 19th international conference on World wide web -WWW '10. the 19th international conference on World wide web -WWW '10Raleigh, North Carolina, USAACM851Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web -WWW '10, page 851, Raleigh, North Carolina, USA. ACM. Col-BERTv2: Effective and efficient retrieval via lightweight late interaction. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, Matei Zaharia, 10.18653/v1/2022.naacl-main.272Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United States. ACLKeshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. Col- BERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715-3734, Seattle, United States. ACL. Enhancing crisis-related tweet classification with entity-masked language modeling and multi-task learning. Philipp Seeberger, Korbinian Riedhammer, 10.48550/ARXIV.2211.11468Publisher: arXiv Version Number: 1. 
Philipp Seeberger and Korbinian Riedhammer. 2022. Enhancing crisis-related tweet classification with entity-masked language modeling and multi-task learning. Publisher: arXiv Version Number: 1. Overview of the TREC 2018 Real-Time Summarization Track. Royal Sequiera, Luchen Tan, Jimmy Lin, Proceedings of the Twenty-Seventh Text REtrieval Conference, TREC 2018. the Twenty-Seventh Text REtrieval Conference, TREC 2018Gaithersburg, Maryland, USANIST Special Publication. National Institute of Standards and Technology (NISTRoyal Sequiera, Luchen Tan, and Jimmy Lin. 2018. Overview of the TREC 2018 Real-Time Summariza- tion Track. In Proceedings of the Twenty-Seventh Text REtrieval Conference, TREC 2018, Gaithers- burg, Maryland, USA, November 14-16, 2018, vol- ume 500-331 of NIST Special Publication. National Institute of Standards and Technology (NIST). DocEE: A Large-Scale and Finegrained Benchmark for Document-level Event Extraction. Meihan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, Juanzi Li, 10.18653/v1/2022.naacl-main.291Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United States. ACLMeiHan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, and Juanzi Li. 2022. DocEE: A Large-Scale and Fine- grained Benchmark for Document-level Event Ex- traction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 3970-3982, Seattle, United States. ACL. Transformers: State-of-the-art natural language processing. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander Lhoest, Rush, 10.18653/v1/2020.emnlp-demos.6Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnline. ACLThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. ACL. Bag-of-Entities Representation for Ranking. Chenyan Xiong, Jamie Callan, Tie-Yan Liu, 10.1145/2970398.2970423Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval. the 2016 ACM International Conference on the Theory of Information RetrievalNewark Delaware USAACMChenyan Xiong, Jamie Callan, and Tie-Yan Liu. 2016. Bag-of-Entities Representation for Ranking. In Pro- ceedings of the 2016 ACM International Conference on the Theory of Information Retrieval, pages 181- 184, Newark Delaware USA. ACM. Coarse-to-Fine Query Focused Multi-Document Summarization. Yumo Xu, Mirella Lapata, 10.18653/v1/2020.emnlp-main.296Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnline. ACLYumo Xu and Mirella Lapata. 2020. 
Coarse-to-Fine Query Focused Multi-Document Summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 3632-3645, Online. ACL. Generating Query Focused Summaries from Query-Free Resources. Yumo Xu, Mirella Lapata, 10.18653/v1/2021.acl-long.475Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnline. ACLYumo Xu and Mirella Lapata. 2021. Generating Query Focused Summaries from Query-Free Resources. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing, pages 6096-6109, Online. ACL.
Identifying a Criminal's Network of Trust

Pritheega Magalingam ([email protected]), Asha Rao (Member, IEEE) and Stephen Davis ([email protected])
School of Mathematical and Geospatial Sciences, RMIT University, GPO Box 2476, Melbourne, Victoria 3001, Australia

DOI: 10.1109/sitis.2014.64 | arXiv: 1503.04896

Abstract—Tracing criminal ties and mining evidence from a large network to begin a crime case analysis has been difficult for criminal investigators due to large numbers of nodes and their complex relationships. In this paper, trust networks using blind carbon copy (BCC) emails were formed. We show that our new shortest paths network search algorithm, combining shortest paths and network centrality measures, can isolate and identify criminals' connections within a trust network. A group of BCC emails out of 1,887,305 Enron email transactions was isolated for this purpose. The algorithm uses two central nodes, the most influential and the middle man, to extract a shortest paths trust network.
Index Terms—shortest path, ego network, middle man (MM), most influential (MI), trust

Table I shows the list of criminals involved in the Enron money laundering crime [23], [24]. The ID in the table is a computer-generated number assigned to each distinct email address. Even though there are 10 criminals in this list, there are more than 10 email addresses, as some of the criminals have more than one email address. There are distinct links from each of these email addresses to different recipients, hence we do not merge the email addresses of a particular criminal.

III. EMAIL GROUPING AND THRESHOLD LIMIT

Yasin et al. [2] point out that taking only a part of a large amount of data from a network helps to reduce the complexity of identifying criminals.
A subset that occurs within the Enron email dataset is the set of emails that have BCC recipients. A bcc-ed email carries the email addresses of some recipients who are kept concealed from the other recipients. We use this categorisation to narrow the search scope: within the 1,887,305 emails, there are 60649 emails with BCC recipients, giving a sizeable subset.

We start with statistical analysis to further reduce our search space. Within the bcc-ed emails, a large group, 17784 emails, had 33.73% of recipients bcc-ed; that is, approximately 1 out of 3 recipients was bcc-ed. From Figure 1, it is clear that the majority of emails had a small number (1 to 4) of BCC recipients.

Based on these statistical findings, the BCC email group was divided into two groups: emails with more than 5 recipients and emails with at most 5 recipients. There were 34195 emails with more than 5 recipients, with a small number of these emails having large BCC recipient lists. The ratios of bcc-ed recipients in the group with more than 5 recipients were calculated. Figure 2 shows that some abnormal scenarios were detected when the BCC recipients in emails with more than 5 recipients were plotted; for example, 1 recipient out of 948 recipients was bcc-ed. Emails that have large lists of recipients would not typically imply a trust relationship, but rather a quirk of the email system or simply an information security practice. Thus, this subset was not used for analysis.

Fig. 2: The BCC recipients in emails with more than 5 recipients. Note that there are some abnormal scenarios, such as 1 recipient out of 948 being bcc-ed.

Next, the emails that were sent to at most 5 recipients were analysed. There were 26454 emails of this type. On average, out of 5 recipients, between 1 and 2 recipients were bcc-ed. Based on this, our analysis was restricted to those emails where only one or two recipients were bcc-ed. We call these the 1-BCC and 2-BCC email networks respectively.
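The grouping above can be made concrete with a short sketch. This is illustrative Python, not the authors' code (the paper's analysis was done in R), and the email record layout here is an assumption:

```python
# Hypothetical email records: (sender, open recipients, bcc-ed recipients).
emails = [
    ("alice", ["bob", "carol"], ["dave"]),               # 3 recipients, 1 bcc-ed
    ("erin", ["bob"], ["carol", "dave"]),                # 3 recipients, 2 bcc-ed
    ("frank", ["r%d" % i for i in range(10)], ["bob"]),  # >5 recipients: discarded
]

def bcc_group(records, n_bcc):
    """Emails with at most 5 recipients in total, exactly n_bcc of them bcc-ed."""
    return [r for r in records
            if len(r[1]) + len(r[2]) <= 5 and len(r[2]) == n_bcc]

def trust_edges(group):
    """Directed trust links: one edge from the sender to each bcc-ed
    recipient; collecting them in a set simplifies duplicate links away."""
    edges = set()
    for sender, _, bcc in group:
        edges.update((sender, b) for b in bcc)
    return edges

one_bcc_edges = trust_edges(bcc_group(emails, 1))  # 1-BCC network edges
two_bcc_edges = trust_edges(bcc_group(emails, 2))  # 2-BCC network edges
```

On the real dataset, the two groups built this way are the 1-BCC and 2-BCC email networks.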
The 1-BCC network consists of 5290 nodes and 17838 edges, whereas the 2-BCC network consists of 3766 nodes and 13486 edges. These two subnetworks are analysed in Section V. The next section describes the shortest paths network search algorithm.

IV. THE SHORTEST PATHS NETWORK SEARCH ALGORITHM

The R igraph [32] package was used to create a network graph of all emails with either one or two recipients bcc-ed (the 1-BCC and 2-BCC groups). The process of obtaining a new network is displayed in Algorithm 1.

A. Details of terms used and functions

The following abbreviations of terms are used in the algorithm:

C_i is a criminal, and each criminal is stored in an array A_C. For each iteration, we take a criminal C_i from A_C as an ego and refer to it as EC_i. An ego can be any suspicious entity. In our BCC email network analysis, C_i refers to the criminals involved in the Enron money laundering crime as stated in Table I.

N_ECi is the i-th criminal's subnetwork, also called the ego subnetwork. An ego subnetwork is a network which comprises all the vertices reachable from the i-th ego EC_i, all the vertices from which EC_i is reachable, and all the links connecting these two sets of vertices. The set of all vertices reachable from EC_i is called the out-component, and the set of all vertices from which EC_i is reachable is called the in-component. A vertex v_j is reachable from a vertex v_i if v_j was bcc-ed in an email from v_i. If there exists more than one email from v_i to v_j in which v_j was bcc-ed, we simplify the graph by removing the multiple links.

MI_NECi and MM_NECi are the most influential node and the middle man node in the ego subnetwork N_ECi, respectively.

OC refers to the other criminals in N_ECi, not including the ego EC_i.

R denotes the result that is obtained from each step of Algorithm 1.
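To make these definitions concrete, the following sketch expresses the ego subnetwork extraction and the MI/MM search in Python with networkx. This is our illustrative reading, not the authors' R igraph code; in particular, ranking the MI by eigenvector centrality and the MM by betweenness centrality is an assumption drawn from the paper's terminology, and all names are ours:

```python
import networkx as nx

def ego_subnetwork(g, ego):
    """N_ECi: the induced subgraph on the out-component (all vertices
    reachable from the ego), the in-component (all vertices from which
    the ego is reachable), and the ego itself."""
    out_component = nx.descendants(g, ego)
    in_component = nx.ancestors(g, ego)
    return g.subgraph(out_component | in_component | {ego})

def ego_shortest_paths_network(sub, ego):
    """Connect the ego to the most influential node (MI, top eigenvector
    centrality) and the middle man (MM, top betweenness centrality) of
    its ego subnetwork via shortest paths."""
    eig = nx.eigenvector_centrality(sub, max_iter=500)  # MI ranking
    btw = nx.betweenness_centrality(sub)                # MM ranking
    others = [n for n in sub if n != ego]
    mi = max(others, key=eig.get)
    mm = max(others, key=btw.get)
    result = nx.DiGraph()
    for target in {mi, mm}:
        try:
            nx.add_path(result, nx.shortest_path(sub, ego, target))
        except nx.NetworkXNoPath:
            pass  # as in the paper, egos with no such path are dropped
    return result

# Toy BCC trust network with a strongly connected core (so the eigenvector
# power iteration converges); ("p", "q") is unreachable from the ego.
g = nx.DiGraph([("ego", "a"), ("a", "b"), ("b", "c"), ("c", "a"),
                ("a", "c"), ("c", "d"), ("d", "a"),
                ("x", "ego"), ("p", "q")])
sub = ego_subnetwork(g, "ego")
paths_net = ego_shortest_paths_network(sub, "ego")
```

The returned graph contains only the shortest-path connections from the ego to the two central nodes, which is what gets merged across egos in the last step of the algorithm.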
Despite the different subject matter exchanged between a sender and a recipient, a directed unweighted BCC shortest paths network graph is formed in which an edge from one node to another shows a trust relationship (see Figure 3).

Algorithm 1: Criminal shortest paths network search algorithm

A. Store the criminals C_1 to C_n in an array A_C.
B. Form the subnetwork of each ego and follow the steps below until all ego subnetworks have been tested.
for i = 1 to n do
  1. Select a criminal C_i from A_C as ego, EC_i.
  2. Retrieve the ego subnetwork N_ECi.
    (a) Connection from ego to MI in ego subnetwork N_ECi:
      (i) Find MI_NECi.
      (ii) Find the direct path from EC_i to MI_NECi.
           if it exists then retrieve it from the graph and output it, then go to (b) ...R1
           else go to (a)(iii) end if
      (iii) Find the shortest path from EC_i to MI_NECi.
           if it exists then retrieve it from the graph and output it, then go to (b) ...R2
           else go to (b) end if
    (b) Connection from ego to MM in ego subnetwork N_ECi:
      (i) Find MM_NECi.
      (ii) Find the direct path from EC_i to MM_NECi.
           if it exists then retrieve it from the graph and output it ...R3
           else find the shortest path from EC_i to MM_NECi and, if it exists, retrieve and output it ...R4 end if
    (c) Extract the shortest paths from the other criminals OC in N_ECi to MI_NECi and MM_NECi ...R5, R6
    (d) Extract the shortest paths from EC_i to the other criminals OC ...R7
end for
C. Merge R1-R7 into a network.

V. SHORTEST PATHS NETWORKS

In this section, we present the discovery of the 1-BCC and 2-BCC shortest paths networks and the results of these two networks' analyses. As mentioned before, an edge exists from a node A to a node B only if A sent an email on which B was a BCC recipient.

A. Discovery of 1-BCC shortest paths trust network

We start by investigating Andrew Fastow's ego subnetwork in the 1-BCC network, followed by the other criminals' ego subnetworks. Note that in the 1-BCC network, only one of Andrew Fastow's two ego subnetworks exists; that is, [email protected] (686) exists, but [email protected] (687) does not. In these small groups, paths do not exist due to two scenarios: the betweenness centrality value does not exist, or two nodes obtained the same eigenvalue. Similarly, the algorithm drops Lea Fastow (11009) and Rex Shelby (15224, 15225).
Following this, the most influential and the middle man were picked from Lea Fastow (11010) and Kevin Hannon (10068)'s subnetworks. In both cases, we obtained the same MI and MM as in Andrew Fastow's ego subnetwork.

Combining the results obtained from running Algorithm 1, the 1-BCC shortest paths trust network was constructed (see Figure 3). The positions and connections of criminals with other nodes could be used for further investigation.

In the next section, we test the ability of the shortest paths network search algorithm in a scenario where an investigator is at the beginning stages of an investigation, with no information about who the criminals may be. He only suspects that a money laundering crime is occurring.

VI. FIRST SUSPECT TEST

The first suspect test replaces all criminals with a group of people who are potentially under suspicion. Each combined shortest paths network in these tests is analysed separately, and the number of criminals who occur is counted. The purpose of these experiments is to find a suitable sparse subgraph from which to start an investigation.

This money laundering first suspect test starts by investigating the Enron officials who were involved in financial account management. In the financial network, the financial managers act as egos. The union of the financial managers' networks from the BCC network consists of the shortest paths from each financial manager (ego) to the MI, to the MM, and from one financial manager to another. The financial managers are Sherron Watkins ([email protected] (16929)), the Head of Enron Global Finance; Andrew Fastow ([email protected] (686), [email protected] (687)), the Enron Chief Financial Officer; Ben Glisan ([email protected] (1369)), the Enron Corporation Treasurer; Rick Causey ([email protected] (15077)), the Chief Accounting Officer; and Jeff McMahon ([email protected] (8071)), the Chief Financial Officer of Enron after Andrew Fastow.
We notice that this list of financial managers includes two criminals: Andrew Fastow and Ben Glisan. We assume that the investigator would not know this in the beginning stage of a crime investigation. The next part shows the results of our first suspect test on the 1-BCC and 2-BCC financial managers' networks.

A. Result of first suspect test in 1-BCC financial manager network

B. Result of first suspect test in 2-BCC financial manager network

The first suspect test was applied to the 2-BCC financial managers' network with the same financial officers used as suspects. Figure 6 depicts the network formed when the union of the financial managers' subnetworks is extracted using the shortest paths algorithm. We identified the links from Sherron Watkins (16929) and Jeff McMahon (8071) to other nodes, and obtained an unknown email address, [email protected] (1117), and Greg Piper (6667) as active intermediate nodes respectively. Common nodes are identified from the subgraph in Figure 6 when compared to the subgraph in Figure 4. This is discussed in Section VII. Again, no additional criminals were found.

VII. DISCUSSION OF RESULTS

A. Common node comparison

The subgraph in Figure 5, extracted from the 1-BCC email network using the financial managers, contains 9 common nodes when compared to the subgraph in Figure 3 found using criminals. Some common nodes that we would like to highlight here are Sara Shackleton (16383), Andrew Fastow (686), Louise Kitchen (11370) and Vince Kaminski (19075). The subgraphs extracted from the 2-BCC email network show 10 common nodes. In the first suspect test, starting with the financial managers, no additional known criminals appeared. However, after obtaining these subgraphs, the next step an investigator can attempt is to explore each node's history. This is possible because these subgraphs are very sparse and small.
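The union of the suspects' networks and the common-node comparison described here amount to a graph union followed by a node-set intersection; a minimal sketch (the toy graphs and names are ours, not the paper's data):

```python
import networkx as nx

def merged_suspect_network(per_ego_networks):
    """Merge the per-ego shortest-paths networks (one per suspect,
    e.g. per financial manager) into a single trust network."""
    return nx.compose_all(list(per_ego_networks))

def common_nodes(subgraph_a, subgraph_b):
    """Nodes shared by two extracted subgraphs, e.g. the network found
    from the financial managers vs. the one found from the criminals."""
    return set(subgraph_a) & set(subgraph_b)

# Two per-ego results that share the middle man node "mm".
r1 = nx.DiGraph([("watkins", "n1"), ("n1", "mm")])
r2 = nx.DiGraph([("mcmahon", "n2"), ("n2", "mm")])
merged = merged_suspect_network([r1, r2])
```

Because the per-ego results are small path graphs, the merged network stays sparse, which is what makes the manual follow-up on each common node feasible.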
Even though these people are not yet identified as criminals during this first suspect test, further exploration of these nodes and their trust relationships may lead to discovering events related to money laundering.

B. Intermediate node comparison

Jeff Skilling was the president and Chief Operating Officer (COO) of Enron Corporation in December 1996 [33]. To support Enron's fast growth in the 1990s, Skilling hired the best intellectuals for the company. This accounts for the appointment of Michael (Mike) McConnell [30], [34] as Executive Vice President, Technology, Enron Corp., in July 1999. At the end of 1999, Enron Online came into being, and Louise Kitchen, a trader at Enron [30], [34], was the main person involved in its start-up. McConnell later helped to promote Enron Online [34]. The development of Enron Online was hidden from the COO, Jeff Skilling, by Louise Kitchen [35], with the deployment of Enron Online being revealed to him only two weeks before it was launched [35]. Louise Kitchen and Greg Piper are both connected with Enron Online. Although we cannot prove that these identities are suspicious, these interesting and intriguing emails indicate that they should be investigated further. In fact, Louise Kitchen has been identified as an important node in prior research using node neighbourhood search, PageRank [8] and rule-based search on the Enron employees' job field [37]. On the other hand, even though Sheri Sera occurred most frequently between Kevin Hannon and other nodes, history does not indicate that she should be considered suspicious [38].

VIII. CONCLUSION AND FUTURE WORK

Our shortest paths network search algorithm is able to capture a closely connected trust network. The algorithm managed to show connections between nodes in two different scenarios: when an investigator knows all the criminals, and when the investigator is at the starting stage and does not have any information about the criminals.
The analyses conducted in this paper show that when crime is suspected, our algorithm provides a means of identifying possible people to investigate. It is the first step of an investigation: identifying trusted connections between known criminals or financial managers and other active intermediate nodes in a network. Future work includes testing the efficacy of our algorithm on a larger dataset and combining the node ranking with dependency methods to identify the most trusted node of a known source.

Fig. 1: Number of BCC recipients in emails. The majority of emails have at most 5, that is, 1 to 4 BCC recipients.

To start with, we identified the criminals that existed in both the 1-BCC and 2-BCC email groups. In the 1-BCC email network, only Andrew Fastow (686 and 687), Lea Fastow (11010, 11009), Kevin Hannon (10068), Kenneth Rice (9994), Rex Shelby (15224, 15225) and A. Khan (205) exist. Meanwhile, the criminals that exist in the 2-BCC network are Andrew Fastow (686 and 687), Lea Fastow (11010, 11009), Kevin Hannon (10068), Ben Glisan (1369), Kenneth Rice (9994) and Rex Shelby (15224). The rest of the criminals, 3 of the 10, did not appear in either the 1-BCC or the 2-BCC networks.

From the same ego subnetwork, N_ECi, we also extract the shortest paths from the other criminals in the array A_C to the MI and the MM, followed by extracting the shortest paths from EC_i to the other criminals. In the last step of the algorithm, all the shortest paths are combined to give a shortest paths network, showing each criminal's position and the network association pattern. The steps are shown in Algorithm 1.

The connection from Andrew Fastow to the MM (16383, [email protected]) and the MI (19075, [email protected]) was retrieved. This step was repeated for the other criminals in Andrew Fastow's ego subnetwork. Finally, the algorithm found the shortest paths from Andrew Fastow to the other 5 criminals in his ego subnetwork.
Next, two other criminals' ego subnetworks were found: Lea Fastow (11010) and Kevin Hannon (10068). On retrieval of Kenneth Rice (9994) and A. Khan (205)'s ego subnetworks, there were only 2 and 3 nodes respectively.

Fig. 3: The 1-BCC criminals' shortest paths trust network. The MM (16383, Sara Shackleton) and the MI (19075, Vince Kaminski) are indicated. All the criminals who occurred in this trust network, Andrew Fastow (686), Lea Fastow (11010) and Kevin Hannon (10068), are highlighted.

1) Result (Figure 3) of the 1-BCC shortest paths trust network: The network shows the criminals and the nodes that are closely connected to them. Out of the 6 criminals existing in the 1-BCC network, only Andrew Fastow (686), Lea Fastow (11010) and Kevin Hannon (10068) were extracted. The criminals are located within a range of 2 to 5 hops from the MM and the MI. We also notice that Louise Kitchen ([email protected] (11370)), Jeff Skilling ([email protected] (8024)) and Michael (Mike) McConnell ([email protected] (12935)) occur the largest number of times connecting Andrew Fastow (686), Kevin Hannon (10068) and Lea Fastow (11010) respectively to other nodes. In this shortest paths trust network, all three of these nodes are adjacent to the criminals.

B. Discovery of 2-BCC shortest paths trust network

Next we ran the algorithm on the ego subnetworks of Andrew Fastow and the other criminals in the 2-BCC email group. Both of Andrew Fastow's (686 and 687) subnetworks were extracted. Here too, the two important central nodes in both of Andrew Fastow's (686 and 687) subnetworks were the MM ([email protected], 16383) and the MI ([email protected], 19075). In both of Andrew Fastow's (686 and 687) ego subnetworks, only Kevin Hannon (10068) was connected to the most influential node and the middleman. The algorithm was also applied to each of the other 5 criminal subnetworks that exist in the 2-BCC email group. The result of the algorithm is the 2-BCC shortest paths trust network (see Figure 4).

Fig. 4: The 2-BCC criminals' shortest paths trust network. The MM (16383, Sara Shackleton) and the MI (19075, Vince Kaminski) are indicated. All the criminals who occurred in this trust network, Andrew Fastow (686, 687), Lea Fastow (11010), Kevin Hannon (10068), Ben Glisan (1369), Kenneth Rice (9994) and Rex Shelby (15224), are highlighted.

1) Result (Figure 4) of the 2-BCC shortest paths trust network: All 6 criminals were found within the 2-BCC shortest paths network: Andrew Fastow (686, 687), Lea Fastow (11010), Kevin Hannon (10068), Ben Glisan (1369), Kenneth Rice (9994) and Rex Shelby (15224). Out of these, Andrew Fastow (686, 687), Ben Glisan (1369), Rex Shelby (15224) and Kenneth Rice (9994) did not have an out-component but acted as end nodes. The criminals who had out-components were Kevin Hannon and Lea Fastow. In this trust network, there exists a separate single connection from Lea Fastow (11010) to Andrew Fastow (687). The only criminal that has a path to the MI and the MM is Kevin Hannon, in the range of 2 to 4 hops away. The node that connects Kevin Hannon to other nodes the most is Sherri Sera ([email protected] (16926)), followed by Greg Piper (6667).

Figure 5 shows the union of the financial managers' shortest paths. In this shortest paths trust network two financial managers occurred: Andrew Fastow ([email protected] (686)) and Jeff McMahon ([email protected] (8071)). Note that Andrew Fastow was the only criminal identified here, with no other criminals appearing. Louise Kitchen ([email protected] (11370)) was in between Andrew Fastow and other nodes the largest number of times. Meanwhile, Bruce Garner (2058) connects Jeff McMahon the most to other nodes. We also compared the nodes that occurred in the subgraphs in Figure 5 and in Figure 3. The results are discussed in Section VII.

Fig. 5: The 1-BCC financial managers' shortest paths trust network. The MM (16383, Sara Shackleton) and the MI (19075, Vince Kaminski) are indicated.
Besides Andrew Fastow (686), who was also a financial manager, no additional criminals occurred in this trust network.

Fig. 6: The 2-BCC financial managers' shortest paths trust network. The MM (16383, Sara Shackleton) and the MI (19075, Vince Kaminski) are indicated. Besides Andrew Fastow (686) and Ben Glisan (1369), who were also financial managers in Enron, no additional criminals occurred in this trust network.

A few of them are Ben Glisan (1369), Louise Kitchen (11370), Vince Kaminski (19075), Greg Piper (6667), Andrew Fastow (686) and Sara Shackleton (16383). We obtained at least three criminals in the common nodes' group: 1 criminal (Andrew Fastow (686)) from the financial managers' 1-BCC shortest paths network and 2 criminals (Andrew Fastow (686) and Ben Glisan (1369)) from the financial managers' 2-BCC shortest paths network. The active intermediate nodes found in Sections V-A1 and V-B1 are Louise Kitchen (11370), Michael (Mike) McConnell (12935), Jeff Skilling (8024), Greg Piper (6667) and Sherri Sera (16926). Two of these 5 persons of interest occurred again when using the financial managers as the list of first suspects: Louise Kitchen (11370) and Greg Piper (6667). Regardless of the email contents and the number of emails being exchanged, we take all the active intermediaries obtained in Sections V-A1, V-B1 and VI as persons of interest. We investigated these nodes to see if they had any relationship to important events that occurred during the period leading up to the Enron collapse. The next person of interest is Greg Piper. Greg Piper (6667) was the Managing Director of Enron NetWorks. He supported the growth of the web-based trading introduced by Louise Kitchen [36]. He was responsible for all of Enron's e-commerce systems development, such as EnronOnline and ClickPaper.com [36].
Algorithm 1: Criminal shortest paths network search algorithm

Notation:
  A_C      := array of criminals
  C_i      := the i-th criminal
  EC_i     := the ego, i.e. the i-th criminal in the list A_C
  N_ECi    := the ego's (i-th criminal's) subnetwork
  MI_NECi  := the most influential node in N_ECi
  MM_NECi  := the middleman node in N_ECi
  OC       := the other criminals in the ego subnetwork N_ECi
  R        := the result

(b) Connection from the ego to the MI and the MM in N_ECi:
    (ii)  Find the shortest path from EC_i to MI_NECi.
          if it exists then retrieve it from the graph and output it (R3), then go to (c)
          else
    (iii) Find the shortest path from EC_i to MM_NECi.
          if it exists then retrieve it from the graph and output it (R4), then go to (c)
          else go to (c)
          end if

(c) Connection from OC to the MI and the MM in the ego subnetwork N_ECi:
    for i < n do
      (i)   Set OC = {EC_j | j != i}.
      (ii)  Find the shortest path from OC to MI_NECi in N_ECi.
            if it exists then retrieve it from the graph and output it (R5), then go to c(ii)
            else go to c(iii)
            end if
      (iii) Find the shortest path from OC to MM_NECi in N_ECi.
            if it exists then retrieve it from the graph and output it (R6), then go to (d)
            else go to (b)(iii)
            end if
    end for

(d) Connection from the ego to OC in the ego subnetwork N_ECi:
    for i < n do
      (i)  Set OC = {EC_j | j != i}.
      (ii) Find the shortest path from EC_i to OC in N_ECi.
           if it exists then retrieve it from the graph and output it (R7)
           end if
    end for

REFERENCES

[1] H. Sarvari, E. Abozinadah, A. Mbaziira, and D. McCoy, "Constructing and analyzing criminal networks," presented at the International Workshop on Cyber Crime (IWCC 2014), an event of the IEEE Computer Society's Security and Privacy Workshops (SPW 2014), The Fairmont Hotel, San Jose, CA, USA, 2014. [Online]. Available: http://www.ieee-security.org/TC/SPW2014/papers/5103a084.PDF
[2] M. Yasin, J. A. Qureshi, F. Kausar, J. Kim, and J. Seo, "A granular approach for user-centric network analysis to identify digital evidence," Peer-to-Peer Networking and Applications, pp. 1-14, 2014.
[3] M. Newman, Networks: An Introduction. New York, NY: Oxford University Press, 2010.
[4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 3rd ed. Cambridge, MA: The MIT Press, 2009.
[5] J. J. Xu and H. Chen, "Fighting organized crimes: using shortest-path algorithms to identify associations in criminal networks," Decision Support Systems, vol. 38, no. 3, pp. 473-487, 2004.
[6] C. Yunkai, M. Quanwen, and L. Zhengding, "Using link analysis technique with a modified shortest-path algorithm to fight money laundering," Wuhan University Journal of Natural Sciences, vol. 11, no. 5, pp. 1352-1356, 2006.
[7] L. C. Freeman, S. P. Borgatti, and D. R. White, "Centrality in valued graphs: A measure of betweenness based on network flow," Social Networks, vol. 13, no. 2, pp. 141-154, 1991.
[8] H. Yang, J. Luo, Y. Liu, M. Yin, and D. Cao, "Discovering important nodes through comprehensive assessment theory on Enron email database," in Biomedical Engineering and Informatics (BMEI), 2010 3rd International Conference on, vol. 7. IEEE, 2010, pp. 3041-3045.
[9] J. Shetty and J. Adibi, "Discovering important nodes through graph entropy: the case of the Enron email database," in Proceedings of the 3rd International Workshop on Link Discovery. ACM, 2005, pp. 74-81.
[10] C. P. Diehl, G. Namata, and L. Getoor, "Relationship identification for social network discovery," in Proceedings of the 22nd AAAI Conference on Artificial Intelligence, vol. 22, no. 1, 2007, pp. 546-552.
[11] J. Diesner and K. M. Carley, "Exploration of communication networks from the Enron email corpus," in SIAM International Conference on Data Mining: Workshop on Link Analysis, Counterterrorism and Security, Newport Beach, CA, 2005.
[12] J. Gao, Q. Zhao, W. Ren, A. Swami, R. Ramanathan, and A. Bar-Noy, "Dynamic shortest path algorithms for hypergraphs," in Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), 2012 10th International Symposium on. IEEE, 2012, pp. 238-245.
[13] J. Tang, M. Musolesi, C. Mascolo, V. Latora, and V. Nicosia, "Analysing information flows and key mediators through temporal centrality metrics," in Proceedings of the 3rd Workshop on Social Network Systems. ACM, 2010, pp. 3:1-3:6.
[14] K. Yelupula and S. Ramaswamy, "Social network analysis for email classification," in Proceedings of the 46th Annual Southeast Regional Conference. ACM, 2008, pp. 469-474.
[15] S. Hershkop, "Behavior-based email analysis with application to spam detection," Ph.D. dissertation, Columbia University, 2006.
[16] S. J. Stolfo, S. Hershkop, C.-W. Hu, W.-J. Li, O. Nimeskern, and K. Wang, "Behavior-based modeling and its application to email analysis," ACM Transactions on Internet Technology (TOIT), vol. 6, no. 2, pp. 187-221, 2006.
[17] A. Shukla and K. D. Singh, "An analysis of fuzzy approach for detecting anomalous behaviour with e-mail traffic," International Journal of Latest Research In Engineering and Computing (IJREC), vol. 1, pp. 33-39, 2013.
[18] M. McDowell and A. Householder, "Benefits of BCC," 2009. [Online]. Available: https://www.us-cert.gov/ncas/tips/ST04-008
[19] P. S. Bogawar and K. K. Bhoyar, "Email mining: a review," IJCSI International Journal of Computer Science Issues, vol. 9, no. 1, 2012.
[20] G. S. Fox and B. E. Schaefer, "Trusted electronic communications," US Patent App. 13/530,713, June 2012.
[21] J. Golbeck, B. Parsia, and J. Hendler, Trust Networks on the Semantic Web. Springer, 2003.
[22] A. Josang and T. Bhuiyan, "Optimal trust network analysis with subjective logic," in Emerging Security Information, Systems and Technologies, 2008. SECURWARE'08. Second International Conference on. IEEE, 2008, pp. 179-184.
[23] W. W. Cohen, "Enron email dataset," 2009. [Online]. Available: http://www.cs.cmu.edu/~enron/
[24] K. F. Brickey, "From Enron to WorldCom and beyond: Life and crime after Sarbanes-Oxley," Washington University Law Quarterly, vol. 81, pp. 357-402, 2003.
[25] P. Bonacich, "Some unique properties of eigenvector centrality," Social Networks, vol. 29, no. 4, pp. 555-564, 2007.
[26] H. Henseler, "Network-based filtering for large email collections in e-discovery," Artificial Intelligence and Law, vol. 18, no. 4, pp. 413-430, 2010.
[27] I. Kayes, X. Qian, J. Skvoretz, and A. Iamnitchi, "How influential are you: detecting influential bloggers in a blogging community," in Social Informatics. Springer, 2012, pp. 29-42.
[28] M. A. Tayebi, L. Bakker, U. Glasser, and V. Dabbaghian, "Locating central actors in co-offending networks," in Advances in Social Networks Analysis and Mining (ASONAM), 2011 International Conference on. IEEE, 2011, pp. 171-179.
[29] A. S. Maiya and T. Y. Berger-Wolf, "Online sampling of high centrality individuals in social networks," in Advances in Knowledge Discovery and Data Mining, 2010, pp. 91-98.
[30] M. S. Salter, Innovation Corrupted: The Origins and Legacy of Enron's Collapse. Harvard University Press, 2008.
[31] U.S. Securities and Exchange Commission, Press Release, "Andrew S. Fastow, former Enron Chief Financial Officer, pleads guilty, settles civil fraud charges and agrees to cooperate with ongoing investigation," 2004. [Online]. Available: http://www.sec.gov/litigation/complaints/comp17762.htm
[32] G. Csardi and T. Nepusz, "The igraph software package for complex network research," InterJournal, Complex Systems, p. 1695, 2006. [Online]. Available: http://igraph.org
[33] J. M. Anderson, "Enron: A select chronology of congressional, corporate, and government activities," Congressional Research Service, Library of Congress, Mar. 2003. [Online]. Available: http://digital.library.unt.edu/ark:/67531/metacrs4006/
[34] T. Nguyen, "Cancer and deadly infection in institutions: Developing use cases for an MBE application to prevent another Enron or Barings," in ICCGI 2013, The Eighth International Multi-Conference on Computing in the Global Information Technology, 2013, pp. 139-144.
[35] R. R. Sims and J. Brinkmann, "Enron ethics (or: culture matters more than codes)," Journal of Business Ethics, vol. 45, no. 3, pp. 243-256, 2003.
[36] Email from Ken Lay and Jeff Skilling to All Enron Worldwide, Government Exhibit 4030, Crim. No. H-04-25 (S-2). [Online].
[37] Y. Zhao, L. Feng, and L. Chen, "Detection of multi-relations based on semantic communities behaviors," in Service Systems and Service Management, 2007 International Conference on. IEEE, 2007, pp. 1-7.
[38] C. Johnson, "Skilling's Last Stand," The Washington Post, 2006. [Online]. Available: http://www.washingtonpost.com/wp-dyn/content/discussion/2006/10/20/DI2006102000645.html
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Multi-Agent Game Abstraction via Graph Attention Neural Network

Yong Liu (National Key Laboratory for Novel Software Technology, Nanjing University), Weixun Wang (Tianjin University), Yujing Hu (NetEase Fuxi AI Lab), Jianye Hao (Tianjin University; Noah's Ark Lab, Huawei), Xingguo Chen (Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications), Yang Gao (National Key Laboratory for Novel Software Technology, Nanjing University)

DOI: 10.1609/aaai.v34i05.6211
arXiv: 1911.10715
PDF: https://ojs.aaai.org/index.php/AAAI/article/download/6211/6067
Abstract

In large-scale multi-agent systems, the large number of agents and complex game relationships cause great difficulty for policy learning. Therefore, simplifying the learning process is an important research issue. In many multi-agent systems, the interactions between agents often happen locally, which means that agents neither need to coordinate with all other agents nor need to coordinate with others all the time. Traditional methods attempt to use pre-defined rules to capture the interaction relationship between agents. However, these methods cannot be directly used in a large-scale environment due to the difficulty of transforming the complex interactions between agents into rules. In this paper, we model the relationship between agents by a complete graph and propose a novel game abstraction mechanism based on a two-stage attention network (G2ANet), which can indicate whether there is an interaction between two agents and how important the interaction is. We integrate this detection mechanism into graph neural network-based multi-agent reinforcement learning for conducting game abstraction and propose two novel learning algorithms, GA-Comm and GA-AC. We conduct experiments in Traffic Junction and Predator-Prey.
The results indicate that the proposed methods can simplify the learning process and meanwhile get better asymptotic performance compared with state-of-the-art algorithms.

Introduction

Multi-agent reinforcement learning (MARL) has shown great success in solving sequential decision-making problems with multiple agents. Recently, with the advance of deep reinforcement learning (DRL) (Mnih et al. 2016; Schulman et al. 2017), the combination of deep learning and multi-agent reinforcement learning has also been widely studied (Sunehag et al. 2018; Rashid et al. 2018). Recent work has focused on multi-agent reinforcement learning in large-scale multi-agent systems (Chen et al. 2018), in which the large number of agents and the complexity of interactions pose a significant challenge to the policy learning process. Therefore, simplifying the learning process is a crucial research area. Earlier work focused on loosely coupled multi-agent systems and adopted techniques such as game abstraction and knowledge transfer to speed up multi-agent reinforcement learning (Guestrin, Lagoudakis, and Parr 2002; Kok and Vlassis 2004; De Hauwere, Vrancx, and Nowé 2010; Melo and Veloso 2011; Hu, Gao, and An 2015; Yu et al. 2015; Liu et al. 2019). However, in a large multi-agent environment, agents are often related to some other agents rather than independent, which makes previously learnt single-agent knowledge of limited use. Recent work focuses on achieving game abstraction through pre-defined rules, e.g., the distance between agents (Jiang, Dun, and Lu 2018). However, it is difficult to define the interaction relationship between agents through pre-defined rules in large-scale multi-agent systems. In this paper, we propose to automatically learn the interaction relationship between agents through end-to-end model design, based on which game abstraction can be achieved. The key to game abstraction is learning the relationship between agents.
Recent work uses a soft-attention mechanism to learn the importance distribution of the other agents for each agent (Iqbal and Sha 2019). However, the softmax at the final output means that the importance weight of each agent still depends on the weights of the other agents. That is to say, these methods cannot truly learn the relationship between agents, nor ignore irrelevant agents to simplify policy learning. As shown in Figure 1, we represent all agents as a complete graph and propose a novel multi-agent game abstraction algorithm based on a two-stage attention network (G2ANet), where hard attention is used to cut the unrelated edges and soft attention is used to learn the importance weight of the edges. In addition, we use a GNN to obtain the contribution from other agents, which includes the information of the other agents needed to achieve coordination, and apply the mechanism in several algorithms. We list the main contributions as follows:

• We propose a novel two-stage attention mechanism, G2ANet, for game abstraction, which can be combined with a graph neural network (GNN).
• By combining G2ANet with a policy network and a Q-value network respectively, we propose a communication-based MARL algorithm, GA-Comm, and an actor-critic (AC) based algorithm, GA-AC.
• Experiments are conducted in Traffic Junction and Predator-Prey. The results show that our methods can simplify the learning process and meanwhile get better asymptotic performance compared with state-of-the-art algorithms.

Background

We review some key concepts in multi-agent reinforcement learning and related work in this section.

Markov Game and Game Abstraction

A Markov game, also known as a stochastic game, is widely adopted as the model of multi-agent reinforcement learning (MARL). It can be treated as the extension of a Markov decision process (MDP) to the multi-agent setting.
Definition 1. An $n$-agent ($n \ge 2$) Markov game is a tuple $\langle N, S, \{A_i\}_{i=1}^{n}, \{R_i\}_{i=1}^{n}, T \rangle$, where $N$ is the set of agents, $S$ is the state space, and $A_i$ is the action space of agent $i$ ($i = 1, \dots, n$). Let $A = A_1 \times A_2 \times \cdots \times A_n$ be the joint action space. $R_i: S \times A \to \mathbb{R}$ is the reward function of agent $i$ and $T: S \times A \times S \to [0, 1]$ is the transition function.

In a Markov game, each agent attempts to maximize its expected sum of discounted rewards, $\mathbb{E}\{\sum_{k=0}^{\infty} \gamma^k r_{i,t+k}\}$, where $r_{i,t+k}$ is the reward received $k$ steps into the future by agent $i$ and $\gamma$ is the discount factor. Denote the policy of agent $i$ by $\pi_i: S \times A_i \to [0, 1]$ and the joint policy of all agents by $\pi = (\pi_1, \dots, \pi_n)$. The state-action value function of agent $i$ under a joint policy $\pi$ is defined as:

$$Q_i^{\pi}(s, a) = \mathbb{E}_{\pi}\left[ \sum_{k=0}^{\infty} \gamma^k r_i^{t+k} \,\middle|\, s_t = s, a_t = a \right], \quad (1)$$

where $a \in A$ represents a joint action and $r_i^{t+k}$ is the reward received by agent $i$ at time step $t+k$. However, since $Q_i^{\pi}$ depends on the actions of all agents, the concept of an optimal policy should be replaced with that of a joint policy.

Game Abstraction

The main idea of game abstraction is to simplify the problem model of multi-agent reinforcement learning (a Markov game) to a smaller game, so as to reduce the complexity of solving (or learning) the game equilibrium policy.

Attention Mechanism

Attention is widely used in many AI fields, including natural language processing (Bahdanau, Cho, and Bengio 2014), computer vision, and so on. Soft and hard attention are the two major types of attention mechanisms.

Soft-Attention

Soft attention calculates an importance distribution over elements. In particular, the soft-attention mechanism is fully differentiable and thus can easily be trained by end-to-end back-propagation. Softmax is a common activation function for it. However, the softmax function usually assigns nonzero probabilities to unrelated elements, which weakens the attention given to the truly important elements.
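The softmax drawback just described can be seen numerically; the attention scores below are made up for illustration, with one clearly relevant agent and three unrelated ones:

```python
import numpy as np

def softmax(x):
    # subtract the max for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Made-up attention scores: agent 0 is relevant, agents 1-3 are not
scores = np.array([5.0, 0.1, 0.0, -0.2])
w = softmax(scores)
print(w)            # every agent receives a strictly positive weight
print(w[1:].sum())  # the "unrelated" agents still soak up attention mass
```

Even with a large score gap, the three unrelated agents keep nonzero weights, so they can never be fully ignored by a soft-attention model alone.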
Hard-Attention

Hard attention selects a subset of the input elements, forcing a model to focus solely on the important elements and entirely discarding the others. However, the hard-attention mechanism selects elements by sampling and is thus non-differentiable. Therefore, it cannot learn the attention weights directly through end-to-end back-propagation.

Deep Multi-Agent Reinforcement Learning

With the development of deep reinforcement learning, recent work in MARL has started moving from tabular methods to deep learning methods. In this paper, we select the communication-based algorithms CommNet (Sukhbaatar, Fergus, and others 2016) and IC3Net (Singh, Jain, and Sukhbaatar 2019) and the actor-critic-based algorithms MADDPG (Lowe et al. 2017) and MAAC (Iqbal and Sha 2019) as baselines.

CommNet: CommNet allows communication between agents over a channel where an agent is provided with the average of the hidden state representations of the other agents as a communication signal.

IC3Net: IC3Net can learn when to communicate based on a gating mechanism. The gating mechanism allows agents to block their communication and can be treated as a simple form of hard attention.

MADDPG: MADDPG adopts the framework of centralized training with decentralized execution, which allows the policies to use extra information at training time. It is a simple extension of actor-critic policy gradient methods where the critic is augmented with extra information about the other agents, while the actor only has access to local information.

MAAC: MAAC learns a centralized critic with a soft-attention mechanism, which is able to dynamically select which agents to attend to at each time step.

Our Method

In this section, we propose a novel game abstraction approach based on a two-stage attention mechanism (G2ANet). Based on this mechanism, we propose two novel MARL algorithms (GA-Comm and GA-AC).
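Our method relies on a trainable hard gate, and hard selection by sampling is non-differentiable, as noted in the Background. One common device for such gates (an illustrative assumption here; this section does not fix the exact training procedure) is a straight-through Gumbel-softmax draw, sketched below with numpy for the forward pass only:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=1.0):
    """Forward pass of a Gumbel-softmax draw: a relaxed (differentiable
    in a deep-learning framework) sample over discrete choices."""
    # Gumbel(0, 1) noise via the inverse-CDF trick
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

logits = np.array([2.0, -1.0])          # made-up scores for (keep edge, drop edge)
soft = gumbel_softmax_sample(logits)    # relaxed sample used for gradients
hard = np.eye(len(logits))[soft.argmax()]  # straight-through: one-hot in forward
print(hard)
```

In a framework such as PyTorch the one-hot forward value is combined with the relaxed sample's gradient, letting a discrete "keep / cut this edge" decision be trained end-to-end.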
G2ANet: Game Abstraction Based on Two-Stage Attention
We model the relationship between agents as a graph, where each node represents a single agent and, by default, all nodes are connected in pairs. We call this graph the Agent-Coordination Graph.

Definition 2 (Agent-Coordination Graph). The relationship between agents is defined as an undirected graph G = (N, E), consisting of the set N of nodes and the set E of edges, which are unordered pairs of elements of N. Each node represents an agent, and each edge represents the relationship between the two adjacent agents.

In large-scale multi-agent systems the number of agents is large, and not all agents need to interact with each other. In this paper, we identify unrelated agents by learning the relationship between agents, and perform game abstraction according to the learned relationship. The simplest way to perform game abstraction is to design artificial rules. Yang et al. (2018) proposed a mean-field based multi-agent learning algorithm, where each agent has its own vision and only needs to interact with the agents within that vision. However, such mean-field MARL algorithms require strong prior knowledge of the environment and may not be suitable for complex environments. In a large-scale MAS, the interaction between agents is more complicated, predefined rules are difficult to obtain, and they cannot be adjusted dynamically as the state changes. Inspired by the attention mechanism (Bahdanau, Cho, and Bengio 2014; Ba, Mnih, and Kavukcuoglu 2014; Mnih et al. 2014; Xu et al. 2015; Vaswani et al. 2017), we propose a two-stage attention game abstraction algorithm called G2ANet, which learns the interaction relationship between agents through hard-attention and soft-attention mechanisms. Recent work has tried to combine MARL with the attention mechanism (Jiang and Lu 2018; Iqbal and Sha 2019).
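The pruning idea behind Definition 2 can be sketched in a few lines. This is our own toy illustration, not code from the paper: the agent-coordination graph is stored as an adjacency matrix, and game abstraction keeps, for each agent, only the neighbours whose learned hard weight is 1 (the mask values below are made up).

```python
# Minimal sketch of game abstraction on the agent-coordination graph.
def subgraph_neighbours(hard_weights, agent):
    """Return the agents that `agent` still interacts with after pruning."""
    return [j for j, w in enumerate(hard_weights[agent]) if w == 1 and j != agent]

# A 4-agent graph (fully connected by default) reduced by a hypothetical
# learned hard-attention mask: 1 = keep the edge, 0 = drop it.
hard = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],  # agent 3 ends up independent
]
```

With this mask, agent 1's sub-graph G_1 contains only agents 0 and 2, and agent 3 makes decisions alone, which is exactly the kind of reduction the method aims for.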
However, the main idea of that work is to use the soft-attention mechanism to learn the importance distribution of all other agents to the current agent through a softmax function:

w_k = exp(f(T, e_k)) / Σ_{i=1}^K exp(f(T, e_i)),   (2)

where e_k is the feature vector of agent k, T is the current agent's feature vector, and w_k is the importance weight for agent k. However, the output of the softmax function is a relative value and cannot truly model the relationship between agents. Moreover, this method cannot directly reduce the number of agents that need to interact, since unrelated agents also receive an importance weight. In addition, the softmax function usually assigns small but nonzero probabilities to trivial agents, which weakens the attention given to the few truly significant agents. In this paper, we propose a novel two-stage attention mechanism (G2ANet) to solve these problems.

We consider a partially observable environment, where at each time step t each agent i receives a local observation o_i^t, which is the property of agent i in the agent-coordination graph G. The local observation o_i^t is encoded into a feature vector h_i^t by an MLP. We then use the feature vector h_i^t to learn the relationship between agents through attention. The hard-attention model outputs a one-hot vector: it tells us whether the edge between nodes i and j exists in the graph G, and hence which agents each agent needs to interact with. In this way, policy learning is decomposed into several smaller problems and a preliminary game abstraction is achieved. In addition, we find that each agent plays a different role for a specific agent; that is, the weight of each edge in the graph G is different. Inspired by Vaswani et al. (2017), we train a soft-attention model to learn the weight of each edge.
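The weakness of Eq. (2) noted above is easy to see numerically. In this sketch (the similarity scores f(T, e_k) are made-up values, not learned ones), an agent with a clearly low score still receives a strictly positive softmax weight:

```python
import math

# Sketch of Eq. (2): a softmax over per-agent similarity scores.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Toy scores: two relevant agents (2.0, 1.0) and one unrelated agent (-5.0).
weights = softmax([2.0, 1.0, -5.0])
```

Even the unrelated agent keeps a nonzero weight, so pure soft attention never fully removes it from the interaction set; this motivates the preceding hard-attention stage.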
In this way, we obtain a sub-graph G_i for agent i, in which agent i is connected only to the agents it needs to interact with, and the weight on each edge describes the importance of the relationship. For the sub-graph G_i, we can use a Graph Neural Network (GNN) to obtain a vector representation of the contribution from the other agents to agent i. Moreover, G2ANet is general: it can be combined with communication-based algorithms (Sukhbaatar, Fergus, and others 2016; Singh, Jain, and Sukhbaatar 2019) as well as actor-critic-based algorithms (Lowe et al. 2017; Iqbal and Sha 2019). We discuss this in the next subsection. The two-stage attention mechanism is shown in Figure 2.

First, we use the hard-attention mechanism to learn the hard weight W_h^{i,j}, which determines whether there is an interaction relationship between agents i and j. We use an LSTM network, where each time step outputs a weight (0 or 1) for the pair (i, j), with j ∈ {1, ..., n} and i ≠ j. For agent i, we merge the embedding vectors of agents i and j into a feature (h_i, h_j) and feed it into the LSTM:

h_{i,j} = f(LSTM(h_i, h_j)),   (3)

where f(·) is a fully connected embedding layer. However, the output of a traditional LSTM depends only on the current and previous inputs and ignores the inputs of later steps. That is, the order of the inputs (agents) matters, and the output weight cannot take advantage of the information of all agents, which is short-sighted and unreasonable. We therefore use a Bi-LSTM model. For example, the relationship weight between agents i and j may also depend on the information of some agent k in the environment, where k ∈ {1, ..., n} and k ∉ {i, j}. In addition, hard attention usually cannot back-propagate gradients because of its sampling process.
We use the Gumbel-Softmax function (Jang, Gu, and Poole 2017) to solve this:

W_h^{i,j} = gum(f(LSTM(h_i, h_j))),   (4)

where gum(·) denotes the Gumbel-Softmax function. Through the hard-attention mechanism we obtain a sub-graph G_i for agent i, in which agent i is connected only to the agents it needs to coordinate with. We then use soft attention to learn the weight of each edge in G_i. As shown in Figure 2, the soft-attention weight W_s^{i,j} compares the embedding e_j with e_i using a query-key system (key-value pairs) and passes the matching value between the two embeddings into a softmax function:

W_s^{i,j} ∝ exp(e_j^T W_k^T W_q e_i W_h^{i,j}),   (5)

where W_k transforms e_j into a key, W_q transforms e_i into a query, and W_h^{i,j} is the hard-attention value. Finally, the soft-attention weight W_s^{i,j} is taken as the final weight of the edge, denoted W^{i,j}.

Learning Algorithms Based on Game Abstraction
Through the two-stage attention model, we obtain a reduced graph in which each agent (node) is connected only to the agents (nodes) it needs to interact with. For example, in Figure 1 we obtain a sub-graph G_i for agent i, whose center node is agent i (node i). GNNs have powerful encoding ability: if each node in the sub-graph G_i carries the corresponding agent's encoding, we can use a GNN to compute a joint encoding for agent i, which captures the contribution of all other agents to the current agent i. With this joint encoding, our method can make better decisions. As mentioned earlier, our two-stage attention-based game abstraction is a general mechanism. In this paper, we combine G2ANet with a policy network and with a Q-value network, respectively, and propose two learning algorithms, GA-Comm and GA-AC.

With a communication channel, each agent can receive all other agents' information. However, in most environments there is no need for each agent to communicate with all other agents.
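Returning to the hard-attention stage: the Gumbel-Softmax relaxation used in Eq. (4) can be sketched as follows. This is our own minimal illustration with made-up logits and a fixed seed; the straight-through discretisation in the last line is a common companion trick, not a detail taken from the text.

```python
import math
import random

# Sketch of the Gumbel-Softmax trick from Eq. (4): add Gumbel(0,1) noise to
# the (keep-edge, drop-edge) logits, then apply a temperature-scaled softmax.
def gumbel_softmax(logits, tau=1.0, rng=random.Random(0)):
    noise = [-math.log(-math.log(rng.random())) for _ in logits]  # Gumbel(0,1)
    y = [(l + g) / tau for l, g in zip(logits, noise)]
    m = max(y)
    exps = [math.exp(v - m) for v in y]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical logits strongly favouring "keep" for one edge.
probs = gumbel_softmax([3.0, -3.0])
hard_weight = 1 if probs[0] > probs[1] else 0  # straight-through discretisation
```

The relaxed `probs` are differentiable with respect to the logits, which is what restores end-to-end training despite the discrete keep/drop decision.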
Frequent communication causes high computational cost and increases the difficulty of policy learning. In this paper, we combine the game abstraction mechanism G2ANet with a policy network and propose a novel communication-based MARL algorithm, GA-Comm. As shown in Figure 3, o_i denotes the observation of agent i, and its policy takes the form

a_i = π(h_i, x_i),   (6)

where π is the action policy of an agent, h_i is the observation feature of agent i, and x_i is the contribution from the other agents to agent i. We use an LSTM layer to extract the feature:

h_i, s_i = LSTM(e(o_i), h_i, s_i),   (7)

where o_i is the observation of agent i at time step t, e(·) is an encoder function parameterized by a fully connected neural network, and h_i and s_i are the hidden and cell states of the LSTM. For the contribution to agent i from the other agents, we first use the two-stage attention mechanism to select which agents agent i needs to communicate with and to obtain their importance:

W_h^{i,j} = M_hard(h_i, h_j),   W_s^{i,j} = M_soft(W_h, h_i, h_j),   (8)

where W_h^{i,j} is the hard-attention value and W_s^{i,j} is the soft-attention value computed from the hidden states h_i and h_j; M_hard is the hard-attention model and M_soft is the soft-attention model. We then obtain the contribution x_i from the other agents by a GNN. We use a simple instantiation: a weighted sum of the other agents' contributions under the two-stage attention mechanism:

x_i = Σ_{j≠i} w_j h_j = Σ_{j≠i} W_h^{i,j} W_s^{i,j} h_j.   (9)

Finally, we obtain the action a_i for agent i. During training, we optimize the policy π with REINFORCE (Williams 1992).

Actor-Critic Network Based on Game Abstraction
Inspired by MAAC (Iqbal and Sha 2019), we propose a second learning algorithm based on G2ANet. To calculate the Q-value Q_i(o_i, a_i) for agent i, the critic network receives the observations o = (o_1, ..., o_N) and actions a = (a_1, ..., a_N) of all agents.
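The aggregation in Eq. (9) (and the analogous critic-side weighted sum in GA-AC) has a simple two-stage form: the hard weight gates an edge on or off, and the soft weight scales whatever survives. The sketch below is our own toy illustration; the hidden vectors and weights are made-up numbers.

```python
# Sketch of Eq. (9): x_i = sum_{j != i} W_h^{i,j} * W_s^{i,j} * h_j.
def contribution(h, w_hard, w_soft, i):
    """Weighted sum of the other agents' hidden features for agent i."""
    dim = len(h[0])
    x = [0.0] * dim
    for j, hj in enumerate(h):
        if j == i:
            continue
        w = w_hard[j] * w_soft[j]  # hard gate (0/1) times soft importance
        x = [xv + w * hv for xv, hv in zip(x, hj)]
    return x

h = [[1.0, 0.0], [0.0, 2.0], [4.0, 4.0]]  # hypothetical hidden states h_1..h_3
w_hard = [0, 1, 0]                        # agent 0 keeps only the edge to agent 1
w_soft = [0.3, 0.7, 0.0]
x0 = contribution(h, w_hard, w_soft, 0)   # only 0.7 * h_2 survives
```

Because the hard gate zeroes out agents 0 and 2 entirely, their soft weights never matter, which is the point of pruning before weighting.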
Q_i(o_i, a_i) is the value function for agent i:

Q_i(o_i, a_i) = f_i(g_i(o_i, a_i), x_i),   (10)

where f_i and g_i are multi-layer perceptrons (MLPs) and x_i is the contribution from the other agents, computed by a GNN. We again use a simple instantiation: a weighted sum of each agent's value under our two-stage attention mechanism:

x_i = Σ_{j≠i} w_j v_j = Σ_{j≠i} w_j h(V g_j(o_j, a_j)),   (11)

where the value v_j is an embedding of agent j, encoded with an embedding function and then transformed by a shared matrix V, and h(·) is an element-wise nonlinearity. The attention weight w_j is computed by the two-stage attention mechanism, which compares the embedding e_j with e_i = g_i(o_i, a_i) and passes the relation value between the two embeddings into a softmax function:

w_j = W_h^{i,j} W_s^{i,j} ∝ exp(h(BiLSTM_j(e_i, e_j)) e_j^T W_k^T W_q e_i),   (12)

where W_q transforms e_i into a query and W_k transforms e_j into a key. In this way, we obtain the attention weight w_j and calculate the Q-value for each agent.

Experiments
In this section, we evaluate the performance of our game abstraction algorithms in two scenarios. The first is Traffic Junction (Singh, Jain, and Sukhbaatar 2019), where we use the policy-based game abstraction algorithm GA-Comm, with CommNet and IC3Net as baselines. The second is Predator-Prey in the Multi-Agent Particle Environment (Lowe et al. 2017), where we use the Q-value-based game abstraction algorithm GA-AC, with MADDPG and MAAC as baselines.

Traffic Junction
The simulated traffic junction environment from Singh, Jain, and Sukhbaatar (2019) consists of cars moving along pre-assigned, potentially intersecting routes on one or more road junctions. An episode counts as a success if no collision occurs at any time step, and the success rate is computed from the number of time steps and collisions (failures) in each episode. The total number of cars is fixed at N_max, and new cars are added to the environment with probability p_arrive at every time step.
The task has three difficulty levels, which vary in the number of possible routes, entry points, and junctions. Following the setting in IC3Net (Singh, Jain, and Sukhbaatar 2019), the numbers of agents in the easy, medium, and hard environments are 5, 10, and 20, respectively. We make the task harder by always setting vision to zero in all three difficulty levels, which means that each agent's local observation contains only its own position; each agent must obtain the other agents' information through the communication mechanism in order to coordinate. The action space of each car is {gas, brake}, and the reward consists of a linear time penalty −0.01τ, where τ is the number of time steps since the car became active, and a collision penalty r_collision = −10.

Figure 7 shows the success rate per episode attained by the various methods on Traffic Junction, where GA-Comm is our communication model based on G2ANet and IC3Net is a communication method based on one-stage hard attention. Table 1 reports the success rates on the three levels (easy, medium, and hard), averaged over 10 runs; the variance over the 10 repetitions can be read from the shaded areas in Figure 7. Our game abstraction approach is competitive with the other methods. As in IC3Net (Singh, Jain, and Sukhbaatar 2019), we use curriculum learning to train the model, gradually increasing the number of agents in the environment, which further simplifies learning. As shown in Figure 7, GA-Comm outperforms all baseline methods in all modes, achieving not only a higher success rate but also more stable learning. Moreover, as the environment becomes more difficult (more junctions and more agents), the advantage becomes more pronounced.
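The per-car reward described above (linear time penalty plus collision penalty) can be written down directly. This is a minimal sketch using the constants quoted in the text; the example values of τ are arbitrary.

```python
# Sketch of the Traffic Junction reward: -0.01 * tau, plus -10 on collision.
def tj_reward(tau, collided):
    """tau: time steps since the car became active; collided: bool."""
    return -0.01 * tau + (-10.0 if collided else 0.0)

r_safe = tj_reward(tau=10, collided=False)   # small time penalty only
r_crash = tj_reward(tau=25, collided=True)   # time penalty plus -10
```

The time penalty pushes cars to clear the junction quickly, while the much larger collision penalty dominates whenever two cars occupy the same cell.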
The success rate of our method is about 6%, 7%, and 11% higher than IC3Net on the three levels (easy, medium, and hard, respectively), which confirms that our method becomes increasingly effective as the difficulty of the environment grows. This further illustrates the applicability of our game abstraction mechanism in large-scale multi-agent systems.

At different time steps within an episode, the relationship between agents is constantly changing; our method learns adaptive, dynamic attention values. To analyze the influence of the game abstraction mechanism on the learned behavior, the game relationship between agents is shown in Figure 6(a), which depicts the attention values at one particular time step. Each agent has its own color (e.g., green, blue, yellow, red, and purple), and agents of the same color form a group. We observe that each agent selects its partners and forms a group (purple denotes an independent agent) while ignoring the unrelated agents. For example, the agents are mainly divided into four groups, each gathered near a junction. For agent a, the green agents are its teammates, concentrated at one junction, and it can ignore the other agents when making a decision. In addition, within a group the importance of each agent differs. Figure 6(b-c) shows the final attention value distributions for agent a (left) and agent k (right). Agents a, c, d, and e are in the same group, and agents c and d are more important to agent a than agent e. Similarly, agents l and m are more important to agent k than agents n and o. We conclude that game abstraction first ignores the unrelated agents and then learns an importance distribution over the much smaller remaining set. In this way, we avoid learning an importance distribution over all agents directly in a large-scale MAS, and the resulting values are more accurate.
Multi-Agent Particle Environment
The second scenario is the Multi-Agent Particle Environment. As shown in Figure 8(a), we choose predator-prey as the test environment, in which the slower adversary agents (red) must capture the faster good agents (green), who try to escape. We fix the policy (DQN) of the good agents. As in the MADDPG setting, adversary agents receive a reward of +10 when they capture a good agent. We train the model with N_a = 5 and N_g = 2 for 1500 episodes, where N_a is the number of adversaries and N_g is the number of good agents. To capture all the good agents, the adversary agents must form multiple groups.

Figure 8 shows the learning curves of each agent's average reward, where MADDPG is the algorithm proposed by Lowe et al. (2017) and MAAC is the soft-attention-based algorithm of Iqbal and Sha (2019). GA-AC outperforms all baselines in mean reward. We observe that our method learns more slowly in the early stage than the soft-attention method MAAC, which we attribute to the more complex architecture of our two-stage attention network; the better final performance verifies the effectiveness of our game abstraction mechanism. As shown in Figure 8, the five adversary agents divide into two groups to chase the two good agents. Each agent only needs to interact with the agents in its own group, which effectively avoids interference from the unrelated agents. This result also shows that our game abstraction based algorithm GA-AC has learned a reasonable grouping. Figure 9 shows the attention value distributions for agent 1 (Figure 9(a)) and agent 4 (Figure 9(b)). Agents 1, 2, and 3 are in the same group, and agents 2 and 3 are more important to agent 1 than agents 4 and 5.
Similarly, agent 5 is more important to agent 4 than agents 1, 2, and 3. We conclude that the game abstraction method proposed in this paper models the game relationship between agents well, avoids interference from unrelated agents, and accelerates policy learning.

Conclusions
In this paper, we focus on simplifying policy learning in large-scale multi-agent systems. We learn the relationship between agents and achieve game abstraction through a novel attention mechanism. Because the relationship between agents changes constantly over the time steps of an episode, we learn adaptive, dynamic attention values. Our major contributions are the novel two-stage attention mechanism G2ANet and the two game abstraction based learning algorithms GA-Comm and GA-AC. Experimental results on Traffic Junction and Predator-Prey show that, with the game abstraction mechanism, GA-Comm and GA-AC outperform state-of-the-art algorithms.

Figure 1: Game Abstraction based on the two-stage attention mechanism and a Graph Neural Network (GNN).
Figure 2: The two-stage attention neural network.
(1) Policy network in the communication model (GA-Comm): each agent considers the communication vectors of all other agents when making decisions. (2) Critic network in the actor-critic model (GA-AC): the critic network of each agent considers the state and action information of all other agents when calculating its Q-value.

Policy Network Based on Game Abstraction
Much related work focuses on learning multi-agent communication (Sukhbaatar, Fergus, and others 2016; Singh, Jain, and Sukhbaatar 2019), most of which achieves communication through an aggregation function that collects all other agents' communication vectors into one vector (e.g., by averaging or taking a maximum) and passes it to each agent.

Figure 3: Communication model based on Game Abstraction.
Figure 4: Actor-Critic model based on Game Abstraction.
Figure 5: The Traffic Junction environment. Agents have zero vision and can only observe their own location. The cars have to cross the whole road while minimizing collisions.
Figure 6: Agents with the same color represent a group; each agent only needs to interact with the agents in its group.
Figure 7: Experimental results in Traffic Junction. (a) the easy version, (b) the medium version, (c) the hard version. Shaded regions are one standard deviation over 10 runs.
Figure 8: Experimental results in Predator-Prey.
Figure 9: Attention value distributions. (a) the attention distribution for agent 1, (b) the attention distribution for agent 4.

Table 1: Success rate in Traffic Junction.
Algorithm   Easy    Medium   Hard
CommNet     93.5%   78.8%    6.5%
IC3Net      93.2%   90.8%    70.9%
GA-Comm     99.7%   97.6%    82.3%

Acknowledgments

References
Ba, J.; Mnih, V.; and Kavukcuoglu, K. 2014. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755.
Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Chen, Y.; Zhou, M.; Wen, Y.; Yang, Y.; Su, Y.; Zhang, W.; Zhang, D.; Wang, J.; and Liu, H. 2018. Factorized Q-learning for large-scale multi-agent systems. arXiv preprint arXiv:1809.03738.
De Hauwere, Y.-M.; Vrancx, P.; and Nowé, A. 2010. Learning multi-agent state space representations. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, 715-722.
Foerster, J. N.; Farquhar, G.; Afouras, T.; Nardelli, N.; and Whiteson, S. 2018. Counterfactual multi-agent policy gradients. In Thirty-Second AAAI Conference on Artificial Intelligence.
Guestrin, C.; Lagoudakis, M. G.; and Parr, R. 2002. Coordinated reinforcement learning. In Proceedings of the 19th International Conference on Machine Learning, 227-234.
Hu, Y.; Gao, Y.; and An, B. 2015. Learning in multi-agent systems with sparse interactions by knowledge transfer and game abstraction. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 753-761.
Iqbal, S., and Sha, F. 2019. Actor-attention-critic for multi-agent reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, 2961-2970.
Jang, E.; Gu, S.; and Poole, B. 2017. Categorical reparameterization with Gumbel-Softmax. In 5th International Conference on Learning Representations.
Jiang, J., and Lu, Z. 2018. Learning attentional communication for multi-agent cooperation. In Advances in Neural Information Processing Systems, 7254-7264.
Jiang, J.; Dun, C.; and Lu, Z. 2018. Graph convolutional reinforcement learning for multi-agent cooperation. arXiv preprint arXiv:1810.09202.
Kok, J. R., and Vlassis, N. A. 2004. Sparse cooperative Q-learning. In Proceedings of the 21st International Conference on Machine Learning, 61-68.
Liu, Y.; Hu, Y.; Gao, Y.; Chen, Y.; and Fan, C. 2019. Value function transfer for deep multi-agent reinforcement learning based on n-step returns. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 457-463.
Lowe, R.; Wu, Y.; Tamar, A.; Harb, J.; Abbeel, O. P.; and Mordatch, I. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems, 6379-6390.
Melo, F. S., and Veloso, M. M. 2011. Decentralized MDPs with sparse interactions. Artificial Intelligence 175(11):1757-1789.
Mnih, V.; Heess, N.; Graves, A.; et al. 2014. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, 2204-2212.
Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 1928-1937.
Rashid, T.; Samvelyan, M.; de Witt, C. S.; Farquhar, G.; Foerster, J. N.; and Whiteson, S. 2018. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, 4292-4301.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Singh, A.; Jain, T.; and Sukhbaatar, S. 2019. Learning when to communicate at scale in multiagent cooperative and competitive tasks. In 7th International Conference on Learning Representations.
Sukhbaatar, S.; Fergus, R.; et al. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems, 2244-2252.
Sunehag, P.; Lever, G.; Gruslys, A.; Czarnecki, W. M.; Zambaldi, V.; Jaderberg, M.; Lanctot, M.; Sonnerat, N.; Leibo, J. Z.; Tuyls, K.; et al. 2018. Value-decomposition networks for cooperative multi-agent learning based on team reward. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 2085-2087.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008.
Wang, X.; Girshick, R.; Gupta, A.; and He, K. 2018. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7794-7803.
Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4):229-256.
Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhutdinov, R.; Zemel, R.; and Bengio, Y. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, 2048-2057.
Yang, Y.; Luo, R.; Li, M.; Zhou, M.; Zhang, W.; and Wang, J. 2018. Mean field multi-agent reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, 5567-5576.
Yu, C.; Zhang, M.; Ren, F.; and Tan, G. 2015. Multiagent learning of coordination in loosely coupled multiagent systems. IEEE Transactions on Cybernetics 45(12):2853-2867.
[]
[ "DEEP ENSEMBLES: A LOSS LANDSCAPE PERSPECTIVE", "DEEP ENSEMBLES: A LOSS LANDSCAPE PERSPECTIVE" ]
[ "Stanislav Fort [email protected] \nGoogle Research\n\n", "Huiyi Hu \nGoogle Research\n\n", "Balaji Lakshminarayanan [email protected] \nGoogle Research\n\n", "Deepmind \nGoogle Research\n\n" ]
[ "Google Research\n", "Google Research\n", "Google Research\n", "Google Research\n" ]
[]
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversityaccuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.
null
[ "https://arxiv.org/pdf/1912.02757v1.pdf" ]
208,637,294
1912.02757
6ddf83bee5ca7a0542964e389a98adc1ed4a6838
INTRODUCTION Consider a typical classification problem, where x_n ∈ R^D denotes the D-dimensional features and y_n ∈ [1, . . . , K] denotes the class label. Assume we have a parametric model p(y|x, θ) for the conditional distribution, where θ denotes the weights and biases of a neural network, and p(θ) is a prior distribution over parameters. The Bayesian posterior over parameters is given by p(θ | {x_n, y_n}_{n=1}^N) ∝ p(θ) ∏_{n=1}^N p(y_n | x_n, θ). Computing the exact posterior distribution over θ is computationally expensive (if not impossible) when p(y_n | x_n, θ) is a deep neural network. A variety of approximations have been developed for Bayesian neural networks, including Laplace approximation (MacKay, 1992), Markov chain Monte Carlo methods (Neal, 1996; Welling & Teh, 2011; Springenberg et al., 2016), variational Bayesian methods (Graves, 2011; Blundell et al., 2015; Louizos & Welling, 2017; Wen et al., 2018) and Monte-Carlo dropout (Gal & Ghahramani, 2016; Srivastava et al., 2014). While computing the posterior is challenging, it is usually easy to perform maximum-a-posteriori (MAP) estimation, which corresponds to a mode of the posterior. The MAP solution can be written as the minimizer of the following loss (negative log likelihood plus negative log prior): θ_MAP = argmin_θ L(θ, {x_n, y_n}_{n=1}^N) = argmin_θ [ − log p(θ) − Σ_{n=1}^N log p(y_n | x_n, θ) ]. The MAP solution is computationally efficient, but only gives a point estimate and not a distribution over parameters. Deep ensembles, proposed by Lakshminarayanan et al. (2017), train an ensemble of neural networks by initializing at M different values and repeating the minimization multiple times, which could lead to M different solutions if the loss is non-convex. (Lakshminarayanan et al. (2017) found adversarial training provides additional benefits in some of their experiments, but we will ignore adversarial training and focus only on ensembles with random initialization in this paper.)
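As a concrete illustration of the two ingredients above, the MAP objective and the ensemble-of-random-initializations recipe, here is a minimal NumPy sketch (ours, not the paper's code; the toy data and hyperparameters are invented, and because this linear model has a convex loss all members converge to nearly the same solution, unlike a deep network, where the M runs land in different modes):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_map(X, y, num_classes, seed, l2=1e-2, lr=0.5, steps=300):
    """Minimize the MAP loss -sum_n log p(y_n|x_n, W) - log p(W):
    softmax cross-entropy plus L2 (a Gaussian prior on the weights)."""
    r = np.random.default_rng(seed)
    W = 0.1 * r.standard_normal((X.shape[1], num_classes))  # random init
    Y = np.eye(num_classes)[y]
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y) / len(X) + l2 * W)  # NLL grad + prior grad
    return W

# Hypothetical 3-class toy problem (not the paper's CIFAR-10 setup).
X = rng.standard_normal((300, 2))
X = np.hstack([X, np.ones((300, 1))])  # bias column
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 0] > 1).astype(int)

# Deep ensemble: repeat the minimization from M random initializations
# and average the predicted class probabilities over the members.
members = [train_map(X, y, 3, seed=m) for m in range(5)]
ens_probs = np.mean([softmax(X @ W) for W in members], axis=0)
acc = float((ens_probs.argmax(axis=1) == y).mean())
```

The only difference between one MAP run and the ensemble is the outer loop over seeds and the averaging of predicted probabilities.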
Given finite training data, many parameter values could equally well explain the observations, and capturing these diverse solutions is crucial for quantifying epistemic uncertainty (Kendall & Gal, 2017). Bayesian neural networks learn a distribution over weights, and a good posterior approximation should be able to learn multi-modal posterior distributions in theory. Deep ensembles were inspired by the bootstrap (Breiman, 1996), which has nice theoretical properties. However, it has been empirically observed by Lakshminarayanan et al. (2017); Lee et al. (2015) that training individual networks with just random initialization is sufficient in practice and using the bootstrap even hurts performance in some cases (e.g. for small ensemble sizes). Furthermore, Ovadia et al. (2019) and Gustafsson et al. (2019) independently benchmarked existing methods for uncertainty quantification on a variety of datasets and architectures, and observed that ensembles tend to outperform approximate Bayesian neural networks in terms of both accuracy and uncertainty, particularly under dataset shift. These empirical observations raise an important question: Why do ensembles trained with just random initialization work so well in practice? One possible hypothesis is that ensembles tend to sample from different modes 1 in function space, whereas variational Bayesian methods (which minimize D_KL(q(θ) || p(θ | {x_n, y_n}_{n=1}^N))) might fail to explore multiple modes even though they are effective at capturing uncertainty within a single mode. See Figure 1 for a cartoon illustration. Note that while the MAP solution is a local minimum for the training loss by definition, it may not necessarily be a local minimum for the validation loss. Recent work on understanding loss landscapes (Fort & Jastrzebski, 2019; Draxler et al., 2018; Garipov et al., 2018) allows us to investigate this hypothesis.
Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019). Our findings show that: • The functions sampled along a single training trajectory or subspace thereof (e.g. diagonal Gaussian, low-rank Gaussian and Dropout subspaces) tend to be very similar in predictions (while potentially far away in the weight space), whereas functions sampled from different randomly initialized trajectories tend to be very diverse. • Solution modes are connected in the loss landscape but they are distinct in the space of predictions. Low-loss tunnels create functions with near-identical low values of loss along the path, however these functions tend to be very different in function space, changing significantly in the middle of the tunnel. BACKGROUND The loss landscape of neural networks (also called the objective landscape) - the space of weights and biases that the network navigates - is typically a very high dimensional function and therefore could potentially be very complicated. However, many empirical results show interesting properties of the loss surface. Goodfellow & Vinyals (2014) observed that the loss along a linear path from an initialization to the corresponding optimum is monotonically decreasing, encountering no significant obstacles along the way. Li et al. (2018) demonstrated that constraining optimization to a random, low-dimensional hyperplane in the weight space leads to results comparable to full-space optimization, provided that the dimension exceeds a modest threshold. This was geometrically understood and extended in (Fort & Scherlis, 2019). Garipov et al. (2018) and Draxler et al. (2018) demonstrate that while a linear path between two independent optima hits a high loss area in the middle, there in fact exist continuous, low-loss paths connecting any pair of optima.
These observations are unified into a single phenomenological model in (Fort & Jastrzebski, 2019). While independent, low-loss optima in the loss landscape are connected, Fort & Jastrzebski (2019) provide an early indication that in fact they represent very different functions in terms of their predictions. Therefore the connectivity cannot be due to trivial symmetries of the network which would keep the input-output mapping intact. VISUALIZING FUNCTION SIMILARITY ACROSS INITIALIZATIONS We train convolutional neural networks on the CIFAR-10 (Krizhevsky, 2009) dataset: • SmallCNN: channels [16,32,32] for 10 epochs which achieves 64% test accuracy. • MediumCNN: channels [64,128,256,256] for 20 epochs which achieves 70% test accuracy. • ResNet20v1: for 200 epochs which achieves 90% test accuracy. We use the Adam optimizer (Kingma & Ba, 2015) for training and to make sure the effects we observe are general, we validate that our results hold for vanilla stochastic gradient descent (SGD) as well, which we do not show in this paper. We use batch size 128 and dropout 0.03 for training SmallCNN and MediumCNN. To generate weight space and prediction space similarity results, we use a constant learning rate of 1.6 × 10 −3 , unless specified otherwise. We do not use any data augmentation with those two architectures. For ResNet20v1, we use the data augmentation and learning rate schedule used in Keras examples 2 . The overall trends are consistent across all architectures, datasets, and other hyperparameter and non-linearity choices we explored. SIMILARITY OF FUNCTIONS WITHIN AND ACROSS TRAJECTORIES First, we compute the similarity between different checkpoints along a single trajectory. We plot the cosine similarity in weight space in Figure 2(a) and the disagreement in function space, defined as the fraction of points the checkpoints disagree on, in Figure 2(b). 
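The two similarity measures used throughout (weight-space cosine similarity and function-space disagreement) are simple to state in code. A small NumPy sketch of both (ours; the 1000-dimensional weight vectors and 500 label predictions below are synthetic stand-ins for real checkpoints):

```python
import numpy as np

def cosine_similarity(w1, w2):
    """Weight-space alignment between two flattened parameter vectors."""
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

def disagreement(labels1, labels2):
    """Function-space distance: the fraction of examples on which the
    predicted class labels of two models differ."""
    return float(np.mean(labels1 != labels2))

rng = np.random.default_rng(1)
w_a = rng.standard_normal(1000)                # a checkpoint
w_b = w_a + 0.1 * rng.standard_normal(1000)    # a later checkpoint nearby
w_c = rng.standard_normal(1000)                # an independent initialization

sim_near = cosine_similarity(w_a, w_b)   # high: same trajectory
sim_far = cosine_similarity(w_a, w_c)    # near 0: nearly orthogonal

preds_a = rng.integers(0, 10, size=500)
preds_b = preds_a.copy()
flip = rng.random(500) < 0.05            # perturb ~5% of the predictions
preds_b[flip] = rng.integers(0, 10, size=int(flip.sum()))
d = disagreement(preds_a, preds_b)
```

Independent random vectors in high dimensions are nearly orthogonal, which is why the cosine similarity between solutions from different initializations is close to zero.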
We observe that the checkpoints along a trajectory are largely similar both in the weight space and the function space. Next, we evaluate how diverse the final solutions from different random initializations are. The functions from different initializations are different, as demonstrated by the similarity plots in Figure 3. Comparing this with Figures 2(a) and 2(b), we see that functions within a single trajectory exhibit higher similarity and functions across different trajectories exhibit much lower similarity. Next, we take the predictions from different checkpoints along the individual training trajectories from multiple initializations and compute a t-SNE plot (Maaten & Hinton, 2008) to visualize their similarity in function space. More precisely, we take the softmax output for a set of points, flatten the vector and use it as the input to the t-SNE plot. Figure 2(c) shows that the functions explored by different trajectories (denoted by circles with different colors) are far away, while functions explored within a single trajectory (circles with the same color) tend to be much more similar. SIMILARITY OF FUNCTIONS ACROSS SUBSPACES FROM EACH TRAJECTORY In addition to the checkpoints along a trajectory, we also construct subspaces based on each individual trajectory. Scalable Bayesian methods typically compute statistics based on the weights along a trajectory, hence visualizing the diversity of functions within each subspace helps to understand the difference between Bayesian neural networks and ensembles. We use a representative set of four subspace sampling methods: a random subspace, Monte Carlo dropout, a diagonal Gaussian approximation, and a low-rank covariance matrix Gaussian approximation. In the descriptions of the methods, let w_0 be the current weight-space position (the weights and biases of our trained neural net) around which we will construct the subspace.
• Random subspace sampling: We start at an optimized solution w_0 and choose a random direction v̂ in the weight space. We step in that direction by choosing different values of t and looking at predictions at configurations w_0 + t v̂. We do this for many random directions v̂. • Monte Carlo dropout subspace: We start at an optimized solution w_0 and apply dropout with a randomly chosen p_keep to it. We do this many times, each time choosing a random p_keep, and look at predictions at dropout_{p_keep}(w_0). • Diagonal Gaussian subspace: We start at an optimized solution w_0 and look at the most recent iterations of training preceding it. For each trainable parameter w_i, we calculate its mean mean_i and standard deviation std_i. To sample solutions from the subspace, we draw each parameter independently as w_i ∼ N(mean_i, std_i). We repeat this many times and obtain predictions in each. This corresponds to sampling from a normal distribution with a diagonal covariance matrix. • Low-rank Gaussian subspace: We start at an optimized solution w_0 and look at the most recent iterations of training preceding it. For each trainable parameter w_i, we calculate its mean mean_i. For a rank-k approximation, we calculate the top k principal components of the weight vectors in the most recent iterations of training, {p_i ∈ R^params, i = 1, . . . , k}. We sample from a k-dimensional normal distribution and obtain the weight configurations as w = mean + Σ_{i=1}^{k} z_i p_i, where z_i ∼ N(0, 1). Figure 4 shows that functions sampled from a subspace (denoted by colored squares) corresponding to a particular initialization are much more similar to each other. While some subspaces are more diverse, they still do not overlap with functions from another randomly initialized trajectory. Diversity versus Accuracy plots To illustrate the difference in another fashion, we sample functions from a single subspace and plot diversity (as measured by disagreement between predictions) versus accuracy in Figure 5.
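The two Gaussian subspace constructions above can be sketched compactly. The following is our own illustrative NumPy implementation (the 50×20 "iterate" matrix is a synthetic stand-in for the recent weight checkpoints of a real network, and the scaling of the principal directions is one reasonable choice, not the paper's exact recipe):

```python
import numpy as np

def diag_gaussian_samples(iterates, n_samples, rng):
    """Diagonal Gaussian subspace: fit an independent Gaussian to each
    parameter over the recent iterates, then sample."""
    mean = iterates.mean(axis=0)
    std = iterates.std(axis=0)
    return mean + std * rng.standard_normal((n_samples, mean.size))

def low_rank_gaussian_samples(iterates, k, n_samples, rng):
    """Low-rank Gaussian subspace: w = mean + sum_i z_i p_i with
    z_i ~ N(0, 1) and p_i the top-k principal directions of the iterates."""
    mean = iterates.mean(axis=0)
    _, s, vt = np.linalg.svd(iterates - mean, full_matrices=False)
    p = (s[:k, None] * vt[:k]) / np.sqrt(len(iterates))  # scaled components
    z = rng.standard_normal((n_samples, k))
    return mean + z @ p

rng = np.random.default_rng(0)
# Synthetic stand-in for the last 50 weight iterates of a 20-parameter model.
iterates = np.cumsum(0.01 * rng.standard_normal((50, 20)), axis=0)
diag = diag_gaussian_samples(iterates, n_samples=8, rng=rng)
low_rank = low_rank_gaussian_samples(iterates, k=3, n_samples=8, rng=rng)
```

Both samplers return one weight configuration per row; predictions are then evaluated at each sampled configuration.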
Figure 4: Results using SimpleCNN on CIFAR-10: t-SNE plots of validation set predictions for each trajectory along with four different subspace generation methods (shown by squares), in addition to 3 independently initialized and trained runs (different colors). As visible in the plot, the subspace-sampled functions stay in the prediction-space neighborhood of the run around which they were constructed, demonstrating that truly different functions are not sampled. Comparing these subspace points (colored dots) to the baseline optimum (green star) and the optima from different random initializations (denoted by red stars), we observe that random initializations are much more effective at sampling diverse and accurate solutions than subspace based methods constructed out of a single trajectory. The results are consistent across different architectures and we also observed similar trends on CIFAR-100 (see Appendix A). The diversity score used above quantifies the difference of two functions by measuring the fraction of points on which their predictions differ. We chose this approach due to its simplicity; one could also compute the KL-divergence or other distances between the output probability distributions. Let d_diff denote the fraction of predictions on which the two functions differ. It is 0 when the two functions make identical class predictions, and 1 when they differ on every single example. To account for the fact that the lower the accuracy of a function, the higher its potential d_diff due to the possibility of the wrong answers being random and uncorrelated between the two functions, we normalize this by (1 − a), where a is the accuracy. For a reference function f* of accuracy a* and a function f of accuracy a whose predictions are obtained by randomly perturbing the predictions of f*, the expected fractional difference is d_diff = (C − 1)(a* − a)/(C a* − 1), where C is the number of classes.
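The two limiting curves of the diversity-accuracy plane are closed-form functions of the accuracy a, the reference accuracy a*, and the class count C; a direct transcription (ours) is:

```python
def d_diff_perturbed(a, a_star, C):
    """Lower limiting curve: f is obtained by perturbing the predictions
    of the reference f* (the correlated, worst-case regime)."""
    return (C - 1) * (a_star - a) / (C * a_star - 1)

def d_diff_independent(a, a_star, C):
    """Upper limiting curve: the predictions of f are uncorrelated with
    those of f* (the best-case regime)."""
    return ((1 - a_star) * a + (1 - a) * a_star
            + (1 - a_star) * (1 - a) * (C - 2) / (C - 1))
```

At a = a*, the perturbed curve gives zero disagreement, while the independent curve stays strictly positive whenever a* < 1, which is exactly the gap the plots exploit.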
If the function f of accuracy a were entirely independent of f*, then the expected fractional difference would be d_diff = (1 − a*)a + (1 − a)a* + (1 − a*)(1 − a)(C − 2)/(C − 1). Those two limiting behaviours, the function f being derived from f* by a perturbation and the functions f and f* being completely independent, form the two dashed lines in Figure 5. We refer to Appendix B for further details on the limiting curves. The diversity reached is not as high as the theoretical optimum even for the independently initialized and optimized solutions, which provides scope for future work. Figure 6 shows the radial loss landscape (train as well as the validation set) along the directions of two different optima. The left subplot shows that different trajectories achieve similar values of the loss, and the right subplot shows the similarity of these functions to their respective optima (in particular the fraction of labels predicted on which they differ, divided by their error rate). While the loss values from different optima are similar, the functions are different, which confirms that random initialization leads to different modes in function space. Figure 6: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima, and the predictions of models on the same plane. IDENTICAL LOSS DOES NOT IMPLY IDENTICAL FUNCTIONS IN PREDICTION SPACE We construct a low-loss tunnel between different optima using the procedure proposed by Fort & Jastrzebski (2019), which is a simplification of the procedures proposed in Garipov et al. (2018) and Draxler et al. (2018). As shown in Figure 7(a), we start at the linear interpolation point (denoted by the black line) and reach the closest point on the manifold by minimizing the training loss. The minima of the training loss are denoted by the yellow line in the manifolds. Figure 7(b) confirms that the tunnel is indeed low-loss.
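The starting point of the tunnel construction, the high-loss barrier on the straight line between two optima, is easy to reproduce on a toy surface. A sketch of ours (the two-minimum polynomial below is purely illustrative, not a neural network loss):

```python
import numpy as np

def toy_loss(w):
    """Illustrative non-convex loss with two minima at (-1, 0) and (1, 0)."""
    x, y = w
    return (x ** 2 - 1) ** 2 + 0.5 * y ** 2

w1 = np.array([-1.0, 0.3])   # near one optimum
w2 = np.array([1.0, -0.3])   # near the other optimum

# Evaluate the loss along the linear interpolation w(t) = (1-t) w1 + t w2.
ts = np.linspace(0.0, 1.0, 101)
path_losses = np.array([toy_loss((1 - t) * w1 + t * w2) for t in ts])

# Both endpoints are low-loss, but the straight path crosses a barrier.
barrier = float(path_losses.max() - max(toy_loss(w1), toy_loss(w2)))
```

The optimized tunnel replaces each point on this straight line with the nearest point on the low-loss manifold, removing the barrier while, as the section argues, not merging the two functions in prediction space.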
In order to visualize the 2-dimensional cut through the loss landscape and the associated predictions along a curved low-loss path, we divide the path into linear segments, and compute the loss and prediction similarities on a triangle given by this segment on one side and the origin of the weight space on the other. We perform this operation on each of the linear segments from which the low-loss path is constructed, and place them next to each other for visualization. Figure 8 visualizes the loss along the manifold, as well as the similarity to the original optima. Note that the regions between radial yellow lines consist of segments, and we stitch these segments together in Figure 8. The accuracy plots show that as we traverse along the low-loss tunnel, the accuracy remains fairly constant, as expected. However, the prediction similarity plot shows that the low-loss tunnel does not correspond to similar solutions in function space. What it shows is that while the modes are connected in terms of accuracy/loss, their functional forms remain distinct and they do not collapse into a single mode. EVALUATING THE RELATIVE EFFECTS OF ENSEMBLING VERSUS SUBSPACE METHODS Our observations in the previous section suggest that subspace-based methods and ensembling should provide complementary benefits in terms of uncertainty and accuracy. To test this, we evaluate the performance of the following four variants using SmallCNN on CIFAR-10: • Baseline: optimum at the end of a single training trajectory. • Subspace sampling: average predictions over the solutions sampled from a subspace. • Ensemble: train baseline multiple times with random initialization and average the predictions. • Ensemble + Subspace sampling: train multiple times with random initialization, use subspace sampling within each trajectory.
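All four variants above reduce to averaging predicted class probabilities over different sets of members. A schematic NumPy sketch (ours; the Dirichlet draws are hypothetical placeholders for the softmax outputs of M independent runs, each with S subspace samples around it):

```python
import numpy as np

def average_predictions(prob_list):
    """Combine members by averaging their predicted class probabilities."""
    return np.mean(prob_list, axis=0)

rng = np.random.default_rng(2)
n, C, M, S = 100, 10, 4, 3
# Hypothetical softmax outputs: M independent runs, each with S samples
# drawn from the subspace around that run.
runs = [[rng.dirichlet(np.ones(C), size=n) for _ in range(S)]
        for _ in range(M)]

subspace_only = average_predictions(runs[0])                       # 1 run, S samples
ensemble_only = average_predictions([r[0] for r in runs])          # M runs, 1 each
ensemble_plus = average_predictions([p for r in runs for p in r])  # all M*S
```

Because averaging preserves normalization, each combined output is still a valid probability distribution over the C classes.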
WEIGHT AVERAGING WITHIN A SUBSPACE One could use the mean and diagonal/low-rank variance to approximate each mode of the posterior, however that increases the number of parameters required for each mode. Using just the mean weight for each mode would not increase the number of parameters. Izmailov et al. (2018) proposed stochastic weight averaging (SWA) for better generalization. One could also compute an (exponential moving) average of the weights along the trajectory, inspired by Polyak-Ruppert averaging in convex optimization (see also Mandt et al. (2017) for a Bayesian view on iterate averaging). As weight averaging has already been studied by Izmailov et al. (2018), we do not discuss it in detail. Figure S2 in Appendix C provides an illustration of why these strategies might help with generalization. We use weight averaging (WA) on the last few epochs, which corresponds to using the mean of the subspace within each mode. Figure 10(a) shows that weight averaging achieves better performance within each mode, and ensemble + WA performs as well as ensemble + subspace combination methods, without any additional parameter overhead. We see that ensembling and weight-averaging provide complementary benefits. WA improves over the vanilla baseline, but combining WA with ensembling over multiple random initializations improves performance further. Figure 9 reports accuracy and Brier score on the usual CIFAR-10 test set as a function of ensemble size. Under dataset shift, it is particularly important to have diverse functions to avoid overconfident predictions (Ovadia et al., 2019), as averaging over similar functions would not reduce overconfidence. RESULTS ON IMAGENET AND IMAGENET-C To illustrate the effect on another challenging dataset, we repeat these experiments on ImageNet (Deng et al., 2009) using the same ResNet20V1 architecture. Due to computational constraints, we focus mainly on the experiment decoupling the effect of weight averaging vs ensembling.
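Both ingredients evaluated here, the SWA-style weight mean and the Brier score used as the uncertainty metric, have one-line definitions. A small sketch of ours:

```python
import numpy as np

def weight_average(checkpoints):
    """SWA-style weight averaging: mean of the weight vectors collected
    over the last few epochs of one trajectory."""
    return np.mean(checkpoints, axis=0)

def brier_score(probs, labels, num_classes):
    """Mean squared error between predicted class probabilities and
    one-hot labels (lower is better)."""
    onehot = np.eye(num_classes)[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# A perfectly confident, correct prediction scores 0; a uniform
# prediction over C classes scores (C - 1) / C.
labels = np.array([0, 1, 2])
perfect = np.eye(3)[labels]
uniform = np.full((3, 10), 0.1)
```

Unlike subspace sampling, the weight mean is a single configuration, so "ensemble + WA" costs no more parameters per member than the plain ensemble.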
Figure 11(a) shows the complementary effects of ensembling and weight averaging; Figure 11(b) shows results on a (subset of) ImageNet-C, demonstrating that these trends are similar to those observed on CIFAR-10. DISCUSSION Our results show that trajectories of randomly initialized neural networks explore different modes in function space, which explains why deep ensembles with random initializations help. They are essentially orthogonal to each other in the space of weights and very diverse in terms of their predictions. While these modes can be connected via optimized low-loss paths between them, we demonstrate that they correspond to distinct functions in terms of their predictions. Therefore the connectivity in the loss landscape does not imply connectivity in the space of functions. Subspace sampling methods such as weight averaging, Monte Carlo dropout, and various versions of local Gaussian approximations sample functions that might lie relatively far from the starting point in the weight space; however, they remain in the vicinity of their starting point in terms of predictions, giving rise to an insufficiently diverse set of functions. Using the concept of the diversity-accuracy plane, we demonstrate empirically that these subspace sampling methods never reach the combination of diversity and accuracy that independently trained models do, limiting their usefulness for ensembling. A ADDITIONAL DIVERSITY-ACCURACY RESULTS ON CIFAR-100 We run additional experiments comparing the diversity of solutions found vs their test accuracy on CIFAR-100. CIFAR-100 is an intermediate step between CIFAR-10 and ImageNet, and is overall much more challenging to learn than CIFAR-10. Our additional results are presented in Figure S1. Solutions obtained by the subspace sampling methods described in Section 4 have a worse trade-off between prediction diversity (needed for ensembling) and accuracy, compared to independently initialized and trained optima.
This is consistent with our results on CIFAR-10 in Figure 5. Figure S1: Diversity versus accuracy plots for a ResNet20v1 trained on CIFAR-100. B DERIVING THE UPPER AND LOWER LIMIT CURVES IN THE DIVERSITY-ACCURACY PLOTS In Figures 5 and S1 we bound our empirical results by two theoretically derived curves, limiting the expected trade-off between diversity and accuracy in the best and worst case scenarios. The resulting functions are presented in the main text in Section 3.2. We will show the detailed derivations here. Given a C-class classification problem and a reference solution with accuracy a*, we would like to obtain a function d_diff(a) which gives us the fraction of labels on which another solution disagrees with the reference solution, as a function of its accuracy. B.1 UNCORRELATED PREDICTIONS - THE BEST CASE The best case scenario is when the predicted labels are uncorrelated with the reference solution's labels. On a particular example, the probability that the reference solution got it correct is a*, and the probability that the new solution got it correct is a. On those examples, the predictions do not differ since they both have to be equal to the ground truth label. The probability that the reference solution is correct on an example while the new solution is wrong is a*(1 − a). The probability that the reference solution is wrong on an example while the new solution is correct is (1 − a*)a. On the examples where both solutions are wrong (probability (1 − a*)(1 − a)) there are two cases: 1. the two solutions agree (an additional factor of 1/(C − 1)) or 2. the two solutions disagree (an additional factor of (C − 2)/(C − 1)). Only case 2 contributes to the fraction of labels on which they disagree.
Hence we end up with d_diff(a; a*, C) = (1 − a*)a + (1 − a)a* + (1 − a*)(1 − a)(C − 2)/(C − 1). (3) B.2 CORRELATED PREDICTIONS - THE WORST CASE The other extreme case is when the predictions of the new solution are just the predictions of the reference solution perturbed by perturbations of different strength. Then, the solutions retain a great amount of correlation. Let the probability of a label changing be p. We will consider 4 cases: 1. the label of a correctly classified image does not flip (probability a*(1 − p)), 2. it flips (probability a*p), 3. an incorrectly labelled image does not flip (probability (1 − a*)(1 − p)), and 4. it flips (probability (1 − a*)p). The resulting accuracy a(p) obtains a contribution a*(1 − p) from case 1 and a contribution (1 − a*)p with probability 1/(C − 1) from case 4. Therefore a(p) = a*(1 − p) + p(1 − a*)/(C − 1). Inverting this relationship, we get p(a) = (C − 1)(a* − a)/(C a* − 1). The fraction of labels on which the solutions disagree is simply p by our definition of p, and therefore d_diff(a; a*, C) = (C − 1)(a* − a)/(C a* − 1). (4) C VISUALIZING THE LOSS LANDSCAPE ALONG ORIGINAL DIRECTIONS AND WA DIRECTIONS Figure S2 shows the loss landscape (train as well as the validation set) and the effect of WA. Figure S2: Loss landscape versus generalization: weights are typically initialized close to 0 and increase radially through the course of training. Top row: we pick two optima from different trajectories as the axes, and plot the loss surface. Looking at the x and y axes, we observe that while a wide range of radii achieve low loss on the training set, the range of optimal radius values is narrower on the validation set. Bottom row: we average weights within each trajectory using WA and use them as axes. A wider range of radius values generalize better along the WA directions, which confirms the findings of Izmailov et al. (2018).
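The worst-case relation of Appendix B.2 can be checked by direct simulation. This is our own Monte Carlo sketch (the class count, accuracy and flip probability are arbitrary choices): draw ground-truth labels, build a reference solution of accuracy roughly a*, flip each of its predictions to a uniformly random other class with probability p, and compare the measured disagreement to the closed-form curve.

```python
import numpy as np

rng = np.random.default_rng(3)
C, n, a_star, p = 10, 200_000, 0.8, 0.25

truth = rng.integers(0, C, size=n)
# Reference solution with accuracy ~ a*: wrong answers are uniform over
# the other C - 1 classes.
ref = truth.copy()
wrong = rng.random(n) > a_star
ref[wrong] = (truth[wrong] + rng.integers(1, C, size=int(wrong.sum()))) % C

# Perturb the reference predictions: each label flips to one of the
# other classes with probability p (the regime of B.2).
new = ref.copy()
flip = rng.random(n) < p
new[flip] = (ref[flip] + rng.integers(1, C, size=int(flip.sum()))) % C

emp_a_star = float((ref == truth).mean())
a = float((new == truth).mean())
d = float((new != ref).mean())                              # empirical d_diff
d_pred = (C - 1) * (emp_a_star - a) / (C * emp_a_star - 1)  # Eq. (4)
```

Up to sampling noise, the empirical disagreement d equals p and matches the prediction of Eq. (4) evaluated at the measured accuracies.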
D EFFECT OF RANDOMNESS: RANDOM INITIALIZATION VERSUS RANDOM SHUFFLING The random seed affects both the initial parameter values and the order in which data points are shuffled. We run experiments to decouple the effect of random initialization and shuffling; Figure S3 shows the results. We observe that both of them provide complementary sources of randomness, with random initialization being the dominant of the two. As expected, random mini-batch shuffling adds more randomness at higher learning rates due to gradient noise. Figure S3: The effect of random initializations and random training batches on the diversity of predictions. For runs on a GPU, the same initialization and the same training batches (red) do not lead to the exact same predictions. On a TPU, such runs always learn the same function and therefore have 0 diversity of predictions. Figure 1: Cartoon illustration of the hypothesis. The x-axis indicates parameter values and the y-axis plots the negative loss −L(θ, {x_n, y_n}_{n=1}^N) on train and validation data. Figure 2: Results using SimpleCNN on CIFAR-10. Left plot: Cosine similarity between checkpoints to measure weight space alignment along the optimization trajectory. Middle plot: The fraction of labels on which the predictions from different checkpoints disagree. Right plot: t-SNE plot of predictions from checkpoints corresponding to 3 different randomly initialized trajectories (in different colors). Figure 3: Results on CIFAR-10 using two different architectures. For each of these architectures, the left subplot shows the cosine similarity between different solutions in weight space, and the right subplot shows the fraction of labels on which the predictions from different solutions disagree. Figure 5: Diversity versus accuracy plots for 3 models trained on CIFAR-10: SmallCNN, MediumCNN and a ResNet20v1. The clear separation between the subspace sampling populations (for 4 different subspace sampling methods) and the population of independently initialized and optimized solutions (red stars) is visible. The 2 limiting curves correspond to solutions generated by perturbing the reference solution's predictions (bottom curve) and completely random predictions at a given accuracy (upper curve). Figure 7: Left: Cartoon illustration showing the linear connector (black) along with the optimized connector, which lies on the manifold of low-loss solutions. Right: The loss and accuracy between two independent optima on a linear path and on an optimized path in the weight space. Figure 8: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima along an optimized low-loss connector, and prediction similarity along the same planes. Figures 9(a) and 9(b) show the results for the low-rank Gaussian subspace and the diagonal Gaussian subspace respectively. The results validate our hypothesis as (i) subspace sampling and ensembling provide complementary benefits, and (ii) the relative benefits of ensembling are higher as it averages predictions over more diverse solutions. Figure 9: Results on CIFAR-10 showing the complementary benefits of ensemble and subspace methods, as well as the effect of ensemble size. Figure 10: Results on CIFAR-10 using SimpleCNN: clean test and CIFAR-10-C corrupted test set. 4.2 RESULTS ON CIFAR-10-C Figure 10(b) shows accuracy and Brier score on CIFAR-10, both on the usual test set (corresponding to the intensity = 0 column) as well as on the CIFAR-10-C benchmark proposed by Hendrycks & Dietterich (2019), which contains corrupted versions of CIFAR-10 with varying intensity values (1-5), making it useful to verify calibration under dataset shift. Figure 11: Results using ResNet on ImageNet: clean test and ImageNet-C corrupted test set. 1. We use the term mode to refer to unique functions f_θ(x).
Due to weight space symmetries, different parameters could correspond to the same function, i.e. f_θ1(x) = f_θ2(x) even though θ1 ≠ θ2, but we ignore this aspect and leave it to future work. 2. https://keras.io/examples/cifar10_resnet/
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In ICML, 2015.
Leo Breiman. Bagging predictors. Machine Learning, 1996.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred A. Hamprecht. Essentially no barriers in neural network energy landscape. arXiv preprint arXiv:1803.00885, 2018.
Stanislav Fort and Stanislaw Jastrzebski. Large scale structure of neural network loss landscapes. In NeurIPS, 2019.
Stanislav Fort and Adam Scherlis. The Goldilocks zone: Towards better understanding of neural network loss landscapes. In AAAI, 2019.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P. Vetrov, and Andrew G. Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In NeurIPS, 2018.
Ian J. Goodfellow and Oriol Vinyals. Qualitatively characterizing neural network optimization problems. CoRR, abs/1412.6544, 2014.
Alex Graves. Practical variational inference for neural networks. In NeurIPS, 2011.
Fredrik K. Gustafsson, Martin Danelljan, and Thomas B. Schön. Evaluating scalable Bayesian deep learning methods for robust computer vision. arXiv preprint arXiv:1906.01620, 2019.
Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018.
Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In NeurIPS, 2017.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.
Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall, and Dhruv Batra. Why M heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015.
Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. In ICLR, 2018.
Christos Louizos and Max Welling. Multiplicative normalizing flows for variational Bayesian neural networks. In ICML, 2017.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 2008.
David J. C. MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1992.
Stephan Mandt, Matthew D. Hoffman, and David M. Blei. Stochastic gradient descent as approximate Bayesian inference.
Stochastic gradient descent as approximate Bayesian inference. JMLR, 2017. Bayesian Learning for Neural Networks. M Radford, Neal, Springer-Verlag New York, IncRadford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996. Can you trust your model's uncertainty?. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, Sebastian Sculley, Joshua V Nowozin, Balaji Dillon, Jasper Lakshminarayanan, Snoek, arXiv:1906.02530Evaluating predictive uncertainty under dataset shift. arXiv preprintYaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530, 2019. Bayesian optimization with robust Bayesian neural networks. Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter, In NeurIPS. Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. In NeurIPS, 2016. Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, JMLRNitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014. Bayesian Learning via Stochastic Gradient Langevin Dynamics. Max Welling, Yee Whye Teh, ICML. Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In ICML, 2011. Flipout: Efficient pseudoindependent weight perturbations on mini-batches. Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, Roger Grosse, arXiv:1803.04386arXiv preprintYeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient pseudo- independent weight perturbations on mini-batches. arXiv preprint arXiv:1803.04386, 2018.
[]
[ "Tight-binding models for ultracold atoms in optical lattices: general formulation and applications", "Tight-binding models for ultracold atoms in optical lattices: general formulation and applications" ]
[ "Michele Modugno \nDepartamento de Fisica Teórica e Historia de la Ciencia\nUniversidad del Pais Vasco UPV/EHU\n48080BilbaoSpain\n\nBasque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain\n", "Julen Ibañez-Azpiroz \nPeter Grünberg Institute and Institute for Advanced Simulation\nForschungszentrum Jülich & JARA\nD-52425JülichGermany\n", "Giulio Pettini \nDipartimento di Fisica e Astronomia\nUniversità di Firenze\nINFN\n50019Sesto FiorentinoItaly\n" ]
[ "Departamento de Fisica Teórica e Historia de la Ciencia\nUniversidad del Pais Vasco UPV/EHU\n48080BilbaoSpain", "Basque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain", "Peter Grünberg Institute and Institute for Advanced Simulation\nForschungszentrum Jülich & JARA\nD-52425JülichGermany", "Dipartimento di Fisica e Astronomia\nUniversità di Firenze\nINFN\n50019Sesto FiorentinoItaly" ]
[ "Invited Review . SCIENCE CHINA Physics, Mechanics & Astronomy" ]
Tight-binding models for ultracold atoms in optical lattices can be properly defined by using the concept of maximally localized Wannier functions for composite bands. The basic principles of this approach are reviewed here, along with different applications to lattice potentials with two minima per unit cell, in one and two spatial dimensions. Two independent methods for computing the tight-binding coefficients -one ab initio, based on the maximally localized Wannier functions, the other through analytic expressions in terms of the energy spectrum -are considered. In the one dimensional case, where the tight-binding coefficients can be obtained by designing a specific gauge transformation, we consider both the case of quasi resonance between the two lowest bands, and that between s and p orbitals. In the latter case, the role of the Wannier functions in the derivation of an effective Dirac equation is also reviewed. Then, we consider the case of a two dimensional honeycomb potential, with particular emphasis on the Haldane model, its phase diagram, and the breakdown of the Peierls substitution. Tunable honeycomb lattices, characterized by movable Dirac points, are also considered. Finally, general considerations for dealing with the interaction terms are presented.
10.1007/s11433-015-0514-5
[ "https://arxiv.org/pdf/1512.05995v2.pdf" ]
119,279,176
1512.05995
66c7a456f59081cd8f63bcfe09ff0b399f026bb2
Tight-binding models for ultracold atoms in optical lattices: general formulation and applications

Michele Modugno
Departamento de Fisica Teórica e Historia de la Ciencia, Universidad del Pais Vasco UPV/EHU, 48080 Bilbao, Spain
Basque Foundation for Science, IKERBASQUE, 48011 Bilbao, Spain

Julen Ibañez-Azpiroz
Peter Grünberg Institute and Institute for Advanced Simulation, Forschungszentrum Jülich & JARA, D-52425 Jülich, Germany

Giulio Pettini
Dipartimento di Fisica e Astronomia, Università di Firenze, and INFN, 50019 Sesto Fiorentino, Italy

Invited Review, SCIENCE CHINA Physics, Mechanics & Astronomy

Keywords: ultracold atoms, optical lattices, tight-binding models, Wannier functions, effective Dirac equation, honeycomb lattices

PACS number(s): 47.55.nb, 47.20.Ky, 47.11.Fg

Citation: Michele Modugno, Julen Ibañez-Azpiroz, and Giulio Pettini, Sci China-Phys Mech Astron, doi: 10.1007/s11433-015-0514-5

Tight-binding models for ultracold atoms in optical lattices can be properly defined by using the concept of maximally localized Wannier functions for composite bands. The basic principles of this approach are reviewed here, along with different applications to lattice potentials with two minima per unit cell, in one and two spatial dimensions. Two independent methods for computing the tight-binding coefficients - one ab initio, based on the maximally localized Wannier functions, the other through analytic expressions in terms of the energy spectrum - are considered. In the one-dimensional case, where the tight-binding coefficients can be obtained by designing a specific gauge transformation, we consider both the case of quasi resonance between the two lowest bands, and that between s and p orbitals. In the latter case, the role of the Wannier functions in the derivation of an effective Dirac equation is also reviewed.
Then, we consider the case of a two dimensional honeycomb potential, with particular emphasis on the Haldane model, its phase diagram, and the breakdown of the Peierls substitution. Tunable honeycomb lattices, characterized by movable Dirac points, are also considered. Finally, general considerations for dealing with the interaction terms are presented. Introduction Experiments with ultracold atoms in optical lattices have undergone an enormous development in recent years to the point that nowadays they represent a solid platform for the quantum simulation of condensed matter physics [1,2]. These experiments, where atoms are trapped in crystal-like structures made by laser light, offer the possibility to tune most of the relevant parameters with great flexibility and precision, and even to control the dimensionality of the system. Depending on the beam geometry, one can realize one-, two-, or three-dimensional periodic lattices, with one or more wells per unit cell [3,4], as well as quasiperiodic structures [5][6][7][8][9]. Among the various possibilities, honeycomb lattices are attracting an increasing interest owing to the presence of topological defects in their spectrum, the so-called Dirac points, which leads to remarkable relativistic effects [10][11][12][13][14][15][16][17][18][19][20][21][22], in analogy to the case of graphene [23][24][25][26]. Though continuous potentials describing optical lattices can be expressed in simple analytic forms as the combination of a number of sinusoidal potentials, from the theoretical point of view it is often convenient to employ a description in terms of tight-binding models defined on a discrete lattice, as for electrons in a crystal lattice. Paradigmatic models are the Hubbard model for fermions [27], the Bose-Hubbard model for bosons [28], and the Haldane model [29] in the presence of an external vector gauge field. The motivation for using a tight-binding model is twofold. 
First of all, it allows to reduce the complexity of the continuous description to a limited set of parameters, each one playing a specific role (for example, the hopping between different states). In addition, it permits a sort of pictorial description in terms of particles sitting at specific lattice sites, that can tunnel to other sites, or interact between each other. The first aspect is more general, as in principle one can use a projection over any complete basis set, not necessarily of localized functions. In that case the particles would occupy specific basis states, not necessarily associated to a precise position in space. However, having a basis of functions that are localized in real space not only permits to play with a pictorial description, but also reduces the number of parameters needed for an accurate tight-binding description (e.g. tunneling amplitudes between distant sites are suppressed). The tight-binding regime is easily accessible with ultracold atoms in optical lattices, as the lattice intensity can be tuned to sufficiently high values so that the atoms are deeply localized in the lowest vibrational states of the potential wells. Therefore, each well can be associated to a site of a discrete lattice, making the tight-binding description the natural choice for theoretical calculations. Usually, these models are restricted to a few coefficients associated to the hopping between neighboring sites, and to the onsite interaction among the atoms [1]. Obviously, additional terms are also possible (for example, next-to-nearest or density-assisted tunneling terms), depending on the order of the tight-binding expansion. 
In all cases, the existence of a basis of functions localized around the potential minima is not only important conceptually - in order to justify the tight-binding expansion - but also from the practical point of view, as a precise knowledge of the basis functions is needed to connect the tight-binding coefficients with the actual parameters that can be accessed experimentally. In the case of optical lattices with a cubic-like arrangement - with a single well per unit cell - the natural basis is provided by the exponentially decaying Wannier functions discussed by Kohn [30,31]. Notably, in this case the expressions for the tunneling coefficients depend only on the Bloch spectrum [32], and are therefore independent of the basis choice. Instead, the interaction coupling still depends on the specific basis, by construction. Analytic expressions for both coefficients can be obtained by means of different approximations [33,34]. Nevertheless, in general this approach is not suitable when the potential has more than one well per unit cell, because the Kohn-Wannier functions display the same symmetry as the local potential structure, and may not be maximally localized [31]. For example, for a potential with two degenerate minima in the unit cell, these functions occupy both wells and cannot be associated with a single lattice site [24,25]. Then, in order to deal with non-trivial cell structures, one has to resort to different strategies. A common approach found in the literature is that of the so-called atomic orbitals [23,26,35], which has been recently employed, e.g., for the case of a symmetric double-well unit cell of two-dimensional graphene-like optical lattices [14]. This method is based on a specific ansatz, according to which tight-binding Wannier functions are constructed from linear combinations of wave functions deeply localized in the two potential wells of the unit cell.
A more general and powerful approach, that has been successfully employed for describing real material structures [36], is represented by the maximally localized Wannier functions (MLWFs) introduced in a seminal paper by Marzari and Vanderbilt [37]. The MLWFs are obtained by minimizing the spread of a set of generalized Wannier functions by means of a suitable gauge transformation of the Bloch eigenfunctions for composite bands, and they usually present an exponential decay [38,39]. This approach reproduces the results discussed by Kohn for the single band case, and it can be extended to more complex situations when generalized MLWFs for composite bands are needed. This method is currently implemented by means of a software package, and is largely employed for computing MLWFs of real condensed matter systems [40]. Here we shall review these concepts, following the lines of Refs. [41][42][43][44][45][46][47]. First, in sect. 2 we introduce the tight-binding expansion from general principles, by considering the specific implementation for periodic structures with two lattice sites per unit cell. Here we also discuss different strategies for the numerical implementation. Various tight-binding models in one and two spatial dimensions are then considered in the rest of the paper. Sect. 3 is devoted to the one dimensional case, where it is possible to write down a set of differential equations for the gauge mixing transformation that allow to efficiently compute both the MLWFs and the tunneling coefficients. Explicit results for the case of quasi resonance between the two lowest bands, and that of s and p orbitals, are discussed. Here we also review the role of the Wannier functions in the derivation of an effective Dirac equation. In sect. 4 we consider the case of two dimensional honeycomb lattices, which exhibit Dirac points in their energy spectrum and are therefore closely connected to the physics of graphene. 
In particular, there we discuss in detail the ab initio derivation of the celebrated Haldane model, which is characterized by the presence of a periodic magnetic field with vanishing flux through the unit cell. We analyze the corresponding topological phase diagram, as well as the breakdown of the Peierls substitution. In addition, we also review the case of stretched honeycomb lattices, and derive a low-energy expansion around the merging points of the Dirac points, which can be moved and merged by tuning the lattice parameters. Then, in sect. 5, various possible forms of the interaction terms, and general considerations for dealing with them, are reviewed. Conclusions and perspectives are drawn in sect. 6.

2 Tight-binding expansion and MLWFs

Let us consider a system of non-interacting bosonic or fermionic particles in the presence of a $D$-dimensional periodic potential. The tight-binding expansion can be carried out in either a first- or second-quantized formalism; here we adopt the latter and write the non-interacting many-body Hamiltonian as
$$\hat{H}_0 = \int d^D r\, \hat{\psi}^\dagger(r)\, \hat{H}_0\, \hat{\psi}(r), \qquad (1)$$
where $\hat{\psi}(r)$ is the field operator, $r$ the position vector in $D$ dimensions, $\hat{H}_0 = -(\hbar^2/2m)\nabla^2 + V(r)$ the one-particle Hamiltonian, and $V(r)$ the optical potential describing the lattice, generated by laser beams with wavevectors of amplitude $k_L$.¹ The periodicity of the potential implies that $V(r) = V(r + R)$, where $R$ belongs to the associated Bravais lattice $\mathcal{B} = \{R;\ R = n_1 a_1 + \cdots + n_D a_D;\ n_1, \dots, n_D = 0, \pm 1, \pm 2, \dots\}$. The corresponding reciprocal space is generated by the vectors $b_j$ that satisfy $a_i \cdot b_j = 2\pi \delta_{ij}$.
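The duality condition $a_i \cdot b_j = 2\pi\delta_{ij}$ fixes the reciprocal basis completely. As an illustration (not part of the original text; the function name and the triangular-lattice example are ours), the reciprocal vectors follow from a single matrix inversion:

```python
import numpy as np

def reciprocal_vectors(a):
    """Rows of `a` are the direct lattice vectors a_i; returns a matrix whose
    rows b_j satisfy a_i . b_j = 2*pi*delta_ij (duality condition)."""
    a = np.asarray(a, dtype=float)
    # If A has rows a_i and B has rows b_j, the condition reads A B^T = 2*pi*I,
    # hence B = 2*pi*(A^{-1})^T.
    return 2.0 * np.pi * np.linalg.inv(a).T

# triangular Bravais lattice in 2D, lattice constant set to 1 (our example)
a = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])
b = reciprocal_vectors(a)
```

For the one-dimensional lattice of sect. 3, with a single direct vector of length $d$, this reduces to $b = 2\pi/d$.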
The Hamiltonian (1) can be conveniently mapped onto a discrete lattice corresponding to the minima of the potential $V(r)$ by expanding the field operator in terms of a complete set of functions $\{w_{j\nu}(r)\}$ localized at each minimum,
$$\hat{\psi}(r) \equiv \sum_{j\nu} \hat{a}_{j\nu}\, w_{j\nu}(r), \qquad (2)$$
where $\nu$ is a bandlike index and $\hat{a}^\dagger_{j\nu}$ ($\hat{a}_{j\nu}$) the creation (destruction) operator of a single particle in the $j$-th cell. These operators satisfy the usual commutation (or anticommutation) rules following from those of the field $\hat{\psi}(r)$. In the following, we shall consider generalized Wannier functions obtained by performing a unitary mixing of $N$ Bloch eigenstates,
$$w_{j\nu}(r) = \frac{1}{\sqrt{V_B}} \int_B d^D k\; e^{-i k \cdot R_j} \sum_{m=1}^{N} U_{\nu m}(k)\, \psi_{m k}(r), \qquad (3)$$
with $V_B$ being the volume of the first Brillouin zone and $U_{\nu m}(k) \in U(N)$ a unitary matrix obeying periodicity conditions in order to preserve the Bloch theorem. In general, $N$ corresponds to the number of minima in the unit cell. Then, the Hamiltonian (1) can be written in terms of the Wannier states $|w_{j\nu}\rangle$ as
$$\hat{H}_0 = \sum_{\nu,\nu'} \sum_{j,j'} \hat{a}^\dagger_{j\nu} \hat{a}_{j'\nu'}\, \langle w_{j\nu}|\hat{H}_0|w_{j'\nu'}\rangle, \qquad (4)$$
where the matrix elements $\langle w_{j\nu}|\hat{H}_0|w_{j'\nu'}\rangle$ depend only on $i = j - j'$ owing to the translational invariance of the lattice. These matrix elements correspond to tunneling amplitudes between different lattice sites, except for the special case $i = 0$, $\nu = \nu'$, which corresponds to the onsite energies.² Then, by defining
$$\hat{d}_{\nu k} = \frac{1}{\sqrt{V_B}} \sum_j e^{-i k \cdot R_j}\, \hat{a}_{j\nu}, \qquad (5)$$
$\hat{H}_0$ is transformed as
$$\hat{H}_0 = \sum_{\nu,\nu'} \int_B d^D k\; h_{\nu\nu'}(k)\, \hat{d}^\dagger_{\nu k} \hat{d}_{\nu' k}, \qquad (6)$$
with
$$h_{\nu\nu'}(k) = \sum_i e^{i k \cdot R_i}\, \langle w_{0\nu}|\hat{H}_0|w_{i\nu'}\rangle = \frac{1}{V_B} \sum_i \int_B dq\; e^{i(k - q) \cdot R_i} \sum_n U^*_{\nu n}(q)\, U_{\nu' n}(q)\, \varepsilon_n(q) \qquad (7)$$
being the Hamiltonian density in quasimomentum space.

¹ In the rest of the paper we shall fix $k_L = 1$, $\hbar = 1$, $m = 1/2$ without loss of generality. This corresponds to measuring lengths in units of $1/k_L$ and energies in units of the recoil energy $E_R = \hbar^2 k_L^2/2m$.
² The terms with the same index $j$ and different $\nu$ refer to different sites inside the same cell.
Notice that the above expression is exact - we have restricted the analysis to a specific subset of $N$ Bloch bands, but made no further approximations. Then, by using the following summation rule (valid for an infinite lattice),
$$\frac{1}{V_B} \sum_i e^{i R_i \cdot (k' - k)} = \delta(k - k'), \qquad (8)$$
eq. (7) can be rewritten as
$$h_{\nu\nu'}(k) = \sum_n U^*_{\nu n}(k)\, U_{\nu' n}(k)\, \varepsilon_n(k), \qquad (9)$$
whose eigenvalues coincide with the exact bands $\varepsilon_\nu(k)$ by construction, and are therefore independent of the specific choice of MLWFs. This is an obvious result, owing to the completeness of any Wannier basis. However, for practical purposes the summation over $i$ must be truncated by retaining only a finite number of matrix elements. This, in a nutshell, is the essence of the tight-binding expansion. Notice that the actual number of terms needed to reproduce the properties of the system within a certain degree of accuracy crucially depends on the properties of the basis functions $w_{j\nu}(r)$. Given a continuous Hamiltonian, the optimal choice of the Wannier basis requires fixing both $N$, the number of bands to be mixed, and the specific form of the matrix $U(k)$. In the following, it is convenient to distinguish between the cases $N = 1$ and $N > 1$.

Single band case. When there is just one well per unit cell the band mixing is not necessary, so that $N = 1$. In this case the procedure greatly simplifies, as the gauge group is $U(1)$ and the matrix $U(k)$ takes the form of a diagonal phase transformation,
$$U_{\nu m}(k) = e^{i\phi_\nu(k)}\, \delta_{\nu m}, \qquad (10)$$
with the phases $\phi_\nu(k)$ being periodic over the first Brillouin zone. Then, from eq. (7) one has [32]
$$\langle w_{0\nu}|\hat{H}_0|w_{i\nu'}\rangle = \frac{\delta_{\nu\nu'}}{V_B} \int_B dq\; e^{-i q \cdot R_i}\, \varepsilon_\nu(q), \qquad (11)$$
so that both the onsite energies and the tunneling coefficients are independent of the phases $\phi_\nu(k)$. Therefore, in the single band case the tight-binding expansion is gauge independent, namely it does not depend on the choice of the Wannier functions.

Composite band case. Let us now consider a lattice potential with $N$ minima per unit cell, with $N > 1$.
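Before turning to the composite case, note that eq. (11) makes the single-band coefficients simple Fourier averages of the Bloch band over the Brillouin zone. A minimal numerical check of this statement (a toy cosine band; the function name and numbers are ours, not from the paper):

```python
import numpy as np

def single_band_coefficients(eps_k, k, d):
    """Onsite energy and nearest-neighbour hopping from Eq. (11):
    Brillouin-zone Fourier averages of the Bloch band (gauge independent)."""
    E = np.mean(eps_k)                               # i = 0 component
    J = -np.mean(eps_k * np.exp(1j * k * d)).real    # R_i = d component
    return E, J

d = np.pi                                 # lattice period in units of 1/k_L
k = np.linspace(-np.pi / d, np.pi / d, 400, endpoint=False)
eps = 1.3 - 2 * 0.07 * np.cos(k * d)      # toy band with E = 1.3, J = 0.07
E, J = single_band_coefficients(eps, k, d)
```

The uniform grid with `endpoint=False` makes the discrete averages exact Fourier components over the periodic zone, so the toy parameters are recovered to machine precision.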
In this case, it is easy to verify that the tight-binding parameters depend explicitly on the specific choice of the matrices $U_{nm}(k)$, so that these parameters - and any other physical quantity calculated at a finite order of the tight-binding expansion - are gauge dependent. Let us now turn to the specific set of MLWFs introduced by Marzari and Vanderbilt in Ref. [37]. They are defined via a transformation $U_{nm}(k)$ that minimizes the total spread $\Omega \equiv \sum_\nu [\langle r^2 \rangle_\nu - \langle r \rangle^2_\nu]$. This quantity can be decomposed as $\Omega = \Omega_I + \tilde{\Omega}$, the first term being gauge invariant. In turn, the gauge-dependent part, $\tilde{\Omega}$, can be written as the sum of a diagonal and an off-diagonal component³, $\tilde{\Omega} = \Omega_D + \Omega_{OD}$. Both $\Omega_D$ and $\Omega_{OD}$ can be expressed in terms of the generalized Berry vector potentials $A_{\nu\nu'}(k)$, defined as
$$A_{\nu\nu'}(k) = i V_B\, \langle u_{\nu k}|\nabla_k|u_{\nu' k}\rangle, \qquad (12)$$
which form a Hermitian matrix. In particular, one has
$$\Omega_D = \sum_\nu \left\langle \big( A_{\nu\nu}(k) - \langle A_{\nu\nu} \rangle_B \big)^2 \right\rangle_B = \sum_\nu \Omega_{D\nu}, \qquad (13)$$
$$\Omega_{OD} = \sum_{\nu \neq \nu'} \left\langle |A_{\nu\nu'}|^2 \right\rangle_B, \qquad (14)$$
with $\langle \dots \rangle_B$ representing the integral over the first Brillouin zone. In one dimension (1D), both $\Omega_D$ and $\Omega_{OD}$ can be made strictly vanishing, and the corresponding gauge is called the parallel transport gauge, as the off-diagonal Berry connections $A_{\nu\nu'}$ (with $\nu \neq \nu'$) vanish. This may not be the case in higher dimensions, where usually the minimal value of the spread is finite. We also remark that in general (though in the absence of a formal proof) one may assume that the gauge in which the spread of the Wannier functions is minimal corresponds to the one that provides the best tight-binding approximation of the individual Bloch bands. As anticipated, the use of composite instead of single band transformations is required in the case of a set of almost degenerate bands (well separated from the others), which usually corresponds to having more than one minimum per unit cell. Here we focus our attention on systems whose Wigner-Seitz cell contains two basis points, say $A$ and $B$.
When two Bloch bands are sufficiently separated from the others, the optimal tight-binding expansion for the corresponding sector of the spectrum is achieved by means of the MLWFs for those two Bloch bands, via the gauge transformation in eq. (3). Then, for a two-level system it is customary to write the Hamiltonian density in eq. (7) as
$$h(k) = \begin{pmatrix} E_A + f_A(k) & z(k) \\ z^*(k) & E_B + f_B(k) \end{pmatrix}, \qquad (15)$$
with
$$f_\nu(k) \equiv \sum_{i \neq 0} e^{i k \cdot R_i}\, \langle w_{0\nu}|\hat{H}_0|w_{i\nu}\rangle, \qquad (16)$$
$$z(k) \equiv \sum_i e^{i k \cdot R_i}\, \langle w_{0A}|\hat{H}_0|w_{iB}\rangle, \qquad (17)$$
and $E_\nu \equiv \langle w_{0\nu}|\hat{H}_0|w_{0\nu}\rangle$; the sign convention for the tunneling coefficients is chosen in such a way that, when they are real, they are all positive defined. Above, the index $\nu = 1, 2$ (see eq. (2)) has been traded for $\nu = A, B$, since the associated MLWFs are located around the minima $A$ and $B$. In the following, it is also convenient to fix the arbitrary energy offset such that $E_A = \epsilon$ and $E_B = -\epsilon$. The matrix $h_{\nu\nu'}(k)$ in eq. (15) can also be rewritten in a compact form, by using the basis formed by the $2 \times 2$ identity matrix, $I$, and the three Pauli matrices, $\sigma_i$. One has [48]
$$h(k) = h_0(k)\, I + \mathbf{h}(k) \cdot \boldsymbol{\sigma}, \qquad (18)$$
with $\mathbf{h} \equiv (h_1, h_2, h_3)$ and
$$h_0(k) = \frac{f_A(k) + f_B(k)}{2} \equiv f_+(k), \qquad (19)$$
$$h_1(k) = \mathrm{Re}[z(k)], \qquad (20)$$
$$h_2(k) = -\mathrm{Im}[z(k)], \qquad (21)$$
$$h_3(k) = \epsilon + \frac{f_A(k) - f_B(k)}{2} \equiv \epsilon + f_-(k). \qquad (22)$$
Finally, the tight-binding expansion of the two energy bands under consideration is obtained from the eigenvalues of the matrix (15), namely
$$\varepsilon_\pm(k) = h_0(k) \pm |\mathbf{h}(k)| = f_+(k) \pm \sqrt{\big[\epsilon + f_-(k)\big]^2 + |z(k)|^2}. \qquad (23)$$
This expression can be further simplified when the two minima of type $A$ and $B$ are degenerate ($\epsilon = 0$), so that $f_A(k) = f_B(k) \equiv f(k)$ and
$$\varepsilon_\pm(k) = f(k) \pm |z(k)|. \qquad (24)$$

Numerical implementation. The final step for completing the tight-binding expansion corresponds to determining the specific values of the tight-binding coefficients for a given configuration of the underlying continuous potential. Here we review two independent, complementary approaches. The first one consists in using the definition of the tight-binding coefficients as expectation values of the single-particle Hamiltonian over Wannier states, as in eqs. (16)-(17).
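Before discussing the numerical determination of the coefficients, a quick consistency check of the Pauli decomposition is instructive: diagonalizing $h_0 I + \mathbf{h}\cdot\boldsymbol{\sigma}$ at any quasimomentum must reproduce $\varepsilon_\pm = h_0 \pm |\mathbf{h}|$ of eq. (23). The sketch below uses illustrative values of $h_0$ and $\mathbf{h}$ chosen by us:

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]]),                 # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def two_band_spectrum(h0, h):
    """Eigenvalues of h0*I + h.sigma; analytically these are h0 -/+ |h|."""
    H = h0 * I2 + sum(hi * s for hi, s in zip(h, sigma))
    return np.linalg.eigvalsh(H)   # ascending order

# illustrative components (ours): h0 = f_+, h = (Re z, -Im z, eps + f_-)
h0, h = 0.2, np.array([0.3, -0.1, 0.05])
lo, hi = two_band_spectrum(h0, h)
```

Repeating this at every $k$ point of the Brillouin zone traces out the two tight-binding bands.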
This is an ab initio approach, which requires the determination of the gauge transformation in eq. (3), and gives direct access to the MLWFs as well as to the whole set of tight-binding parameters, at any order of the expansion. In practice, the functional minimization of the spread (see eqs. (13)-(14)) is most conveniently done in k-space. This procedure is implemented in the WANNIER90 software package [40], a powerful tool that is largely employed for computing MLWFs of real condensed matter systems [36]. The required input quantities are the Bloch spectrum and the matrix elements between Bloch eigenfunctions. For the results presented in this review, these quantities have been calculated with a modified version of the QUANTUM-ESPRESSO package [49] adapted for simulating optical lattices [42,43,46]⁴. Though the details of the method will not be covered in this review, as an example here we illustrate the properties of the MLWFs obtained for a honeycomb potential in the presence of time-reversal symmetry breaking, which are complex valued [38,39]; see Figure 1. This specific example will be thoroughly analyzed later on in sect. 4. As shown in the Figure, the MLWFs are localized around the potential minima, and rapidly decay when moving away from their center. When the Hamiltonian is invariant under time reversal, the decay of the tails is exponential (Figure 1a); in this case both the MLWFs and the tunneling coefficients can be chosen real [38,39]. The maximal localization of the MLWFs implies that the tunneling amplitudes associated with the hopping between two lattice sites decrease very fast as their distance is increased, offering an optimal choice for the construction of tight-binding models with a minimal set of tunneling coefficients. In addition, the MLWFs are able to adapt to and capture diverse features of the system Hamiltonian, such as complex deformations of the honeycomb structure (see sect.
4.2) and even the presence of a vector potential that breaks the underlying time-reversal symmetry of the lattice (see sect. 4.1). It is noteworthy that in the latter case the breaking of time-reversal symmetry manifests itself by inducing a finite imaginary part of the MLWFs, as shown in Fig. 1b. Interestingly, this imaginary part is what ultimately determines the topological properties of the system, as will be thoroughly discussed for the particular case of the Haldane model reviewed in sect. 4.1.

The second approach instead makes explicit use of the tight-binding expression of the energy spectrum (see eq. (23)), and consists in writing down analytical expressions for the tight-binding parameters in terms of specific properties of the exact spectrum [46]. In general, the functions $z(k)$ and $f_\nu(k)$ in eqs. (17) and (16) can be expressed as
$$z(k) = \sum_\alpha T^\alpha Z_\alpha(k), \qquad (25)$$
$$f_\nu(k) = \sum_\beta J^\beta_\nu F_\beta(k), \qquad (26)$$
where $Z_\alpha$ and $F_\beta$ are functions of the quasimomentum $k$ that depend only on the geometrical structure of the lattice. Then, from the dispersion relation in eq. (23), it follows that
$$\varepsilon_\pm(k) = \sum_\beta \frac{J^\beta_A + J^\beta_B}{2}\, F_\beta(k) \pm \sqrt{\Big[\epsilon + \sum_\beta \frac{J^\beta_A - J^\beta_B}{2}\, F_\beta(k)\Big]^2 + \Big|\sum_\alpha T^\alpha Z_\alpha(k)\Big|^2}. \qquad (27)$$
Given a certain tight-binding approximation, defined by a finite number of coefficients $T^\alpha$ and $J^\beta_\nu$ (plus $\epsilon$), one can identify a corresponding number of relations evaluated at specific $k$ points, to be inverted in order to express the tight-binding parameters in terms of specific properties of the exact spectrum. Generally, this approach - though it does not give access to the MLWFs - can provide accurate results if the order of the tight-binding expansion is properly chosen, and it has the advantage of requiring minimal computational effort, as the exact Bloch spectrum can be readily computed by means of a standard Fourier decomposition [14]. The two approaches have been explicitly compared for the case of the two-dimensional honeycomb potential discussed in sect. 4, finding remarkable agreement [42,46,47].
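As an illustration of this inversion (our own minimal example, not the parametrization used in the paper), consider a degenerate double well described by only two real tunneling amplitudes, $T$ (intra-cell) and $J$ (inter-cell), for which the two bands take the form $\varepsilon_\pm(k) = E \pm |T + J e^{ikd}|$. Evaluating the bands at $k = 0$ and $k = \pi/d$ gives four relations that can be inverted for $E$, $T$ and $J$:

```python
import numpy as np

def invert_two_band(ep0, em0, eppi, empi):
    """Invert e_pm(k) = E +/- |T + J e^{ikd}| evaluated at kd = 0 and kd = pi;
    assumes real positive T > J, so that |T - J| = T - J."""
    E = (ep0 + em0) / 2
    s = (ep0 - em0) / 2      # = T + J (half the gap at k = 0)
    t = (eppi - empi) / 2    # = T - J (half the gap at k = pi/d)
    return E, (s + t) / 2, (s - t) / 2

# round trip with toy parameters (ours): E = 0.5, T = 0.12, J = 0.04
band = lambda kd, sgn: 0.5 + sgn * abs(0.12 + 0.04 * np.exp(1j * kd))
E, T, J = invert_two_band(band(0, +1), band(0, -1),
                          band(np.pi, +1), band(np.pi, -1))
```

In practice, the band values on the right-hand side would be the exact Bloch energies computed from the continuous potential, so the recovered coefficients define the best tight-binding fit at those $k$ points.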
In order to quantify the degree of accuracy of a given tight-binding model, it is convenient to consider its fidelity in reproducing the exact single-particle Bloch spectrum of the continuous Hamiltonian. This can be measured by the following quantity,
$$\delta\varepsilon_n \equiv \frac{1}{\Delta\varepsilon_n} \sqrt{\frac{d}{2\pi} \int_B dk\, \big[\varepsilon_n(k) - \varepsilon^{tb}_n(k)\big]^2}, \qquad (28)$$
which represents the ratio of the quadratic spread between the exact Bloch spectrum $\varepsilon_n(k)$ and the corresponding tight-binding energies⁵ to the bandwidth $\Delta\varepsilon_n \equiv (\varepsilon^{max}_n - \varepsilon^{min}_n)$. Specific examples will be considered in the following sections.

3 One-dimensional double-well systems

As a specific one-dimensional system, here we shall consider an optical lattice with two wells in the unit cell, described by a potential $V(x)$ of the form
$$V(x) = V_0 \left[ \sin^2(k_L x + \phi_0) + \epsilon\, \sin^2(2 k_L x + \theta_0 + 2\phi_0) \right], \qquad (29)$$
where $V_0$ is the overall amplitude of the potential, $\epsilon$ a dimensionless parameter, $\theta_0$ and $\phi_0$ two arbitrary phases (the latter represents just a rigid shift of the whole potential), and $k_L$ the laser wavevector. When $\epsilon > 0$ the potential has period $d = \pi/k_L$, $V(x + d) = V(x)$. For convenience, here the unit cell is defined as having two (absolute) maxima at the cell borders, and it is centered at $x = 0$ by a suitable choice of the phase $\phi_0$, $x \in [-d/2, d/2]$. Different configurations can be realized by varying $\epsilon$ and the phase $\theta_0$. They can be divided into three classes according to the value of $\theta_0$⁶, as shown in Figure 2:
(a) $\theta_0 = n\pi$ ($n \in \mathbb{Z}$): all the maxima are degenerate and the periodic potential has two classes of parity centers, located at the two (inequivalent) minima;
(b) $\theta_0 \in (0, \pi/2) + n\pi/2$: the unit cell is an asymmetric double well with no symmetry centers;
(c) $\theta_0 = \pi/2 + n\pi$: the unit cell is a symmetric double well, and the potential has two centers of parity placed at the two (inequivalent) maxima.
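A short sketch (our own code, not from the paper) of the potential in eq. (29), which can be used to check the period $d = \pi/k_L$ and, for $\theta_0 = \pi/2$, $\phi_0 = 0$, the parity of the symmetric double-well cell of class (c):

```python
import numpy as np

KL = 1.0                 # laser wavevector (k_L = 1 in the paper's units)
D = np.pi / KL           # lattice period d, valid for eps > 0

def V(x, V0=1.0, eps=2.0, theta0=np.pi / 2, phi0=0.0):
    """Double-well lattice potential of Eq. (29)."""
    return V0 * (np.sin(KL * x + phi0) ** 2
                 + eps * np.sin(2 * KL * x + theta0 + 2 * phi0) ** 2)

# sample one unit cell, centered at x = 0
x = np.linspace(-D / 2, D / 2, 201)
v = V(x)                 # default arguments give class (c)
```

With the defaults ($\theta_0 = \pi/2$, $\phi_0 = 0$) one finds $V(-x) = V(x)$, while a class (b) choice such as $\theta_0 = \pi/4$ breaks this symmetry.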
[Figure 2. The three classes of the potential in eq. (29): (a) two different minima, with the overall potential having two centers of parity, for θ_0 = 0 (φ_0 = π/4); (b) an asymmetric double well, with parity broken globally, here θ_0 = π/4 (φ_0 = π/8); (c) a symmetric double well, θ_0 = π/2 (φ_0 = 0). The black dots in (a), (c) represent the parity centers of the whole periodic potential. Here ϵ = 2. The panels show the potential in units of its amplitude, as a function of x/d.]

Depending on the values of ϵ and θ_0, one can have interesting situations in which the two lowest bands, or the first two excited bands, are almost degenerate, making the composite band approach the optimal choice for defining a set of MLWFs.

The tight-binding Hamiltonian

Here we consider the tight-binding Hamiltonian including all the terms corresponding to nearest-neighboring cells, namely

Ĥ_0 = Σ_{ν=A,B} Σ_j E_ν n̂_{jν} − Σ_{ν=A,B} Σ_j J_ν (â†_{jν} â_{(j+1)ν} + h.c.) − Σ_j [ T_AB â†_{jA} â_{jB} + J_{AB+} â†_{jA} â_{(j+1)B} + J_{AB−} â†_{jA} â_{(j−1)B} + h.c. ], (30)

with the tunneling coefficients as indicated in Figure 3. This model will be referred to as the (single particle) extended tight-binding model. By setting J_ν = 0 = J_{AB+} one recovers the usual nearest-neighbor approximation, commonly used in the literature for both single-well [51] and double-well lattices [52][53][54]. The latter is a reasonable assumption for a single-well lattice in the tight-binding regime [32,55], but may not be fully justified in the range of typical experimental parameters in the double-well case [41]. For this reason, in general it is convenient to consider the extended model in eq. (30). (Footnote 5: here ε_n^tb(k) stands for ε_±(k) in eq. (23). Footnote 6: θ_0 can be restricted to the range [0, π/2] without loss of generality.)

In this case, the diagonal and off-diagonal functions of eqs. (16)-(17) read

ε_ν(k) ≡ E_ν − 2J_ν cos(kd), (31)

z(k) ≡ −(T_AB + J_{AB+} e^{−ikd} + J_{AB−} e^{ikd}). (32)

The corresponding tight-binding spectrum for composite bands, given by eq. (23), can be written as

ε_±^tb(k) = ε̄_+(k) ± sqrt{ ε̄_−²(k) + |z(k)|² }, (33)

where ε̄_±(k) ≡ (ε_A(k) ± ε_B(k))/2.
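The composite-band dispersion of eqs. (31)-(33) is straightforward to evaluate; the following sketch (hypothetical parameter values) checks the limiting case in which only T_AB is nonzero, where the two bands are flat and split by 2 T_AB:

```python
import numpy as np

def eps_tb(k, d, E_A, E_B, J_A, J_B, T_AB, J_ABp, J_ABm):
    """Composite-band spectrum of the extended model, eqs. (31)-(33)."""
    eA = E_A - 2 * J_A * np.cos(k * d)                     # eq. (31)
    eB = E_B - 2 * J_B * np.cos(k * d)
    z = -(T_AB + J_ABp * np.exp(-1j * k * d)
               + J_ABm * np.exp(1j * k * d))               # eq. (32)
    e_bar, e_diff = (eA + eB) / 2, (eA - eB) / 2
    root = np.sqrt(e_diff ** 2 + np.abs(z) ** 2)
    return e_bar - root, e_bar + root                      # eq. (33)

# symmetric cell, only the intra-cell amplitude T_AB = 0.3 kept:
lo, hi = eps_tb(0.7, 1.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0)
print(lo, hi)
```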
In the single band case we simply have [1] ε sb n (k) = E sb n − 2J sb n cos(kd) (34) with (see eq. (11)) E sb n = d 2π B dk ε n (k) , J sb n = − d 2π B dk ε n (k)e ikd ,(35) ε n (k) being the exact Bloch spectrum. Notably, as discussed in sect. 2, these expressions do not depend on the choice of the Wannier basis. Generalized Wannier functions Here we discuss a specific method -complementary to the standard WANNIER90 approach -for constructing the MLWFs of a one-dimensional double-well potential. The method consists in deriving a set of ordinary differential equations (with periodic boundary conditions) for the gauge transformation in eq. (3), by using the expressions for Ω D and Ω OD in terms of the Berry connections in eqs. (13)-(14) [41]. As discussed in sect. 2, in one dimension the spreadΩ can be made strictly vanishing in the parallel transport gauge, where the matrix A nm (k) is diagonal, with the diagonal elements being constant and equal to their mean values. In general, the diagonal and off-diagonal spreads Ω D and Ω OD can be minimized either simultaneously or independently, but the latter is more convenient from the practical point of view. In particular, the gauge transformation can be decomposed as U nm (k) = e iφ n (k) S nm (k)(36) where S nm ∈ S U(2) is a transformation that makes Ω OD vanishing (it also affects the diagonal elements A nn , but this is not relevant at this stage), and exp{iφ n } a diagonal unitary transformation that makes Ω D vanishing without affecting Ω OD . The two transformations are constructed as follows. Let us first consider a diagonal, single band transformation U(n) of the form |u nk → |ũ nk = e iφ n (k) |u nk , with φ n (k) being a real differentiable function of k, such that φ n (k + 2k L ) = φ n (k) + 2π ( integer) in order to have periodic and single valued Bloch eigenstates (here we set = 0, without loss of generality). 
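Eq. (35) states that E and J are just the zeroth and first Fourier components of the exact band; a minimal numerical check (toy band, uniform grid over the Brillouin zone) recovers them exactly:

```python
import numpy as np

d = 1.0
N = 512
# uniform BZ grid without the duplicated endpoint (periodic sampling)
k = np.linspace(-np.pi / d, np.pi / d, N, endpoint=False)
E0, J0 = 0.5, 0.07
eps = E0 - 2 * J0 * np.cos(k * d)        # toy "exact" Bloch band

# eq. (35): E_sb and J_sb as Fourier components of the band
E_sb = np.mean(eps)                      # (d/2pi) * integral of eps
J_sb = -np.mean(eps * np.exp(1j * k * d))
print(E_sb, J_sb.real)
```

Periodic sampling makes the discrete means coincide with the BZ integrals to machine precision here, since the toy band contains only the zeroth and first harmonics.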
Then, since under this transformation (13)) A nn (k) → A nn (k) − ∂ k φ n (k), we have (see eq.Ω Dn →Ω Dn = (A nn − ∂ k φ n − A nn B ) 2 B ,(38) that can be made vanishing by imposing ∂ k φ n = A nn − A nn B .(39) This equation can be readily solved numerically, as discussed in [41]. Notice that Ω OD is not affected by this transformation, as A 12 (k) just acquires a phase factor. Let us now turn to a generic composite band gauge transformation |u nk → |ũ nk = m U nm (k)|u mk(40) with U nm (k + 2k L ) = U nm (k). Then, the generalized Berry potentials transform as A nm →Ã nm = i l U * nl ∂ k U ml + l,l U * nl U ml A ll .(41) At this point, it is convenient to use the decomposition 7 U(k) = z 1 (k) −z * 3 (k) z 3 (k) z * 1 (k) 1 0 0 r(k)(42) with |z 1 | 2 + |z 3 | 2 = 1, r(k) = e iχ(k) , and the parametrization S = e iα σ ·n/2 withn = (cos ϕ sin θ, sin ϕ sin θ, cos θ) and σ i being the Pauli matrices (that is valid for any matrix S ∈ S U(2)). Then, one finds U =        cos α 2 + i sin α 2 cos θ ie i(χ − ϕ) sin θ sin α 2 ie iϕ sin θ sin α 2 e iχ cos α 2 − i sin α 2 cos θ        (43) with χ = χ(k), ϕ = ϕ(k), α = α(k) and θ = θ(k). Since Ω transforms as 7 In general, the group U(N) can be written as a semidirect product S U(N) U(1), with U(1) being subgroup of U(N) consisting of matrices of the form diag(1, 1, . . . , e iχ ). in order to getΩ = 0 one has to impose (see eq. (41)) Ω Dn →Ω Dn = (Ã nn (k) − Ã nn B ) 2 B (44) Ω OD →Ω OD = 2 |Ã 12 | 2 B(45)A nn (k) ≡ i l U * nl ∂ k U nl + l,l U * nl U nl A ll = Ã nn B (46) A 12 (k) ≡ i l U * 1l ∂ k U 2l + l,l U * 1l U 2l A ll = 0.(47) At this point it is worth to notice that the right-hand term in eq. (46) is not known a priori, so that this equation is useless in practice. However, one can still consider eq. (47), that defines a gauge transformation for making Ω OD vanishing. In fact, the spread Ω D can be set to zero afterwords, with a diagonal transformation. By combining eqs. 
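Eq. (39) is an ordinary differential equation for the phase φ_n(k), solvable by direct quadrature; subtracting the zone average of A_nn is precisely what makes φ_n periodic over the Brillouin zone (winding number zero). A sketch with a toy Berry connection (hypothetical profile, not from the references):

```python
import numpy as np

kL = 1.0
N = 2000
h = 2 * kL / N
k = np.linspace(-kL, kL, N + 1)               # one Brillouin zone (d = pi/kL)
A = 0.3 + 0.2 * np.cos(np.pi * k / kL)        # toy Berry connection A_nn(k)

# BZ average <A_nn>_B via the trapezoid rule
Abar = np.sum((A[1:] + A[:-1]) / 2) * h / (2 * kL)

# eq. (39): integrate d(phi_n)/dk = A_nn - <A_nn>_B with phi(-kL) = 0
rhs = A - Abar
phi = np.concatenate(([0.0], np.cumsum((rhs[1:] + rhs[:-1]) / 2 * h)))
print(Abar, phi[-1])                          # phi is periodic: phi(kL) = phi(-kL)
```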
(43) and (47), one gets a system of four differential equations for α, θ, ϕ, χ, whose normal form 8 is ∂ k α 2 = − cos 2θ sin θ A R 12 cos η + A I 12 sin η (48) − cotg α 2 cotgθ A R 12 sin η − A I 12 cos η + cos θ(A 11 − A 22 ) ∂ k θ = cos θ sin α sin 2 (α/2) (A R 12 cos η + A I 12 sin η) (49) + cos α sin 2 (α/2) (A R 12 sin η − A I 12 cos η) − cotg α 2 sin θ(A 11 − A 22 ) ∂ k χ = 0 , ∂ k ϕ = 0(50) where we have defined η ≡ ϕ − χ. Form eq. (50) it follows ∂ k η = 0, so that the gauge transformation is determined only by the difference η = ϕ 0 − χ 0 , that plays the role of a constant parameter. Then, one can safely pose χ 0 ≡ 0, η = ϕ 0 , without loss of generality. This simplifies the expression (42), r(k) ≡ 1, so that only the S U(2) component S nm survives. Finally, we remind that single band MLWFs can be obtained by using just the diagonal gauge transformation, and correspond to the exponentially decaying Wannier functions discussed by Kohn [31,37]. Both gauge transformations can be solved by using the representation of Bloch functions in k-space; the reader is referred to Ref. [41] for the details of the numerical implementation. Tunneling coefficients By using eqs. (3), (16)-(17), and (36), the onsite energies and the tunneling coefficients can be conveniently expressed as E ν = d 2π B dk 2 m=1 |S νm (k)| 2 ε m (k) (51) J ν = − d 2π B dke −ikd 2 m=1 |S νm (k)| 2 ε m (k)(52)T AB = − d 2π B dk e i∆φ(k) 2 m=1 S * 1m (k)S 2m (k)ε m (k)(53)J AB ± = − d 2π B dk e i(∆φ(k)∓kd) 2 m=1 S * 1m (k)S 2m (k)ε m (k) (54) with ∆φ(k) = φ 2 (k) − φ 1 (k). Notice that the terms E ν and J ν , that involve sites of the same types, depend just on the S nm transformation. Instead, those connecting sites of type A and B, namely T AB and J AB ± , also depend on the diagonal transformation. The above formulas can be used also in the single band case, by replacing S nm with δ nm . 
Remarkably, in this case the tunneling amplitudes between A and B sites are vanishing at any order (T AB , J AB ± , and so on) owing to the orthogonality of states belonging to different Bloch bands. Therefore, the description in terms of single band MLWFs contains just hopping terms between homologous sites (either of type A or B) belonging to different cells, and cannot be used to describe hopping between sites of type A and B. A number of specific applications of the present formalism are presented in the following section. Applications A conventional case As a first example, we consider the case in which the two lowest bands are almost degenerate, as discussed in Ref. [41]; here we fix V 0 = 10E R . In order to characterize the band structure one can consider the quantity R ≡ δ 12 /δ 23 , with δ 12 (δ 23 ) being the band gap between the first and second (second and third) band. Its behavior as a function of θ 0 and is shown as a density plot in Figure 4. In general, the composite band approach provides an optimal basis of MLWFs up to R ≈ 1 [41]. MLWFs. A comparison between the single and composite band MLWFs is shown in Figures 5,6, for = 2. Figure 5 refers to a symmetric double well, θ 0 = π/2. In this case, the single band MLWFs have the same symmetry of the potential [31], and they occupy both wells of the unit cell. Instead, each of the composite band MLWFs nicely localizes in one of the sub-wells. For this symmetric case, a reasonable estimate for the bulk properties of the composite band MLWFs, and for the nearest-neighbor tunneling coefficients, can also be obtained by considering symmetric and antisymmetric combinations of the single band MLWFs. This approach is particularly effective when each unit cell can be regarded as a single double-well, namely when there are large barriers at the cell borders. However, since the tails of the proper MLWFs decay much faster, this approximation fails in reproducing higher order tunneling coefficients. 
In fact, it can be proved analytically that one would get J_{AB−} = J_{AB+}, which is manifestly incorrect (see Figure 3). Figure 6 shows the case of an asymmetric double well, for θ_0 = 0.2π. Here the gap between the first two bands is of the order of the one between the second and third bands. In this case even the single band MLWFs are almost localized within a single well. However, we recall that they cannot be used to describe hopping between sites of type A and B, as discussed in the previous section. We also remark that the exponential decay of the Wannier functions for a given band is controlled by a parameter of the order of the smallest band gap separating that band from the neighboring ones, see Refs. [24,31,32,56-59]. For the present situation, the relevant (minimal) gap in the single band picture is that between the two bands, and this explains why this approach fails when the two bands are close to each other (the gap vanishes in the degenerate limit). Instead, the localization properties of the composite band MLWFs are controlled by the gaps with the outer bands, making their use the right choice when the internal gap vanishes.

Tunneling coefficients. Let us now turn to the tunneling coefficients. They have a weak dependence on θ_0, the only notable effects being that for θ_0 = 0 (where all the maxima are degenerate) T_AB = J_{AB−}, whereas at θ_0 = π/2 we have J_A = J_B [41]. Their behavior as a function of ϵ is shown in Figure 7, for θ_0 = π/2. This figure reveals that the nearest-neighbor approximation improves when ϵ is increased, as one may expect owing to the increased localization of the MLWFs.

Accuracy. As anticipated, the accuracy of a given tight-binding approximation can be measured by the average energy mismatch δε_n defined in eq. (28). This quantity is shown in Figures 8a,b as a function of θ_0 (at ϵ = 2) and of ϵ (at θ_0 = π/2), respectively.
These Figures reveal that in the regime R 1 (cfr. Figure 4) the extended tight-binding model of eq. (30) reproduces the exact energies with great accuracy. Instead, the commonly used nearest neighbor model is less accurate, and its use may not always be justified. "s-p" resonance Another interesting situation emerges when the first two excited bands become resonant, a situation that occurs when the p-like orbital in the deepest well and the s-like orbital in the other one are almost degenerate [45,60]. This can be realized e.g. with θ 0 = 0 and V 0 = 32E R , for which the s − p resonance occurs at = 2 [45]. This configuration is also relevant for the realization of an effective Dirac dynamics [44,61,62], that will be considered in the following section. We recall that for θ 0 = 0 all the maxima are degenerate, so that T AB = J AB− . Moreover, in this regime the term J AB+ can be safely neglected (for 1.5, see later on). For convenience, here it is also natural to change the notation from A, B to s, p, as indicated in Figure 9. MLWFs. An example of the shape of the MLWFs below and above the resonance is shown in Figures 10, 11 for = 1.5 and = 3 respectively. As anticipated, the ML-WFs have the form of an s-like state in the shallow well, and a p-like state in the deeper one. At = 2 the energies of the two states become degenerate, see Figure 12a. It is also interesting to note that, at resonance, almost optimally localized states can be built from a simple analytic approach that captures the essential features of the system and leads to simple analytic expressions for the tunneling coefficients, see Ref. [45]. Tunneling coefficients. The behavior of the tunneling amplitudes as a function of is shown Figure 12b. They present a monotonic decrease for increasing , as a consequence of the deepening of the potential wells. Notice that the degeneracy between J s and J p takes place below the resonance, at = 1.5 [45]. 
This figure also shows the behavior of the tunneling coefficients J s and J p obtained from the single band approach (thin lines), for comparison. We recall that the nearest neighbors tunneling amplitude J -that is the dominant term in the composite band approach (blue dotted line in the Figure) -is strictly vanishing in this case, i.e. J ≡ 0. In fact, the single band approach is expected to be reliable only far from the resonance, where it may be convenient to adopt a picture with the two sublattices s and p being completely de-coupled (except for the effect of external forces or interaction terms). Accuracy. The accuracy in reproducing the single particle spectrum of the single and composite band approaches are compared in Figure 13, where the energy mismatch δε n (see eq. 28) is shown as a function of . The accuracy of the composite-band approach increases monotonically with , and provides a very good approximation starting from ≈ 1.5. For lower values, additional tunneling terms, or even a different band mixing (the fourth band approaches to the third one), may become necessary. As for the singleband approach, it fails in the resonance region, as expected. Remarkably, close to = 3 both the single and composite band approaches provide an accurate description of the system, even though they correspond to very different pictorial representations. Semiclassical dynamics The Wannier functions play a relevant role also in the derivation of effective dynamical equations in the semiclassical regime, obtained by a corse-graining procedure [63]. To illustrate this, let us consider the case of a single particle in the presence of a periodic potential V L (x) (of period d) and an additional slowly varying potential V(x). The Schrödinger equation reads i ∂ t Ψ(x, t) = [H L (x) + V(x)] Ψ(x, t),(55) where H L (x) = −( 2 /2M)∇ 2 + V L (x) is the unperturbed lattice Hamiltonian, whose eigenvectors are Bloch functions ψ n (k, x) = e ikx u n (k, x) ≡ x|n, k . 
The above equation can be mapped onto quasimomentum space as (see e.g. [63,64]) i ∂ t ϕ n (k, t) = E n (k)ϕ n (k, t) + n k n, k|V|n , k ϕ n (k , t)(56) where ϕ n (k, t) represent the expansion coefficients of a generic wave-packet Ψ(x, t) on the Bloch basis, namely Ψ(x, t) = n k ϕ n (k, t)ψ n (k, x), and k runs over the first Brillouin zone (the dependence on t will be omitted in the following). The above equation can be written in vectorial form as i ∂ t ϕ(k) = H L (k)ϕ(k) + k Ṽ (k, k )ϕ(k ),(57) with H L (k) = E n (k)δ nn ,Ṽ(k, k ) = n, k|V|n , k . Let us now consider a subset of two bands. In the following section we shall discuss an effective Dirac dynamics, for which it is convenient to introduce a S O(2) rotation R(θ(k)) [61] R(θ(k)) = cos θ(k) − sin θ(k) sin θ(k) cos θ(k) , where θ(k) will be specified later on. Then, eq. (57) can be written as (59) with ϕ = Rϕ, and i ∂ t ϕ (k) = H L (k)ϕ (k) + k R(k)Ṽ(k, k )R T (k )ϕ (k )H L (k) = R(k)H L (k)R T (k) = c k −mc 2 −mc 2 −c k .(60) Eq. (59) can be transformed back in coordinate space by projection on a basis of Wannier functions, as discussed in the following. We recall that a generic wave packet Ψ(x) can be expanded as Ψ(x) = n,i χ n (R i )w n (x − R i ), where the amplitudes χ n (R i ) can be obtained from the Bloch coefficients by a simple Fourier transform 9 χ n (R i ) = d 2π k ϕ n (k)e ikR i .(61) When the Wannier functions in the rotated basis are sufficiently localized in each cell, the rotated amplitudes χ n (R i ) play the role of envelope functions associated to the site R i , corresponding to a corse graining on the scale of a single cell [63,65]. In general, the coefficients χ (R i ) can be supposed to be differentiable functions of R i , the latter being considered as a continuous variable. This holds when χ (R i ) is slowly varying on the scale of the lattice period, namely in case of a "smooth" wave packet. 
Then, by using the properties of the Fourier transform [63,65], the Hamiltonian H L in coordinate space can be obtained by the replacement k → −i∇ R i , so that eq. (59) can be mapped in coordinate space as i ∂ t χ (R i ) = H L (−i∇ R i )χ (R i )(62)+ j k k e ik·R i R(k)Ṽ(k, k )R T (k )e −ik ·R j χ (R j ). In addition, it is easy to show that k k R(k)Ṽ(k, k )R T (k )e ik·R i e −ik ·R j nn = x w n * (x − R i )V(x)w n (x − R j ) ≡ V i j nn ,(63) yielding i ∂ t χ (R i ) = H L (−i∇)χ (R i ) + j V i j χ (R j ).(64) Moreover, if the potential V(x) is slowly varying on the lattice scale, one has V i j nn ≈ V(R i )δ nn δ i j ,(65) so that we eventually get (R i → x) i ∂ t χ (x) = H L (−i∇) + V(x) χ (x).(66) In order to check how the approximation (65) behaves under rotation, one can perform a series expansion around x = R j , yielding the following result for the first order correction [45] δV ( ) nn (R j ) E R ≈ d E R ∂V ∂x R j · 1 d x ( ) nn − R j δ nn δ 0 ,(67) with l = j − j. The term (d/E R )(∂V/∂x)| R j represents the variation of the potential on the scale of the lattice spacing d, divided by the characteristic energy scale E R of the lattice. Since this term is small under the assumption of a slowly varying potential, one has to check that the remaining term, ∆ ( ) nn ≡ ( x ( ) nn −R j δ nn δ 0 )/d, is sufficiently smaller than unity. In principle, this condition is expected to be satisfied when the Wannier functions are sufficiently localized within each lattice cell. At this point we remark that in the presence of a constraint fixing the mixing angle θ(k), the general approach for defining the MLWFs cannot be applied. In fact, in this case the only freedom left is the choice of the phases of the single band Bloch functions before the rotation, that should be determined in order to minimize the diagonal 10 spreadΩ D in eq. (46), for a gauge transformation of the form U(k) = R(θ(k)) × diag(e iφ 1 (k) , e iφ 2 (k) ). 
Unfortunately, in general this results in complicated integro-differential expression, whose solution is not viable. So, a minimal approach that one may adopt consists in starting with the single band MLWFs for the original Bloch bands, and verify that the rotation (58) does not affect substantially their localization properties. Effective Dirac equation. As an application, here we consider the case of an s − p resonance between the second and third Bloch band, that gives rise to an effective Dirac dynamics owing to the "relativistic" form of the dispersion relation around k = 0, E ± (k) = ± m 2 c 4 + c 2 ( k) 2 [44,61,62]. By choosing the rotation angle as 11 tan θ(k) = − mc 2 c k + m 2 c 4 + c 2 ( k) 2(68) and applying the U(2) transformation U = 1 √ 2 1 −1 1 1(69) to the vector χ , ψ ≡ Uχ , eq. (66) becomes i ∂ t ψ(x) = V(x) + mc 2 cp cp V(x) − mc 2 ψ(x),(70) corresponding to the canonical form of the Dirac equation in 1 + 1 dimensions, in the presence of a scalar potential V(x) [61]. The case of Ref. [62]. As a specific example we consider the case of the experiment in Ref. [62], namely V 0 = −5E R , = 1.6. The single-band MLWFs, and the corresponding rotated MLWFs 12 are shown in Figs. 14, for different values of the angle φ. This Figure shows that the rotation does not affect dramatically their localization properties, the rotated Wannier functions having a behavior similar to the MLWFs for the original Bloch band. As a matter of fact, though the two sets of Wannier functions have a different microscopic structure, the corresponding values of the participation ratio P = d dx|w n | 4 −1 -that measure of the extent of the Wannier functions w n (x), in units of the lattice period d -do not differ too much. Finally, for the "slowly varying" potential used in Refs. [61,62], V(x) = V 0 exp[−2(x/x 0 ) 2 ] − F x, it is possible to verify that |∆ ( ) nn | < 0.5 for = 0, ±1, ±2 for all the cases in Figs. 14. 
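The algebra behind eqs. (58), (60) and (68) can be verified directly: rotating the diagonal two-band Bloch Hamiltonian diag(E_+, E_−), with E_± = ±sqrt(m²c⁴ + c²ℏ²k²), by the angle of eq. (68) produces exactly the 2x2 Dirac form. A sketch in hypothetical units with ℏ = c = 1 and arbitrary test values of m and k:

```python
import numpy as np

# Check that the rotation angle of eq. (68) brings diag(E_+, E_-) to the
# Dirac form of eq. (60). Units: hbar = c = 1 (hypothetical test values).
m, k = 0.4, 0.9
E = np.sqrt(m**2 + k**2)                 # relativistic dispersion, E_pm = +/- E
theta = np.arctan2(-m, k + E)            # eq. (68): tan(theta) = -m / (k + E)
c_, s_ = np.cos(theta), np.sin(theta)
R = np.array([[c_, -s_], [s_, c_]])      # SO(2) rotation, eq. (58)
H = R @ np.diag([E, -E]) @ R.T           # rotated Hamiltonian, eq. (60)
print(H)                                 # expected form: [[k, -m], [-m, -k]]
```

The identity follows from cos 2θ = k/E and sin 2θ = −m/E, which eq. (68) encodes via the half-angle formula.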
Two dimensional honeycomb lattices Ultracold atoms in honeycomb optical lattices have been the subject of intense research activity in recent years, due to their analogies with graphene, and the possibility of simulating the physics of Dirac points [ V L (r) =2sE R cos [(b 1 − b 2 ) · r](71)+ cos b 1 · r − π 3 χ A + cos (b 2 · r) with r = (x, y), b 1/2 = ( √ 3k L /2)( √ 3e x ∓ e y ), and χ A being a phase associated to the breaking of parity. The dimensionless parameter s represents the amplitude of the potential in units of the recoil energy E R . This potential is characterized by two minima per unit cell, arranged at the vertices of a regular honeycomb, see Figure 15a. The Bravais lattice is generated by the two basis vectors a 1/2 = (2π/3k L )(e x , ∓ √ 3e y ), obeying a i · b j = 2πδ i j , with a diamond-shaped elementary cell with basis A and B, as shown in Figure 16. The vectors b 1/2 generate the corresponding reciprocal space, whose first Brillouin zone is a regular hexagon as well. When χ A = 0 the two minima are degenerate, and the spectrum is characterized by Dirac points at the six vertices k D of the Brillouin zone, where the two lowest bands E ± (k) are degenerate, and their local dispersion is linear (corresponding to relativistic particles with vanishing mass) [14], see Figure 15b. These points are defined by z(k D ) = 0 (see eq. (24)) 13 and come in pairs in the presence of time-reversal invariance, that implies z * (k D ) = z(−k D ) [20,68,69]. 3k L ). The system is invariant under discrete translations generated by the Bravais vectors a 1/2 and under rotations by θ = 2π/3 radians around any vertex of the lattice. The former implies that next-to-nearest tunneling amplitudes t 1 along the same direction are conjugate pairs (solid and dashed lines); from the latter follows the equivalence of the hopping amplitudes separated by 2π/3 radians. 
When the sites A and B are degenerate, the system is also invariant under rotations by π radians around the center of any elementary cell; this implies that t 0 is real. The tight-binding model. When one considers MLWFs as basis functions, one can restrict the tight-binding expansion up to next-to-nearest neighbors, corresponding to the tunneling coefficients t 0 and t 1 shown in Figure 16. In fact, the analysis of Ref. [42] proves that the successive terms are neg-ligible for s > 3. The tight-binding Hamiltonian readŝ H 0 = ν=A,B j E νn jν +t 0 j â † jAâ jB + h.c. +t 1 ν=A,B j, j â † jνâ j ν( 72) where both t 0 and t 1 can be chosen to be real [46,47]. From the general definitions in eqs. (16), (17), we have f (k) = t 1         2 cos (k · (a 1 + a 2 )) + 2 i=1,2 cos (k · a i )         ≡ t 1 F(k) (73) z(k) = t 0 1 + e ik·a 1 + e −ik·a 2 ≡ t 0 Z(k).(74) For the specific case of degenerate minima (the general case will be considered in the next section) the expression for the tight-binding spectrum follows from eq. (23) ± (k) = t 1 F(k) ± |t 0 Z(k)|.(75) Then, it is possible to express t 0 and t 1 as t 0 = (¯ + (0) −¯ − (0))/6 (76) t 1 = (¯ + (0) +¯ − (0))/18,(77) where the value of¯ + (0 = 0) can be easily extracted from the numerical computation of the Bloch spectrum at k = 0. This is an extremely effective method, that provides a remarkable agreement with the prediction of the ab-initio approach based on the MLWFs, as discussed at the end of sect. 2. In Figure 7 we plot the tunneling coefficients as a function of the lattice amplitude 14 s. Their behavior can be modeled with an analytic expression of the form t i /E R = As α e −β √ s (i = 0, 1, 2), where A, α, and β are parameters to be extracted from a numerical fit. One has [42] t 0 /E R = 1.16s 0.95 e −1.634 √ s ,(78)t 1 /E R = 0.78s 1.85 e −3.404 √ s ,(79) corresponding to the lines in Figure 17. As shown in Ref. 
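The spectrum of eq. (75), together with the fitted amplitudes of eqs. (78)-(79), can be sketched in a few lines; the check below verifies that the two bands touch at the Dirac point k_D = (k_L, 0), where Z vanishes, and that the gap at k = 0 equals 6 t_0 as implied by eq. (76) (lattice depth s = 5 chosen arbitrarily):

```python
import numpy as np

kL, s = 1.0, 5.0
# fitted tunneling amplitudes, eqs. (78)-(79), in units of E_R
t0 = 1.16 * s**0.95 * np.exp(-1.634 * np.sqrt(s))
t1 = 0.78 * s**1.85 * np.exp(-3.404 * np.sqrt(s))

# Bravais vectors of the honeycomb lattice
a1 = (2 * np.pi / (3 * kL)) * np.array([1.0, -np.sqrt(3)])
a2 = (2 * np.pi / (3 * kL)) * np.array([1.0, +np.sqrt(3)])

def bands(k):
    F = 2 * np.cos(k @ (a1 + a2)) + 2 * (np.cos(k @ a1) + np.cos(k @ a2))  # eq. (73)
    Z = 1 + np.exp(1j * (k @ a1)) + np.exp(-1j * (k @ a2))                 # eq. (74)
    return t1 * F - t0 * abs(Z), t1 * F + t0 * abs(Z)                      # eq. (75)

em_D, ep_D = bands(np.array([kL, 0.0]))   # Dirac point: Z(k_D) = 0
em_0, ep_0 = bands(np.array([0.0, 0.0]))  # Gamma point: gap = 6 * t0
print(ep_D - em_D, (ep_0 - em_0) / 6)
```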
[42] these estimates permit to reproduce the exact spectrum with great accuracy 15 , with the energy mismatch δε n in eq. (75) being well below 1% for s > 3. The Haldane model By adding a microscopic vector potential A to the configuration considered in the previous case, it is possible to realize the celebrated Haldane model [29,48]. For such purpose, the vector potential must have the same periodicity of the main lattice and zero magnetic flux through each unit cell [29]. This model, originally proposed for electrons in a two-dimensional crystal lattice, describes a Chern insulator [72], characterized by the presence of the quantum Hall effect (QHE) [73] in the absence of a macroscopic magnetic field. In this case, the tunneling amplitude t 1 acquires a complex phase due to the breaking of time-reversal symmetry. Interestingly, an effective experimental realization of the Haldane model has been recently reported in [74]. In order to derive the Haldane model from first principles [46,47], let us consider the minimal-coupling Hamilto-nianĤ 0 = 1 2m p − A(r) 2 + V L (r)(80) with V L (r) being the honeycomb potential in eq. (71). The vector potential A(r) is assumed to have the same periodicity of V L (r), with the flux across the unit cell of the corresponding magnetic field B = ∇ × A being null. As a specific realization, we adopt the Coulomb gauge, ∇ · A(r) = 0, and consider the case of Refs. [48,70] A(r) = αk L sin((b 2 − b 1 )· r) + 1 2 sin(b 2 · r) − 1 2 sin(b 1 · r) e x − √ 3 2 (sin (b 1 · r) + sin (b 2 · r)) e y       ,(81) shown in Figure 16. The parameter α represents the amplitude of the vector potential in units of k L . The tight-binding model. The Haldane model is obtained by considering a tight-binding expansion up to nextto-nearest neighbors, as in the previous section. Even in the presence of the vector potential, the nearest-neighboring tunneling coefficients are all equal and can be chosen to be real 16 . 
Instead, the next-to-nearest tunneling coefficients acquire a complex phase, one for each site type (ν = A, B), namely t ± 1ν = |t 1ν |e ±iϕ ν ( j = 1, 2, 3). These properties follow from the symmetries of the microscopic Hamiltonian, see Figure 16, and no additional hypothesis (like the use of the Peierls substitution [29,48]) are required [46,47]. Then, the expression for Z(k) is that of eq. (74), whereas F ν (k) becomes F ν (k) = 2 cos k·(a 1 + a 2 ) + ϕ ν + 2 i=1,2 cos (k· a i − ϕ ν ) .(82) In addition, in order to recover the original model proposed by Haldane [29], some approximations are needed [47]. In particular, one should pose |t 1A | = |t 1B | ≡ |t 1 |, and ϕ A = −ϕ B ≡ ϕ. Both assumptions are reasonable in the tightbinding regime, as we shall see later on. Dirac Points. As discussed in the previous section, for α = 0 and χ A = 0 the two lowest energy bands are degenerate at the Dirac points, located at k ± D = ±(1, 0)k L 17 . In general, when the time-reversal or the inversion symmetry are broken (α 0, χ A 0) two inequivalent gaps open at k + D and k − D , see eq. (23) δ ± = 2|h 3 (k ± D )| = 2 ± 3 √ 3|t 1 | sin ϕ .(83) For certain values of α and χ A one of the two gaps may close again, namely when h(k ± D ; α, χ A ) = 0 (see eq. (23)). This relation identifies the boundary between the normal and topological insulator phases [29], as discussed later on in sect. 4.1.3. Tight-binding parameters As anticipated in sect. 2, it is possible to derive a closed set of analytical expressions in terms of specific properties of the spectrum, namely the gaps at the Dirac points δ ± in eq. (83), and the following bandwidths [47] ∆ ± + = +[ + (0) − + (k ± D )],(84)∆ ± − = −[ − (0) − − (k ± D )].(85) Then, by considering e.g. 
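The gap formula of eq. (83) directly encodes the Haldane phase boundary: the system is a Chern insulator as long as the sublattice offset is smaller in magnitude than 3√3 |t_1 sin ϕ|, i.e. before either Dirac gap closes and reopens. A minimal sketch (arbitrary test values for the parameters):

```python
import numpy as np

def dirac_gaps(off, t1, phi):
    """Gaps at the two Dirac points k_D^+/-, eq. (83); `off` is the
    sublattice on-site offset."""
    g = 3 * np.sqrt(3) * abs(t1) * np.sin(phi)
    return 2 * abs(off + g), 2 * abs(off - g)

def is_topological(off, t1, phi):
    # Haldane criterion: Chern insulator when |off| < 3*sqrt(3)*|t1*sin(phi)|
    return abs(off) < 3 * np.sqrt(3) * abs(t1 * np.sin(phi))

print(dirac_gaps(0.1, 0.05, 0.0))
print(is_topological(0.0, 0.05, 0.3), is_topological(1.0, 0.05, 0.3))
```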
ϵ, ϕ ≥ 0 (footnote 18), one has

ϵ = (δ_+ ± δ_−)/4, (86)

t_0 = (1/6) sqrt{ (∆_+^+ + ∆_−^+ + δ_+)² − (δ_+ ± δ_−)²/4 }, (87)

|t_1| = (1/18) sqrt{ (∆_+^+ − ∆_−^+)² + (3/4)(δ_+ ∓ δ_−)² }, (88)

ϕ = tan^{−1}[ (√3/2)(δ_+ ∓ δ_−)/(∆_+^+ − ∆_−^+) ], (89)

where the upper and lower signs refer to the normal and topological insulator phases, respectively. The behavior of the various tight-binding parameters as a function of the intensity α of the vector potential, and of the lattice amplitude s, will be discussed in the following. Three regimes can be identified, according to the symmetries of the system.

Parity conserving, time-reversal breaking case (α ≠ 0, χ_A = 0) (footnote 19). In this regime we have ϵ = 0, so that the energy gaps in eq. (83) become degenerate, δ_+ = δ_− ≡ δ_D, and the four bandwidths in eqs. (84)-(85) merge into two, namely ∆_+^+ = ∆_+^− ≡ ∆_+ and ∆_−^+ = ∆_−^− ≡ ∆_−. The behavior of the tight-binding parameters as a function of the amplitude α of the vector potential is shown in Figure 19 for different values of the lattice amplitude s. From these figures we can identify two regimes: (i) α ≲ 0.5, where t_0 and |t_1| are almost constant and the phase ϕ is linear in α; (ii) α ≳ 0.5, with the phase ϕ deviating from the linear behavior, and the tunneling amplitudes t_0 and |t_1| being largely suppressed.

Time-reversal conserving, parity breaking case (χ_A ≠ 0, α = 0). This situation corresponds to a honeycomb lattice with non-degenerate minima, with the energy gaps at the Dirac points still being degenerate, so that there are only two bandwidths, ∆_±, as in the previous case. Notably, now we have ϕ_A = ϕ_B = ϕ = 0, implying that the system behaves as a normal insulator. The behavior of ϵ, t_0 and |t_1| as a function of χ_A is shown in Figure 20. Notice that both tunneling coefficients t_0 and |t_1| are barely affected by parity breaking in this range of parameters.
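A forward/inverse consistency check of this strategy can be sketched numerically: build a Haldane-type Bloch Hamiltonian from assumed tight-binding parameters, read off the two Dirac-point gaps, and recover the sublattice offset via eq. (86). The parameter values below are hypothetical (chosen in the normal phase, so the '+' sign applies), and the gauge conventions for z(k) and F_ν(k) follow eqs. (74) and (82):

```python
import numpy as np

t0, t1, phi, off = 1.0, 0.1, 0.4, 0.5     # off > 3*sqrt(3)*t1*sin(phi): normal phase
a1 = (2 * np.pi / 3) * np.array([1.0, -np.sqrt(3)])
a2 = (2 * np.pi / 3) * np.array([1.0, +np.sqrt(3)])

def h(k):
    """Two-band Bloch Hamiltonian with phases phi_A = -phi_B = phi, eq. (82)."""
    z = t0 * (1 + np.exp(1j * (k @ a1)) + np.exp(-1j * (k @ a2)))
    FA = 2 * np.cos(k @ (a1 + a2) + phi) + 2 * np.cos(k @ a1 - phi) + 2 * np.cos(k @ a2 - phi)
    FB = 2 * np.cos(k @ (a1 + a2) - phi) + 2 * np.cos(k @ a1 + phi) + 2 * np.cos(k @ a2 + phi)
    return np.array([[off + t1 * FA, z], [np.conj(z), -off + t1 * FB]])

def gap(k):
    w = np.linalg.eigvalsh(h(k))
    return w[1] - w[0]

d_plus = gap(np.array([1.0, 0.0]))        # Dirac point k_D^+
d_minus = gap(np.array([-1.0, 0.0]))      # Dirac point k_D^-
off_rec = (d_plus + d_minus) / 4          # eq. (86), '+' sign (normal phase)
print(off_rec)
```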
We also remarks that, though in principle the degeneracy between |t 1A | and |t 1B | is formally broken, their difference is actually negligible (see later on). General case (α 0, χ A 0). This is the most interesting case, in which both time-reversal and inversion symmetry are broken. The corresponding tight-binding parameters are shown in Figure 21 as a function of α, for s = 5 and χ A = 0.001. It is interesting to note that the two set of solutions corresponding to the two sign choices in eqs. (86)(87)(88)(89) connect smoothly across the boundary between normal and topological insulator regimes. Finally, let us comment on the approximations employed in deriving the Haldane model. We remind that the Haldane model relies on the following approximations: |t 1A | = |t 1B | ≡ |t 1 |, ϕ A = −ϕ B ≡ ϕ, that in principle are not consistent with the broken degeneracy between sites A and B in the presence of parity breaking. Their accuracy can be checked by using the ab initio values of those terms [47]. This is done in Fig. 22, where we compare the relative deviations from the average values of the phase, ∆ ϕ ≡ 1 − ϕ A,B /ϕ, and of the magnitude of the next-to-nearest tunneling coefficient, ∆ t 1 ≡ 1−|t 1A,B |/|t 1 |, for χ A = 0.001, s = 5. This figure demonstrates that the maximum relative deviation in both cases is below ∼ 1%. It can be verified that this holds for all values of s and χ A considered here, thus justifying the assumptions of the Haldane model in the whole range of parameters. Breakdown of the Peierls substitution Remarkably, the behavior of the tunneling coefficients as a function of the amplitude α of the vector potential, shown e.g. in Figure 19a, is dramatically different from that dictated by the so-called Peierls substitution, a widely employed approximation for describing tight-binding electrons in the presence of a slowly varying external vector field. 
We recall that in the tight-binding formulation the Peierls substitution consists in replacing the tunneling coefficients t_ij with t_ij exp{ie ∫ᵢʲ A·dr} [48,79], with the integral to be evaluated along the straight path connecting sites i and j [29,80]. Its formal demonstration requires the hypothesis of a same-site, same-orbital interaction with the vector field, ⟨w_jν|A(r)|w_j′ν′⟩ = A(R_jν) ⟨w_jν|w_j′ν′⟩ [80], corresponding to a vector potential varying on a length scale that is much larger than the lattice spacing. Though this condition is explicitly violated in the Haldane model (the vector field A(r) has the same periodicity as the underlying lattice), this point is often underrated in the literature, see e.g. [29,48]. In fact, the explicit failure of the Peierls substitution has been reported only recently [46] (see also [78,81] for what concerns the semiclassical approach). Actually, Figure 19a shows that the value of the phase obtained from the Peierls substitution, ϕ_P ≡ ∫_{r_A−a₁}^{r_A} A·dr = (2π/√3)α [48], differs by more than one order of magnitude from the actual values, even in the regime of low vector potential amplitude (α < 0.5), where the calculated phase is also linear. Moreover, ϕ_P does not account for the dependence on the amplitude s of the scalar potential, which is appreciable even in the full tight-binding regime [46], nor for the fact that, away from the linear regime, both tunneling coefficients t₀ and t₁ are strongly suppressed and the phase ϕ deviates from the linear behavior. This originates from the usual implicit assumption that the basis of localized orbitals is not affected by the vector potential (see e.g. [80]), whereas in fact the presence of the vector potential may significantly affect both the Bloch eigenfunctions ψ_mk [31] and the gauge transformation U_νm entering eq. (3) [78].
Remarkably, the fact that the phase ϕ is limited by a maximal value implies that ϕ can only access a restricted range of values, so that only a limited portion of the nominal phase diagram can be physically accessed [47].

Topological phase diagram

The Haldane model is characterized by different insulating phases, associated with the values of the Chern number (or topological index) [82]

C = (i/2π) ∫_BZ d²k Σ^occ_ν ⟨∂_k u_νk| × |∂_k u_νk⟩ , (90)

where u_νk(r) = e^{−ik·r} ψ_νk(r) are the periodic parts of the Bloch eigenfunctions, the sum over ν being restricted only to occupied bands (since the Haldane model consists of just a valence and a conduction band, only the lowest band enters the sum over occupied states). C takes only integer values, and it is non-vanishing for a topological insulator. The structure of the phase diagram is intimately connected to the presence of the gaps at the Dirac points. When only time-reversal symmetry is broken (α ≠ 0, χ_A = 0), the gap at the Dirac points is always finite, and the system is in a topological insulating state (C ≠ 0). On the other hand, if a gap is opened solely by inversion symmetry breaking, the state of the system is topologically trivial (C = 0). When both symmetries are broken, the behavior of the system depends on the relative strength of the inversion and time-reversal symmetry breaking. In particular, when χ_A is relatively small, the gap δ₋ (the roles of δ₊ and δ₋ being exchanged for α < 0) vanishes for two different values of α, as shown in Figure 23a. In between these two values (grey shaded area in the figures) the state of the system corresponds to a topological insulator (C = 1); this phase shrinks and eventually disappears by increasing χ_A, see Figure 23b. The phase diagram is traditionally drawn as a function of ϕ and ε/|t₁| [29,48], with the boundary between normal and topological insulator phases corresponding to the vanishing of the gap at one of the two inequivalent Dirac points in eq. (83), namely ε/|t₁| = ±3√3 sin ϕ. The original formulation of the model is obtained by means of the Peierls substitution (named after the original work by R. Peierls [75], and originally formulated within the semiclassical approximation as a modification of the dispersion relation, E(k) → E(−i∇ − (e/c)A) [76-78]) [29,48], so that the whole phase diagram is accessible. However, since the possible values of ϕ are actually limited to a finite range that depends on s, only a finite portion of the nominal phase diagram can be accessed, see Figure 24. The topological phase diagram can also be drawn in terms of the physical parameters that characterize the underlying continuous Hamiltonian. This is shown in Figure 25, where we plot the phase diagram in the α−χ_A plane, for three different values of s. Remarkably, the topological insulating phase shrinks substantially by increasing s (that is, as the system becomes more and more tight-binding).
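The Chern number of eq. (90) can be evaluated on a discrete k-grid with the standard Fukui-Hatsugai-Suzuki link-variable construction. The sketch below (an illustration, not the ab-initio calculation of the text) applies it to a textbook Haldane-type Bloch Hamiltonian written in reciprocal coordinates k = u b₁ + v b₂; the explicit form of h(k), the parameter values and the function names are our own choices:

```python
import numpy as np

def haldane_bloch(u, v, t0=1.0, t1=0.1, phi=np.pi / 2, eps=0.0):
    """Two-band Haldane Hamiltonian at k = u*b1 + v*b2 (illustrative form)."""
    z = t0 * (1 + np.exp(-2j * np.pi * u) + np.exp(-2j * np.pi * v))
    # Next-to-nearest-neighbor loops, traversed with opposite
    # orientation (phase +phi / -phi) on the two sublattices:
    thetas = (2 * np.pi * u, 2 * np.pi * (v - u), -2 * np.pi * v)
    fA = 2 * t1 * sum(np.cos(th + phi) for th in thetas)
    fB = 2 * t1 * sum(np.cos(th - phi) for th in thetas)
    return np.array([[eps + fA, z], [np.conj(z), -eps + fB]])

def chern_number(N=40, **kw):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lowest band."""
    psi = np.empty((N, N, 2), dtype=complex)
    for j in range(N):
        for l in range(N):
            _, vecs = np.linalg.eigh(haldane_bloch(j / N, l / N, **kw))
            psi[j, l] = vecs[:, 0]                      # lowest band
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    C = 0.0
    for j in range(N):
        for l in range(N):
            U1 = link(psi[j, l], psi[(j + 1) % N, l])
            U2 = link(psi[(j + 1) % N, l], psi[(j + 1) % N, (l + 1) % N])
            U3 = link(psi[(j + 1) % N, (l + 1) % N], psi[j, (l + 1) % N])
            U4 = link(psi[j, (l + 1) % N], psi[j, l])
            C += np.angle(U1 * U2 * U3 * U4)            # plaquette flux
    return int(round(C / (2 * np.pi)))

print(chern_number(eps=0.0))   # time-reversal broken only: |C| = 1
print(chern_number(eps=1.0))   # large sublattice offset:   C = 0
```

The two calls illustrate the two regimes discussed above: with only time-reversal breaking the state is topological, while a dominant inversion-breaking mass drives the system into the trivial phase.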
Obviously, one can also employ the ab-initio approach discussed in the preceding sections, which allows one to take into account the full geometry of the system. This has been considered in Ref. [43], and will be reviewed in the following. Let us start with the tunable honeycomb potential employed in Ref. [18],

V(x, y) = −V̄_X cos²(k_L x + θ/2) − V_X cos²(k_L x) − V_Y cos²(k_L y) − 2α √(V_X V_Y) cos(k_L x) cos(k_L y) cos(ϕ) , (91)

where, by varying the laser intensities V̄_X, V_X and V_Y, several structures can be realized by continuous deformations, including square, triangular, chequerboard, dimer, honeycomb and 1D chain geometries [18]. Here we consider a set of parameters that guarantees a proper tight-binding regime, with V_X = 0.56, V_Y = 3.6, and V̄_X variable in the range [6, 12] [43]. The corresponding Bravais lattice (see Figure 26) is generated by the two basis vectors a₁,₂ = π(e_x ∓ e_y)/k_L, with the reciprocal vectors being b₁,₂ = k_L(e_x ∓ e_y). Here we shall consider an extended tight-binding model including all the tunneling coefficients indicated in Figure 26. Notice that the ordering of the tunneling coefficients does not necessarily correspond to the hierarchy of their magnitudes. The latter depends on the regime of the potential parameters considered, see e.g. Figure 27. In this case, the functions z(k) and f_ν(k) are given by (see eqs. (16), (17))

z(k) = −[ t₀ + 2t₁ cos(πk_y) e^{−iπk_x} + t₂ e^{−2iπk_x} + 2t₃ cos(2πk_y) ] , (92)

f_ν(k) = 2j^ν₁ cos(2πk_y) + 2j^ν₂ cos(πk_y) cos(πk_x) + j^ν₃ cos(2πk_x) . (93)

The behavior of the tunneling coefficients as a function of V̄_X is shown in Figure 27.

Dirac points. Among all the possible lattice configurations [18,43], let us consider the case with two degenerate minima per unit cell (θ = π, ϕ = 0), which is particularly interesting owing to the presence of massless Dirac points. The corresponding spectrum is ε±(k) = f(k) ± |z(k)| (see eq. (24)).
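The spectrum ε±(k) = f(k) ± |z(k)| built from eqs. (92)-(93) can be evaluated directly. In the sketch below the tight-binding coefficients are placeholders (the physical ones, as a function of V̄_X, are those of Figure 27), and the diagonal couplings are taken equal on the two sublattices, consistent with the degenerate case θ = π:

```python
import numpy as np

# Placeholder tight-binding coefficients (illustrative only):
t0, t1, t2, t3 = 0.2, 1.0, 0.15, 0.05
j1, j2, j3 = 0.02, 0.01, 0.005

def z(kx, ky):
    """Off-diagonal term, eq. (92)."""
    return -(t0 + 2 * t1 * np.cos(np.pi * ky) * np.exp(-1j * np.pi * kx)
             + t2 * np.exp(-2j * np.pi * kx) + 2 * t3 * np.cos(2 * np.pi * ky))

def f(kx, ky):
    """Diagonal term, eq. (93), degenerate sublattices (theta = pi)."""
    return (2 * j1 * np.cos(2 * np.pi * ky)
            + 2 * j2 * np.cos(np.pi * ky) * np.cos(np.pi * kx)
            + j3 * np.cos(2 * np.pi * kx))

ky = np.linspace(-1.0, 1.0, 401)                 # cut at k_x = 0
e_plus = f(0.0, ky) + np.abs(z(0.0, ky))
e_minus = f(0.0, ky) - np.abs(z(0.0, ky))
gap = e_plus - e_minus                           # direct gap = 2|z|
# The two bands touch (Dirac points) where |z| vanishes:
print(ky[np.argmin(gap)], gap.min())
```

With these placeholder values the direct gap nearly closes at two symmetric points on the k_y axis, the massless Dirac points of the θ = π configuration.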
It is characterized by Dirac points where the two bands are degenerate, with a linear dispersion along at least one direction. These points are defined by z(k_D) = 0, and their existence and position depend on the geometry of the lattice. In the present case they can be moved inside the Brillouin zone, as shown in [18]. In particular, given the actual hierarchy of the tunneling coefficients [43], the solutions inside the first Brillouin zone correspond to k_x = 0, with k_y given by the following expression

k_y = ± (1/π) cos⁻¹[ ( −t₁ + √( t₁² + 4t₃(2t₃ − t₀ − t₂) ) ) / (4t₃) ] . (94)

When t₃ is negligible, this expression reduces to [68,69]

k_y ≈ ± (1/π) cos⁻¹[ −(t₀ + t₂) / (2t₁) ] . (95)

The behavior of k_y as a function of V̄_X is shown in Figure 28, where the predictions of both eqs. (94) and (95) are compared with the exact values extracted from the Bloch spectrum. As the position of the Dirac points is not fixed, for certain values of the parameters they merge and eventually disappear [17,68,69]. This is particularly interesting, as it is associated with a topological phase transition from a semimetallic to an insulating phase. The merging occurs when the two solutions of eq. (94) coincide modulo a reciprocal space vector G = p b₁ + q b₂ (with p, q ∈ Z), namely at k_M = G/2 = (p b₁ + q b₂)/2 [68,69]. In principle, the current geometry of the lattice would permit four possible inequivalent merging points, namely (p, q) = (0, 0), (0, 1), (1, 0), (1, 1), see Figure 29. However, considering the actual values of the tunneling coefficients, the only possible solutions inside the first Brillouin zone are at k_M ≡ (0, ±1). For the present parameter regime, the merging occurs at V̄_X ≈ 6.94, see Figure 28. Following Refs. [19,68,69], one can expand the Hamiltonian density around one of the two merging points (a similar expansion can be derived around a generic Dirac point), by defining k̃ ≡ k − k_M.
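Eq. (94) is simply the root of z(k) in eq. (92) at k_x = 0, obtained by solving the resulting quadratic equation in cos(πk_y). This is easy to check numerically; the tunneling values below are placeholders chosen only to respect a hierarchy with a dominant t₁ (the actual values as a function of V̄_X are those of Figure 27):

```python
import cmath
import math

# Placeholder tunneling amplitudes (illustrative only):
t0, t1, t2, t3 = 0.2, 1.0, 0.15, 0.05

def z(kx, ky):
    """Off-diagonal function of eq. (92)."""
    return -(t0 + 2 * t1 * math.cos(math.pi * ky) * cmath.exp(-1j * math.pi * kx)
             + t2 * cmath.exp(-2j * math.pi * kx)
             + 2 * t3 * math.cos(2 * math.pi * ky))

# Exact Dirac-point position, eq. (94) (positive branch):
c = (-t1 + math.sqrt(t1**2 + 4 * t3 * (2 * t3 - t0 - t2))) / (4 * t3)
ky = math.acos(c) / math.pi

# Approximation neglecting t3, eq. (95):
ky_approx = math.acos(-(t0 + t2) / (2 * t1)) / math.pi

assert abs(z(0.0, ky)) < 1e-12        # z vanishes at the Dirac point
assert abs(ky - ky_approx) < 0.02     # eq. (95) is close, but not exact
```

Both assertions pass: the exact expression annihilates z(k) to machine precision, while the t₃-free approximation deviates from it by a small but finite amount, mirroring the comparison of Figure 28.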
The real and imaginary parts of the off-diagonal component z(k) contribute a quadratic term in k̃_y and a linear term in k̃_x, respectively. The leading terms of the expansion are

z_R(k̃) ≈ −[t₀ − 2t₁ + t₂ + 2t₃] + π²(4t₃ − t₁) k̃_y² , z_I(k̃) ≈ 2π(t₂ − t₁) k̃_x . (96)

As for the diagonal term F(k), it also contributes a quadratic term in k̃_y, which accounts for the asymmetry between the two bands [43]. Neglecting an irrelevant constant term, one has

F(k̃) ≈ −2π²(2j₁ − j₂) k̃_y² . (97)

Then, close to the merging point, the Hamiltonian density can be cast into the following form

h_νν′(k̃) ≈ (k̃_y²/2µ) I + (∆ + k̃_y²/2m*) σ_x + c k̃_x σ_y , (98)

with

∆ ≡ −[t₀ − 2t₁ + t₂ + 2t₃] , (99)

1/2m* ≡ π²(4t₃ − t₁) , (100)

c ≡ 2π(t₁ − t₂) , (101)

1/2µ ≡ −2π²(2j₁ − j₂) . (102)

The corresponding dispersion law is

ε±(k̃) ≈ k̃_y²/2µ ± √[ (∆ + k̃_y²/2m*)² + c² k̃_x² ] , (103)

which accurately reproduces the exact Bloch spectrum close to the merging point, as shown in Figure 30. As in the present regime of parameters m* is always negative, the topological transition between the semi-metallic and insulating phases is driven by the sign of ∆. When ∆ is positive the system describes a semi-metal, characterized by the presence of Dirac points, as in panels (a),(d) of Figure 30. For ∆ = 0 two Dirac points belonging to adjacent Brillouin zones eventually merge (panels (b),(e)). Finally, when ∆ is negative a gap opens at the merging point, see panels (c),(f). Notice that a gap can be opened also by breaking the parity symmetry, when the angle θ is tuned away from π [18]. In fact, in this case the two minima in the unit cell, and so the diagonal terms ε_ν and j_ν (ν = A, B), are no longer degenerate, and a finite Dirac mass is generated. Even in this case, the extended tight-binding model discussed here provides an accurate description of the microscopic Hamiltonian [43].
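The role of the sign of ∆ in eq. (103) can be made explicit numerically. The sketch below uses arbitrary illustrative values (with m* < 0, as in the regime discussed here) and checks that the direct gap at k̃_x = 0 closes at k̃_y = ±√(2|m*|∆) for ∆ > 0 and stays open for ∆ < 0:

```python
import numpy as np

m_star = -0.5   # illustrative effective mass, negative as in the text

def gap(Delta, ky):
    """Direct gap of eq. (103) at fixed k̃_y, evaluated at k̃_x = 0."""
    return 2 * np.abs(Delta + ky**2 / (2 * m_star))

ky = np.linspace(-1.0, 1.0, 2001)

# Delta > 0: semi-metal, the gap closes at k̃_y = ±sqrt(2|m*|Delta)
g = gap(0.2, ky)
assert g.min() < 1e-3
assert np.isclose(abs(ky[np.argmin(g)]),
                  np.sqrt(2 * abs(m_star) * 0.2), atol=1e-3)

# Delta < 0: insulator, the gap never closes (minimum 2|Delta| at ky = 0)
assert gap(-0.2, ky).min() >= 2 * 0.2
```

This reproduces the qualitative content of Figure 30: two Dirac points for ∆ > 0, merging at ∆ = 0, and a gapped spectrum for ∆ < 0.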
Interaction terms

In this final section we shall discuss how to deal with the effect of interactions between the particles, by considering the specific case of bosonic particles with contact interaction (the fermionic case is analogous; one should only pay attention to the Pauli exclusion principle). Namely, we consider a Hamiltonian term of the form

Ĥ_int = (g/2) ∫ d^D r [ψ̂†(r) ψ̂(r)]² , (104)

with g being a coupling constant. Then, it is rather obvious that if interactions are not too strong, the same optimal basis of MLWFs obtained in the free-particle limit is the natural choice for expanding the Hamiltonian (104), though other approaches, aimed at minimizing the contribution of the terms not included in the expansion, can also be found in the literature [83]. In general, one has

Ĥ_int = (g/2) Σ_{ν_i = A,B} Σ_{j_i} â†_{j₁ν₁} â†_{j₂ν₂} â_{j₃ν₃} â_{j₄ν₄} ∫ dr w*_{j₁ν₁}(r) w*_{j₂ν₂}(r) w_{j₃ν₃}(r) w_{j₄ν₄}(r) . (105)

The leading term is represented by the usual Bose-Hubbard on-site interaction [51], namely

Ĥ_onsite = Σ_{ν=A,B} U_ν Σ_{j_ν} n̂_{j_ν}(n̂_{j_ν} − 1) , (106)

with U_ν = (g/2) ∫ dr |w_{jν}(r)|⁴. In addition, one may also consider the effect of next-to-leading terms that couple neighboring wells inside the elementary cell [45,83], see Figure 31. They include a density-density interaction term

Ĥ_dens−dens = (g/2) I_{2A2B} n̂_{jA} n̂_{jB} , (107)

a density-induced tunneling

Ĥ_dens−tun = (g/2) I_{1A3B} â†_{jA} n̂_{jB} â_{jB} + (g/2) I_{3A1B} â†_{jA} n̂_{jA} â_{jB} + (A ↔ B) , (108)

and the tunneling of pairs

Ĥ_pair−tun = (g/2) I_{2A2B} â†_{jA} â†_{jA} â_{jB} â_{jB} + h.c. (109)

Here I_{iAkB} represents a shorthand notation for the superposition integral in eq. (105), namely

I_{iAkB} ≡ ∫ dr [w_{jA}(r)]^i [w_{jB}(r)]^k . (110)

As an example, let us consider the one-dimensional case discussed in sect. 3. The relative weight of the various integrals in eq.
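To give a feeling for the hierarchy of the integrals I_{iAkB}, one can evaluate them for a toy pair of localized orbitals. In the sketch below the MLWFs w_A, w_B of the text are replaced by normalized Gaussians on the two wells of a cell (an assumption made purely for illustration; actual MLWFs have exponentially decaying tails and a sign structure, so the numbers are only indicative):

```python
import numpy as np

# Toy localized orbitals: normalized Gaussians on the two wells of a cell.
x, dx = np.linspace(-4.0, 4.0, 4001, retstep=True)
sigma, d = 0.25, 1.0                      # width and A-B separation (arb. units)
wA = np.exp(-x**2 / (2 * sigma**2))
wB = np.exp(-(x - d)**2 / (2 * sigma**2))
wA /= np.sqrt(np.sum(wA**2) * dx)         # normalize so that ∫ w² dx = 1
wB /= np.sqrt(np.sum(wB**2) * dx)

def I(i, k):
    """Superposition integral I_iAkB = ∫ w_A^i w_B^k dx, cf. eq. (110)."""
    return np.sum(wA**i * wB**k) * dx

# On-site integral (proportional to U) dominates the two-well ones:
print(I(4, 0), I(3, 1), I(2, 2))
assert I(4, 0) > I(3, 1) > I(2, 2) > 0
```

For well-separated orbitals the on-site integral I(4,0) exceeds the density-induced-tunneling integral I(3,1) and the density-density/pair-tunneling integral I(2,2) by orders of magnitude, in line with the hierarchy discussed below.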
(110) is reported in Figure 32, for both the single- and composite-band approaches. This figure shows that, as far as interactions are concerned, the composite-band model outperforms the single-band one, as the next-to-leading terms are significantly smaller in the former case. Notice also that (outside the resonant region) the values for the on-site interaction given by the two approaches are almost indistinguishable. This is expected, due to the very similar behavior of the bulk profiles of the Wannier functions. From Figure 32 it is also evident that, in the range of ε considered here, the s − p density-density interaction is completely negligible with respect to the on-site interaction. Similarly small is the pair tunneling contribution, as the integrals involved are exactly the same for contact interactions. The significance of the density-induced tunnelings with respect to the leading-order tunneling in Ĥ₀ depends on the value of the interaction constant g and the lattice filling factors, and in general when g is small they can be safely neglected. Let us conclude this section by reminding the reader that the relative role of the different terms may be different for long-range interactions, as in dipolar condensates; see Ref. [84] and references therein.

Conclusions

In this paper we have reviewed a general method for constructing tight-binding models for ultracold atoms in optical lattices, by means of the maximally localized Wannier functions (MLWFs) for composite bands introduced by Marzari and Vanderbilt [37]. The MLWFs, obtained through a gauge transformation that minimizes their spread, constitute a powerful tool for calculating the tight-binding coefficients ab initio, allowing for a direct connection between the continuous microscopic potential and the discrete tight-binding Hamiltonian. A general method for extracting the values of the tight-binding parameters from the spectrum, by means of a set of suitable analytical relations, has also been discussed.
Several applications to one- and two-dimensional systems with two lattice sites per unit cell, whose minimal description requires a set of two Bloch bands, have been considered. In the one-dimensional case, where the gauge transformation can be obtained by solving a set of ordinary differential equations with periodic boundary conditions, we have considered the case of quasi resonance between the two lowest bands, and that between s − p orbitals, which is particularly interesting due to the presence of a Dirac point. The role of the MLWFs in the derivation of the effective Dirac equation close to the Dirac point has also been discussed. As for two-dimensional systems, we have considered several applications to regular and stretched honeycomb lattices, which represent a powerful platform for simulating the physics of graphene and for investigating different topological phases. In particular, we have considered the case of the Haldane model, a paradigmatic model for topological insulators, which can be realized by adding an artificial magnetic field with the same periodicity as the lattice and vanishing flux through the unit cell. The present analysis, based on first principles, has revealed a number of important results, including the breakdown of the Peierls substitution and the fact that, in general, only a small portion of the nominal phase diagram of the Haldane model can be accessed. For the case of stretched honeycomb lattices, whose Dirac points can be moved and merged by tuning the lattice parameters, we have discussed the low-energy expansion around the merging points, which follows naturally from the tight-binding expansion. In all cases, the approach based on the MLWFs allows one to accurately determine the optimal tight-binding parameters, providing a direct connection with the parameters that are accessed experimentally.
The present formulation also allows one to include the effects of inter-particle interactions, which can be accounted for by expanding the corresponding terms on the basis of the noninteracting MLWFs, in a perturbative approach. This has been briefly discussed in the last section of this review, where we have considered the case of bosonic particles in a one-dimensional superlattice as an example. Regarding this point, we mention that only in the limit of strong interactions may the single-particle basis not be the optimal one, and other approaches may be considered [85-89].

Figure captions

Figure 1: (Color online) Density plot (in logarithmic scale) of the real (left) and imaginary (right) parts of the MLWFs for sublattice A of a honeycomb potential in the tight-binding regime (see sect. 4) [47]. The solid and dashed lines denote the unit cell and the honeycomb lattice, respectively. Lengths are in units of the lattice spacing.

Figure 2: (Color online) The three possible configurations for the unit cell of the potential.

Figure 3: (Color online) A sketch of the double-well structure and of the tunneling coefficients considered here.

Figure 4: (Color online) Density plot of the ratio between the first and second band gaps, R ≡ δ₁₂/δ₂₃, as a function of θ₀ and ε. The dashed-dotted line corresponds to R = 1. The color scale is saturated at R = 1.

Figure 5: (Color online) Plot of the density of the two lowest single-band (black lines) and generalized (red lines) MLWFs, in log (a,b) and linear scale (c,d). The dotted line in (c,d) represents the potential, while the horizontal orange stripes are the first three Bloch bands (on the same scale as the potential). Here ε = 2, θ₀ = π/2.

Figure 6: (Color online) Same as Figure 5, for θ₀ = 0.2π.

Figure 7: (Color online) Plot of the tunneling coefficients (in modulus) as a function of ε, for θ₀ = π/2.

Figure 8: (Color online) Plot of the quantity δε_n (see text), as a function of θ₀ for ε = 2 (a), and as a function of ε for θ₀ = π/2 (b). Solid line: extended tight-binding model; dotted-dashed line: nearest-neighbor approximation. Empty squares: first band; solid squares: second band.

Figure 9: (Color online) A sketch of the various tunneling terms of the extended Bose-Hubbard model in the case of s − p resonance. Due to the parity properties of the p orbital, the nearest-neighbor tunneling amplitudes from a given site to the left and to the right have the same magnitude J, but opposite sign. We remind the reader that this term is strictly vanishing within the single-band approach.

Figure 10: (Color online) (a) Plot of the s− (red dotted-dashed line) and p−like (black solid line) MLWFs for ε = 1.5. The potential is represented by the dotted (blue) line, whereas the horizontal orange stripes represent the lowest four Bloch bands (on the same scale as the potential). (b) The density plot of the same composite-band MLWFs is shown here in logarithmic scale. Note the exponential decay of the tails.

Figure 11: (Color online) Same as above, but for ε = 3.
Figure 12: (Color online) (a) On-site energies and (b) tunneling amplitudes as a function of ε. The thin lines in panel (b) correspond to the single-band values for J_s and J_p (see text).

Figure 13: (Color online) Plot of δε_n (n = s, p, see text) as a function of ε, as obtained from the single- and composite-band approaches (indicated in the label as sb and cb, respectively).

Figure 14: (Color online) Plot of the density of the Wannier functions, for the first and second excited bands (left and right, respectively) and φ = 0, 0.8π, π (from top to bottom). The MLWFs for the original Bloch bands are shown as solid lines, those rotated as dotted-dashed lines; the dotted line represents the lattice potential. At resonance (φ = π), the mass m vanishes, and there is no rotation, R = I. The numbers in the legend correspond to the values of the participation ratio P (see text).

Figure 15: (Color online) (a) Density plot of the honeycomb potential in eq. (71) for χ_A = 0. Hot and cold colors correspond to maxima and minima of the potential, respectively. (b) Bloch spectrum E±(k) of the lowest two bands for s = 5. The Dirac points (where the two bands are degenerate) represent the vertices of the first Brillouin zone (a regular hexagon). Lengths in units of k_L⁻¹, momenta in units of k_L, and energies in units of E_R.

Figure 16: (Color online) Bravais lattice associated to the honeycomb potential in eq. (71). Filled and empty circles refer to minima of type A and B, respectively. The elementary cell is highlighted in yellow. The tunneling coefficients up to next-to-nearest neighbors (t₀, t₁) are indicated for the site of type A in the central cell. The length of each side of the hexagon is a = 4π/(3√3 k_L).

Figure 17: (Color online) Behavior of the tunneling coefficients as a function of the lattice amplitude s. The lines are the result of a fit of the numerical data, and coincide with the values extracted from the Bloch spectrum (points), see eqs. (76), (77).

Figure 18: (Color online) Structure of the vector potentials A. Lengths in units of k_L⁻¹.

Figure 19: (Color online) Plot of the tight-binding parameters as a function of α for the parity-symmetric case (χ_A = 0), with s = 5 (green, squares), 7 (red, circles) and 9 (blue, triangles). The solid lines correspond to the ab initio calculation, whereas symbols are obtained from the analytic formulas (87)-(89). The gray background indicates that the system is in the topological insulating phase. (a) Phase ϕ. The dotted-dashed line represents the phase ϕ_P obtained from the Peierls substitution. (b,c) Tunneling coefficients t₀ and |t₁|, respectively.

Figure 20: (Color online) Plot of the tight-binding parameters as a function of χ_A, for α = 0 and s = 5 (green, squares), 7 (red, circles) and 9 (blue, triangles). The solid lines correspond to the ab initio calculation, whereas symbols are obtained from the analytic formulas (86)-(88). (a) On-site energy difference ε = δ_D/2. (b,c) Tunneling coefficients t₀ and |t₁|, respectively.

Figure 21: (Color online) Plot of the tight-binding coefficients as a function of α, for s = 5 and χ_A = 0.001. The solid line corresponds to the ab-initio calculations from the MLWFs, whereas the points (squares and circles) are obtained from the analytical expressions in eqs. (86)-(89). The grey area denotes the topological insulator regime.

Figure 22: (Color online) Relative deviations from the average values of (a) the phase, ∆_ϕ, and (b) the magnitude of the next-to-nearest tunneling coefficient, ∆_{t₁}, for χ_A = 0.001, s = 5 E_R. These quantities have been calculated ab initio by using the MLWFs approach [47].
Figure 23: (Color online) Behavior of the gaps δ₊ (black, squares) and δ₋ (red, circles) as a function of α, for s = 5 and χ_A = 2·10⁻³ (a), and 1.9·10⁻³ (b). The latter corresponds to the maximal value of |χ_A| for which the system can be in the topological insulating phase. The grey shaded area identifies the topological insulator phase (C = 1), whereas the white background corresponds to a normal insulating state (C = 0).

Figure 24: (Color online) Nominal phase diagram of the Haldane model as a function of ϕ and ε/|t₁|. The solid (black) line denotes the analytical boundary ε/|t₁| = ±3√3 sin ϕ between the topological insulating phases (C = ±1, colored areas) and the normal insulator phase (C = 0, white areas). Actually, only the region in between the two vertical red dashed lines, corresponding to the maximum value of ϕ shown in Figure 19, is physically accessible (here s = 5; for higher values that region shrinks even further).

Figure 25: (Color online) Topological phase diagram of the continuous Hamiltonian in eq. (80), as a function of α and χ_A, for three different values of the scalar potential amplitude s. The non-trivial topological state is indicated by the colored dots. The black dashed lines represent a guide to the eye for the phase boundaries for each value of s.

Figure 26: (Color online) Bravais lattice for the stretched honeycomb configuration of the potential in eq. (91). Filled and empty circles refer to minima of type A and B, respectively. The elementary cell is highlighted in yellow. The various diagonal and off-diagonal tunneling coefficients considered here are indicated for the site of type A in the central cell.

Figure 27: (Color online) Plot of the various tunneling coefficients as a function of V̄_X.

Figure 28: (Color online) Position of the Dirac points along the k_y-axis as a function of V̄_X for the parameter regime of Tarruell et al. [18]. The exact positions (circles) extracted from the Bloch spectrum are compared with the predictions of eqs. (94) (solid line) and (95) (dashed-dotted line), which are almost indistinguishable. Only around the merging point, at V̄_X ≈ 6.94, it is preferable to use the complete expression (94) instead of eq. (95).

Figure 29: (Color online) Unit cell in reciprocal space (first Brillouin zone). All possible locations of the merging of the Dirac points are indicated by colored dots. Equivalent points (connected by a reciprocal space vector G) are depicted with the same color. Given the actual values of the tunneling coefficients, only the points at k_M ≡ (0, ±1) can be realized (larger red dots).

Figure 30: (Color online) Cuts of the energy bands around the merging point k_M = (0, 1), for V_X = 0.56, V_Y = 3.6. The exact Bloch bands (red solid lines) are compared to the approximate expressions in eq. (103), as a function of k_y (at k_x = 0) (a,b,c), and of k_x (at k_y = 1) (d,e,f). Each column corresponds to a different value of V̄_X: (a,d) V̄_X = 8, (b,e) V̄_X = 6.94 (merging point), (c,f) V̄_X = 6.54. Note that the cut along k_x in (d) does not cross the Dirac point, as the latter is located at k_y ≈ 0.68.

Figure 31: (Color online) A sketch of the various tunneling and interaction terms of the extended Bose-Hubbard model.

Figure 32: (Color online) Plot of the modulus of the integrals I_{iskp} characterizing the amplitude of the various interaction terms, as a function of ε. Lines with symbols correspond to the composite-band approach, whereas plain lines refer to the single-band case. The color code is the same as that in Figure 31.

Notes

- Both components are non-negative.
- The off-diagonal term is absent in the case of a single band, N = 1.
- A different implementation, designed for optical lattice potentials in one and two spatial dimensions, has been discussed in Ref. [50].
- For one-dimensional systems it is also possible to write analytically a set of ordinary differential equations for a specific gauge transformation, as we shall see in the next section.
- Notice that eq. (47) actually represents two real equations, for its real and imaginary components.
- The normal form consists in using a matrix notation, by defining a vector with the four angle derivatives as components. Since initially there are only two equations, only two of the four final equations can be non-trivial.
- The same relation holds in the rotated basis [9].
- The off-diagonal spread is fixed by the rotation R(θ).
- (11) This corresponds to an inverse free-particle Foldy-Wouthuysen transformation in the momentum representation [44,66,67].
- (12) The parameters mc² and c are obtained from a fit of the energy dispersion around k = 0 [44,61].
- This is valid at any order of the tight-binding expansion.
- Notice that for this specific case, χ_A = 0, s is the only free parameter of the system.
- (15) Note that the inclusion of t₁ is crucial for reproducing the band asymmetry in the low-s regime.
- Although the Wannier functions w_jν(r) are complex, t₀ can be chosen real by means of a suitable global gauge fixing.
- (17) In the full model, where in the case of parity breaking it is |t₁A| ≠ |t₁B|, the position of the Dirac points may be slightly shifted from ±(1, 0)k_L [47].
- (18) The solutions corresponding to a different regime can be obtained straightforwardly from symmetry considerations, by exchanging the role of the two basis points A, B and/or of the two Dirac points.
- (19) In this regime |t₁A| = |t₁B| and ϕ_A = −ϕ_B = ϕ, so that the Haldane model with ε = 0 is strictly recovered.

Acknowledgments

Part of the material presented in this review is the result of previous collaborations with A. Bergara, A. Eiguren, X. Lopez-Gonzalez, J. Sisti, J. Zakrzewski. We would like to thank them all.
This work has been supported by the Universidad del Pais Vasco/Euskal Herriko Unibertsitatea under Program No. UFI 11/55, the Ministerio de Economia y Competitividad through Grants No. FIS2012-36673-C03-03, the Basque Government through Grant No. IT-472-10, the Helmholtz Gemeinschaft Deutscher-Young Investigators Group Program No. VH-NG-717 (Functional Nanoscale Structure and Probe Simulation Laboratory) and the Impuls und Vernetzungsfonds der Helmholtz-Gemeinschaft Postdoc Programme.

References

[1] I. Bloch, J. Dalibard, W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80 (2008) 885-964. doi:10.1103/RevModPhys.80.885.

[2] M. Lewenstein, A. Sanpera, V. Ahufinger, Ultracold Atoms in Optical Lattices - Simulating quantum many-body systems, Oxford University Press, Oxford, U.K., 2012.

[3] V. I. Yukalov, E. P. Yukalova, Cold atoms in double-well optical lattices, Phys. Rev. A 78 (2008) 063610. doi:10.1103/PhysRevA.78.063610.

[4] V. Yukalov, Cold bosons in optical lattices, Laser Physics 19 (1) (2009) 1-110. doi:10.1134/S1054660X09010010.

[5] M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen(De), U. Sen, Ultracold atomic gases in optical lattices: mimicking condensed matter physics and beyond, Advances in Physics 56 (2) (2007) 243-379. doi:10.1080/00018730701223200.
Bose-Einstein condensates in optical quasicrystal lattices. L Sanchez-Palencia, L Santos, 10.1103/PhysRevA.72.053607Phys. Rev. A. 7253607L. Sanchez-Palencia, L. Santos, Bose-Einstein condensates in opti- cal quasicrystal lattices, Phys. Rev. A 72 (2005) 053607. doi: 10.1103/PhysRevA.72.053607. L Fallani, C Fort, M Inguscio, 10.1016/S1049-250X(08)00012-8Bose-Einstein Condensates in Disordered Potentials. Academic Press56L. Fallani, C. Fort, M. Inguscio, Bose-Einstein Condensates in Dis- ordered Potentials, Vol. 56 of Advances In Atomic, Molecular, and Optical Physics, Academic Press, 2008, pp. 119 -160. doi: 10.1016/S1049-250X(08)00012-8. Anderson localization of a non-interacting Bose-Einstein condensate. G Roati, C Errico, L Fallani, M Fattori, C Fort, M Zaccanti, G Modugno, M Modugno, M Inguscio, 10.1038/nature07071Nature. 4537197G. Roati, C. D'Errico, L. Fallani, M. Fattori, C. Fort, M. Zaccanti, G. Modugno, M. Modugno, M. Inguscio, Anderson localization of a non-interacting Bose-Einstein condensate, Nature 453 (7197) (2008) 895-898. doi:10.1038/nature07071. Exponential localization in one-dimensional quasiperiodic optical lattices. M Modugno, 10.1088/1367-2630/11/3/033023New Journal of Physics. 11333023M. Modugno, Exponential localization in one-dimensional quasi- periodic optical lattices, New Journal of Physics 11 (3) (2009) 033023. doi:10.1088/1367-2630/11/3/033023. Simulation and Detection of Dirac Fermions with Cold Atoms in an Optical Lattice. S.-L Zhu, B Wang, L.-M Duan, 10.1103/PhysRevLett.98.260402Phys. Rev. Lett. 98260402S.-L. Zhu, B. Wang, L.-M. Duan, Simulation and Detection of Dirac Fermions with Cold Atoms in an Optical Lattice, Phys. Rev. Lett. 98 (2007) 260402. doi:10.1103/PhysRevLett.98.260402. Flat Bands and Wigner Crystallization in the Honeycomb Optical Lattice. C Wu, D Bergman, L Balents, S Das Sarma, 10.1103/PhysRevLett.99.070401Phys. Rev. Lett. 9970401C. Wu, D. Bergman, L. Balents, S. 
Das Sarma, Flat Bands and Wigner Crystallization in the Honeycomb Optical Lattice, Phys. Rev. Lett. 99 (2007) 070401. doi:10.1103/PhysRevLett.99.070401. p x,y -orbital counterpart of graphene: Cold atoms in the honeycomb optical lattice. C Wu, S Das Sarma, 10.1103/PhysRevB.77.235107Phys. Rev. B. 77235107C. Wu, S. Das Sarma, p x,y -orbital counterpart of graphene: Cold atoms in the honeycomb optical lattice, Phys. Rev. B 77 (2008) 235107. doi:10.1103/PhysRevB.77.235107. Dirac-point engineering and topological phase transitions in honeycomb optical lattices. B Wunsch, F Guinea, F Sols, 10.1088/1367-2630/10/10/103027New J. Phys. 1010103027B. Wunsch, F. Guinea, F. Sols, Dirac-point engineering and topological phase transitions in honeycomb optical lattices, New J. Phys. 10 (10) (2008) 103027. doi:10.1088/1367-2630/10/10/103027. Ultracold fermions in a graphene-type optical lattice. K L Lee, B Grémaud, R Han, B.-G Englert, C Miniatura, 10.1103/PhysRevA.80.043411Phys. Rev. A. 8043411K. L. Lee, B. Grémaud, R. Han, B.-G. Englert, C. Miniatura, Ultracold fermions in a graphene-type optical lattice, Phys. Rev. A 80 (2009) 043411. doi:10.1103/PhysRevA.80.043411. Sengstock, Multi-component quantum gases in spin-dependent hexagonal lattices. P Soltan-Panahi, J Struck, P Hauke, A Bick, W Plenkers, G Meineke, C Becker, P Windpassinger, M Lewenstein, K , 10.1038/NPHYS1916doi:10.1038/ NPHYS1916Nature Physics. 75P. Soltan-Panahi, J. Struck, P. Hauke, A. Bick, W. Plenkers, G. Meineke, C. Becker, P. Windpassinger, M. Lewenstein, K. Sen- gstock, Multi-component quantum gases in spin-dependent hexago- nal lattices, Nature Physics 7 (5) (2011) 434-440. doi:10.1038/ NPHYS1916. Quantum phase transition to unconventional multi-orbital superfluidity in optical lattices. P Soltan-Panahi, D.-S Luhmann, J Struck, P Windpassinger, K Sengstock, 10.1038/nphys2128Nat Phys. 81P. Soltan-Panahi, D.-S. Luhmann, J. Struck, P. Windpassinger, K. 
Sen- gstock, Quantum phase transition to unconventional multi-orbital su- perfluidity in optical lattices, Nat Phys 8 (1) (2012) 71-75. doi: 10.1038/nphys2128. Montambaux, Manipulation of Dirac points in graphene-like crystals. R D Gail, J N Fuchs, M O Goerbig, F Piéchon, G , 10.1016/j.physb.2012.01.072Physica B: Physics of Condensed Matter. 1R. d. Gail, J. N. Fuchs, M. O. Goerbig, F. Piéchon, G. Montam- baux, Manipulation of Dirac points in graphene-like crystals, Physica B: Physics of Condensed Matter (2012) 1-5doi:10.1016/j.physb. 2012.01.072. Creating, moving and merging Dirac points with a Fermi gas in a tunable honeycomb lattice. L Tarruell, D Greif, T Uehlinger, G Jotzu, T Esslinger, 10.1038/nature10871doi:10.1038/ nature10871Nature. 4837389302L. Tarruell, D. Greif, T. Uehlinger, G. Jotzu, T. Esslinger, Creat- ing, moving and merging Dirac points with a Fermi gas in a tunable honeycomb lattice, Nature 483 (7389) (2012) 302. doi:10.1038/ nature10871. Bloch-Zener Oscillations across a Merging Transition of Dirac Points. L.-K Lim, J.-N Fuchs, G Montambaux, 10.1103/PhysRevLett.108.175303Phys. Rev. Lett. 10817175303L.-K. Lim, J.-N. Fuchs, G. Montambaux, Bloch-Zener Oscillations across a Merging Transition of Dirac Points, Phys. Rev. Lett. 108 (17) (2012) 175303. doi:10.1103/PhysRevLett.108.175303. Merging Dirac points and topological phase transitions in the tight-binding model on the generalized honeycomb lattice. Y Hasegawa, K Kishigi, 10.1103/PhysRevB.86.165430Phys. Rev. B. 86165430Y. Hasegawa, K. Kishigi, Merging Dirac points and topological phase transitions in the tight-binding model on the generalized honeycomb lattice, Phys. Rev. B 86 (2012) 165430. doi:10.1103/PhysRevB. 86.165430. Topological semimetal in a fermionic optical lattice. K Sun, W V Liu, A Hemmerich, S Das Sarma, 10.1038/nphys2134Nat. Phys. 81K. Sun, W. V. Liu, A. Hemmerich, S. Das Sarma, Topological semimetal in a fermionic optical lattice, Nat. Phys. 8 (1) (2012) 67-70. 
doi:10.1038/nphys2134. Interband tunneling near the merging transition of Dirac cones. J.-N Fuchs, L.-K Lim, G Montambaux, 10.1103/PhysRevA.86.063613Phys. Rev. A. 8663613J.-N. Fuchs, L.-K. Lim, G. Montambaux, Interband tunneling near the merging transition of Dirac cones, Phys. Rev. A 86 (2012) 063613. doi:10.1103/PhysRevA.86.063613. The Band Theory of Graphite. P R Wallace, 10.1103/PhysRev.71.622Phys. Rev. 71P. R. Wallace, The Band Theory of Graphite, Phys. Rev. 71 (1947) 622-634. doi:10.1103/PhysRev.71.622. Orthogonal Orbitals and Generalized Wannier Functions. J D Cloizeaux, 10.1103/PhysRev.129.554Phys. Rev. 129J. D. Cloizeaux, Orthogonal Orbitals and Generalized Wannier Func- tions, Phys. Rev. 129 (1963) 554-566. doi:10.1103/PhysRev.129. 554. Analytical Properties of n-Dimensional Energy Bands and Wannier Functions. J D Cloizeaux, 10.1103/PhysRev.135.A698Phys. Rev. 135J. D. Cloizeaux, Analytical Properties of n-Dimensional Energy Bands and Wannier Functions, Phys. Rev. 135 (1964) A698-A707. doi: 10.1103/PhysRev.135.A698. Tight-binding description of graphene. S Reich, J Maultzsch, C Thomsen, P Ordejón, 10.1103/PhysRevB.66.035412Phys. Rev. B. 6635412S. Reich, J. Maultzsch, C. Thomsen, P. Ordejón, Tight-binding de- scription of graphene, Phys. Rev. B 66 (2002) 035412. doi: 10.1103/PhysRevB.66.035412. Electron Correlations in Narrow Energy Bands, Proceedings of the. J Hubbard, 10.1098/rspa.1963.0204doi:10.1098/ rspa.1963.0204Royal Society of London A: Mathematical, Physical and Engineering Sciences. 276J. Hubbard, Electron Correlations in Narrow Energy Bands, Proceed- ings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 276 (1365) (1963) 238-257. doi:10.1098/ rspa.1963.0204. Boson localization and the superfluid-insulator transition. M P A Fisher, P B Weichman, G Grinstein, D S Fisher, 10.1103/PhysRevB.40.546Phys. Rev. B. 40M. P. A. Fisher, P. B. Weichman, G. Grinstein, D. S. 
Fisher, Boson localization and the superfluid-insulator transition, Phys. Rev. B 40 (1989) 546-570. doi:10.1103/PhysRevB.40.546. Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the "Parity Anomaly. F D M Haldane, 10.1103/PhysRevLett.61.2015Phys. Rev. Lett. 61F. D. M. Haldane, Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the "Parity Anomaly", Phys. Rev. Lett. 61 (1988) 2015-2018. doi:10.1103/PhysRevLett.61. 2015. The Structure of Electronic Excitation Levels in Insulating Crystals. G H Wannier, 10.1103/PhysRev.52.191doi:10.1103/ PhysRev.52.191Phys. Rev. 52G. H. Wannier, The Structure of Electronic Excitation Levels in In- sulating Crystals, Phys. Rev. 52 (1937) 191-197. doi:10.1103/ PhysRev.52.191. Analytic Properties of Bloch Waves and Wannier Functions. W Kohn, 10.1103/PhysRev.115.809Phys. Rev. 115W. Kohn, Analytic Properties of Bloch Waves and Wannier Functions, Phys. Rev. 115 (1959) 809-821. doi:10.1103/PhysRev.115.809. Exponential Decay Properties of Wannier Functions and Related Quantities. L He, D Vanderbilt, 10.1103/PhysRevLett.86.5341Phys. Rev. Lett. 86L. He, D. Vanderbilt, Exponential Decay Properties of Wannier Func- tions and Related Quantities, Phys. Rev. Lett. 86 (2001) 5341-5344. doi:10.1103/PhysRevLett.86.5341. Mott-Hubbard transition of cold atoms in optical lattices. W Zwerger, 10.1088/1464-4266/5/2/352Journal of Optics B: Quantum and Semiclassical Optics. 529W. Zwerger, Mott-Hubbard transition of cold atoms in optical lattices, Journal of Optics B: Quantum and Semiclassical Optics 5 (2) (2003) S9. doi:10.1088/1464-4266/5/2/352. Interference pattern and visibility of a Mott insulator. F Gerbier, A Widera, S Fölling, O Mandel, T Gericke, I Bloch, 10.1103/PhysRevA.72.053606Phys. Rev. A. 7253606F. Gerbier, A. Widera, S. Fölling, O. Mandel, T. Gericke, I. Bloch, In- terference pattern and visibility of a Mott insulator, Phys. Rev. A 72 (2005) 053606. 
doi:10.1103/PhysRevA.72.053606. . N Ashcroft, N Mermin, Saunders College, PhiladelphiaSolid State PhysicsN. Ashcroft, N. Mermin, Solid State Physics, Saunders College, Philadelphia, 1976. Maximally localized Wannier functions: Theory and applications. N Marzari, A A Mostofi, J R Yates, I Souza, D Vanderbilt, 10.1103/RevModPhys.84.1419Rev. Mod. Phys. 84N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, D. Vanderbilt, Max- imally localized Wannier functions: Theory and applications, Rev. Mod. Phys. 84 (2012) 1419-1475. doi:10.1103/RevModPhys. 84.1419. Maximally localized generalized Wannier functions for composite energy bands. N Marzari, D Vanderbilt, 10.1103/PhysRevB.56.12847Phys. Rev. B. 56N. Marzari, D. Vanderbilt, Maximally localized generalized Wannier functions for composite energy bands, Phys. Rev. B 56 (1997) 12847- 12865. doi:10.1103/PhysRevB.56.12847. Exponential Localization of Wannier Functions in Insulators. C Brouder, G Panati, M Calandra, C Mourougane, N Marzari, 10.1103/PhysRevLett.98.046402Phys. Rev. Lett. 9846402C. Brouder, G. Panati, M. Calandra, C. Mourougane, N. Marzari, Ex- ponential Localization of Wannier Functions in Insulators, Phys. Rev. Lett. 98 (2007) 046402. doi:10.1103/PhysRevLett.98.046402. Bloch Bundles, Marzari-Vanderbilt Functional and Maximally Localized Wannier Functions. G Panati, A Pisante, 10.1007/s00220-013-1741-ydoi:10.1007/ s00220-013-1741-yCommunications in Mathematical Physics. 3223G. Panati, A. Pisante, Bloch Bundles, Marzari-Vanderbilt Functional and Maximally Localized Wannier Functions, Communications in Mathematical Physics 322 (3) (2013) 835-875. doi:10.1007/ s00220-013-1741-y. Wannier90: a tool for obtaining maximally-localised wannier functions. A A Mostofi, J Yates, Y S Lee, I Souza, D Vanderbilt, I Marzari, 10.1016/j.cpc.2007.11.016Comput. Phys. Commun. 1789A. A. Mostofi, J. Yates, Y. S. Lee, I. Souza, D. Vanderbilt, I. Marzari, Wannier90: a tool for obtaining maximally-localised wannier func- tions, Comput. 
Phys. Commun. 178 (9) (2008) 685-699. doi: 10.1016/j.cpc.2007.11.016. Maximally localized Wannier functions for ultracold atoms in one-dimensional double-well periodic potentials. M Modugno, G Pettini, 10.1088/1367-2630/14/5/055004New J. Phys. 14555004M. Modugno, G. Pettini, Maximally localized Wannier functions for ul- tracold atoms in one-dimensional double-well periodic potentials, New J. Phys. 14 (5) (2012) 055004. doi:10.1088/1367-2630/14/5/ 055004. Tight-binding models for ultracold atoms in honeycomb optical lattices. J Azpiroz, A Eiguren, A Bergara, G Pettini, M Modugno, 10.1103/PhysRevA.87.011602Phys. Rev. A. 8711602J. Ibañez Azpiroz, A. Eiguren, A. Bergara, G. Pettini, M. Modugno, Tight-binding models for ultracold atoms in honeycomb optical lat- tices, Phys. Rev. A 87 (2013) 011602. doi:10.1103/PhysRevA. 87.011602. Self-consistent tight-binding description of Dirac points moving and merging in two-dimensional optical lattices. J Azpiroz, A Eiguren, A Bergara, G Pettini, M Modugno, 10.1103/PhysRevA.88.033631Phys. Rev. A. 8833631J. Ibañez Azpiroz, A. Eiguren, A. Bergara, G. Pettini, M. Modugno, Self-consistent tight-binding description of Dirac points moving and merging in two-dimensional optical lattices, Phys. Rev. A 88 (2013) 033631. doi:10.1103/PhysRevA.88.033631. Effective Dirac equation for ultracold atoms in optical lattices: Role of the localization properties of the Wannier functions. X Lopez-Gonzalez, J Sisti, G Pettini, M Modugno, 10.1103/PhysRevA.89.033608Phys. Rev. A. 8933608X. Lopez-Gonzalez, J. Sisti, G. Pettini, M. Modugno, Effective Dirac equation for ultracold atoms in optical lattices: Role of the localization properties of the Wannier functions, Phys. Rev. A 89 (2014) 033608. doi:10.1103/PhysRevA.89.033608. Wannier functions for one-dimensional s − p optical superlattices. W Ganczarek, M Modugno, G Pettini, J Zakrzewski, 10.1103/PhysRevA.90.033621Phys. Rev. A. 9033621W. Ganczarek, M. Modugno, G. Pettini, J. 
Zakrzewski, Wannier func- tions for one-dimensional s − p optical superlattices, Phys. Rev. A 90 (2014) 033621. doi:10.1103/PhysRevA.90.033621. Breakdown of the Peierls substitution for the Haldane model with ultracold atoms. J Azpiroz, A Eiguren, A Bergara, G Pettini, M Modugno, 10.1103/PhysRevA.90.033609doi:10.1103/ PhysRevA.90.033609Phys. Rev. A. 9033609J. Ibañez Azpiroz, A. Eiguren, A. Bergara, G. Pettini, M. Modugno, Breakdown of the Peierls substitution for the Haldane model with ul- tracold atoms, Phys. Rev. A 90 (2014) 033609. doi:10.1103/ PhysRevA.90.033609. Ab initio analysis of the topological phase diagram of the Haldane model. J Azpiroz, A Eiguren, A Bergara, G Pettini, M Modugno, Phys. Rev. B. 92195132J. Ibañez Azpiroz, A. Eiguren, A. Bergara, G. Pettini, M. Modugno, Ab initio analysis of the topological phase diagram of the Haldane model, Phys. Rev. B 92 (2015) 195132. Realizing and Detecting the Quantum Hall Effect without Landau Levels by Using Ultracold Atoms. L B Shao, S.-L Zhu, L Sheng, D Y Xing, Z D Wang, 10.1103/PhysRevLett.101.246810Phys. Rev. Lett. 101246810L. B. Shao, S.-L. Zhu, L. Sheng, D. Y. Xing, Z. D. Wang, Realiz- ing and Detecting the Quantum Hall Effect without Landau Levels by Using Ultracold Atoms, Phys. Rev. Lett. 101 (2008) 246810. doi:10.1103/PhysRevLett.101.246810. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. P Giannozzi, S Baroni, N Bonini, M Calandra, R Car, C Cavazzoni, D Ceresoli, G L Chiarotti, M Cococcioni, I Dabo, A D Corso, S De Gironcoli, S Fabris, G Fratesi, R Gebauer, U Gerstmann, C Gougoussis, A Kokalj, M Lazzeri, L Martin-Samos, N Marzari, F Mauri, R Mazzarello, S Paolini, A Pasquarello, L Paulatto, C Sbraccia, S Scandolo, G Sclauzero, A P Seitsonen, A Smogunov, P Umari, R M Wentzcovitch, 10.1088/0953-8984/21/39/395502Journal of Physics: Condensed Matter. 21395502P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. 
Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smo- gunov, P. Umari, R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter 21 (2009) 395502. doi:10.1088/0953-8984/21/39/395502. Ab initio derivation of Hubbard models for cold atoms in optical lattices. R Walters, G Cotugno, T H Johnson, S R Clark, D Jaksch, 10.1103/PhysRevA.87.043613Phys. Rev. A. 8743613R. Walters, G. Cotugno, T. H. Johnson, S. R. Clark, D. Jaksch, Ab initio derivation of Hubbard models for cold atoms in optical lattices, Phys. Rev. A 87 (2013) 043613. doi:10.1103/PhysRevA.87.043613. Cold Bosonic Atoms in Optical Lattices. D Jaksch, C Bruder, J I Cirac, C W Gardiner, P Zoller, 10.1103/PhysRevLett.81.3108Phys. Rev. Lett. 81D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, P. Zoller, Cold Bosonic Atoms in Optical Lattices, Phys. Rev. Lett. 81 (1998) 3108- 3111. doi:10.1103/PhysRevLett.81.3108. d-Wave Resonating Valence Bond States of Fermionic Atoms in Optical Lattices. S Trebst, U Schollwöck, M Troyer, P Zoller, 10.1103/PhysRevLett.96.250402Phys. Rev. Lett. 96250402S. Trebst, U. Schollwöck, M. Troyer, P. Zoller, d-Wave Resonating Va- lence Bond States of Fermionic Atoms in Optical Lattices, Phys. Rev. Lett. 96 (2006) 250402. doi:10.1103/PhysRevLett.96.250402. Quantum transport of bosonic cold atoms in double-well optical lattices. Y Qian, M Gong, C Zhang, 10.1103/PhysRevA.84.013608Phys. Rev. A. 8413608Y. Qian, M. Gong, C. Zhang, Quantum transport of bosonic cold atoms in double-well optical lattices, Phys. Rev. A 84 (2011) 013608. doi:10.1103/PhysRevA.84.013608. Many-body Landau-Zener transition in cold-atom double-well optical lattices. 
Y Qian, M Gong, C Zhang, 10.1103/PhysRevA.87.013636Phys. Rev. A. 8713636Y. Qian, M. Gong, C. Zhang, Many-body Landau-Zener transition in cold-atom double-well optical lattices, Phys. Rev. A 87 (2013) 013636. doi:10.1103/PhysRevA.87.013636. Mobility edges in bichromatic optical lattices. D J Boers, B Goedeke, D Hinrichs, M Holthaus, 10.1103/PhysRevA.75.063404Phys. Rev. A. 7563404D. J. Boers, B. Goedeke, D. Hinrichs, M. Holthaus, Mobility edges in bichromatic optical lattices, Phys. Rev. A 75 (2007) 063404. doi:10.1103/PhysRevA.75.063404. Energy Bands and Projection Operators in a Crystal: Analytic and Asymptotic Properties. J D Cloizeaux, 10.1103/PhysRev.135.A685Phys. Rev. 135J. D. Cloizeaux, Energy Bands and Projection Operators in a Crystal: Analytic and Asymptotic Properties, Phys. Rev. 135 (1964) A685- A697. doi:10.1103/PhysRev.135.A685. E Blount, 10.1016/S0081-1947(08)60459-2Formalisms of Band Theory. F. Seitz, D. TurnbullAcademic Press13Solid State PhysicsFormalisms of Band TheoryE. Blount, Formalisms of Band Theory, in: F. Seitz, D. Turnbull (Eds.), Formalisms of Band Theory, Vol. 13 of Solid State Physics, Aca- demic Press, 1962, pp. 305 -373. doi:10.1016/S0081-1947(08) 60459-2. Self-Consistent Pseudopotentials and Ultralocalized Functions for Energy Bands. P W Anderson, 10.1103/PhysRevLett.21.13Phys. Rev. Lett. 21P. W. Anderson, Self-Consistent Pseudopotentials and Ultralocalized Functions for Energy Bands, Phys. Rev. Lett. 21 (1968) 13-16. doi:10.1103/PhysRevLett.21.13. Wannier functions in one-dimensional disordered systems: Application to fractionally charged solitons. S Kivelson, 10.1103/PhysRevB.26.4269Phys. Rev. B. 26S. Kivelson, Wannier functions in one-dimensional disordered systems: Application to fractionally charged solitons, Phys. Rev. B 26 (1982) 4269-4277. doi:10.1103/PhysRevB.26.4269. Evidence for orbital superfluidity in the P-band of a bipartite optical square lattice. G Wirth, M Olschlager, A Hemmerich, 10.1038/nphys1857Nat Phys. 72G. 
Wirth, M. Olschlager, A. Hemmerich, Evidence for orbital superflu- idity in the P-band of a bipartite optical square lattice, Nat Phys 7 (2) (2011) 147-153. doi:10.1038/nphys1857. Effective Dirac dynamics of ultracold atoms in bichromatic optical lattices. D Witthaut, T Salger, S Kling, C Grossert, M Weitz, 10.1103/PhysRevA.84.033601Phys. Rev. A. 8433601D. Witthaut, T. Salger, S. Kling, C. Grossert, M. Weitz, Effective Dirac dynamics of ultracold atoms in bichromatic optical lattices, Phys. Rev. A 84 (2011) 033601. doi:10.1103/PhysRevA.84.033601. Klein Tunneling of a Quasirelativistic Bose-Einstein Condensate in an Optical Lattice. T Salger, C Grossert, S Kling, M Weitz, 10.1103/PhysRevLett.107.240401Phys. Rev. Lett. 107240401T. Salger, C. Grossert, S. Kling, M. Weitz, Klein Tunneling of a Quasirelativistic Bose-Einstein Condensate in an Optical Lattice, Phys. Rev. Lett. 107 (2011) 240401. doi:10.1103/PhysRevLett.107. 240401. Multiband envelope function model for quantum transport in a tunneling diode. O Morandi, M Modugno, 10.1103/PhysRevB.71.235331Phys. Rev. B. 71235331O. Morandi, M. Modugno, Multiband envelope function model for quantum transport in a tunneling diode, Phys. Rev. B 71 (2005) 235331. doi:10.1103/PhysRevB.71.235331. J Callaway, Energy Band Theory. New YorkAcademicJ. Callaway, Energy Band Theory, Academic, New York, 1964. Motion of an Electron in a Perturbed Periodic Potential. E N Adams, 10.1103/PhysRev.85.41Phys. Rev. 85E. N. Adams, Motion of an Electron in a Perturbed Periodic Potential, Phys. Rev. 85 (1952) 41-50. doi:10.1103/PhysRev.85.41. J Bjorken, S Drell, Relativistic Quantum Mechanics. New YorkMcGraw-HillJ. Bjorken, S. Drell, Relativistic Quantum Mechanics, McGraw-Hill, New York, 1964. W Greiner, Relativistic Quantum Mechanics-Wave Equations. BerlinSpringerW. Greiner, Relativistic Quantum Mechanics-Wave Equations, Springer, Berlin, 2000. A universal Hamiltonian for motion and merging of Dirac points in a twodimensional crystal. 
G Montambaux, F Piéchon, J N Fuchs, M Goerbig, 10.1140/epjb/e2009-00383-0509-520. doi:10.1140/ epjb/e2009-00383-0The European Physical Journal B-Condensed Matter and Complex Systems. 724G. Montambaux, F. Piéchon, J. N. Fuchs, M. Goerbig, A univer- sal Hamiltonian for motion and merging of Dirac points in a two- dimensional crystal, The European Physical Journal B-Condensed Matter and Complex Systems 72 (4) (2009) 509-520. doi:10.1140/ epjb/e2009-00383-0. Merging of Dirac points in a two-dimensional crystal. G Montambaux, F Piéchon, J.-N Fuchs, M O Goerbig, 10.1103/PhysRevB.80.153412Phys. Rev. B. 80153412G. Montambaux, F. Piéchon, J.-N. Fuchs, M. O. Goerbig, Merging of Dirac points in a two-dimensional crystal, Phys. Rev. B 80 (2009) 153412. doi:10.1103/PhysRevB.80.153412. Topological insulators and metals in atomic optical lattices. T D Stanescu, V Galitski, J Y Vaishnav, C W Clark, S Das Sarma, 10.1103/PhysRevA.82.013608Phys. Rev. A. 79553639T. D. Stanescu, V. Galitski, J. Y. Vaishnav, C. W. Clark, S. Das Sarma, Topological insulators and metals in atomic optical lattices, Phys. Rev. A 79 (5) (2009) 053639. doi:10.1103/PhysRevA.82.013608. Double transfer through Dirac points in a tunable honeycomb optical lattice. T Uehlinger, D Greif, G Jotzu, L Tarruell, 10.1140/epjst/e2013-01761-ydoi:10. 1140/epjst/e2013-01761-yThe European Physics Journal Special Topics. 2171T. Uehlinger, D. Greif, G. Jotzu, L. Tarruell, Double transfer through Dirac points in a tunable honeycomb optical lattice, The European Physics Journal Special Topics 217 (1) (2013) 121-133. doi:10. 1140/epjst/e2013-01761-y. Topological quantization of the spin Hall effect in two-dimensional paramagnetic semiconductors. X.-L Qi, Y.-S Wu, S.-C Zhang, 10.1103/PhysRevB.74.085308Phys. Rev. B. 7485308X.-L. Qi, Y.-S. Wu, S.-C. Zhang, Topological quantization of the spin Hall effect in two-dimensional paramagnetic semiconductors, Phys. Rev. B 74 (2006) 085308. doi:10.1103/PhysRevB.74.085308. 
New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance. K V Klitzing, G Dorda, M Pepper, 10.1103/PhysRevLett.45.494doi:10.1103/ PhysRevLett.45.494Phys. Rev. Lett. 45K. v. Klitzing, G. Dorda, M. Pepper, New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance, Phys. Rev. Lett. 45 (1980) 494-497. doi:10.1103/ PhysRevLett.45.494. Experimental realization of the topological Haldane model with ultracold fermions. G Jotzu, M Messer, R Desbuquois, M Lebrat, T Uehlinger, D Greif, T Esslinger, 10.1038/nature13915doi:10.1038/ nature13915Nature. 515G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, T. Esslinger, Experimental realization of the topological Haldane model with ultracold fermions, Nature 515 (237-240). doi:10.1038/ nature13915. Zur theorie des diamagnetismus von leitungselektronen. R Peierls, Z. Phys. 80R. Peierls, Zur theorie des diamagnetismus von leitungselektronen, Z. Phys. 80 (11-12) (1933) 763-791. The Effect of a Magnetic Field on Electrons in a Periodic Potential. J Luttinger, 10.1103/PhysRev.84.814doi:10.1103/ PhysRev.84.814Phys. Rev. 844J. Luttinger, The Effect of a Magnetic Field on Electrons in a Peri- odic Potential, Phys. Rev. 84 (4) (1951) 814-817. doi:10.1103/ PhysRev.84.814. Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields. D Hofstadter, 10.1103/PhysRevB.14.2239Phys. Rev. B. 146D. Hofstadter, Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields, Phys. Rev. B 14 (6) (1976) 2239-2249. doi:10.1103/PhysRevB.14.2239. Energy of electrons on a twodimensional lattice in a magnetic field: perturbation theory versus "Peierls substitution. A S Alexandrov, H Capellmann, 10.1007/BF01309424Z. Phys. B. 832A. S. Alexandrov, H. 
Capellmann, Energy of electrons on a two- dimensional lattice in a magnetic field: perturbation theory versus "Peierls substitution", Z. Phys. B 83 (2) (1991) 237-244. doi: 10.1007/BF01309424. B A Bernevig, Topological Insulators and Topological Superconductors. Princeton University PressB. A. Bernevig, Topological Insulators and Topological Superconduc- tors, Princeton University Press, 2013. Electromagnetic coupling and gauge invariance in the empirical tight-binding method. T B Boykin, R C Bowen, G Klimeck, 10.1103/PhysRevB.63.245314Phys. Rev. B. 6324245314T. B. Boykin, R. C. Bowen, G. Klimeck, Electromagnetic coupling and gauge invariance in the empirical tight-binding method, Phys. Rev. B 63 (24) (2001) 245314. doi:10.1103/PhysRevB.63.245314. Orbital diamagnetism of twodimensional electrons. A S Alexandrov, H Capellmann, 10.1103/PhysRevLett.66.365Phys. Rev. Lett. 66A. S. Alexandrov, H. Capellmann, Orbital diamagnetism of two- dimensional electrons, Phys. Rev. Lett. 66 (1991) 365-368. doi: 10.1103/PhysRevLett.66.365. Insulator/Chern-insulator transition in the Haldane model. T Thonhauser, D Vanderbilt, 10.1103/PhysRevB.74.235111doi:10.1103/ PhysRevB.74.235111Phys. Rev. B. 74235111T. Thonhauser, D. Vanderbilt, Insulator/Chern-insulator transition in the Haldane model, Phys. Rev. B 74 (2006) 235111. doi:10.1103/ PhysRevB.74.235111. Quantum phases in tunable state-dependent hexagonal optical lattices. D.-S Lühmann, O Jürgensen, M Weinberg, J Simonet, P Soltan-Panahi, K Sengstock, 10.1103/PhysRevA.90.013614Phys. Rev. A. 9013614D.-S. Lühmann, O. Jürgensen, M. Weinberg, J. Simonet, P. Soltan- Panahi, K. Sengstock, Quantum phases in tunable state-dependent hexagonal optical lattices, Phys. Rev. A 90 (2014) 013614. doi: 10.1103/PhysRevA.90.013614. Non-standard Hubbard models in optical lattices: a review. O Dutta, M Gajda, P Hauke, M Lewenstein, D.-S Lühmann, B A Malomed, T Sowiński, J Zakrzewski, 10.1088/0034-4885/78/6/066001Reports on Progress in Physics. 78666001O. 
Dutta, M. Gajda, P. Hauke, M. Lewenstein, D.-S. Lühmann, B. A. Malomed, T. Sowiński, J. Zakrzewski, Non-standard Hubbard mod- els in optical lattices: a review, Reports on Progress in Physics 78 (6) (2015) 066001. doi:10.1088/0034-4885/78/6/066001. Effective threebody interactions of neutral bosons in optical lattices. P R Johnson, E Tiesinga, J V Porto, C J Williams, 10.1088/1367-2630/11/9/093022New Journal of Physics. 11993022P. R. Johnson, E. Tiesinga, J. V. Porto, C. J. Williams, Effective three- body interactions of neutral bosons in optical lattices, New Journal of Physics 11 (9) (2009) 093022. doi:10.1088/1367-2630/11/9/ 093022. Multiband and nonlinear hopping corrections to the three-dimensional Bose-Fermi-Hubbard model. A Mering, M Fleischhauer, 10.1103/PhysRevA.83.063630Phys. Rev. A. 8363630A. Mering, M. Fleischhauer, Multiband and nonlinear hopping correc- tions to the three-dimensional Bose-Fermi-Hubbard model, Phys. Rev. A 83 (2011) 063630. doi:10.1103/PhysRevA.83.063630. Effective multibodyinduced tunneling and interactions in the Bose-Hubbard model of the lowest dressed band of an optical lattice. U Bissbort, F Deuretzbacher, W Hofstetter, 10.1103/PhysRevA.86.023617Phys. Rev. A. 8623617U. Bissbort, F. Deuretzbacher, W. Hofstetter, Effective multibody- induced tunneling and interactions in the Bose-Hubbard model of the lowest dressed band of an optical lattice, Phys. Rev. A 86 (2012) 023617. doi:10.1103/PhysRevA.86.023617. Multi-orbital and densityinduced tunneling of bosons in optical lattices. D.-S Lühmann, O Jürgensen, K Sengstock, 10.1088/1367-2630/14/3/033021New Journal of Physics. 14333021D.-S. Lühmann, O. Jürgensen, K. Sengstock, Multi-orbital and density- induced tunneling of bosons in optical lattices, New Journal of Physics 14 (3) (2012) 033021. doi:10.1088/1367-2630/14/3/033021. Dynamics of cold bosons in optical lattices: effects of higher Bloch bands. M Łcacki, D Delande, J Zakrzewski, 10.1088/1367-2630/15/1/013062New Journal of Physics. 
15113062M. Łcacki, D. Delande, J. Zakrzewski, Dynamics of cold bosons in op- tical lattices: effects of higher Bloch bands, New Journal of Physics 15 (1) (2013) 013062. doi:10.1088/1367-2630/15/1/013062.
[]
[ "Multi-bump solutions for Choquard equation with deepening potential well" ]
[ "Claudianor O Alves", "Alânnio B Nóbrega", "Minbo Yang" ]
[ "Department of Mathematics, Universidade Federal de Campina Grande, Unidade Acadêmica de Matemática, CEP 58429-900, Campina Grande - PB, Brazil", "Zhejiang Normal University, Jinhua 321004, P. R. China", "Department of Mathematics, Universidade Federal de Campina Grande, Unidade Acadêmica de Matemática, CEP 58429-900, Campina Grande - PB, Brazil", "Zhejiang Normal University, Jinhua 321004, P. R. China" ]
[]
In this paper we study the existence of multi-bump solutions for the following Choquard equation

−∆u + (λa(x) + 1)u = (1/|x|^µ * |u|^p) |u|^{p−2} u in R^3,

where µ ∈ (0, 3), p ∈ (2, 6 − µ), λ is a positive parameter and the nonnegative continuous function a(x) has a potential well Ω := int(a^{−1}(0)) which possesses k disjoint bounded components Ω := ∪_{j=1}^{k} Ω_j. We prove that if the parameter λ is large enough, then the equation has at least 2^k − 1 multi-bump solutions.

Mathematics Subject Classifications (2010): 35J20, 35J65
10.1007/s00526-016-0984-9
[ "https://arxiv.org/pdf/1510.01409v3.pdf" ]
119,602,062
1510.01409
87abe66a5b286a0f4a7d8f0f10d94d0a36443de8
Multi-bump solutions for Choquard equation with deepening potential well

20 Apr 2016

Claudianor O. Alves, Alânnio B. Nóbrega
Department of Mathematics, Universidade Federal de Campina Grande, Unidade Acadêmica de Matemática, CEP 58429-900, Campina Grande - PB, Brazil

Minbo Yang
Zhejiang Normal University, Jinhua 321004, P. R. China

Keywords: Choquard equation, multi-bump solution, variational methods

In this paper we study the existence of multi-bump solutions for the following Choquard equation

−∆u + (λa(x) + 1)u = (1/|x|^µ * |u|^p) |u|^{p−2} u in R^3,

where µ ∈ (0, 3), p ∈ (2, 6 − µ), λ is a positive parameter and the nonnegative continuous function a(x) has a potential well Ω := int(a^{−1}(0)) which possesses k disjoint bounded components Ω := ∪_{j=1}^{k} Ω_j. We prove that if the parameter λ is large enough, then the equation has at least 2^k − 1 multi-bump solutions.

Mathematics Subject Classifications (2010): 35J20, 35J65

Introduction

The nonlinear Choquard equation

−∆u + V(x)u = (1/|x|^µ * |u|^p) |u|^{p−2} u in R^3, (1.1)

for p = 2 and µ = 1, goes back to the description of the quantum theory of a polaron at rest by S. Pekar in 1954 [30] and the modeling of an electron trapped in its own hole in 1976 in the work of P. Choquard, as a certain approximation to Hartree-Fock theory of one-component plasma [20]. In some particular cases, this equation is also known as the Schrödinger-Newton equation, which was introduced by Penrose in his discussion on the self-gravitational collapse of a quantum mechanical wave function [31]. The existence and qualitative properties of solutions of (1.1) have been widely studied in the last decades. In [20], Lieb proved the existence and uniqueness, up to translations, of the ground state.
Later, in [22], Lions showed the existence of a sequence of radially symmetric solutions. In [12,25,26] the authors showed the regularity, positivity and radial symmetry of the ground states and derived their decay property at infinity as well. Moreover, Moroz and Van Schaftingen in [27] considered the existence of ground states under assumptions of Berestycki-Lions type. When V is a continuous periodic function with inf R 3 V (x) > 0, noticing that the nonlocal term is invariant under translation, we can easily obtain the existence result by applying the Mountain Pass Theorem, see [3] for example. For a periodic potential V that changes sign, so that 0 lies in a gap of the spectrum of the Schrödinger operator −∆ + V , the problem is strongly indefinite, and the existence of solutions for p = 2 was considered in [8] by reduction arguments. For a more general case, Ackermann [3] proposed a new approach to prove the existence of infinitely many geometrically distinct weak solutions. For other related results, we refer the readers to [11,18] for the existence of sign-changing solutions, to [28,34,37] for the existence and concentration behavior of semiclassical solutions and to [29] for the critical nonlocal part with respect to the Hardy-Littlewood-Sobolev inequality. In the present paper, we are interested in the nonlinear Choquard equation with deepening potential well −∆u + (λa(x) + 1)u = ( 1/|x| µ * |u| p ) |u| p−2 u in R 3 , (C) λ where µ ∈ (0, 3), p ∈ (2, 6 − µ) and a(x) is a nonnegative continuous function with Ω = int(a −1 (0)) being a non-empty bounded open set with smooth boundary ∂Ω. Moreover, Ω has k connected components, more precisely, Ω = ∪ k j=1 Ω j (1.2) with dist(Ω i , Ω j ) > 0 for i ≠ j. (1.3) Moreover, we suppose that there exists M 0 > 0 such that |{x ∈ R 3 ; a(x) ≤ M 0 }| < +∞. (1.4) Hereafter, if A ⊂ R 3 is a measurable set, |A| denotes its Lebesgue measure.
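For orientation, one concrete potential satisfying (1.2)-(1.4) is the truncated squared distance to the closure of the well; the specific choice of Ω as two unit balls below is our own illustrative example, not taken from the paper:

```latex
% \Omega has k = 2 components, \Omega_1 = B_1(2e_1) and \Omega_2 = B_1(-2e_1), with
% \operatorname{dist}(\Omega_1,\Omega_2) = 2 > 0, so (1.2) and (1.3) hold.
a(x) \;=\; \min\bigl\{\operatorname{dist}\bigl(x,\overline{\Omega}\bigr)^{2},\,1\bigr\},
\qquad \Omega \;=\; B_1(2e_1)\cup B_1(-2e_1)\subset\mathbb{R}^3.
% a is continuous and nonnegative with \operatorname{int}(a^{-1}(0)) = \Omega, and for any
% M_0 < 1 the sublevel set \{a \le M_0\} = \{\operatorname{dist}(x,\overline{\Omega})\le\sqrt{M_0}\}
% is bounded, hence of finite Lebesgue measure, which gives (1.4).
```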
The purpose of the present paper is to study the existence and the asymptotic shape of the solutions of (C) λ when λ is large enough; more precisely, we will show the existence of multi-bump type solutions. The motivation of the present paper arises from the results for the local Schrödinger equations with deepening potential well − ∆u + (λa(x) + b(x))u = |u| p−2 u in R N , (1.5) where a(x), b(x) are suitable continuous functions and p ∈ (2, 2N/(N − 2)) if N ≥ 3; p ∈ (1, ∞) if N = 1, 2. In [5], for b(x) = 1, Bartsch and Wang proved the existence of a least energy solution for large λ and that the sequence of solutions converges strongly to a least energy solution of a problem in a bounded domain. They also showed the existence of at least catΩ positive solutions for large λ, where Ω = int(a −1 (0)), when the exponent p is close to the critical exponent. The same results were also established by Clapp and Ding [10] for the critical growth case. We also refer to [6] for nonconstant b(x) > 0, where the authors prove the existence of k solutions that may change sign for any k and λ large enough. For other results related to Schrödinger equations with deepening potential well, we refer the readers to [33,32,36]. The existence and characterization of the solutions of problem (1.5) with large parameter λ were considered in [1,15]: supposing that a(x) has a potential well Ω = int(a −1 (0)) consisting of k disjoint bounded components Ω 1 , · · · , Ω k , the authors studied the multiplicity and multi-bump shape of the solutions associated to the number of components of the domain Ω = int(a −1 (0)). In [15], by using penalization ideas developed in [14], Ding and Tanaka were able to overcome the loss of compactness and then applied the deformation flow arguments found in [9,35] to prove the existence of at least 2 k − 1 solutions u λ for large values of λ. More precisely, for each non-empty subset Γ of {1, . . .
, k}, it was proved that, for any sequence λ n → ∞, one can extract a subsequence (λ ni ) such that (u λn i ) converges strongly in H 1 (R N ) to a function u, which satisfies u = 0 outside Ω Γ = ∪ j∈Γ Ω j and u |Ω j , j ∈ Γ, is a least energy solution of −∆u + u = |u| p−2 u in Ω j , u ∈ H 1 0 (Ω j ), u > 0 in Ω j . (1.6) As is well known, the problem (1.6) on a bounded domain plays an important role in the study of multi-bump shaped solutions of problem (1.5). By using "gluing" techniques, Ding and Tanaka used the ground states of problem (1.6) as building bricks to construct minimax values and then proved the existence of multi-bump solutions by deformation flow arguments. From the commentaries above, it is quite natural to ask if the results in [1,15] still hold for the generalized Choquard equation. Unfortunately, we cannot draw a similar conclusion in a straightforward way, since the nonlinearity of the generalized Choquard equation is a nonlocal one. For Γ = {1, · · · , l} with l ≤ k and Ω Γ = ∪ j∈Γ Ω j , it is easy to see that, in general, ∫ΩΓ ( ∫ΩΓ |u| p /|x − y| µ dy ) |u| p dx ≠ Σ l i=1 ∫Ωi ( ∫Ωi |u| p /|x − y| µ dy ) |u| p dx. Thus, it is impossible to repeat the same arguments explored in [15], which would use the least energy solutions of the component problems −∆u + u = ( ∫Ωj |u| p /|x − y| µ dy ) |u| p−2 u in Ω j , u > 0 in Ω j , u ∈ H 1 0 (Ω j ), j ∈ Γ, (C) ∞,j as building bricks. In fact, it is the problem −∆u + u = ( ∫ΩΓ |u| p /|x − y| µ dy ) |u| p−2 u in Ω Γ , u ∈ H 1 0 (Ω Γ ), (C) ∞,Γ that plays the role of the limit problem for equation (C) λ as λ goes to infinity. Moreover, noticing that the solution may vanish on some components, for the Dirichlet problem for the Choquard equation on a domain with several components it is not easy to prove the existence of a least energy solution that is nonzero on each component Ω j , j ∈ Γ. In order to find this type of least energy solution we will study a minimizing problem on a subset of the Nehari manifold, see Section 2 for more details.
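To see the failure of decoupling explicitly, split the double integral over Ω Γ = ∪ j∈Γ Ω j into diagonal and off-diagonal contributions:

```latex
\int_{\Omega_\Gamma}\!\!\int_{\Omega_\Gamma}\frac{|u(x)|^{p}\,|u(y)|^{p}}{|x-y|^{\mu}}\,dy\,dx
 \;=\; \sum_{i\in\Gamma}\int_{\Omega_i}\!\!\int_{\Omega_i}\frac{|u(x)|^{p}\,|u(y)|^{p}}{|x-y|^{\mu}}\,dy\,dx
 \;+\; \sum_{\substack{i,j\in\Gamma \\ i\neq j}}\int_{\Omega_i}\!\!\int_{\Omega_j}\frac{|u(x)|^{p}\,|u(y)|^{p}}{|x-y|^{\mu}}\,dy\,dx.
```

Since dist(Ω i , Ω j ) > 0, the kernel |x − y| −µ is bounded on Ω i × Ω j but strictly positive there, so the off-diagonal sum does not vanish whenever u is nontrivial on at least two components. This is exactly why the component problems cannot simply be glued together as in the local case.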
Here, we will also avoid the penalization arguments found in [14], because by using this method we would be led to assume more restrictions on the constants µ and p. For that reason, instead of the penalization method, we will follow the approach explored by Alves and Nóbrega in [2], who showed the existence of multi-bump solutions for problem (1.5) driven by the biharmonic operator. Thus, as in [2], we will work directly with the energy functional associated with (C) λ , and we will modify in a different way the set of paths where the Deformation Lemma is used, see Sections 5 and 6 for more details. To prove the existence of positive multi-bump solutions for (C) λ , the first step is to consider the limit Dirichlet problem (C) ∞,Γ and to look for a least energy solution that is nonzero on each component Ω j , j ∈ Γ. Having this in mind, we prove the following result. Theorem 1.1. Suppose that µ ∈ (0, 3) and p ∈ [2, 6 − µ). Then problem (C) ∞,Γ possesses a least energy solution u that is nonzero on each component Ω j of Ω Γ , j ∈ Γ. Using the above theorem, we are able to state our main result. Theorem 1.2. Suppose that µ ∈ (0, 3) and p ∈ (2, 6 − µ). There exists a constant λ 0 > 0 such that, for any non-empty subset Γ ⊂ {1, · · · , k} and λ ≥ λ 0 , the problem (C) λ has a positive solution u λ , which possesses the following property: for any sequence λ n → ∞ we can extract a subsequence (λ ni ) such that (u λn i ) converges strongly in H 1 (R 3 ) to a function u, which satisfies u = 0 outside Ω Γ = ∪ j∈Γ Ω j , and u |Ω Γ is a least energy solution of (C) ∞,Γ in the sense of Theorem 1.1. Remark 1.3. Note that (u λn i ) converges strongly in H 1 (R 3 ) to a function u which is zero outside Ω Γ and nonzero on each component Ω j , j ∈ Γ. In this way, we can conclude that the solutions (u λ ) have a multi-bump shape if λ is large enough. Remark 1.4.
By the Hardy-Littlewood-Sobolev inequality, the natural interval for considering the Choquard equation is ((6 − µ)/3, 6 − µ); however, the case (6 − µ)/3 < p ≤ 2 is not considered in Theorem 1.2. This is due to the fact that the method applied here does have some limitations in proving the intersection property for the paths and the set M Γ defined in Section 5. Inspired by a recent paper by Ghimenti, Moroz and Van Schaftingen [19], we will consider the case p = 2 in a future paper by approximation with p ↓ 2. In order to apply variational methods to obtain solutions of problems (C) λ and (C) ∞,Γ , the following classical Hardy-Littlewood-Sobolev inequality will be frequently used. Let s, r > 1 and 0 < µ < 3 with 1/s + µ/3 + 1/r = 2, and let f ∈ L s (R 3 ) and h ∈ L r (R 3 ). There exists a sharp constant C(s, µ, r), independent of f and h, such that ∫R 3 ∫R 3 f (x)h(y)/|x − y| µ dydx ≤ C(s, µ, r) |f | s |h| r . In the sequel, we fix E λ = (E, || · || λ ), where E = { u ∈ H 1 (R 3 ) ; ∫R 3 a(x)|u| 2 dx < ∞ } and ||u|| λ = ( ∫R 3 (|∇u| 2 + (λa(x) + 1)|u| 2 )dx ) 1/2 . Obviously, E λ is a Hilbert space, E λ ֒→ H 1 (R 3 ) continuously for λ ≥ 0 and E λ is compactly embedded in L s loc (R 3 ), for all 1 ≤ s < 6. We will study the existence of solutions of problem (C) λ by looking for critical points of the energy functional I λ : E λ → R given by I λ (u) = (1/2) ∫R 3 ( |∇u| 2 + (λa(x) + 1)u 2 )dx − (1/2p) ∫R 3 ( 1/|x| µ * |u| p ) |u| p dx. For p ∈ ((6 − µ)/3, 6 − µ), the Hardy-Littlewood-Sobolev inequality and the Sobolev embeddings imply that the functional I λ ∈ C 1 (E λ , R) with I ′ λ (u)v = ∫R 3 ( ∇u∇v + (λa(x) + 1)uv )dx − ∫R 3 ( 1/|x| µ * |u| p ) |u| p−2 uv dx, ∀u, v ∈ E λ . Hence, the critical points of I λ are in fact the weak solutions of problem (C) λ . This paper is organized as follows. In Section 2, we study a nonlocal problem set on a bounded domain with 2 disjoint components for simplicity.
By minimizing and deformation flow arguments, we are able to prove the existence of a least energy solution which is nonzero on each component. In Section 3, we adapt the method used in [2] to the nonlocal situation, which permits us to prove that the energy functional satisfies the (P S) condition for λ large enough. In Section 4, we study the behavior of (P S) ∞ sequences. In Sections 5 and 6, we adapt the deformation flow method to establish the existence of a special critical point, which is crucial for showing the existence of multi-bump solutions for λ large enough.

2 The problem (C) ∞,Γ

First, we need to study the Dirichlet problem (C) ∞,Γ with several components and investigate the existence of a least energy solution that is nonzero on each component. The main idea is to prove that the energy functional associated to (C) ∞,Γ , defined by I Γ (u) = (1/2) ∫ΩΓ (|∇u| 2 + |u| 2 )dx − (1/2p) ∫ΩΓ ( ∫ΩΓ |u| p /|x − y| µ dy ) |u| p dx, (2.1) achieves a minimum value on M Γ = {u ∈ N Γ : I ′ Γ (u)u i = 0 and u i ≠ 0, i ∈ Γ}, where Γ ⊂ {1, · · · , k}, u i = u| Ωi and N Γ is the Nehari manifold of I Γ defined by N Γ = {u ∈ H 1 0 (Ω Γ ) \ {0} : I ′ Γ (u)u = 0}. More precisely, we will prove that there is w ∈ M Γ such that I Γ (w) = inf u∈MΓ I Γ (u) and I ′ Γ (w) = 0. (2.2) Hereafter, we say that w ∈ H 1 0 (Ω Γ ), satisfying w i = w| Ωi ≠ 0, i ∈ Γ, is a least energy solution of (C) ∞,Γ if the above condition (2.2) holds. This feature will be used to characterize the multi-bump shape of the solutions of (C) λ . Without loss of generality, we will only consider Γ = {1, 2} for simplicity. Moreover, we denote by Ω, M and N the sets Ω Γ , M Γ and N Γ respectively, and I Γ will be denoted by I. Thereby, Ω = Ω 1 ∪ Ω 2 , M = {u ∈ N : I ′ (u)u i = 0 and u i ≠ 0, i = 1, 2} and N = {u ∈ H 1 0 (Ω) \ {0} : I ′ (u)u = 0}. In what follows, we denote by || ||, || || 1 and || || 2 the norms in H 1 0 (Ω), H 1 0 (Ω 1 ) and H 1 0 (Ω 2 ) given by, respectively,
||u|| = ( ∫Ω (|∇u| 2 + |u| 2 )dx ) 1/2 , ||u|| 1 = ( ∫Ω1 (|∇u| 2 + |u| 2 )dx ) 1/2 and ||u|| 2 = ( ∫Ω2 (|∇u| 2 + |u| 2 )dx ) 1/2 . The following lemma shows that the set M is not empty. Lemma 2.1. Let 0 < µ < 3, 2 ≤ p < 6 − µ and v ∈ H 1 0 (Ω) with v j ≠ 0 for j = 1, 2. Then there exists (β 1 , β 2 ) ∈ (0, +∞) 2 such that β 1 v 1 + β 2 v 2 ∈ M, which means M ≠ ∅, and moreover, c 0 = inf u∈M I(u) > 0. Proof. Fix v ∈ H 1 0 (Ω) with v i ≠ 0 for i = 1, 2, and for the case p = 2, without loss of generality, we may additionally assume that ||v i || 2 i = ∫Ωi ( ∫Ωi |v i | 2 /|x − y| µ dy ) |v i | 2 dx for i = 1, 2. (2.3) Adapting some ideas in [18] and [19], by changing variables t j = s j 1/p we define the function G(s 1 , s 2 ) = I(s 1 1/p v 1 + s 2 1/p v 2 ) = (s 1 2/p /2) ||v 1 || 2 1 + (s 2 2/p /2) ||v 2 || 2 2 − (1/2p) ∫Ω ( 1/|x| µ/2 * (s 1 |v 1 | p + s 2 |v 2 | p ) ) 2 dx. As G is a continuous function and G(s 1 , s 2 ) → −∞ as |(s 1 , s 2 )| → +∞, G possesses a global maximum point (a, b) ∈ [0, +∞) 2 . Moreover, as G is a strictly concave function, (a, b) is the unique global maximum point, and once (a, b) ∈ (0, +∞) 2 we have ∇G(a, b) = (0, 0), which implies that M ≠ ∅. Here, we would like to point out that if p > 2, it is easy to check that a, b ≠ 0, while for the case p = 2 we are able to show this fact only with the restriction (2.3). In fact, arguing by contradiction, assume that a = 0. Since (0, b) is the maximum point of G, there holds ||v 2 || 2 2 = b ∫Ω2 ( ∫Ω2 |v 2 | 2 /|x − y| µ dy ) |v 2 | 2 dx, and therefore b = 1 by (2.3). Consider the function g : [0, +∞) → R given by g(t) = G(t, b + αt), where α is to be determined later. A direct computation shows that g ′ (0) = (1/2) ||v 1 || 2 1 − (b/2) ∫Ω1 ∫Ω2 |v 2 | 2 (y)|v 1 | 2 (x)/|x − y| µ dydx + α ( (1/2) ||v 2 || 2 2 − b ∫Ω2 ( ∫Ω2 |v 2 | 2 /|x − y| µ dy ) |v 2 | 2 dx ). Consequently, if α is suitably chosen, we have g ′ (0) > 0, which obviously is a contradiction. Next, we will show that c 0 > 0. To begin with, we recall that if w ∈ M, then ||w|| 2 = ∫Ω ∫Ω |w(y)| p |w(x)| p /|x − y| µ dydx.
Using the Hardy-Littlewood-Sobolev inequality, there is C > 0 such that ||w|| 2 ≤ C ||w|| 2p . As w ≠ 0 and p > 1, the last inequality yields that there is τ > 0 satisfying ||w|| 2 ≥ τ, ∀w ∈ M. From this, I(w) = I(w) − (1/2p) I ′ (w)w = ( 1/2 − 1/(2p) ) ||w|| 2 ≥ ( 1/2 − 1/(2p) ) τ > 0, ∀w ∈ M, and so, c 0 ≥ ( 1/2 − 1/(2p) ) τ > 0. Let us state one more technical lemma. Lemma 2.2. Let 0 < µ < 3, 2 ≤ p < 6 − µ and (w n ) be a bounded sequence in M with w n ⇀ w in H 1 0 (Ω). If w n,j ↛ 0, then w j ≠ 0, where w n,j = w n | Ωj and w j = w| Ωj for j = 1, 2. Proof. Assume by contradiction that w 1 = 0. By the Hardy-Littlewood-Sobolev inequality and the compact Sobolev embeddings, we see that ∫Ω1 ( ∫Ω |w n | p /|x − y| µ dy ) |w n,1 | p dx → 0. On the other hand, as I ′ (w n )(w n,1 ) = 0, or equivalently ||w n,1 || 2 1 = ∫Ω1 ( ∫Ω |w n | p /|x − y| µ dy ) |w n,1 | p dx, we derive that ||w n,1 || 2 1 → 0, which is absurd. The case w 2 = 0 is handled in a similar way. Now, we are able to show the existence of a least energy solution of (C) ∞,Γ .

2.1 Proof of Theorem 1.1

From Lemma 2.1, c 0 > 0 and there is a sequence (w n ) ⊂ M such that lim n I(w n ) = c 0 . It is easy to see that (w n ) is a bounded sequence. Hence, without loss of generality, we may suppose that w n ⇀ w in H 1 0 (Ω) and w n → w in L q (Ω), ∀q ∈ [1, 6), as n → ∞. By considering the function G(s 1 , s 2 ) = I(s 1 1/p (w n ) 1 + s 2 1/p (w n ) 2 ) = (s 1 2/p /2) ||(w n ) 1 || 2 1 + (s 2 2/p /2) ||(w n ) 2 || 2 2 − (1/2p) ∫Ω ( 1/|x| µ/2 * (s 1 |(w n ) 1 | p + s 2 |(w n ) 2 | p ) ) 2 dx, we know by the previous study that ∇G(1, 1) = (0, 0). As G is a strictly concave function, (1, 1) is its global maximum point. Thus, I(w n ) = I((w n ) 1 + (w n ) 2 ) = max t,s≥0 I(t(w n ) 1 + s(w n ) 2 ). Using the above information, we also know that w j ≠ 0 for j = 1, 2. Then, by Lemma 2.1 there are t 1 , t 2 > 0 such that t 1 w 1 + t 2 w 2 ∈ M, and so, c 0 ≤ I(t 1 w 1 + t 2 w 2 ).
By using the fact that w n ⇀ w in H 1 0 (Ω) and the compact Sobolev embeddings, we get I(t 1 w 1 + t 2 w 2 ) ≤ lim inf n→+∞ I(t 1 (w n ) 1 + t 2 (w n ) 2 ) ≤ lim inf n→+∞ I(w n ) = c 0 , from where it follows that c 0 = I(t 1 w 1 + t 2 w 2 ) with t 1 w 1 + t 2 w 2 ∈ M. Now, we will show that w * = t 1 w 1 + t 2 w 2 is a critical point of I. Assume by contradiction that ||I ′ (w * )|| > 0 and fix α > 0 such that ||I ′ (w * )|| ≥ α. Moreover, we fix r > 0 small enough and ǫ 0 > 0 such that, setting B = B r (1, 1) ⊂ R 2 , I(t 1/p (w * ) 1 + s 1/p (w * ) 2 ) < c 0 − 2ǫ 0 , ∀(t, s) ∈ ∂B. (2.4) In the sequel we fix ǫ ∈ (0, ǫ 0 ) and δ > 0 such that ||I ′ (u)|| ≥ α/2 ≥ 4ǫ/δ, ∀u ∈ I −1 ([c 0 − 2ǫ, c 0 + 2ǫ]) ∩ S, where S = {t 1/p (w * ) 1 + s 1/p (w * ) 2 : (t, s) ∈ B}. By using the Deformation Lemma, there exists a continuous map η : H 1 0 (Ω) → H 1 0 (Ω) such that η(u) = u, ∀u ∉ I −1 ([c 0 − 2ǫ, c 0 + 2ǫ]) ∩ S (2.5) and η(I c0+ǫ ∩ S) ⊂ I c0−ǫ ∩ S δ , (2.6) where S δ = {v ∈ H 1 0 (Ω) : dist(v, S) ≤ δ}. In the sequel, we fix δ > 0 in such a way that v ∈ S δ ⇒ v 1 , v 2 ≠ 0. (2.7) Now, setting γ(t, s) = η(t 1/p (w * ) 1 + s 1/p (w * ) 2 ), (2.4) and (2.5) imply that γ(t, s) = t 1/p (w * ) 1 + s 1/p (w * ) 2 , ∀(t, s) ∈ ∂B. (2.8) Moreover, since max t,s≥0 I(t 1/p (w * ) 1 + s 1/p (w * ) 2 ) = I((w * ) 1 + (w * ) 2 ) = c 0 , by (2.6) we know I(γ(t, s)) ≤ c 0 − ǫ, ∀(t, s) ∈ B. Claim 2.3. There is (t 0 , s 0 ) ∈ B such that ( I ′ (γ(t 0 , s 0 ))(γ(t 0 , s 0 ) 1 ), I ′ (γ(t 0 , s 0 ))(γ(t 0 , s 0 ) 2 ) ) = (0, 0). Assuming for a moment that the claim is true, we deduce that γ(t 0 , s 0 ) ∈ M, and so, c 0 ≤ I(γ(t 0 , s 0 )) ≤ c 0 − ǫ, which is absurd. Here, (2.7) was used to ensure that γ(t 0 , s 0 ) j ≠ 0 for j = 1, 2. Consequently, w * = t 1 w 1 + t 2 w 2 is a critical point of I.
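The topological ingredient behind Claim 2.3, whose proof comes next, is the following standard consequence of Brouwer's fixed point theorem, recorded here for convenience: a continuous planar vector field pointing strictly inward on a circle must vanish inside it.

```latex
% Let f\colon \overline{B_r(0)}\subset\mathbb{R}^2\to\mathbb{R}^2 be continuous with
% \langle f(x),x\rangle < 0 for all |x| = r. If f never vanished on \overline{B_r(0)},
% the map g(x) = r\,f(x)/|f(x)| would be continuous, so Brouwer's theorem would give
% a fixed point x_0 = g(x_0), which necessarily satisfies |x_0| = r. But then
\langle f(x_0), x_0\rangle \;=\; \Bigl\langle f(x_0),\, \frac{r\,f(x_0)}{|f(x_0)|}\Bigr\rangle
 \;=\; r\,|f(x_0)| \;>\; 0,
% contradicting the inward-pointing condition. Hence f vanishes somewhere in B_r(0).
```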
Proof of Claim 2.3: First of all, note that ( I ′ (γ(t, s))(γ(t, s) 1 ), I ′ (γ(t, s))(γ(t, s) 2 ) ) = (0, 0) ⇔ ( (1/t) I ′ (γ(t, s))(γ(t, s) 1 ), (1/s) I ′ (γ(t, s))(γ(t, s) 2 ) ) = (0, 0), and by (2.8), ( (1/t) I ′ (γ(t, s))(γ(t, s) 1 ), (1/s) I ′ (γ(t, s))(γ(t, s) 2 ) ) = ( (1/t) I ′ (γ 0 (t, s))(γ 0 (t, s) 1 ), (1/s) I ′ (γ 0 (t, s))(γ 0 (t, s) 2 ) ), ∀(t, s) ∈ ∂B, where γ 0 (t, s) = t 1/p (w * ) 1 + s 1/p (w * ) 2 , ∀(t, s) ∈ B. Considering the function G(t, s) = I(t 1/p (w * ) 1 + s 1/p (w * ) 2 ), we deduce that ( (1/t) I ′ (γ 0 (t, s))(γ 0 (t, s) 1 ), (1/s) I ′ (γ 0 (t, s))(γ 0 (t, s) 2 ) ) = ∇G(t, s), ∀(t, s) ∈ B. Since G is a strictly concave function and ∇G(1, 1) = (0, 0), it follows that 0 > ⟨∇G(t, s) − ∇G(1, 1), (t, s) − (1, 1)⟩ = ⟨∇G(t, s), (t, s) − (1, 1)⟩, ∀(t, s) ≠ (1, 1), and so, 0 > ⟨∇G(t, s), (t, s) − (1, 1)⟩ for |(t, s) − (1, 1)| = r. Setting H : R 2 → R 2 by H(t, s) = ( (1/t) I ′ (γ(t, s))(γ(t, s) 1 ), (1/s) I ′ (γ(t, s))(γ(t, s) 2 ) ) and f (t, s) = H(t + 1, s + 1), we have that 0 > ⟨f (t, s), (t, s)⟩ for |(t, s)| = r. By Brouwer's fixed point theorem, we know there exists (t * , s * ) ∈ B r (0, 0) such that f (t * , s * ) = (0, 0), that is, H(t * + 1, s * + 1) = (0, 0), from where it follows that there is (t 0 , s 0 ) ∈ B such that ( I ′ (γ(t 0 , s 0 ))(γ(t 0 , s 0 ) 1 ), I ′ (γ(t 0 , s 0 ))(γ(t 0 , s 0 ) 2 ) ) = (0, 0), which completes the proof of the claim.

3 The (P S) c condition for I λ

In this section, we will prove some convergence properties of the (P S) sequences of the functional I λ . Our main goal is to prove that, for given c ≥ 0 independent of λ, the functional I λ satisfies the (P S) d condition for d ∈ [0, c), provided that λ is large enough. Lemma 3.1. Let (u n ) ⊂ E λ be a (P S) c sequence for I λ . Then (u n ) is bounded. Furthermore, c ≥ 0. Proof. Since (u n ) is a (P S) c sequence, I λ (u n ) → c and I ′ λ (u n ) → 0. Then, for n large enough, I λ (u n ) − (1/2p) I ′ λ (u n )u n ≤ c + 1 + ||u n || λ . (3.1) On the other hand, I λ (u n ) − (1/2p) I ′ λ (u n )u n = ( 1/2 − 1/(2p) ) ||u n || 2 λ .
(3.2) Therefore, from (3.1) and (3.2) we get the inequality ( 1/2 − 1/(2p) ) ||u n || 2 λ ≤ c + 1 + ||u n || λ , which shows the boundedness of (u n ). Thereby, by (3.2), 0 ≤ ( 1/2 − 1/(2p) ) ||u n || 2 λ ≤ c + o n (1), (3.3) and the lemma follows by taking the limit as n → +∞. Corollary 3.2. Let (u n ) ⊂ E λ be a (P S) 0 sequence for I λ . Then u n → 0 in E λ . Proof. An immediate consequence of the arguments used in the proof of Lemma 3.1. Next we prove a splitting property for the functional I λ , which is related to Brezis-Lieb type lemmas for nonlocal nonlinearities [3,27]. Lemma 3.3. Let c ≥ 0 and (u n ) be a (P S) c sequence for I λ . If u n ⇀ u in E λ , then I λ (v n ) − I λ (u n ) + I λ (u) = o n (1) (3.4) and I ′ λ (v n ) − I ′ λ (u n ) + I ′ λ (u) = o n (1), (3.5) where v n = u n − u. Furthermore, (v n ) is a (P S) c−I λ (u) sequence. Proof. First of all, note that I λ (v n ) − I λ (u n ) + I λ (u) = (1/2) ( ||v n || 2 λ − ||u n || 2 λ + ||u|| 2 λ ) − (1/2p) ∫R 3 ∫R 3 ( |v n (y)| p |v n (x)| p − |u n (y)| p |u n (x)| p + |u(y)| p |u(x)| p ) / |x − y| µ dy dx. Since u n ⇀ u in E λ , we have I λ (v n ) − I λ (u n ) + I λ (u) = o n (1) + (1/2p) ∫R 3 ∫R 3 |v n (y)| p ( −|v n (x)| p + |u n (x)| p − |u(x)| p ) / |x − y| µ dy dx + (1/2p) ∫R 3 ∫R 3 |u n (y)| p ( −|v n (x)| p + |u n (x)| p − |u(x)| p ) / |x − y| µ dy dx (3.6) + (1/2p) ∫R 3 ∫R 3 |u(y)| p ( −|v n (x)| p + |u n (x)| p − |u(x)| p ) / |x − y| µ dy dx + (1/p) ∫R 3 ∫R 3 |v n (y)| p |u(x)| p / |x − y| µ dy dx. By the Hardy-Littlewood-Sobolev inequality, ∫R 3 ( ∫R 3 |v n (y)| p / |x − y| µ dy ) | |v n (x)| p − |u n (x)| p + |u(x)| p | dx ≤ C |v n | p 6p/(6−µ) ( ∫R 3 | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx ) (6−µ)/6 . Notice that ∫R 3 | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx = ∫BR(0) | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx + ∫R 3 \BR(0) | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx, (3.7) where R > 0 will be fixed subsequently. As u n ⇀ u in E λ , we know that • u n → u in L 6p/(6−µ) (B R (0)); • u n (x) → u(x) a.e.
in R 3 , and there is h 1 ∈ L 6p/(6−µ) (B R (0)) such that |u n (x)| ≤ h 1 (x) a.e. in B R (0). From this, |v n (x)| p − |u n (x)| p + |u(x)| p → 0 a.e. in R 3 , and | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) ≤ (2 p + 1) 6/(6−µ) ( h 1 (x) + |u(x)| ) 6p/(6−µ) ∈ L 1 (B R (0)). Thus, by the Lebesgue Dominated Convergence Theorem, ∫BR(0) | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx → 0. (3.8) Furthermore, we also have | |u n (x) − u(x)| p − |u n (x)| p | ≤ p 2 p−1 ( |u n (x)| p−1 |u(x)| + |u(x)| p ), and so, ∫R 3 \BR(0) | |u n (x) − u(x)| p − |u n (x)| p | 6/(6−µ) dx ≤ C ∫R 3 \BR(0) |u n (x)| 6(p−1)/(6−µ) |u(x)| 6/(6−µ) dx + C ∫R 3 \BR(0) |u(x)| 6p/(6−µ) dx. For ε > 0, we can choose R > 0 such that ∫R 3 \BR(0) |u(x)| 6p/(6−µ) dx ≤ ε, and the Hölder inequality combined with the boundedness of (u n ) implies that ∫R 3 \BR(0) | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx ≤ Cε. (3.9) Gathering together the boundedness of (v n ) and (3.7)-(3.9), we deduce that ∫R 3 | |v n (x)| p − |u n (x)| p + |u(x)| p | 6/(6−µ) dx → 0. To finish the proof, we need to prove that ∫R 3 ∫R 3 |v n (y)| p |u(x)| p / |x − y| µ dy dx → 0. Since v n ⇀ 0 in E λ and p ∈ (2, 6 − µ), the sequence (|v n | p ) is bounded in L 6/(6−µ) (R 3 ). As v n (x) → 0 a.e. in R 3 , we ensure that |v n | p converges weakly to 0 in L 6/(6−µ) (R 3 ). Using again the Hardy-Littlewood-Sobolev inequality, we know that the linear functional F : L 6/(6−µ) (R 3 ) → R defined by F (w) = ∫R 3 ( 1/|x| µ * w ) |u(x)| p dx is continuous. Consequently, F (|v n | p ) → 0, or equivalently, ∫R 3 ( 1/|x| µ * |v n | p ) |u(x)| p dx → 0, and the proof is complete. Lemma 3.4. Let (u n ) be a (P S) c sequence for I λ . Then c = 0, or there exists c * > 0, independent of λ, such that c ≥ c * for all λ > 0. Proof. By Lemma 3.1, we know c ≥ 0. Suppose that c > 0. On the one hand, we know c + o n (1)(1 + ||u n || λ ) ≥ I λ (u n ) − (1/2p) I ′ λ (u n )u n = ( (p − 1)/(2p) ) ||u n || 2 λ , equivalently, lim sup n→+∞ ||u n || 2 λ ≤ 2pc/(p − 1).
(3.10) On the other hand, the Hardy-Littlewood-Sobolev inequality together with the Sobolev embedding theorems implies that I ′ λ (u n )u n ≥ (1/2) ||u n || 2 λ − K ||u n || 2p λ , where K is a positive constant. Thus, there exists δ > 0 such that I ′ λ (u n )u n ≥ (1/4) ||u n || 2 λ , for ||u n || λ < δ. (3.11) Consider c * = δ 2 (p − 1)/(2p) and c < c * . Then it follows that ||u n || λ ≤ δ (3.12) for n large enough. Hence, I ′ λ (u n )u n ≥ (1/4) ||u n || 2 λ , and thus ||u n || 2 λ → 0. Thereby, I λ (u n ) → I λ (0) = 0, which contradicts the fact that (u n ) is a (P S) c sequence with c > 0. Therefore, c ≥ c * . Lemma 3.5. Let (u n ) be a (P S) c sequence for I λ . Then, there exists δ 0 > 0, independent of λ, such that lim inf n→+∞ |u n | 2p 6p/(6−µ) ≥ δ 0 c. Proof. Note that c = lim n→+∞ ( I λ (u n ) − (1/2) I ′ λ (u n )u n ) = ( 1/2 − 1/(2p) ) lim n→+∞ ∫R 3 ( 1/|x| µ * |u n | p ) |u n | p dx. By the Hardy-Littlewood-Sobolev inequality, we obtain c ≤ ( 1/2 − 1/(2p) ) K lim inf n→+∞ |u n | 2p 6p/(6−µ) . Therefore, the conclusion follows by setting δ 0 = ( ((p − 1)/(2p)) K ) −1 > 0. Lemma 3.6. Given ε > 0 and c 1 > 0, there exist Λ = Λ(ε) > 0 and R = R(ε) > 0 such that, if (u n ) is a (P S) c sequence for I λ with λ ≥ Λ and c ∈ [0, c 1 ], then lim sup n→+∞ |u n | 2p 6p/(6−µ),B C R (0) ≤ ε. Proof. For R > 0, consider A(R) = {x ∈ R 3 : |x| > R and a(x) ≥ M 0 } and B(R) = {x ∈ R 3 : |x| > R and a(x) < M 0 }. Then, ∫A(R) u 2 n dx ≤ ( 1/(λM 0 + 1) ) ∫R 3 (λa(x) + 1)u 2 n dx ≤ ( 1/(λM 0 + 1) ) ||u n || 2 λ (3.13) ≤ ( 1/(λM 0 + 1) ) ( 1/2 − 1/(2p) ) −1 ( c + o n (1) ) ≤ ( 1/(λM 0 + 1) ) ( 1/2 − 1/(2p) ) −1 c 1 ( 1 + o n (1) ). Since c 1 is independent of λ, by (3.13) there is Λ > 0 such that lim sup n→+∞ ∫A(R) u 2 n dx < ε/2, ∀λ ≥ Λ. (3.14) On the other hand, using the Hölder inequality for s ∈ [1, 3] and the continuous embedding E λ ֒→ L 2s (R 3 ), we see that ∫B(R) u 2 n dx ≤ β ||u n || 2 λ |B(R)| 1/s ′ ≤ c 1 β ( 1/2 − 1/(2p) ) −1 |B(R)| 1/s ′ + o n (1), where β is a positive constant. Now, by assumption (1.4) on the potential a(x), we know that |B(R)| → 0 as R → +∞. Hence, for R large enough, lim sup n→+∞ ∫B(R) u 2 n dx < ε/2 for all λ ≥ Λ, and the conclusion follows from this, (3.14) and an interpolation argument. The main compactness result of this section is the following. Proposition 3.7. Given c 1 > 0, there exists Λ = Λ(c 1 ) > 0 such that I λ satisfies the (P S) c condition for all c ∈ [0, c 1 ], provided that λ ≥ Λ. Proof. Let (u n ) be a (P S) c sequence. Lemma 3.1 implies that (u n ) is bounded. Passing to a subsequence if necessary, u n ⇀ u in E λ ; u n (x) → u(x) a.e. in R 3 ; u n → u in L s loc (R 3 ), 1 ≤ s < 6.
Then, I ′ λ (u) = 0 and I λ (u) ≥ 0. Setting v n = u n − u, Lemma 3.3 ensures that (v n ) is a (P S) d sequence with d = c − I λ (u). Furthermore, 0 ≤ d = c − I λ (u) ≤ c ≤ c 1 . We claim that d = 0. Otherwise, suppose that d > 0. By Lemma 3.4 and Lemma 3.5, we know d ≥ c * and lim inf n→+∞ |v n | 2p 6p/(6−µ) ≥ δ 0 c * > 0. (3.16) Applying Lemma 3.6 with ε = δ 0 c * /2 > 0, there exist Λ, R > 0 such that lim sup n→+∞ |v n | 2p 6p/(6−µ),B C R (0) ≤ δ 0 c * /2, for λ ≥ Λ. (3.17) Combining (3.16) and (3.17), we obtain lim inf n→+∞ |v n | 2p 6p/(6−µ),BR(0) ≥ δ 0 c * /2 > 0, which is absurd because, as v n ⇀ 0 in E λ , the compact embedding E λ ֒→ L s loc (R 3 ), 1 ≤ s < 6, gives v n → 0 in L 6p/(6−µ) (B R (0)). Therefore d = 0 and, by Corollary 3.2, u n → u in E λ , showing that I λ satisfies the (P S) c condition.

4 The (P S) ∞ condition

A sequence (u n ) ⊂ H 1 (R 3 ) is called a (P S) ∞ sequence for the family (I λ ) λ≥1 if there exist d ∈ [0, c Γ ] and a sequence (λ n ) ⊂ [1, ∞) with λ n → ∞, such that I λn (u n ) → d and ||I ′ λn (u n )|| E * λn → 0, as n → ∞. Proposition 4.1. Suppose that 0 < µ < 3, 2 ≤ p < 6 − µ and (u n ) ⊂ H 1 (R 3 ) is a (P S) ∞ sequence for (I λ ) λ≥1 with 0 < d ≤ c Γ . Then, up to a subsequence, there exists u ∈ H 1 (R 3 ) such that u n ⇀ u in H 1 (R 3 ). Furthermore: (i) u n → u in H 1 (R 3 ); (ii) u = 0 in R 3 \ Ω and u ∈ H 1 0 (Ω) is a solution of −∆u + u = ( ∫Ω |u| p /|x − y| µ dy ) |u| p−2 u in Ω; (iii) λ n ∫R 3 a(x)|u n | 2 dx → 0; (iv) ||u n − u|| 2 λn,Ω → 0; (v) ||u n || 2 λn,R 3 \Ω → 0; (vi) I λn (u n ) → (1/2) ∫Ω (|∇u| 2 + |u| 2 )dx − (1/2p) ∫Ω ( ∫Ω |u| p /|x − y| µ dy ) |u| p dx. Proof. By hypothesis, I λn (u n ) → d and ||I ′ λn (u n )|| E ′ λn → 0. Then, the same arguments employed in the proof of Lemma 3.1 imply that (||u n || λn ) and (u n ) are bounded in R and H 1 (R 3 ) respectively. And so, up to a subsequence, there exists u ∈ H 1 (R 3 ) such that u n ⇀ u in H 1 (R 3 ) and u n (x) → u(x) for a.e. x ∈ R 3 . Now, for each m ∈ N, we define C m = { x ∈ R 3 ; a(x) ≥ 1/m }. Without loss of generality, we may assume that λ n < 2(λ n − 1), ∀n ∈ N. Thus, ∫Cm |u n | 2 dx ≤ (2m/λ n ) ∫Cm (λ n a(x) + 1)|u n | 2 dx ≤ C/λ n .
By Fatou's lemma, we derive that ∫Cm |u| 2 dx = 0, which implies that u = 0 in C m , and so, u = 0 in R 3 \ Ω. From this, we are able to prove (i)-(vi). (i) By a simple computation, we see that ||u n − u|| 2 λn = I ′ λn (u n )u n − I ′ λn (u n )u + ∫R 3 ( 1/|x| µ * |u n | p ) |u n | p−2 u n (u n − u)dx + o n (1); then, ||u n − u|| 2 λn = ∫R 3 ( 1/|x| µ * |u n | p ) |u n | p−2 u n (u n − u)dx + o n (1). As in the proof of Lemma 3.3, ||u n − u|| 2 λn → 0, which means that u n → u in H 1 (R 3 ). (ii) Since u ∈ H 1 (R 3 ) and u = 0 in R 3 \ Ω, we know u ∈ H 1 0 (Ω) and u| Ωj ∈ H 1 0 (Ω j ), for j ∈ {1, 2, . . . , k}. Moreover, taking into account that u n → u in H 1 (R 3 ) and I ′ λn (u n )ϕ → 0 for ϕ ∈ C ∞ 0 (Ω), we get ∫Ω (∇u∇ϕ + uϕ)dx − ∫Ω ( ∫Ω |u| p /|x − y| µ dy ) |u| p−2 uϕ dx = 0, (4.1) which shows that u| Ω is a solution of the nonlocal problem −∆u + u = ( ∫Ω |u| p /|x − y| µ dy ) |u| p−2 u in Ω. (iii) In view of (i), λ n ∫R 3 a(x)|u n | 2 dx = ∫R 3 λ n a(x)|u n − u| 2 dx ≤ ||u n − u|| 2 λn . Then λ n ∫R 3 a(x)|u n | 2 dx → 0. (iv) For each j ∈ {1, 2, . . . , k}, |u n − u| 2 2,Ωj , |∇u n − ∇u| 2 2,Ωj → 0 (see (i)). Therefore, ∫Ω (|∇u n | 2 − |∇u| 2 )dx → 0 and ∫Ω (|u n | 2 − |u| 2 )dx → 0. In view of (iii), we know ∫Ω λ n a(x)|u n | 2 dx → 0, then ||u n || 2 λn,Ω → ∫Ω (|∇u| 2 + |u| 2 )dx. (v) Summarizing (i) and ||u n − u|| 2 λn → 0, we obtain ||u n || 2 λn,R 3 \Ω → 0. (vi) We can write the functional I λn in the following way: I λn (u n ) = (1/2) ∫Ω (|∇u n | 2 + (λ n a(x) + 1)|u n | 2 )dx + (1/2) ∫R 3 \Ω (|∇u n | 2 + (λ n a(x) + 1)|u n | 2 )dx − (1/2p) ∫R 3 \Ω ( 1/|x| µ * |u n | p ) |u n | p dx − (1/2p) ∫Ω ( 1/|x| µ * |u n | p ) |u n | p dx. Using (i)-(v), we get (1/2) ∫Ω (|∇u n | 2 + (λ n a(x) + 1)|u n | 2 )dx → (1/2) ∫Ω (|∇u| 2 + |u| 2 )dx, (1/2) ∫R 3 \Ω (|∇u n | 2 + (λ n a(x) + 1)|u n | 2 )dx → 0, ∫Ω ( 1/|x| µ * |u n | p ) |u n | p dx → ∫Ω ( ∫Ω |u| p /|x − y| µ dy ) |u| p dx and ∫R 3 \Ω ( 1/|x| µ * |u n | p ) |u n | p dx → 0. Therefore, we can conclude that I λn (u n ) → (1/2) ∫Ω (|∇u| 2 + |u| 2 )dx − (1/2p) ∫Ω ( ∫Ω |u| p /|x − y| µ dy ) |u| p dx.
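As a consistency check on the exponent 6p/(6 − µ) that recurs throughout Sections 3 and 4, note that it is forced by the Hardy-Littlewood-Sobolev inequality with symmetric exponents s = r:

```latex
% With f = h = |u|^p and s = r, the HLS condition 1/s + \mu/3 + 1/s = 2 gives s = \tfrac{6}{6-\mu}:
\int_{\mathbb{R}^3}\Bigl(\frac{1}{|x|^{\mu}} * |u|^{p}\Bigr)|u|^{p}\,dx
 \;\le\; C(\mu)\,\bigl\||u|^{p}\bigr\|_{L^{6/(6-\mu)}}^{2}
 \;=\; C(\mu)\,|u|_{\frac{6p}{6-\mu}}^{2p}.
% The Sobolev embedding H^{1}(\mathbb{R}^3)\hookrightarrow L^{t}(\mathbb{R}^3) holds for 2 \le t \le 6,
% and 2 \le \tfrac{6p}{6-\mu} \le 6 is equivalent to \tfrac{6-\mu}{3} \le p \le 6-\mu,
% which is the natural range for p mentioned in Remark 1.4.
```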
5 Further propositions for c Γ

In the sequel, without loss of generality, we consider Γ = {1, · · · , l}, with l ≤ k. Moreover, we denote by Ω ′ Γ = ∪ j∈Γ Ω ′ j , where Ω ′ j is an open neighborhood of Ω j with Ω ′ j ∩ Ω ′ i = ∅ if j ≠ i. Using this notation, we introduce the functional I λ,Γ (u) = (1/2) ∫Ω ′ Γ (|∇u| 2 + (λa(x) + 1)|u| 2 )dx − (1/2p) ∫Ω ′ Γ ( ∫Ω ′ Γ |u| p /|x − y| µ dy ) |u| p dx, which is the energy functional associated to the Choquard equation with Neumann boundary condition −∆u + (λa(x) + 1)u = ( ∫Ω ′ Γ |u| p /|x − y| µ dy ) |u| p−2 u in Ω ′ Γ , ∂u/∂η = 0 on ∂Ω ′ Γ . (CN λ ) In what follows, we denote by c Γ the number given by c Γ = inf u∈MΓ I Γ (u), where M Γ = {u ∈ N Γ : I ′ Γ (u)u j = 0 and u j ≠ 0, ∀j ∈ Γ}, with u j = u| Ωj and N Γ = {u ∈ H 1 0 (Ω Γ ) \ {0} : I ′ Γ (u)u = 0}. Similarly, we denote by c λ,Γ the number given by c λ,Γ = inf u∈M ′ Γ I λ,Γ (u), where M ′ Γ = {u ∈ N ′ Γ : I ′ λ,Γ (u)u j = 0 and u j ≠ 0, ∀j ∈ Γ}, with u j = u| Ω ′ j and N ′ Γ = {u ∈ H 1 (Ω ′ Γ ) \ {0} : I ′ λ,Γ (u)u = 0}. Repeating the same arguments of Section 2, we know that there exist w Γ ∈ H 1 0 (Ω Γ ) and w λ,Γ ∈ H 1 (Ω ′ Γ ) such that I Γ (w Γ ) = c Γ , I ′ Γ (w Γ ) = 0 and I λ,Γ (w λ,Γ ) = c λ,Γ , I ′ λ,Γ (w λ,Γ ) = 0. We have the following lemma, which describes an important relation between c Γ and c λ,Γ . Lemma 5.1. There holds: (i) 0 < c λ,Γ ≤ c Γ , ∀λ ≥ 0; (ii) c λ,Γ → c Γ , as λ → ∞. Proof. (i) Since H 1 0 (Ω Γ ) ⊂ H 1 (Ω ′ Γ ), it is easy to see that 0 < c λ,Γ ≤ c Γ . (ii) Let λ n → ∞. From the above commentaries, for each λ n there exists w n ∈ H 1 (Ω ′ Γ ) with I λn,Γ (w n ) = c λn,Γ and I ′ λn,Γ (w n ) = 0. As (c λn,Γ ) is bounded, there exists a subsequence (w ni ) of (w n ) such that (I λn i ,Γ (w ni )) converges and I ′ λn i ,Γ (w ni ) = 0.
Repeating the same ideas explored in the proof of Proposition 4.1, we know that there exists w ∈ H 1 0 (Ω Γ ) \ {0} ⊂ H 1 (Ω ′ Γ ) such that w j = w| Ωj ≠ 0, j ∈ Γ, and w ni → w in H 1 (Ω ′ Γ ), as n i → ∞. Furthermore, we also have that c λn i ,Γ = I λn i ,Γ (w ni ) → I Γ (w) and 0 = I ′ λn i ,Γ (w ni ) → I ′ Γ (w). By the definition of c Γ , lim i c λn i ,Γ ≥ c Γ . Then, combining the last limit with conclusion (i), we can guarantee that c λn i ,Γ → c Γ , as n i → ∞. This establishes the asserted result. In the sequel, we denote by w ∈ H 1 0 (Ω Γ ) the least energy solution obtained in Section 2, that is, w ∈ M Γ , I Γ (w) = c Γ and I ′ Γ (w) = 0. (5.1) Changing variables by t j = s j 1/p , it is obvious that I Γ ( t 1 w 1 + · · · + t l w l ) = Σ l j=1 (t j 2 /2) ||w j || 2 j − (1/2p) ∫ΩΓ ( ∫ΩΓ | Σ l j=1 t j w j | p /|x − y| µ dy ) | Σ l j=1 t j w j | p dx = Σ l j=1 (s j 2/p /2) ||w j || 2 j − (1/2p) ∫ΩΓ ( ∫ΩΓ Σ l j=1 s j |w j | p /|x − y| µ dy ) ( Σ l j=1 s j |w j | p ) dx. Arguing as in [18], ∫ΩΓ ( ∫ΩΓ Σ l j=1 s j |w j | p /|x − y| µ dy ) ( Σ l j=1 s j |w j | p ) dx = ∫ΩΓ ( 1/|x| µ/2 * Σ l j=1 s j |w j | p ) 2 dx. As s → s 2/p is concave and s → s 2 is strictly convex, we conclude that the function G(s 1 , s 2 , · · · , s l ) = I Γ (s 1 1/p w 1 + · · · + s l 1/p w l ) is strictly concave with ∇G(1, · · · , 1) = 0. Hence, (1, · · · , 1) is the unique global maximum point of G on [0, +∞) l with G(1, · · · , 1) = c Γ . Assuming p > 2, there are r > 0 small enough and R > 0 large enough such that I ′ Γ ( Σ l j=1,j≠i t j w j + Rw i )(Rw i ) < 0, for i ∈ Γ, ∀t j ∈ [r, R], j ≠ i, (5.2) I ′ Γ ( Σ l j=1,j≠i t j w j + rw i )(rw i ) > 0, for i ∈ Γ, ∀t j ∈ [r, R], j ≠ i, (5.3) and I Γ ( Σ l j=1 t j w j ) < c Γ , ∀(t 1 , · · · , t l ) ∈ ∂[r, R] l , (5.4) where w j := w| Ωj , j ∈ Γ.
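Conditions (5.2) and (5.3) fix opposite signs for the derivative of the energy along each coordinate direction on the opposite faces of the cube [r, R] l ; this is precisely the hypothesis of the Poincaré-Miranda theorem, the version of Miranda's result [24] applied in this section. A sketch of the statement, with our own auxiliary notation H for the coordinate maps:

```latex
% Poincar\'e--Miranda theorem: let H = (H_1,\dots,H_l)\colon [r,R]^{l}\to\mathbb{R}^{l}
% be continuous and satisfy, for each i = 1,\dots,l,
H_i\big|_{\{t_i = r\}} \;>\; 0
\qquad\text{and}\qquad
H_i\big|_{\{t_i = R\}} \;<\; 0.
% Then H(t^0) = (0,\dots,0) for some t^0 \in (r,R)^{l}. Taking
% H_i(t_1,\dots,t_l) = I'_{\Gamma}\bigl(\textstyle\sum_{j=1}^{l} t_j w_j\bigr)(t_i w_i),
% hypotheses (5.2)--(5.3) give exactly these sign conditions on the faces of the cube.
```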
Using this information, we can define

γ 0 (t 1 , · · · , t l )(x) = Σ l j=1 t j w j (x) ∈ H 1 0 (Ω Γ ), ∀(t 1 , · · · , t l ) ∈ [r, R] l ,

and denote by Γ * the class of continuous paths γ ∈ C( [r, R] l , E λ \ {0} ) which satisfy the following conditions:

(a) γ = γ 0 on ∂[r, R] l , and

(b) Φ Γ (γ) = (1/2) ∫ R 3 \Ω ′ Γ (|∇γ| 2 + (λa(x) + 1)|γ| 2 ) dx − (1/p) ∫ R 3 \Ω ′ Γ ( (1/|x| µ ) * |γ| p ) |γ| p dx ≥ 0,

where R > 1 > r > 0 are the positive constants obtained in (5.2) and (5.3). Since γ 0 ∈ Γ * , we know that Γ * ≠ ∅. By (a) and (5.4), we have

I λ (γ(t 1 , · · · , t l )) < c Γ , ∀(t 1 , · · · , t l ) ∈ ∂[r, R] l , ∀γ ∈ Γ * . (5.5)

The following lemma will be used to describe the intersection property of the paths and the set M Γ in the final section.

Lemma 5.2. For all γ ∈ Γ * , there exists (t 1 , . . . , t l ) ∈ (r, R) l such that

I ′ λ,Γ (γ(t 1 , . . . , t l ))γ j (t 1 , . . . , t l ) = 0, where γ j (t 1 , . . . , t l ) = γ(t 1 , . . . , t l )| Ω ′ j , j ∈ Γ.

Proof. Since p > 2 and γ = γ 0 on ∂[r, R] l , by using (5.2) and (5.3), we see that the result follows by Miranda's Theorem [24].

6 Proof of Theorem 1.2

In this section, we are ready to find nonnegative solutions u λ for large values of λ, which converge to a least energy solution of (C) ∞,Γ as λ → ∞. To this end, we will prove two propositions which, together with Proposition 4.1, will help us to show the main result in Theorem 1.2. Henceforth, we denote by

Θ = { u ∈ E λ : ||u|| λ,Ω ′ j > rτ /2, j = 1, · · · , l },

where r was fixed in (5.2) and τ is the positive constant such that

||u j || j > τ, ∀u ∈ Υ Γ = {u ∈ M Γ : I Γ (u) = c Γ } and ∀j ∈ Γ.

Furthermore, I cΓ λ denotes the set I cΓ λ = { u ∈ E λ ; I λ (u) ≤ c Γ }. Fixing δ = rτ /8, for ξ > 0 small enough, we set

A λ ξ = { u ∈ Θ 2δ : Φ Γ (u) ≥ 0, ||u|| λ,R 3 \Ω ′ Γ ≤ ξ and |I λ (u) − c Γ | ≤ ξ }. (6.1)

We observe that w ∈ A λ ξ ∩ I cΓ λ , showing that A λ ξ ∩ I cΓ λ ≠ ∅. We have the following uniform estimate of ||I ′ λ (u)|| E * λ on the region ( A λ 2ξ \ A λ ξ ) ∩ I cΓ λ .
Proposition 6.1. For each ξ > 0, there exist Λ * ≥ 1 and σ 0 > 0 independent of λ such that

||I ′ λ (u)|| E * λ ≥ σ 0 , for λ ≥ Λ * and u ∈ ( A λ 2ξ \ A λ ξ ) ∩ I cΓ λ . (6.2)

Proof. We assume that there exist λ n → ∞ and u n ∈ ( A λn 2ξ \ A λn ξ ) ∩ I cΓ λn such that ||I ′ λn (u n )|| E * λn → 0. Since u n ∈ A λn 2ξ , we know that (||u n || λn ) and (I λn (u n )) are both bounded. Passing to a subsequence if necessary, we may assume that (I λn (u n )) converges. Thus, from Proposition 4.1, there exists u ∈ H 1 0 (Ω Γ ) such that u is a solution for

−∆u + u = ( ∫ ΩΓ |u(y)| p / |x − y| µ dy ) |u| p−2 u in Ω Γ ,

with u n → u in H 1 (R 3 ), ||u n || λn,R 3 \Ω → 0 and I λn (u n ) → I Γ (u). As (u n ) ⊂ Θ 2δ , we derive that ||u n || λn,Ω ′ j > rτ /4, j = 1, · · · , l. Letting n → +∞, we get the inequality ||u j || ≥ rτ /4 > 0, j = 1, · · · , l, which yields u| Ω j ≠ 0, j = 1, · · · , l, and I ′ Γ (u) = 0. Consequently, I Γ (u) ≥ c Γ . However, from the fact that I λn (u n ) ≤ c Γ and I λn (u n ) → I Γ (u), we derive that I Γ (u) = c Γ , and so, u ∈ Υ Γ . Thus, for n large enough,

||u n || λn,Ω ′ j > rτ /2 and |I λn (u n ) − c Γ | ≤ ξ, j = 1, · · · , l.

So u n ∈ A λn ξ , which is a contradiction, finishing the proof.

In the sequel, ξ 1 , ξ * will be defined as

ξ 1 = min (t1,··· ,t l )∈∂[r,R] l |I Γ (γ 0 (t 1 , · · · , t l )) − c Γ | > 0 and ξ * = min{ξ 1 /2, δ, ρ/2},

where δ was given in (6.1) and ρ = 4R 2 c Γ , where R was fixed in (5.2). Moreover, for each s > 0, B λ s denotes the set

B λ s = { u ∈ E λ ; ||u|| 2 λ ≤ s }.

Proposition 6.2. Suppose that 0 < µ < 3 and 2 < p < 6 − µ. Let ξ ∈ (0, ξ * ) and Λ * ≥ 1 be as given in the previous proposition. Then, for λ ≥ Λ * , there exists a solution u λ of (C λ ) such that u λ ∈ A λ ξ ∩ I cΓ λ ∩ B λ 2ρ+1 .

Proof. Let λ ≥ Λ * . Assume that there are no critical points of I λ in A λ ξ ∩ I cΓ λ ∩ B λ 2ρ+1 . Since I λ verifies the (P S) d condition with 0 < d ≤ c Γ , there exists a constant ν λ > 0 such that

||I ′ λ (u)|| E * λ ≥ ν λ , for all u ∈ A λ ξ ∩ I cΓ λ ∩ B λ 2ρ+1 .
From Proposition 6.1, we have

||I ′ λ (u)|| E * λ ≥ σ 0 , for all u ∈ ( A λ 2ξ \ A λ ξ ) ∩ I cΓ λ ,

where σ 0 > 0 is small enough and does not depend on λ. In what follows, Ψ : E λ → R is a continuous functional verifying

Ψ(u) = 1, for u ∈ A λ 3ξ/2 ∩ Θ δ ∩ B λ 2ρ , Ψ(u) = 0, for u ∉ A λ 2ξ ∩ Θ 2δ ∩ B λ 2ρ+1 , and 0 ≤ Ψ(u) ≤ 1, ∀u ∈ E λ .

We also consider H : I cΓ λ → E λ given by

H(u) = −Ψ(u) ||Y (u)|| −1 Y (u), for u ∈ A λ 2ξ ∩ B λ 2ρ+1 , and H(u) = 0, for u ∉ A λ 2ξ ∩ B λ 2ρ+1 ,

where Y is a pseudo-gradient vector field for I λ on K = {u ∈ E λ ; I ′ λ (u) ≠ 0}. Observe that H is well defined, since I ′ λ (u) ≠ 0, for u ∈ A λ 2ξ ∩ I cΓ λ . The inequality

||H(u)|| λ ≤ 1, ∀λ ≥ Λ * and u ∈ I cΓ λ ,

guarantees that the deformation flow η : [0, ∞) × I cΓ λ → I cΓ λ defined by

dη/dt = H(η), η(0, u) = u ∈ I cΓ λ

verifies

(d/dt) I λ (η(t, u)) ≤ −Ψ(η(t, u)) ||I ′ λ (η(t, u))|| ≤ 0, (6.3)

||dη/dt|| λ = ||H(η)|| λ ≤ 1, (6.4)

and

η(t, u) = u for all t ≥ 0 and u ∈ I cΓ λ \ ( A λ 2ξ ∩ B λ 2ρ+1 ). (6.5)

We study now two paths, which are relevant for what follows:

• The path (t 1 , · · · , t l ) → η( t, γ 0 (t 1 , · · · , t l ) ), where (t 1 , · · · , t l ) ∈ [r, R] l . Since ξ ∈ (0, ξ * ), we have that

γ 0 (t 1 , · · · , t l ) ∉ A λ 2ξ , ∀(t 1 , · · · , t l ) ∈ ∂[r, R] l ,

and

I λ (γ 0 (t 1 , · · · , t l )) < c Γ , ∀(t 1 , · · · , t l ) ∈ ∂[r, R] l .

Once γ 0 (t 1 , · · · , t l ) ∈ Θ 2δ , for all (t 1 , · · · , t l ) ∈ [r, R] l , (6.5) gives that

η( t, γ 0 (t 1 , · · · , t l ) )| Ω ′ j ≠ 0, t ≥ 0.

Moreover, it is also easy to see that

(1/2) ∫ R 3 \Ω ′ (|∇η( t, γ 0 )| 2 + (λa(x) + 1)|η( t, γ 0 )| 2 ) dx − (1/p) ∫ R 3 \Ω ′ ( (1/|x| µ ) * |η( t, γ 0 )| p ) |η( t, γ 0 )| p dx ≥ 0.

Consequently, η( t, γ 0 (t 1 , · · · , t l ) ) ∈ Γ * , t ≥ 0.

• The path (t 1 , · · · , t l ) → γ 0 (t 1 , · · · , t l ), where (t 1 , · · · , t l ) ∈ [r, R] l . We observe that supp γ 0 (t 1 , · · · , t l ) ⊂ Ω Γ and that I λ (γ 0 (t 1 , · · · , t l )) does not depend on λ ≥ 1, for all (t 1 , · · · , t l ) ∈ [r, R] l .
Moreover, I λ (γ 0 (t 1 , · · · , t l )) ≤ c Γ , ∀(t 1 , · · · , t l ) ∈ [r, R] l , and I λ (γ 0 (t 1 , · · · , t l )) = c Γ if, and only if, t j = 1, j = 1, · · · , l. Therefore,

m 0 = sup{ I λ (u) ; u ∈ γ 0 ( [r, R] l ) \ A λ ξ }

is independent of λ and m 0 < c Γ . In the following, we suppose that there exists K * > 0 such that

|I λ (u) − I λ (v)| ≤ K * ||u − v|| λ , ∀u, v ∈ B λ 2ρ .

Now, we will prove that

max (t1,··· ,t l )∈[r,R] l I λ ( η( T, γ 0 (t 1 , · · · , t l ) ) ) ≤ c Γ − σ 0 ξ /(2K * ), (6.6)

for T > 0 large. In fact, writing u = γ 0 (t 1 , · · · , t l ), (t 1 , · · · , t l ) ∈ [r, R] l , if u ∉ A λ ξ , from (6.3) we deduce that I λ (η(t, u)) ≤ I λ (u) ≤ m 0 , ∀t ≥ 0, and we have nothing more to do. And so, we assume that u ∈ A λ ξ and set η(t) = η(t, u), ν̄ λ = min {ν λ , σ 0 } and T = σ 0 ξ /(K * ν̄ λ ). Now, we will discuss two cases:

Case 1: η(t) ∈ A λ 3ξ/2 ∩ Θ δ ∩ B λ 2ρ , ∀t ∈ [0, T ].

Case 2: η(t 0 ) ∉ A λ 3ξ/2 ∩ Θ δ ∩ B λ 2ρ , for some t 0 ∈ [0, T ].

Analysis of Case 1. In this case, we have Ψ(η(t)) = 1 and ||I ′ λ (η(t))|| ≥ ν̄ λ for all t ∈ [0, T ]. Hence, from (6.3), we know

I λ (η(T )) = I λ (u) + ∫ T 0 (d/ds) I λ (η(s)) ds ≤ c Γ − ∫ T 0 ν̄ λ ds,

that is,

I λ (η(T )) ≤ c Γ − ν̄ λ T ≤ c Γ − σ 0 ξ /(2K * ),

showing (6.6).

Analysis of Case 2. In this case we have the following situations:

(c): η(t) ∈ Θ δ ∩ B λ 2ρ for all t ∈ [0, T ], and there are 0 ≤ T 1 ≤ T 2 ≤ T such that η(t) ∈ A λ 3ξ/2 \ A λ ξ for all t ∈ [T 1 , T 2 ], with |I λ (η(T 1 )) − c Γ | = ξ and |I λ (η(T 2 )) − c Γ | = 3ξ/2. From the definition of K * , we have

||η(T 2 ) − η(T 1 )|| ≥ (1/K * ) |I λ (η(T 2 )) − I λ (η(T 1 ))| ≥ ξ /(2K * );

then the mean value theorem implies that T 2 − T 1 ≥ ξ /(2K * ). Notice that I λ (η(T )) ≤ I λ (u) −

showing that u = 0 in Ω j , for all j ∉ Γ. This finishes the proof of Theorem 1.2.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous referee for his/her useful comments and suggestions which helped to improve and clarify the paper greatly.

Proposition 1.5.
[21] (Hardy-Littlewood-Sobolev inequality):

Lemma 3.6. Let c 1 > 0 be a constant independent of λ. Given ε > 0, there exist Λ = Λ(ε) and R = R(ε, c 1 ) such that, if (u n ) is a (P S) c sequence for I λ with c ∈ [0, c 1 ],

Proposition 3.7. Given c 1 > 0, independent of λ, there exists Λ = Λ(c 1 ) > 0 such that if λ ≥ Λ, then I λ verifies the (P S) c condition for all c ∈ [0, c 1 ]. Thereby, d = 0 and (v n ) is a (P S) 0 sequence. Hence, by Corollary 3.2, v n → 0 in E λ . Thus, I λ satisfies the (P S) c condition for c ∈ [0, c 1 ] if λ is large enough.

(a): There exists T 2 ∈ [0, T ] such that η(T 2 ) ∉ Θ δ . Let T 1 = 0; it follows that ||η(T 2 ) − η(T 1 )|| ≥ δ > µ, because η(T 1 ) = u ∈ Θ. (b): There exists T 2 ∈ [0, T ] such that η(T 2 ) ∉ B λ 2ρ . Let T 1 = 0; we get ||η(T 2 ) − η(T 1 )|| ≥ ρ > µ, since η(T 1 ) = u ∈ B λ ρ . We deduce that I λ (η(T )) ≤ c Γ − ∫ T2 T1 σ 0 ds = c Γ − σ 0 (T 2 − T 1 ) ≤ c Γ − σ 0 ξ /(2K * ). However, the solution u verifies ||u|| H 1 (R N \ΩΓ) = 0, when R → +∞, by increasing R and Λ if necessary. Then we can choose R large enough such that lim sup n→+∞ ∫ B(R) u n 2 dx < ε/2. (3.15) Using (3.14) and (3.15), we obtain that lim sup n→+∞ ∫ R 3 u n 2 dx < ε. The last inequality combined with interpolation implies that lim sup n→+∞ ∫ R 3 \B R (0) |u n | 6p/(6−µ) dx < ε, λ > Λ, which proves (6.6). So, defining η(t 1 , · · · , t l ) = η( T, γ 0 (t 1 , · · · , t l ) ), (t 1 , · · · , t l ) ∈ [r, R] l , we have that η ∈ Γ * and On the other hand, we can estimate Since η ∈ Γ * , it follows that By (6.6) and (6.7), applying Lemma 5.2, we have this contradicts the conclusion (ii) of Lemma 5.1.

[Proof of Theorem 1.2: Conclusion] From the last Proposition, there exists (u λn ) with λ n → +∞ satisfying: (a) I ′ λn (u λn ) = 0, ∀n ∈ N; Therefore, from Proposition 4.1, we derive that (u λn ) converges in H 1 (R 3 ) to a function u ∈ H 1 (R 3 ), which satisfies u = 0 outside Ω and u| Ω j ≠ 0, j = 1, · · · , l. Now, we claim that u = 0 in Ω j , for all j ∉ Γ. Indeed, it is possible to prove that there is σ 1 > 0, which is independent of j, such that if v is a nontrivial solution of (C) ∞,Γ , then ||v|| H 1 0 (ΩΓ) ≥ σ 1 .

References

[1] C.O. Alves, Existence of multi-bump solutions for a class of quasilinear problems, Adv. Nonlinear Stud. 6 (2006), 491-509.
[2] C.O. Alves & A.B. Nóbrega, Existence of multi-bump solutions for a class of elliptic problems involving the biharmonic operator, arXiv:1602.03112v1.
[3] N. Ackermann, On a periodic Schrödinger equation with nonlocal superlinear part, Math. Z. 248 (2004), 423-443.
[4] T. Bartsch & Z.Q. Wang, Existence and multiplicity results for some superlinear elliptic problems on R N , Comm. Part. Diff. Equ. 20 (1995), 1725-1741.
[5] T. Bartsch & Z.Q. Wang, Multiple positive solutions for a nonlinear Schrödinger equation, Z. Angew. Math. Phys. 51 (2000), 366-384.
[6] T. Bartsch, A. Pankov & Z.Q. Wang, Nonlinear Schrödinger equations with steep potential well, Commun. Contemp. Math. 3 (2001), 549-569.
[7] H. Berestycki & P.L. Lions, Nonlinear scalar field equations, I. Existence of a ground state, Arch. Ration. Mech. Anal. 82 (1983), 313-346.
[8] B. Buffoni, L. Jeanjean & C.A. Stuart, Existence of a nontrivial solution to a strongly indefinite semilinear equation, Proc. Amer. Math. Soc. 119 (1993), 179-186.
[9] V. Coti Zelati & R. Rabinowitz, Homoclinic type solutions for semilinear elliptic PDE on R N , Comm. Pure Appl. Math. LV (1992), 1217-1269.
[10] M. Clapp & Y.H. Ding, Minimal nodal solutions of a Schrödinger equation with critical nonlinearity and symmetric potential, Diff. Int. Equa. 16 (2003), 981-992.
[11] M. Clapp & D. Salazar, Positive and sign changing solutions to a nonlinear Choquard equation, J. Math. Anal. Appl. 407 (2013), 1-15.
[12] S. Cingolani, M. Clapp & S. Secchi, Multiple solutions to a magnetic nonlinear Choquard equation, Z. Angew. Math. Phys. 63 (2012), 233-248.
[13] M. del Pino & P. Felmer, Multipeak bound states of nonlinear Schrödinger equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 15 (1998), 127-149.
[14] M. del Pino & P. Felmer, Local Mountain Pass for semilinear elliptic problems in unbounded domains, Calc. Var. Partial Differential Equations 4 (1996), 121-137.
[15] Y. Ding & K. Tanaka, Multiplicity of positive solutions of a nonlinear Schrödinger equation, Manus. Math. 112 (2003), 109-135.
[16] Y. Ding & A. Szulkin, Bound states for semilinear Schrödinger equations with sign-changing potential, Calc. Var. Partial Differential Equations 29 (2007), 397-419.
[17] A. Floer & A. Weinstein, Nonspreading wave packets for the cubic Schrödinger equation with a bounded potential, J. Funct. Anal. 69 (1986), 397-408.
[18] M. Ghimenti & J. Van Schaftingen, Nodal solutions for the Choquard equation, arXiv:1503.06031v1.
[19] M. Ghimenti, V. Moroz & J. Van Schaftingen, Least action nodal solutions for the quadratic Choquard equation, arXiv:1511.04779v1.
[20] E.H. Lieb, Existence and uniqueness of the minimizing solution of Choquard's nonlinear equation, Studies in Appl. Math. 57 (1976/77), 93-105.
[21] E. Lieb & M. Loss, Analysis, Graduate Studies in Mathematics, AMS, Providence, Rhode Island, 2001.
[22] P.L. Lions, The Choquard equation and related questions, Nonlinear Anal. 4 (1980), 1063-1072.
[23] G.B. Li, Some properties of weak solutions of nonlinear scalar field equations, Ann. Acad. Sci. Fenn. Math. 14 (1989), 27-36.
[24] C. Miranda, Un'osservazione su un teorema di Brouwer, Boll. Un. Mat. Ital. 3 (1940), 5-7.
[25] L. Ma & L. Zhao, Classification of positive solitary solutions of the nonlinear Choquard equation, Arch. Ration. Mech. Anal. 195 (2010), 455-467.
[26] V. Moroz & J. Van Schaftingen, Ground states of nonlinear Choquard equations: existence, qualitative properties and decay asymptotics, J. Funct. Anal. 265 (2013), 153-184.
[27] V. Moroz & J. Van Schaftingen, Existence of groundstates for a class of nonlinear Choquard equations, Trans. Amer. Math. Soc., doi:10.1090/S0002-9947-2014-06289-2.
[28] V. Moroz & J. Van Schaftingen, Semi-classical states for the Choquard equation, Calc. Var. Partial Differential Equations 52 (2015), 199-235.
[29] V. Moroz & J. Van Schaftingen, Groundstates of nonlinear Choquard equations: Hardy-Littlewood-Sobolev critical exponent, Commun. Contemp. Math. 17 (2015), no. 5, 1550005, 12 pp.
[30] S. Pekar, Untersuchung über die Elektronentheorie der Kristalle, Akademie Verlag, Berlin, 1954.
[31] R. Penrose, On gravity's role in quantum state reduction, Gen. Relativ. Gravitat. 28 (1996), 581-600.
[32] C.A. Stuart & H.S. Zhou, Global branch of solutions for non-linear Schrödinger equations with deepening potential well, Proc. London Math. Soc. 92 (2006), 655-681.
[33] Y. Sato & K. Tanaka, Sign-changing multi-bump solutions for nonlinear Schrödinger equations with steep potential wells, Trans. Amer. Math. Soc. 361 (2009), 6205-6253.
[34] S. Secchi, A note on Schrödinger-Newton systems with decaying electric potential, Nonlinear Anal. 72 (2010), 3842-3856.
[35] E. Séré, Existence of infinitely many homoclinic orbits in Hamiltonian systems, Math. Z. 209 (1992), 27-42.
[36] Z.P. Wang & H.S. Zhou, Positive solutions for nonlinear Schrödinger equations with deepening potential well, J. Eur. Math. Soc. (JEMS) 11 (2009), 545-573.
[37] J. Wei & M. Winter, Strongly interacting bumps for the Schrödinger-Newton equations, J. Math. Phys. 50 (2009), 012905.
[38] M. Willem, Minimax Theorems, Birkhäuser, 1996.
A Quantum Monte Carlo study of the structural, energetic, and magnetic properties of two-dimensional (2D) H and T phase VSe 2

Daniel Wines, Materials Science and Engineering Division, National Institute of Standards and Technology (NIST), Gaithersburg, MD 20899
Juha Tiihonen, Department of Physics, Nanoscience Center, University of Jyväskylä, P.O. Box 35, Finland
Kayahan Saritas, Material Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831
Jaron Krogel, Material Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831
Can Ataca, Department of Physics, University of Maryland Baltimore County, Baltimore, MD 21250

(Dated: 30 January 2023)

Previous works have controversially claimed near-room temperature ferromagnetism in two-dimensional (2D) VSe 2 , with conflicting results throughout the literature. These discrepancies in magnetic properties between both phases (T and H phase) of 2D VSe 2 are most likely due to the structural parameters being coupled to the magnetic properties. Specifically, both phases have a close lattice match and similar total energies, which makes it difficult to determine which phase is being observed experimentally. In this study, we used a combination of density functional theory (DFT), highly accurate diffusion Monte Carlo (DMC) and a surrogate Hessian line-search optimization technique to resolve the previously reported discrepancy in structural parameters and relative phase stability. With DMC accuracy, we determined the freestanding geometry of both phases and constructed a phase diagram. Our findings demonstrate the successes of the DMC method coupled with the surrogate Hessian structural optimization technique when applied to a 2D magnetic system.
arXiv:2301.11404v1 [cond-mat.str-el]

I. INTRODUCTION

One of the most promising two-dimensional (2D) magnetic materials that has been extensively studied experimentally and theoretically is 2D VSe 2 . Similar to other 2D transition metal dichalcogenides (such as MoS 2 ) 1 , VSe 2 exists in two phases: the T phase (octahedral (1T) coordination, centered honeycombs), which is metallic, and the H phase (trigonal prismatic (2H) coordination, hexagonal honeycombs). One previous study reported an estimate of the Curie temperature of H-VSe 2 (291 K) 2 , but the model Ising Hamiltonian used did not take into account the magnetic anisotropy energies, which are essential for an accurate estimation of the Curie temperature of a 2D lattice. The Curie temperature of multilayered 2D H-VSe 2 has been experimentally measured to be 425 K, with the ferromagnetism softening as the thickness of the sample increases 3 . Additionally, the experimental Curie temperature for monolayer T-VSe 2 has ranged from 300 K to 470 K 4,5 , depending on which substrate is used (MoS 2 , graphite, SiO 2 -coated silicon). The experimental magnetization of T-VSe 2 has also been met with controversy, with values of 15 µ B and 5 µ B (per formula unit) being reported from two separate studies 4,6 . Insight has also been reported with regards to how the ferromagnetism is enhanced with defects, molecular adsorption and the choice of substrate for VSe 2 4,5,7 . A wide range of values has also been reported for the charge density wave (CDW) transition temperature for T-VSe 2 , ranging from 120 K to 350 K 3,6,[8][9][10] .
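As context for Ising-model Curie-temperature estimates like the 291 K value mentioned above, the nearest-neighbor spin-1/2 2D Ising model has exactly known critical temperatures on the common lattices; the V atoms in VSe 2 form a triangular lattice. The sketch below inverts the exact triangular-lattice relation to obtain the per-bond exchange coupling J implied by a given Curie temperature. This is only an illustration of the textbook result: the Hamiltonian and coupling convention of Ref. 2 may differ, and, as noted above, neglecting magnetic anisotropy makes any such estimate approximate.

```python
import math

K_B_MEV = 0.08617  # Boltzmann constant in meV/K

def ising_tc(j_mev, lattice="triangular"):
    """Exact Curie temperature (K) of the nearest-neighbor spin-1/2 2D Ising
    model for a per-bond exchange coupling j_mev in meV (H = -J sum s_i s_j).

    Exact critical ratios k_B Tc / J: square 2/ln(1+sqrt(2)),
    honeycomb 2/ln(2+sqrt(3)), triangular 4/ln(3).
    """
    factors = {
        "square": 2.0 / math.log(1.0 + math.sqrt(2.0)),
        "honeycomb": 2.0 / math.log(2.0 + math.sqrt(3.0)),
        "triangular": 4.0 / math.log(3.0),
    }
    return factors[lattice] * j_mev / K_B_MEV

# Invert the triangular-lattice relation for a Curie temperature of 291 K
# (the V sublattice of VSe2 is triangular); illustrative only.
J = 291.0 * K_B_MEV * math.log(3.0) / 4.0
print(f"implied J ~ {J:.1f} meV/bond; Tc check = {ising_tc(J):.0f} K")
```

The exact ratios increase with coordination number (honeycomb < square < triangular), which is why the lattice assumption matters as much as the value of J.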
These discrepancies in the electronic and magnetic properties of either phase of 2D VSe 2 arise from the structural parameters of each phase being coupled closely to the magnetic and electronic properties and the external factors (substrates, defects) of the individual samples. One example of this has been a reported discrepancy on which phase (T or H) is energetically more favorable. Both the T and H phases have a close lattice match and similar total energies, which makes it difficult to determine which phase is being observed experimentally. Recently, it has been reported experimentally that the T phase is favored for bulk VSe2, but with dimensionality decrease, the H phase is favored 3,11 . It has also been reported that a T-to-H phase transition can be realized by thermal annealing 11 . This same structural phase transition has even been reported by applying a biaxial strain of ≈ 3 % (from calculated results) 7,11,12 . Researchers have proposed that this lattice strain can be induced by the mismatch that occurs from putting 2D VSe 2 on a substrate 7,12 . From a computational perspective, results for VSe 2 depend heavily on which methodology is employed. In most cases, DFT with an empirical Hubbard correction (+U) for correlated electrons is used 13 . For example, if the U correction is applied for T and H-VSe 2 , the T phase is more energetically favorable, while if no U correction is applied, the H phase is more favorable 14 . In addition to the discrepancies in results calculated with DFT+U, results between van der Waals (vdW) corrected functionals and hybrid functionals are also inconclusive 14 in terms of predicting the relative phase stability. In order to alleviate the uncertainty in DFT methods, more sophisticated methods can be used such as Diffusion Monte Carlo (DMC) 15 . 
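The U-dependent reversal of phase stability described above can be made concrete with a small sketch: given the energy difference ΔE(U) = E_T − E_H tabulated on a grid of Hubbard U values, the critical U is where ΔE changes sign. The numbers below (`u_grid`, `de_t_minus_h`) are illustrative placeholders, not computed VSe 2 data, and `critical_u` is a hypothetical helper, not part of any published workflow.

```python
import numpy as np

# Hypothetical energy differences E_T - E_H (eV/f.u.) on a grid of Hubbard U
# values (eV); positive means the H phase is favored, negative the T phase.
u_grid = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
de_t_minus_h = np.array([0.06, 0.03, -0.01, -0.08, -0.16])

def critical_u(u, de):
    """Linearly interpolate the U value at which E_T - E_H changes sign."""
    sign_change = np.where(np.diff(np.sign(de)) != 0)[0]
    if sign_change.size == 0:
        return None  # no reversal of phase stability on this grid
    i = sign_change[0]
    # Linear interpolation between the two bracketing grid points.
    return u[i] - de[i] * (u[i + 1] - u[i]) / (de[i + 1] - de[i])

uc = critical_u(u_grid, de_t_minus_h)
print(f"H phase favored below U ~ {uc:.2f} eV, T phase above")
```

With these placeholder numbers the crossing falls at U = 1.75 eV; the point is only that the predicted stable phase flips across a functional-dependent critical U, as Fig. 2 shows.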
DMC is a correlated, many-body electronic structure method that has demonstrated success for the electronic and magnetic properties of a variety of bulk and 2D systems [16][17][18][19][20][21][22][23][24] . This method has a weaker dependence on the starting density functional and U parameter and can successfully achieve results with an accuracy beyond DFT+U 15 . Because T and H-VSe 2 have structural parameters that are coupled to their electronic and magnetic properties, it is difficult to produce conclusive results that rely solely on DFT or DFT+U. For this reason, we employed our recently developed energy-based surrogate Hessian method for structural optimization with stochastic electronic structure theories (such as DMC) 22 to obtain the geometry of T and H-VSe 2 with DMC accuracy, resulting in high-accuracy bond lengths that resolve previous functional-dependent structural discrepancies. After obtaining an accurate geometry for both structures, we constructed a phase diagram between T and H-VSe 2 using DMC calculated energies and obtained accurate magnetic properties of each structure. The accurate estimates for lattice geometry, relative phase energy and the DMC phase diagram assist in clarifying previously inconclusive theoretical and experimental results regarding T and H phase VSe 2 . For full details of the computational methods used, see the Supporting Information (SI). As an initial starting point for our study, we performed benchmarking DFT and DFT+U calculations using a variety of density functionals (the local density approximation (LDA) 25 , Perdew-Burke-Ernzerhof (PBE) 26 , and strongly constrained and appropriately normed (SCAN) 27 meta-GGA functionals; see SI for more details) and the Vienna Ab initio Simulation Package (VASP) code for monolayer T-VSe 2 and H-VSe 2 . The goal of these simulations was to assess how sensitive the relative energy between the T and H phase is with respect to functional and material geometry.
Another goal of these simulations was to benchmark the structural parameters of each material with respect to several density functionals. It is advantageous to perform these reference calculations with VASP and PAW pseudopotentials as a precursor to the more expensive DMC calculations because they require a much smaller cutoff energy and are more cost effective for a large number of simulations. It is important to note that for all DFT and DMC simulations, we assumed a ferromagnetic ground state for both T and H-VSe 2 . Although recent reports have suggested that T-VSe 2 could be experimentally paramagnetic 3 , we infer that this paramagnetism can be induced by magnetic anisotropy. In addition, the modeling of paramagnetism with computational methods imposes a great challenge, which is why we focus on the freestanding ferromagnetic ground states of both phases. A more robust treatment of the magnetic structure can be explored in future work, but is beyond the scope of this work, which primarily focuses on determining the geometric structure and phase stability of 2D T and H-VSe 2 . In Fig. 2 we present a comprehensive look at the difference in total energy between T-VSe 2 and H-VSe 2 , using several DFT functionals under different geometric constraints. We performed these calculations for a variety of U values in three different ways: fully relaxing the structure at each value of U (Fig. 2 a)), fixing the lattice and atomic positions to the U = 0 eV relaxed geometry of that particular functional and calculating the static energy at each value of U (Fig. 2 b)), and fixing the lattice to the U = 0 eV relaxed geometry of that particular functional and relaxing just the atomic positions at each value of U (Fig. 2 c)). The results in Fig. 2 indicate that there is a significant disagreement between DFT functionals, U value used, and material geometries, with all three factors playing a significant role in the energy difference between the T and H phase.
Specifically, regardless of relaxation method, all bare (no U correction) SCAN, PBE, and PBEsol functionals predict H favorable, while bare LDA predicts T favorable. For all functionals, there is a critical value of U that reverses the relative phase stability, which is dependent on functional and relaxation method. The SCAN functional with a U correction predicts the T phase favorable, with larger energy differences. As seen in Fig. 2, the trends in the relative phase stability between Fig. 2 b) and c) are nearly identical, but significantly vary from Fig. 2 a). This implies that the density functional is strongly coupled to material geometry, but the lattice constant change has more of an effect on phase stability than atomic positions and bond distances. This is most prevalent for higher U values (> 2 eV), where the relaxed geometry changes more drastically with U. The interrelated nature of the material's geometry, density functional, and value of U is a reason to seek out higher levels of theory beyond DFT/DFT+U, such as DMC, to accurately determine the optimal geometry and relative energy between the phases of 2D VSe 2 . The relaxed lattice constants, V-Se distances, and T-H energies from Fig. 2 a) are presented in Table I and Fig. 3, along with additional VASP reference calculations performed with the vdW corrected functionals (PBE-D2 28 , PBE-D3 29 , SCAN+rvv10 30 ). The DMC computed parameters are also given for comparison in Table I and Fig. 3 (more discussion to follow). We observe a ≈ 7 % variability in lattice constant across the different methods for T-VSe 2 and a ≈ 4 % variability in lattice constant across the different methods for H-VSe 2 . Between both phases, we observe a ≈ 3 % variability in V-Se distance (d V−Se ). Most strikingly, the energy difference between the T and H phases (E T−H ) drastically varies depending on the material geometry and computational methodology, ranging from -0.2 eV/f.u. to 0.06 eV/f.u. Because a strain-induced phase transition has been reported between T- and H-VSe 2 7,11,12 , we decided to perform additional VASP benchmarking calculations that involved the application of tensile and compressive strain for each monolayer. We performed these calculations for PBE, SCAN, and LDA (with U = 0 eV and U = 2 eV), starting from the U = 0 eV geometry for each functional. The resulting equations of state are depicted in Fig. S3. As seen in the figure, the equation of state and resulting strain-induced phase transition is entirely dependent on the functional and U value, with no consistent trend. (Table I reports, for each method, the lattice constant a and V−Se distance d V−Se of each phase, together with E T−H in eV/f.u.)
Because a strain-induced phase transition has been reported between T-and H-VSe 2 7,11,12 , we performed additional VASP benchmarking calculations involving the application of tensile and compressive strain to each monolayer. We performed these calculations for PBE, SCAN, and LDA (with U = 0 eV and U = 2 eV), starting from the U = 0 eV geometry for each functional. The resulting equations of state are depicted in Fig. S3. As seen in the figure, the equation of state and the resulting strain-induced phase transition are entirely dependent on the functional and U value, with no consistent trend. The strong sensitivity of each monolayer with respect to geometry and functional is grounds for using a higher-order method such as DMC to obtain a statistically accurate estimate of the lattice parameters and the relative energy between phases. Prior to performing the DMC/line-search calculations, we optimized our nodal surface (the orbitals selected for DFT wavefunction generation). DMC has the zero-variance property: as the trial wave function approaches the exact ground state, the statistical fluctuations in the energy reduce to zero 15 . Although various sophisticated methods have been used to optimize the nodal surface 31-34 , we employed the PBE+U approach, where the Hubbard (U) value was used as a variational parameter to optimize the nodal surface using DMC (similar to other successful DMC studies of magnetic materials 16,20,21,24,[35][36][37] ). We performed these calculations for both T and H-VSe 2 (24 atom supercells), tuning the U value from 1 to 4 eV while creating the trial wavefunction and computing the DMC energy. The results of these calculations are depicted in Fig. S4, where we observe that U = 2 eV yields the lowest energy for both phases.
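The variational selection of U described above amounts to picking the value whose trial wavefunction minimizes the DMC energy, while treating values within the combined error bars as statistical ties. A minimal sketch of that selection follows; `optimal_u` is a hypothetical helper and the numbers are invented, not the Fig. S4 energies:

```python
# Sketch (not the authors' code): choose the Hubbard U whose trial wavefunction
# minimizes the DMC energy; values within combined error bars count as ties.
def optimal_u(scan):
    """scan: list of (U, energy, error) tuples from a DMC nodal-surface scan."""
    u_min, e_min, sig_min = min(scan, key=lambda t: t[1])
    # U values statistically indistinguishable from the minimum
    ties = [u for u, e, s in scan if abs(e - e_min) <= s + sig_min]
    return u_min, sorted(ties)

# Illustrative numbers only (arbitrary, not from the paper):
scan = [(1, -361.700, 0.002), (2, -361.709, 0.002),
        (3, -361.703, 0.002), (4, -361.698, 0.002)]
best_u, tied = optimal_u(scan)
```

With these toy numbers only U = 2 eV survives the tie test; with the paper's actual H-phase data, U = 1 and U = 2 eV would both appear in `tied`.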
It is important to note that for the H phase, the DMC energies for U = 1 and U = 2 eV are statistically identical. Based on this, we created the trial wavefunction using PBE+U (U = 2 eV) for all subsequent DMC calculations within the surrogate Hessian line-search for both phases (all 52 DMC energy evaluations). Since we obtained an optimal U value of 2 eV for both materials, we focused our DFT+U benchmarking efforts on U = 2 eV (Fig. 3, Fig. 5, Table I, Fig. 2, Fig. S3). Based on the DMC line-search results, we determined accurate bounds on the lattice parameter (a) and the off-plane displacement of Se (z), within an error tolerance of 0.018 Å or lower for both parameters. This translates to within ≈ 0.5 % accuracy in the parameter set of a and d V−Se with 95 % confidence. Convergence (the absence of significant displacements outside of the error tolerance) was achieved after two parallel line-search iterations for both phases. This convergence is illustrated in Fig. S5, which depicts the convergence of the parameter offsets of a and z and of the total energy per f.u. for both T and H phase 2D VSe 2 , from the initial DFT relaxed structure (iteration 1) through both subsequent DMC iterations (2-3), along with the final energy of each fitted structure (square points). The final geometric parameters and relative phase energies determined with DMC are given in Table I and Fig. 3. For T-VSe 2 , we determined a lattice constant of 3.414(12) Å and a V-Se distance of 2.505(7) Å. For H-VSe 2 , we determined a lattice constant of 3.335(8) Å and a V-Se distance of 2.503(5) Å. The DMC finite-size extrapolated energy difference (T -H) between the two phases was determined to be 0.06(2) eV/f.u., indicating that in freestanding form at the equilibrium geometry, H-VSe 2 is favored over T-VSe 2 . When comparing these DMC results to the other DFT functionals in Table I and Fig.
3, it is clear that very few DFT functionals can reproduce the DMC results for lattice constant, V-Se distance, and relative energy difference. The SCAN functional comes closest to reproducing all three DMC values simultaneously, but still falls slightly short for the V-Se distances of both phases and the lattice constant of T-VSe 2 . The fact that SCAN+U successfully predicts the structural properties (for H-VSe 2 ) and that SCAN+rvv10 produces an energy difference closest to the average DMC energy difference for both phases loosely implies that a simultaneous description of correlated magnetism and vdW interactions is needed to correctly represent the physics of VSe 2 . Experimental measurements of the lattice constant and V-Se distance of freestanding monolayer VSe 2 are scarce and often dependent on external factors such as the substrate (more discussion to follow) and the sample preparation technique 4,5,38,39 . However, Chen et al. 38 have recently reported a lattice constant of 3.4 Å for thin films of T-VSe 2 and Liu et al. 39 have recently reported a lattice constant of 3.3 Å for epitaxially grown monolayer H-VSe 2 . Both measured values are in excellent agreement with our DMC computed lattice constants. Additionally, we determined the near-equilibrium PES of both T and H 2D VSe 2 with DMC accuracy, as depicted in Fig. S6. The phase diagram presented in Fig. 4 is based on similar fits to the data, where the z displacement has been remapped to d V−Se . This DMC phase diagram can be directly compared to the energy vs. strain DFT benchmarking calculations in Fig. S3, which emphasizes the need for an accurate representation of the phase boundary between the two phases. The freestanding geometries of both T and H lie in the energetically H-favored region, but a slice of the phase diagram along d V−Se = 2.505 Å indicates that the T phase becomes favorable over H at a biaxial strain corresponding to a ≈ 3.5 Å.
This implies that in freestanding form, once T-VSe 2 is positively strained by at least ≈ 2.5 %, the T phase is favored over H. Alternatively, if freestanding H-VSe 2 is positively strained by at least ≈ 5 %, the T phase is also favored over H. This strain can easily be accomplished by placing monolayer VSe 2 on a substrate with significant lattice mismatch. In fact, this type of mismatch has been reported to alter the material properties 4,5,40,41 , significantly contributing to the controversies surrounding T and H-VSe 2 (regarding energetic favorability and magnetic properties). Whether the changes in energetic favorability or magnetic properties with respect to the substrate are due to lattice mismatch or to more complicated interactions between the substrate and the monolayer remains to be answered and is beyond the scope of this work, which has focused solely on the freestanding forms of T and H-VSe 2 . However, such calculations can be employed in future work using higher-order methods such as DMC. The proximity of the phase boundary between the T and H phases (Fig. 4) is emphasized by the small energy difference between the two curves (0.06(2) eV/f.u. at the equilibrium geometry). Since this energy difference is close to room temperature (≈ 0.024 eV), a process such as thermal annealing can plausibly induce a phase transition. In fact, it was recently demonstrated that a structural phase transition of multilayer VSe 2 from T to H occurs through annealing at 650 K, along with a metal-insulator transition 11 . To gain a deeper understanding of the magnetic properties of 2D T and H-VSe 2 , we extracted the spin densities (using a trial wavefunction at U = 2 eV and a 24 atom supercell at the final equilibrium geometry predicted by DMC/line-search). The spin density isosurfaces of each phase (ρ up -ρ down ) are depicted in the insets of Fig. 5 a) and c) for T-VSe 2 and H-VSe 2 respectively.
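The strain-driven crossover discussed above can be sketched numerically: fit the two energy-versus-lattice-constant curves and solve for the lattice constant at which they intersect. The data below are illustrative stand-ins, not the paper's DMC energies (the minima and the 0.06 eV/f.u. offset merely mimic the reported values):

```python
import numpy as np

# Hedged sketch (toy data, not the paper's DMC points): locate the lattice
# constant where the T- and H-phase energy curves cross.
a = np.array([3.2, 3.3, 3.4, 3.5, 3.6])        # lattice constants (Angstrom)
e_T = 0.06 + 2.0 * (a - 3.414) ** 2            # toy T-phase energies (eV/f.u.)
e_H = 0.00 + 2.0 * (a - 3.335) ** 2            # toy H-phase energies (eV/f.u.)

# The difference of two same-curvature parabolas is linear, so a degree-1 fit
# of E_T - E_H pinpoints the crossover where the favored phase switches.
slope, intercept = np.polyfit(a, e_T - e_H, 1)
crossover = -intercept / slope                 # a at which E_T(a) = E_H(a)
```

For real DMC data with error bars, one would fit each PES (as done with the bicubic fits behind Fig. 4) and propagate the statistical uncertainty into the crossover estimate.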
For both phases, we observe that the V atoms are highly spin-polarized, while the Se atoms are slightly antiparallel with respect to the V atoms. For more calculation details regarding the spin density, see the SI. We went on to plot the radially averaged spin densities as a function of distance, separately for V and Se in T and H-VSe 2 (depicted in Fig. 5 a) -d)). This allows us to view the spatial variations in spin density. Additionally, we benchmarked these V and Se radially averaged densities with PBE+U (U = 2 eV) using NC pseudopotentials at the equilibrium geometry (the calculation required to create the trial WF for the subsequent DMC runs). As seen in Fig. 5 a) and c), there is a substantial difference in the V spin density between DMC and PBE+U (U = 2 eV) for both the T and H phases. This same substantial difference between DMC and PBE+U also occurs for the total charge density. This discrepancy is most prevalent near the radial density peak (the peak of the d orbital) and can be attributed to the fact that DFT functionals (even with the added Hubbard correction) tend to delocalize 3d orbitals and fail to capture them accurately. This large discrepancy in the spin densities highlights the need for more accurate, many-body computational methodologies for correlated materials such as VSe 2 , where DFT fails. In contrast, there is closer agreement between the DMC and PBE+U spin densities for Se in T and H-VSe 2 (see Fig. 5 b) and d)). Finally, we estimated the site-averaged atomic magnetic moments per V and Se for both the T and H phases by integrating the DMC and PBE+U spin densities depicted in Fig. 5. At the DMC level, we estimated a magnetic moment of 1.06(2) µ B for V and -0.09(2) µ B for Se in T-VSe 2 , and a magnetic moment of 1.02(1) µ B for V and -0.14(1) µ B for Se in H-VSe 2 . At the PBE+U (U = 2 eV) level, we estimated a magnetic moment of 1.30 µ B for V and -0.12 µ B for Se in T-VSe 2 , and a magnetic moment of 1.40 µ B for V and -0.15 µ B for Se in H-VSe 2 .
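The moments quoted above come from integrating the spherically averaged spin density out to a cutoff radius (1.34 Å in the methods). A minimal numerical version of that spherical integration is sketched below; the Gaussian density is a normalized stand-in, not the paper's DMC spin density, so the recovered "moment" is simply the Gaussian's norm of 1:

```python
import numpy as np

# Sketch of the site-averaged moment estimate: M_A ~ 4*pi * sum r_i^2 rho_s(r_i) dr,
# integrating the radially averaged spin density out to r_cut on a uniform grid.
def atomic_moment(r, rho_s, r_cut):
    mask = r <= r_cut
    dr = r[1] - r[0]                  # uniform radial grid assumed
    return 4.0 * np.pi * np.sum(r[mask] ** 2 * rho_s[mask]) * dr

r = np.linspace(1e-4, 2.0, 4000)
rho = np.exp(-r**2 / 0.1) / (0.1 * np.pi) ** 1.5   # unit-norm 3D Gaussian stand-in
M = atomic_moment(r, rho, r_cut=1.34)              # ~1 for this toy density
```

With a real DMC spin density (spin-up minus spin-down on a grid), the same integral yields the site-averaged moment in Bohr magnetons.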
Consistent with the radial spin density results in Fig. 5, we find that the DMC and PBE+U magnetic moments for Se are in much closer agreement than those for V (for both the T and H phases). By analyzing the spin densities and obtaining the on-site magnetic moments, we obtain a clear picture of how the magnetization of each ion depends on the computational method used, serving as a benchmark for the magnetic properties of 2D VSe 2 . In this work, we used a combination of DFT, DMC, and a recently developed surrogate Hessian line-search optimization technique to resolve the previously reported discrepancy in structural parameters and relative phase stability of monolayer T-VSe 2 and H-VSe 2 . Using these methods, we determined the lattice constant and V-Se distance (with DMC accuracy) to be 3.414(12) Å and 2.505(7) Å respectively for T-VSe 2 , and 3.335(8) Å and 2.503(5) Å respectively for H-VSe 2 . In addition, we find the relative energy between the phases (T -H) to be 0.06(2) eV/f.u. at the DMC level, indicating that in freestanding form, H-VSe 2 is more energetically favorable than T-VSe 2 . We went on to obtain a phase diagram between the T and H phases from the PES and determined that a phase transition can be induced by strain or by mechanisms such as thermal annealing. Additionally, we benchmarked magnetic properties such as the spin density and on-site magnetic moment for both phases and find substantial differences between DMC and DFT. The results of this study demonstrate the success of the DMC method coupled with the surrogate Hessian line-search structural optimization technique when applied to a 2D magnetic system. The estimates for the lattice constant, bond distance, and relative phase energy and the extracted structure-dependent phase diagram assist in clarifying previously inconclusive theoretical and experimental results regarding T and H phase VSe 2 . II.
CODE AVAILABILITY STATEMENT

Software packages mentioned in the article can be found at https://github.com/usnistgov/jarvis. Please note that the use of commercial software (VASP) does not imply recommendation by the National Institute of Standards and Technology.

III. COMPETING INTERESTS

The local density approximation (LDA), 3 Perdew-Burke-Ernzerhof (PBE), 4 and strongly constrained and appropriately normed (SCAN) 5 meta-GGA functionals were used with the added Hubbard correction (U) 6 to treat the on-site Coulomb interaction of the 3d orbitals of the V atoms. At least 20 Å of vacuum was included between periodic layers of VSe 2 in the c-direction. In addition, we used a reciprocal grid of 24x24x1 and a kinetic energy cutoff of 400 eV. Our Quantum Monte Carlo (QMC) simulations used DFT-PBE to generate the trial wavefunction for fixed-node diffusion Monte Carlo (DMC) calculations. The Quantum Espresso (QE) 7 code was used for the DFT calculations that create the trial wavefunction. This trial wavefunction was created for the ferromagnetic configuration of 2D VSe 2 using different U values, with the goal of variationally determining the optimal nodal surface (the U value that yields the lowest total energy). For V, we used norm-conserving (NC) RRKJ (OPT) pseudopotentials 8 and for Se, we used NC Burkatzki-Fillipi-Dolg (BFD) pseudopotentials. 9 After testing at the DFT level, a kinetic energy cutoff of 4,080 eV (300 Ry) and a k-grid of 6x6x1 were used (see Fig. S1 and S2) to generate trial wavefunctions for DMC. To accelerate the convergence of the line-search method for the metallic T phase, we increased the k-grid to 12x12x1. After the trial wavefunction was generated with DFT, Variational Monte Carlo (VMC) and DMC 10,11 calculations were performed using the QMCPACK 12,13 code.
The single-determinant DFT wavefunction is converted into a many-body wavefunction by use of the Jastrow parameters, 14,15 which assist in modeling electron correlation with the goal of reducing the statistical uncertainty in DMC calculations. 16,17 Up to two-body Jastrow 18 correlation functions were included, and the linear method 19 was used to minimize the variance and energy of the VMC energies. The cost function of the variance optimization is 100 % variance minimization, and the cost function of the energy optimization is split as 95 % energy minimization and 5 % variance minimization, which has been shown to reduce the uncertainty of DMC calculated results. 16 The Nexus 20 software suite was used to automate the DFT-VMC-DMC workflow. The locality approximation 17 was used to evaluate the nonlocal part of the pseudopotentials in DMC, and an optimal timestep of 0.01 Ha −1 was determined for the DMC simulations because it yielded an acceptance ratio greater than 99 % (see Table S1). A full summary of the VMC and DMC methods can be found in Ref. 10. The total charge density and spin density were extracted from our DMC calculations. The spin density is defined as the difference between the spin-up and spin-down contributions to the total charge density (ρ up − ρ down ). We used an extrapolation scheme on the DMC charge densities with the goal of eliminating the bias that occurs from using a mixed estimator. Since the charge density estimator does not commute with the fixed-node Hamiltonian, the DMC charge density was obtained from a mixed estimator between the pure fixed-node DMC and VMC densities. The extrapolation formula takes the form: 10

ρ_1 = 2ρ_DMC − ρ_VMC + O[(Φ − Ψ_T)²]   (1)

where ρ_DMC and ρ_VMC are the DMC and VMC charge densities respectively, Φ is the fixed-node DMC wavefunction, and Ψ_T is the trial wavefunction from VMC.
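As a toy numerical illustration of this mixed-estimator extrapolation (the grid values below are invented, not real densities):

```python
import numpy as np

# rho_1 = 2*rho_DMC - rho_VMC removes the leading-order mixed-estimator bias,
# applied pointwise on the density grid. Values are illustrative only.
rho_vmc = np.array([0.10, 0.40, 0.30])
rho_dmc = np.array([0.12, 0.38, 0.31])
rho_extrap = 2.0 * rho_dmc - rho_vmc
```

In practice this is applied to the full 3D charge (or spin) density grids produced by the VMC and DMC runs, leaving a residual error that is second order in the trial-wavefunction error.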
In addition, we integrated the DFT+U and DMC spin densities up to a cutoff radius r_cut (which we define as 1.34 Å, approximately half the V-Se bond distance in 2D T and H-VSe 2 ) in order to estimate the site-averaged atomic magnetic moment per V and Se. To obtain these magnetic moments per atom (M_A), we sum over the spherically interpolated spin densities:

M_A = 4π ∫_0^r_cut r² ρ_s(r) dr ≈ 4π Σ_{i=0}^{r_cut/∆r} r_i² ρ_s(r_i) ∆r   (2)

where r_i is the distance from the center of the atom to a given point on the grid and ∆r is the radial grid size. To optimize the structural parameters of both T and H-VSe 2 according to the DMC potential energy surface (PES), we use a surrogate Hessian accelerated optimization method. 21 In this method, we consider the PES around equilibrium as a second-order expansion in the Wyckoff parameter space p:

E(p) = E_0 + (1/2)(p − p_0)^T H_p (p − p_0)   (3)

where H_p is the Hessian, or force-constant matrix, E_0 is the energy minimum, and p_0 the energy-minimizing parameters. Diagonalizing the parameter Hessian, i.e., H_p = U^T ΛU, forms an optimal basis for a conjugate line-search in the parameter space, namely the eigenvectors U. The line-searches along U can be conducted in parallel, and ideally they locate the minimum in just one parallel iteration within the quadratic region. Here, we conduct the line-search according to a set of 2 parameters: the lattice constant a and the Wyckoff parameter z, which is the unsigned displacement of the Se atoms along the z axis (see Fig. 1). For reporting purposes, the line-search parameters a and z are remapped to a and d, where d is the V-Se distance. In the surrogate Hessian scheme, we obtain a cheap but relatively accurate Hessian from DFT and use it to inform the line-search on the DMC PES, in particular by providing the search directions. We also resample the DFT PES to predict fitting errors.
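Under an exactly quadratic model, one parallel round of 1D line-searches along the Hessian eigenvectors recovers the minimum in a single iteration. The sketch below illustrates that property with a toy 2x2 Hessian in (a, z) space; all numbers are illustrative, and the cheap quadratic function stands in for the expensive DMC evaluations:

```python
import numpy as np

# Toy surrogate-Hessian line-search: diagonalize a cheap Hessian H_p and
# minimize along each eigenvector with a 3-point parabolic fit.
H_p = np.array([[4.0, 1.0],
                [1.0, 3.0]])             # illustrative Hessian in (a, z) space
lam, U = np.linalg.eigh(H_p)             # H_p = U diag(lam) U^T; lam = curvatures

p0 = np.array([3.40, 1.50])              # current (a, z) parameters
p_star = np.array([3.414, 1.52])         # hidden minimum of the toy PES

def pes(p):                              # stand-in for a costly DMC evaluation
    d = p - p_star
    return 0.5 * d @ H_p @ d

shift = np.zeros(2)
for k in range(2):                       # the two searches could run in parallel
    d = U[:, k]
    ts = np.array([-0.05, 0.0, 0.05])    # displacements along the eigenvector
    es = np.array([pes(p0 + t * d) for t in ts])
    c = np.polyfit(ts, es, 2)            # 1D parabola through the 3 samples
    shift += (-c[1] / (2 * c[0])) * d    # step to the parabola's vertex
p_new = p0 + shift                       # lands on p_star for a quadratic PES
```

With noisy DMC energies, each 1D fit uses more points (the paper uses 7-point, 3rd-order fits) and the fit uncertainty is propagated into the parameter error bars.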
Thus, we may minimize the computational cost of the DMC runs while maintaining an error tolerance.

Figure S4: DMC calculated total energies of a 24-atom supercell (normalized per formula unit (f.u.)) of 2D T (blue) and H (red) phase VSe 2 , calculated as a function of the U parameter used to variationally determine the optimal trial wavefunction. The DMC error bars represent the standard error about the mean.

Figure S5: The convergence of the a and z parameters and the DMC energies per f.u. for both the T (blue) and H (red) phases of 2D VSe 2 , based on parallel line-search iterations along the DMC PES. The starting parameters (iteration 1) are from DFT, the zero offset is the mean over iterations 2 and 3, and dotted lines indicate the error tolerances for each case (95 % confidence). The DMC energies from the respective equilibrium geometries are plotted with 1SEM (one standard error of the mean) uncertainties, with extra squares marking energies from the predicted minimum geometry.

FIG. 1. Top and side view of the atomic structure of monolayer VSe 2 in the a) 1T and b) 2H phase.

FIG. 2. Relative (T -H) energy between T and H phase 2D VSe 2 as a function of the U parameter for several density functionals and methods of atomic relaxation: a) fully relaxing the structure, b) fixing the lattice and atomic positions to the U = 0 eV relaxed geometry of that particular functional and calculating the static energy, c) fixing the lattice to the U = 0 eV relaxed geometry of that particular functional and relaxing just the atomic positions. The dotted line indicates 0 eV.
FIG. 3. A summary of the deviation of the geometric properties relative to the DMC calculated geometric properties for a) T-VSe 2 and b) H-VSe 2 , and c) the deviation of the T -H energy relative to the DMC calculated T -H energy, for a variety of DFT functionals (U = 2 eV), where the DMC error bar (standard error about the mean) is represented by the red bars.

FIG. 4. (Top) The phase diagram of 2D VSe 2 in terms of a and d V−Se . The phase boundary (solid line, black) is estimated from bicubic fits. To assure the quality of the fits, the estimated ±0.01 eV error contours (dotted line) and the minima from the fits ('x') and the line-search ('o') are all well separated. (Bottom) Slices of the PES at d V−Se = 2.505 Å.

FIG. 5. The radially averaged spin density (ρ up -ρ down ) as a function of distance, calculated with DMC and PBE+U (U = 2 eV), of a) V and b) Se for 2D T-VSe 2 and c) V and d) Se for 2D H-VSe 2 . The insets of a) and c) depict the spin density isosurface of T-VSe 2 and H-VSe 2 respectively, where the isosurface value was set to 6 x 10 −3 e/Å 3 . The standard error about the mean for DMC is indicated by error bars in blue.

Figure S1: The total energy per atom of the unit cell (3 atoms) of 2D a) T-VSe 2 and b) H-VSe 2 as a function of plane wave cutoff energy for the norm-conserving pseudopotentials, calculated with DFT using the PBE functional at a k-point grid of 6x6x1. The results show a converged cutoff energy of 4,080 eV (300 Ry) for both phases.

Figure S2: The total energy per atom of the unit cell (3 atoms) of 2D a) T-VSe 2 and b) H-VSe 2 as a function of k-point grid for the norm-conserving pseudopotentials, calculated with DFT (PBE) at the converged cutoff energy (see Fig. S1). The results show a converged k-point grid of 6x6x1 (36) for both monolayers. The number of k-points was scaled appropriately to obtain the converged grid depending on the supercell size and shape for all DFT and DMC calculations.
Figure S3: Total energy as a function of lattice strain for T (blue) and H (red) phase 2D VSe 2 , calculated with various functionals and U values. The density functionals include LDA, PBE, and SCAN.

Figure S6: Contour reconstructions of the DMC PESs (eV) of the T (left) and H (right) phases of 2D VSe 2 with respect to the a and z parameters. The contours are based on bicubic fits to sparse data and are thus subject to biases and statistical uncertainties not indicated in the figures. The markers ('x' and '+') indicate data points from two parallel line-search iterations.

TABLE I. Tabulated results for the lattice constant, V-Se distance, and relative energy (T -H) for both T and H phase 2D VSe 2 for several computational methods. DMC error bars (standard error about the mean) are included in parentheses.

The surrogate DFT PES was based on QE with a 4,080 eV (300 Ry) cutoff using PBE with no DFT+U correction. The DMC PES was based on DFT-PBE with U = 2 eV orbitals and finite-size extrapolation through supercell sizes of 9 and 24 atoms. Each line-search was based on a 3rd order polynomial fit and set to contain 7 points, or displaced geometries, totaling 13 energy evaluations per phase, per iteration. However, alternative techniques, including (bi)polynomial fitting, were used in some parts to incorporate auxiliary DMC data and ensure convergence to the quadratic region. Effectively, two parallel line-search iterations were carried out for both phases, and convergence was claimed in the absence of significant displacements.

Table S1: Tabulated results for the DMC timestep convergence of a 12 atom cell of 2D T-VSe 2 and H-VSe 2 . The acceptance ratio of 0.99 indicates that 0.01 Ha −1 is an appropriate timestep to use for all subsequent DMC simulations.
T-VSe 2
Timestep (Ha −1 )   DMC Total Energy (Ha)   Error (Ha)   Acceptance Ratio
0.02                -361.730                0.001        0.985
0.01                -361.709                0.002        0.994
0.005               -361.709                0.003        0.997
0.002               -361.702                0.002        0.999

H-VSe 2
0.02                -361.673                0.001        0.985
0.01                -361.657                0.002        0.994
0.005               -361.654                0.002        0.998
0.002               -361.657                0.003        0.999

The authors declare no competing interests.

IV. ACKNOWLEDGMENTS

The authors thank the National Institute of Standards and Technology for funding, computational, and data-management resources. The authors thank Dr. Kamal Choudhary and Dr. Francesca Tavazza for fruitful discussions. We acknowledge grants of computer capacity from the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533).

C. Ataca, H. Şahin, and S. Ciraci, "Stable, single-layer MX 2 transition-metal oxides and dichalcogenides in a honeycomb-like structure," The Journal of Physical Chemistry C 116, 8983-8999 (2012).

H.-R. Fuh, C.-R. Chang, Y.-K. Wang, R. F. L. Evans, R. W. Chantrell, and H.-T. Jeng, "Newtype single-layer magnetic semiconductor in transition-metal dichalcogenides VX 2 (X = S, Se and Te)," Scientific Reports 6, 32625 (2016).

X. Wang, D. Li, Z. Li, C. Wu, C.-M. Che, G. Chen, and X. Cui, "Ferromagnetism in 2d vanadium diselenide," ACS Nano 15, 16236-16241 (2021).
M. Bonilla, S. Kolekar, Y. Ma, H. C. Diaz, V. Kalappattil, R. Das, T. Eggers, H. R. Gutierrez, M.-H. Phan, and M. Batzill, "Strong room-temperature ferromagnetism in VSe 2 monolayers on van der Waals substrates," Nature Nanotechnology 13, 289-293 (2018).

W. Yu, J. Li, T. S. Herng, Z. Wang, X. Zhao, X. Chi, W. Fu, I. Abdelwahab, J. Zhou, J. Dan, Z. Chen, Z. Chen, Z. Li, J. Lu, S. J. Pennycook, Y. P. Feng, J. Ding, and K. P. Loh, "Chemically exfoliated VSe 2 monolayers with room-temperature ferromagnetism," Advanced Materials 31, 1903779 (2019).

G. Duvjir, B. K. Choi, I. Jang, S. Ulstrup, S. Kang, T. Thi Ly, S. Kim, Y. H. Choi, C. Jozwiak, A. Bostwick, E. Rotenberg, J.-G. Park, R. Sankar, K.-S. Kim, J. Kim, and Y. J. Chang, "Emergence of a metal-insulator transition and high-temperature charge-density waves in VSe 2 at the monolayer limit," Nano Letters 18, 5432-5438 (2018).

D. W. Boukhvalov and A. Politano, "Unveiling the origin of room-temperature ferromagnetism in monolayer VSe 2 : the role of extrinsic effects," Nanoscale 12, 20875-20882 (2020).

S. Sahoo, U. Dutta, L. Harnagea, A. K. Sood, and S. Karmakar, "Pressure-induced suppression of charge density wave and emergence of superconductivity in 1T−VSe 2 ," Phys. Rev. B 101, 014514 (2020).

J. Feng, D. Biswas, A. Rajan, M. D. Watson, F. Mazzola, O. J. Clark, K. Underwood, I. Marković, M. McLaren, A. Hunter, D. M. Burn, L. B. Duffy, S. Barua, G. Balakrishnan, F. Bertran, P. Le Fèvre, T. K. Kim, G. van der Laan, T. Hesjedal, P. Wahl, and P. D. C. King, "Electronic structure and enhanced charge-density wave order of monolayer VSe 2 ," Nano Letters 18, 4493-4499 (2018).

P. Chen, W. W. Pai, Y.-H. Chan, V. Madhavan, M. Y. Chou, S.-K. Mo, A.-V. Fedorov, and T.-C. Chiang, "Unique gap structure and symmetry of the charge density wave in single-layer VSe 2 ," Phys. Rev. Lett. 121, 196402 (2018).

D. Li, X. Wang, C.-m. Kan, D. He, Z. Li, Q. Hao, H. Zhao, C. Wu, C. Jin, and X. Cui, "Structural phase transition of multilayer VSe 2 ," ACS Applied Materials & Interfaces 12, 25143-25149 (2020).

G. V. Pushkarev, V. G. Mazurenko, V. V. Mazurenko, and D. W. Boukhvalov, "Structural phase transitions in VSe 2 : energetics, electronic structure and magnetism," Phys. Chem. Chem. Phys. 21, 22647-22653 (2019).

S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, "Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study," Phys. Rev. B 57, 1505-1509 (1998).

Z. I. Popov, N. S. Mikhaleva, M. A. Visotin, A. A. Kuzubov, S. Entani, H. Naramoto, S. Sakai, P. B. Sorokin, and P. V. Avramov, "The electronic structure and spin states of 2d graphene/VX 2 (X = S, Se) heterostructures," Phys. Chem. Chem. Phys. 18, 33047-33052 (2016).

W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal, "Quantum Monte Carlo simulations of solids," Rev. Mod. Phys. 73, 33-83 (2001).

K. Foyevtsova, J. T. Krogel, J. Kim, P. R. C. Kent, E. Dagotto, and F. A. Reboredo, "Ab initio Quantum Monte Carlo calculations of spin superexchange in cuprates: The benchmarking case of Ca 2 CuO 3 ," Phys. Rev. X 4, 031003 (2014).

B. Busemeyer, M. Dagrada, S. Sorella, M. Casula, and L. K. Wagner, "Competing collinear magnetic structures in superconducting FeSe by first-principles quantum Monte Carlo calculations," Phys. Rev. B 94, 035108 (2016).

D. Wines, K. Saritas, and C. Ataca, "A first-principles Quantum Monte Carlo study of two-dimensional (2D) GaSe," The Journal of Chemical Physics 153, 154704 (2020).

D. Wines, K. Saritas, and C. Ataca, "A pathway toward high-throughput quantum Monte Carlo simulations for alloys: A case study of two-dimensional (2d) GaS x Se 1−x ," The Journal of Chemical Physics 155, 194112 (2021).

D. Wines, K. Saritas, and C. Ataca, "Intrinsic ferromagnetism of two-dimensional (2d) MnO 2 revisited: A many-body quantum monte carlo and dft+u study," The Journal of Physical Chemistry C 126, 5813-5821 (2022).

D. Wines, K. Choudhary, and F. Tavazza, "A systematic dft+u and quantum monte carlo benchmark of magnetic two-dimensional (2d) CrX 3 (X = I, Br, Cl, F)," (2022).

J. Tiihonen, P. R. C. Kent, and J. T. Krogel, "Surrogate Hessian accelerated structural optimization for stochastic electronic structure theories," The Journal of Chemical Physics 156, 054104 (2022).

H. Shin, J. T. Krogel, K. Gasperich, P. R. C. Kent, A. Benali, and O. Heinonen, "Optimized structure and electronic band gap of monolayer GeSe from Quantum Monte Carlo methods," Phys. Rev. Materials 5, 024002 (2021).

D. Staros, G. Hu, J. Tiihonen, R. Nanguneri, J. Krogel, M. C. Bennett, O. Heinonen, P. Ganesh, and B. Rubenstein, "A combined first principles study of the structural, magnetic, and phonon properties of monolayer CrI 3 ," The Journal of Chemical Physics 156, 014707 (2021).

P. Hohenberg and W. Kohn, "Inhomogeneous electron gas," Phys. Rev. 136, B864-B871 (1964).

J. P. Perdew, K. Burke, and M. Ernzerhof, "Generalized gradient approximation made simple," Phys. Rev. Lett. 77, 3865-3868 (1996).

J. Sun, A. Ruzsinszky, and J. P. Perdew, "Strongly constrained and appropriately normed semilocal density functional," Phys. Rev. Lett. 115, 036402 (2015).

S. Grimme, "Semiempirical GGA-type density functional constructed with a long-range dispersion correction," Journal of Computational Chemistry 27, 1787-1799 (2006).

S. Grimme, S. Ehrlich, and L. Goerigk, "Effect of the damping function in dispersion corrected density functional theory," Journal of Computational Chemistry 32, 1456-1465 (2011).

H. Peng, Z.-H. Yang, J. P. Perdew, and J. Sun, "Versatile van der Waals density functional based on a meta-generalized gradient approximation," Phys. Rev. X 6, 041005 (2016).

Y. Kwon, D. M. Ceperley, and R. M. Martin, "Effects of three-body and backflow correlations in the two-dimensional electron gas," Phys. Rev. B 48, 12037-12046 (1993).

Y. Kwon, D. M. Ceperley, and R. M. Martin, "Effects of backflow correlation in the three-dimensional electron gas: Quantum monte carlo study," Phys. Rev. B 58, 6800-6806 (1998).
Inhomogeneous backflow transformations in quantum monte carlo calculations. P López Ríos, A Ma, N D Drummond, M D Towler, R J Needs, Phys. Rev. E. 7466701P. López Ríos, A. Ma, N. D. Drummond, M. D. Towler, and R. J. Needs, "Inhomogeneous backflow transformations in quantum monte carlo calcu- lations," Phys. Rev. E 74, 066701 (2006). Systematic reduction of sign errors in many-body calculations of atoms and molecules. M Bajdich, M L Tiago, R Q Hood, P R C Kent, F A Reboredo, Phys. Rev. Lett. 104193001M. Bajdich, M. L. Tiago, R. Q. Hood, P. R. C. Kent, and F. A. Reboredo, "Systematic reduction of sign errors in many-body calculations of atoms and molecules," Phys. Rev. Lett. 104, 193001 (2010). cri 3 revisited with a many-body ab initio theoretical approach. T Ichibha, A L Dzubak, J T Krogel, V R Cooper, F A Reboredo, Phys. Rev. Materials. 564006T. Ichibha, A. L. Dzubak, J. T. Krogel, V. R. Cooper, and F. A. Reboredo, "cri 3 revisited with a many-body ab initio theoretical approach," Phys. Rev. Materials 5, 064006 (2021). Structural, electronic, and magnetic properties of bulk and epitaxial LaCoO 3 through Diffusion Monte Carlo. K Saritas, J T Krogel, S Okamoto, H N Lee, F A Reboredo, Phys. Rev. Materials. 3124414K. Saritas, J. T. Krogel, S. Okamoto, H. N. Lee, and F. A. Reboredo, "Struc- tural, electronic, and magnetic properties of bulk and epitaxial LaCoO 3 through Diffusion Monte Carlo," Phys. Rev. Materials 3, 124414 (2019). Diffusion monte carlo: A pathway towards an accurate theoretical description of manganese oxides. K Saritas, J T Krogel, P R C Kent, F A Reboredo, Phys. Rev. Materials. 285801K. Saritas, J. T. Krogel, P. R. C. Kent, and F. A. Reboredo, "Diffusion monte carlo: A pathway towards an accurate theoretical description of man- ganese oxides," Phys. Rev. Materials 2, 085801 (2018). Correlating structural, electronic, and magnetic properties of epitaxial Vse 2 thin films. 
G Chen, S T Howard, A B Maghirang, K Nguyen Cong, R A B Villaos, L.-Y Feng, K Cai, S C Ganguli, W Swiech, E Morosan, I I Oleynik, F.-C Chuang, H Lin, V Madhavan, Phys. Rev. B. 102115149G. Chen, S. T. Howard, A. B. Maghirang, K. Nguyen Cong, R. A. B. Villaos, L.-Y. Feng, K. Cai, S. C. Ganguli, W. Swiech, E. Morosan, I. I. Oleynik, F.-C. Chuang, H. Lin, and V. Madhavan, "Correlating structural, electronic, and magnetic properties of epitaxial Vse 2 thin films," Phys. Rev. B 102, 115149 (2020). Epitaxially grown monolayer vse2: an air-stable magnetic two-dimensional material with low work function at edges. Z.-L Liu, X Wu, Y Shao, J Qi, Y Cao, L Huang, C Liu, J.-O Wang, Q Zheng, Z.-L Zhu, K Ibrahim, Y.-L Wang, H.-J Gao, Science Bulletin. 63Z.-L. Liu, X. Wu, Y. Shao, J. Qi, Y. Cao, L. Huang, C. Liu, J.-O. Wang, Q. Zheng, Z.-L. Zhu, K. Ibrahim, Y.-L. Wang, and H.-J. Gao, "Epitaxially grown monolayer vse2: an air-stable magnetic two-dimensional material with low work function at edges," Science Bulletin 63, 419-425 (2018). Modification of monolayer 1t-vse2 by selective deposition of vanadium and tellurium. A Karn, Y H Chan, U Chazarin, P Chen, W W Pai, 10.1063/6.0001402AIP Advances. 1235240A. Karn, Y. H. Chan, U. Chazarin, P. Chen, and W. W. Pai, "Modification of monolayer 1t-vse2 by selective deposition of vanadium and tellurium," AIP Advances 12, 035240 (2022), https://doi.org/10.1063/6.0001402. Structural and transport properties of 1t-vse2 single crystal under high pressures. D Song, Y Zhou, M Zhang, X He, X Li, 10.3389/fmats.2021.710849Frontiers in Materials. 8D. Song, Y. Zhou, M. Zhang, X. He, and X. Li, "Structural and transport properties of 1t-vse2 single crystal under high pressures," Frontiers in Ma- terials 8 (2021), 10.3389/fmats.2021.710849. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. G Kresse, J Furthmüller, Phys. Rev. B. 54Kresse, G.; Furthmüller, J. 
Efficient iterative schemes for ab initio total-energy calcu- lations using a plane-wave basis set. Phys. Rev. B 1996, 54, 11169-11186. From ultrasoft pseudopotentials to the projector augmentedwave method. G Kresse, D Joubert, Phys. Rev. B. 59Kresse, G.; Joubert, D. From ultrasoft pseudopotentials to the projector augmented- wave method. Phys. Rev. B 1999, 59, 1758-1775. . P Hohenberg, W Kohn, Inhomogeneous Electron Gas. Phys. Rev. 136Hohenberg, P.; Kohn, W. Inhomogeneous Electron Gas. Phys. Rev. 1964, 136, B864- B871. Generalized Gradient Approximation Made Simple. J P Perdew, K Burke, M Ernzerhof, Phys. Rev. Lett. 77Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 1996, 77, 3865-3868. Strongly Constrained and Appropriately Normed Semilocal Density Functional. J Sun, A Ruzsinszky, J P Perdew, Phys. Rev. Lett. 36402Sun, J.; Ruzsinszky, A.; Perdew, J. P. Strongly Constrained and Appropriately Normed Semilocal Density Functional. Phys. Rev. Lett. 2015, 115, 036402. . S L Dudarev, G A Botton, S Y Savrasov, C J Humphreys, A P Sutton, S9, Dudarev, S. L.; Botton, G. A.; Savrasov, S. Y.; Humphreys, C. J.; Sutton, A. P. S9 Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study. Phys. Rev. B. 57Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study. Phys. Rev. B 1998, 57, 1505-1509. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. P Giannozzi, Journal of Physics: Condensed Matter. 21395502Giannozzi, P. et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. Journal of Physics: Condensed Matter 2009, 21, 395502. Pseudopotentials for quantum Monte Carlo studies of transition metal oxides. J T Krogel, J A Santana, F A Reboredo, Phys. Rev. B. 75143Krogel, J. T.; Santana, J. A.; Reboredo, F. A. 
Pseudopotentials for quantum Monte Carlo studies of transition metal oxides. Phys. Rev. B 2016, 93, 075143. Energy-consistent pseudopotentials for Quantum Monte Carlo calculations. M Burkatzki, C Filippi, M Dolg, The Journal of Chemical Physics. 126234105Burkatzki, M.; Filippi, C.; Dolg, M. Energy-consistent pseudopotentials for Quantum Monte Carlo calculations. The Journal of Chemical Physics 2007, 126, 234105. Quantum Monte Carlo simulations of solids. W M C Foulkes, L Mitas, R J Needs, G Rajagopal, Rev. Mod. Phys. 73Foulkes, W. M. C.; Mitas, L.; Needs, R. J.; Rajagopal, G. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 2001, 73, 33-83. Continuum Variational and Diffusion Quantum Monte Carlo calculations. R J Needs, M D Towler, N D Drummond, P L Ríos, Journal of Physics: Condensed Matter. 2223201Needs, R. J.; Towler, M. D.; Drummond, N. D.; Ríos, P. L. Continuum Variational and Diffusion Quantum Monte Carlo calculations. Journal of Physics: Condensed Matter 2009, 22, 023201. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids. J Kim, Journal of Physics: Condensed Matter. 30Kim, J. et al. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids. Journal of Physics: Condensed Matter 2018, 30, 195901. QMCPACK: Advances in the development, efficiency, and application of auxiliary field and real-space Variational and Diffusion Quantum Monte Carlo. P R C Kent, The Journal of Chemical Physics. 152174105Kent, P. R. C. et al. QMCPACK: Advances in the development, efficiency, and applica- tion of auxiliary field and real-space Variational and Diffusion Quantum Monte Carlo. The Journal of Chemical Physics 2020, 152, 174105. The Theory of Complex Spectra. J C Slater, Phys. Rev. 34Slater, J. C. The Theory of Complex Spectra. Phys. Rev. 1929, 34, 1293-1322. Many-Body Problem with Strong Forces. R Jastrow, Phys. Rev. 98Jastrow, R. 
Many-Body Problem with Strong Forces. Phys. Rev. 1955, 98, 1479-1484. Energy and Variance Optimization of Many-Body Wave Functions. C J Umrigar, C Filippi, Phys. Rev. Lett. 150201Umrigar, C. J.; Filippi, C. Energy and Variance Optimization of Many-Body Wave Functions. Phys. Rev. Lett. 2005, 94, 150201. Nonlocal pseudopotentials and Diffusion Monte Carlo. L Mitas, E L Shirley, D M Ceperley, The Journal of Chemical Physics. 95Mitas, L.; Shirley, E. L.; Ceperley, D. M. Nonlocal pseudopotentials and Diffusion Monte Carlo. The Journal of Chemical Physics 1991, 95, 3467-3475. Jastrow correlation factor for atoms, molecules, and solids. N D Drummond, M D Towler, R J Needs, Phys. Rev. B. 235119Drummond, N. D.; Towler, M. D.; Needs, R. J. Jastrow correlation factor for atoms, molecules, and solids. Phys. Rev. B 2004, 70, 235119. Alleviation of the Fermion-Sign Problem by Optimization of Many-Body Wave Functions. C J Umrigar, J Toulouse, C Filippi, S Sorella, R G Hennig, Phys. Rev. Lett. 110201Umrigar, C. J.; Toulouse, J.; Filippi, C.; Sorella, S.; Hennig, R. G. Alleviation of the Fermion-Sign Problem by Optimization of Many-Body Wave Functions. Phys. Rev. Lett. 2007, 98, 110201. Nexus: A modular workflow management system for quantum simulation codes. J T Krogel, Computer Physics Communications. 198Krogel, J. T. Nexus: A modular workflow management system for quantum simulation codes. Computer Physics Communications 2016, 198, 154 -168. Surrogate Hessian accelerated structural optimization for stochastic electronic structure theories. J Tiihonen, P R C Kent, J T Krogel, The Journal of Chemical Physics. 15654104Tiihonen, J.; Kent, P. R. C.; Krogel, J. T. Surrogate Hessian accelerated structural op- timization for stochastic electronic structure theories. The Journal of Chemical Physics 2022, 156, 054104.
[ "https://github.com/usnistgov/jarvis." ]
[ "Quantum Initial Conditions for Curved Inflating Universes", "Quantum Initial Conditions for Curved Inflating Universes" ]
[ "M I Letey \nKavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK\n\nPerimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada\n", "Z Shumaylov \nKavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK\n\nDepartment of Applied Mathematics and Theoretical Physics\nUniversity of Cambridge\nWilberforce RdCB3 0WACambridgeUK\n", "F J Agocs \nKavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK\n\nCenter for Computational Mathematics\nFlatiron Institute\n162 Fifth AvenueNew YorkNew YorkUSA\n\nAstrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK\n", "W J Handley \nKavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK\n\nAstrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK\n", "M P Hobson \nAstrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK\n", "A N Lasenby \nKavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK\n\nAstrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK\n" ]
[ "Kavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK", "Perimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada", "Kavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK", "Department of Applied Mathematics and Theoretical Physics\nUniversity of Cambridge\nWilberforce RdCB3 0WACambridgeUK", "Kavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK", "Center for Computational Mathematics\nFlatiron Institute\n162 Fifth AvenueNew YorkNew YorkUSA", "Astrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK", "Kavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK", "Astrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK", "Astrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK", "Kavli Institute for Cosmology\nMadingley RoadCB3 0HACambridgeUK", "Astrophysics Group\nCavendish Laboratory\nJ. J. Thomson AvenueCB3 0HECambridgeUK" ]
[]
We discuss the challenges of motivating, constructing, and quantising a canonically-normalised inflationary perturbation in spatially curved universes. We show that this has historically proved challenging due to the interaction of non-adiabaticity with spatial curvature. We propose a novel curvature perturbation which is canonically normalised, unique up to a single scalar parameter. This corrected quantisation has potentially observational consequences via modifications to the primordial power spectrum at large angular scales, as well as theoretical implications for quantisation procedures in curved cosmologies filled with a scalar field. * [email protected]; equal contribution † [email protected]; equal contribution arXiv:2211.17248v2 [gr-qc] 7 May 2023
null
[ "https://export.arxiv.org/pdf/2211.17248v2.pdf" ]
254,096,165
2211.17248
4af87d5e708073c60a2d5a150c9289215526b2d2
Quantum Initial Conditions for Curved Inflating Universes

M. I. Letey (Kavli Institute for Cosmology, Madingley Road, CB3 0HA, Cambridge, UK; Perimeter Institute for Theoretical Physics, N2L 2Y5, Waterloo, Ontario, Canada)
Z. Shumaylov (Kavli Institute for Cosmology, Madingley Road, CB3 0HA, Cambridge, UK; Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Rd, CB3 0WA, Cambridge, UK)
F. J. Agocs (Kavli Institute for Cosmology, Madingley Road, CB3 0HA, Cambridge, UK; Center for Computational Mathematics, Flatiron Institute, 162 Fifth Avenue, New York, New York, USA; Astrophysics Group, Cavendish Laboratory, J. J. Thomson Avenue, CB3 0HE, Cambridge, UK)
W. J. Handley (Kavli Institute for Cosmology, Madingley Road, CB3 0HA, Cambridge, UK; Astrophysics Group, Cavendish Laboratory, J. J. Thomson Avenue, CB3 0HE, Cambridge, UK)
M. P. Hobson (Astrophysics Group, Cavendish Laboratory, J. J. Thomson Avenue, CB3 0HE, Cambridge, UK)
A. N. Lasenby (Kavli Institute for Cosmology, Madingley Road, CB3 0HA, Cambridge, UK; Astrophysics Group, Cavendish Laboratory, J. J. Thomson Avenue, CB3 0HE, Cambridge, UK)

(Dated: May 9, 2023)

We discuss the challenges of motivating, constructing, and quantising a canonically-normalised inflationary perturbation in spatially curved universes. We show that this has historically proved challenging due to the interaction of non-adiabaticity with spatial curvature. We propose a novel curvature perturbation which is canonically normalised, unique up to a single scalar parameter. This corrected quantisation has potentially observational consequences via modifications to the primordial power spectrum at large angular scales, as well as theoretical implications for quantisation procedures in curved cosmologies filled with a scalar field.

* [email protected]; equal contribution
† [email protected]; equal contribution

arXiv:2211.17248v2 [gr-qc] 7 May 2023

I.
INTRODUCTION

Cosmological inflation [1-4] provides explanatory power for observations of CMB anisotropies [5-8], by yielding the quantum fluctuations that seed large-scale structure today [9]. Additionally, inflation also resolves the horizon and curvature problems, both of which can be thought of as initial-condition problems for the universe [10]. From a theoretical standpoint, it is inconsistent to assume a flat universe at the start of inflation, and instead one could consider inflation as starting in a general KΛCDM universe [11-15]. Such investigations are further motivated in light of the conversation in the literature regarding the statistical significance (or lack thereof) of the preference in CMB and BAO data for present-day curvature [16-21], and it is undeniably true that any observation of present-day curvature would have profound implications for theories of inflation [22-25].

The imprint of quantum perturbations on the CMB as anisotropies is described by the power spectrum of the gauge-invariant comoving curvature perturbation variable R. Thus, towards the goal of computing the power spectrum of R in a curved inflationary spacetime, Ref. [26] finds the Mukhanov-Sasaki equation of motion for R for non-zero K, in analogy with standard inflationary calculations. Introducing curvature markedly complicates the equation of motion for R, which simplifies only in two important cases: firstly, when K = 0; secondly, when one takes the matter field to be either a scalar field without potential (i.e. a stiff fluid) or hydrodynamical matter [9, 55]. During very early evolution the inflaton can be approximated as a stiff fluid, and during slow roll as hydrodynamical matter [54], thus simplifying the equation of motion. However, neither approximation holds in the period between the two regimes, which we show is due to the interaction of non-adiabatic perturbations with curvature.
This motivates the need for a novel inflationary perturbation variable, which we introduce.

To compute the power spectrum, the curvature perturbation variable must be evolved according to its equation of motion from some initial conditions. In flat spacetimes, initial conditions from the Bunch-Davies vacuum, set arbitrarily far back in the past, are typically used. It is non-trivial to generalise such quantisation schemes to a curved inflationary spacetime on two counts: in the first instance, eternal inflation is impossible in the context of curvature [23]; refer further to fig. 1 for a timeline of inflationary KΛCDM cosmology. In the second, traditionally posed methods such as the Bunch-Davies vacuum, the Danielsson vacuum, and Hamiltonian diagonalisation have multiple theoretical issues, which we discuss.

The contributions in this paper are structured as follows. In section II, we introduce standard cosmological perturbation theory generalised to curved spacetimes, set up generalised equations for R, and discuss vacuum selection. Section III provides formalism clarifying what prevents R from having a canonically-normalised wave equation of motion in the curved case, motivating section IV, in which we propose a novel curvature perturbation variable that admits a simple-harmonic-oscillator wave equation of motion. This allows for connections to established work on quantum fields in curved spacetimes, in addition to allowing this variable to be easily quantised by the minimised-RSET procedure given in section V, which is a generalisation of the procedure proposed by Ref. [27]. These calculations are kept as general as possible, written in terms of the inflationary sound speed, which allows extensions to non-standard inflationary Lagrangians. Finally, section VI discusses how to use this novel variable as a means of setting initial conditions, not only for R but for any first-order scalar perturbation variable.
The resulting power spectrum for R is plotted, and we discuss and conclude in section VII.

[Figure annotations: k_min = 10^{-4} Mpc^{-1}, k_* = 0.05 Mpc^{-1}, k_max = 1 Mpc^{-1}, CMB]

FIG. 1. Timeline of a KΛCDM cosmology using Planck best-fit parameters [7] with plikTTTEEE+lensing for a cosmology including spatial curvature and a V(φ) ∝ φ^{4/3} potential with instant reheating [23]. The universe begins in a kinetically dominated pre-inflationary epoch [28-42], with the comoving horizon 1/(aH) and curvature Ω_K = −K/(aH)² parameters growing. Inflation begins when the scale factor of the universe reaches around 10^5 Planck lengths, acting to flatten the universe and dramatically shrinking the comoving horizon through a period of slow roll. At a scale factor of around 1 mm the inflaton reaches the bottom of its potential, with oscillations about the minimum simulating a matter-dominated universe. At some point between 10 mm and 10 m the universe undergoes a reheating phase, which we model as instantaneous but which can easily be extended to allow greater freedom [23]. The universe then grows in a protracted radiation-dominated phase, undergoing several phase transitions until recombination, at around the time that the universe transitions into a matter-dominated epoch, when the scale factor is around 0.1 Gly. Post recombination/CMB, the universe eventually enters a late-time dark-energy-dominated epoch and the curvature and comoving horizon start to shrink again, until we reach a universe today with radius a_0 ≈ 50 Gpc. The best-fit parameters today, even with a small amount of curvature, place strong constraints on inflationary potentials consistent with this history, and only a relatively small range of primordial curvatures proves compatible. For more detail, consult Hergt et al. [23]. (H_0 = 64.03 km s^{-1} Mpc^{-1}, Ω_m = 0.3453, Ω_K = −0.0092, log(10^{10} A_s) = 3.0336, n_s = 0.9699, z_* = 1089.61)

II.
BACKGROUND

We begin with a discussion of the relevant inflationary perturbation theory setup, summarising previous results before motivating the need for both an alternate curvature perturbation variable and a more robust method of quantisation. By convention, we will work in conformal time η, denote ′ ≡ d/dη, and use the conformal Hubble parameter \mathcal{H} ≡ aH = a′/a.

A. Perturbation Theory Setup

To leading order, both spacetime and the inflaton field can be taken to be homogeneous and isotropic, i.e. dependent only on η. We will consider the background to be a general Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, whose metric is given by

    ds^2 = a^2(\eta)\{d\eta^2 - c_{ij}\,dx^i dx^j\},    (1)

with the spatial metric

    c_{ij}\,dx^i dx^j = \frac{dr^2}{1 - Kr^2} + r^2(d\theta^2 + \sin^2\theta\,d\phi^2).    (2)

This background is then subject to scalar perturbations

    ds^2 = a^2(\eta)\{(1 + 2\Phi)\,d\eta^2 - (1 - 2\Psi)c_{ij}\,dx^i dx^j\},    (3)

where we have picked the Newtonian gauge and omitted vector and tensor perturbations, as they decouple from the scalar perturbations. See Ref. [10] for a more in-depth discussion of gauge choices. We restrict our attention to scalars in this instance since vectors generically decay during inflation, and tensors are already canonically normalised, even for K ≠ 0 [26].

The inflaton is taken to be a scalar field minimally coupled to the curved spacetime,

    S = \int d^4x\,\sqrt{|g|}\left[\frac{1}{2}R + \frac{1}{2}\nabla_\mu\phi\nabla^\mu\phi - V(\phi)\right],    (4)

which is homogeneous and isotropic to zeroth order and is perturbed by δφ(η, x). The usual background inflation dynamics are given by

    \mathcal{H}^2 = \frac{a^2}{3}\rho - K,    (5)

    \mathcal{H}' = \mathcal{H}^2 + K - \frac{a^2}{2}(\rho + p),    (6)

where (5) and (6) are the Friedmann and Raychaudhuri equations respectively. The background energy density ρ and pressure p for the inflaton are

    \rho = \frac{1}{2a^2}(\phi')^2 + V(\phi),    (7)

    p = \frac{1}{2a^2}(\phi')^2 - V(\phi).    (8)

B.
Gauge Invariant Curvature Perturbation

Typical analyses now proceed by defining the gauge-invariant comoving curvature perturbation,

    \mathcal{R} = \Psi + \frac{\mathcal{H}}{\phi'}\delta\phi.    (9)

R is of interest as it controls the perturbations to spatial curvature, as seen from the spatial Ricci scalar

    R^{(3)} = \frac{6K}{a^2} + \frac{4}{a^2}(\nabla^2 + 3K)\mathcal{R}.    (10)

These perturbations are directly related to CMB anisotropies, as variations in curvature at the time of last scattering will correspond to the scattered photons being more or less redshifted, resulting in the CMB temperature distribution [10, 43]. As derived in standard references [10], the resulting equation of motion for R when K = 0 is

    (z\mathcal{R})'' - \left[\nabla^2 + \frac{z''}{z}\right](z\mathcal{R}) = 0,    (11)

    z = \frac{a\phi'}{\mathcal{H}}.    (12)

Note that here ∇² ≡ c^{ij}∇_i∇_j refers to the Laplacian defined by the curved spatial metric c_{ij}. However, for K ≠ 0, by expanding the Einstein and conservation equations to first order in the perturbation variables, Ref. [26] shows that R obeys the second-order Mukhanov-Sasaki equation

    0 = (D^2 - KE)\mathcal{R}'' + 2\left[\frac{z'}{z}D^2 - K\mathcal{H}E\right]\mathcal{R}' + \left[K\left(1 + E - \frac{2}{\mathcal{H}}\frac{z'}{z}\right)D^2 + K^2 E - D^4\right]\mathcal{R},    (13)

    D^2 = \nabla^2 + 3K, \qquad E = \frac{a^2(\rho + p)}{2\mathcal{H}^2}.    (14)

This can be written more concisely by first mode-decomposing into Fourier space, namely writing

    \nabla^2 Y_k(\mathbf{x}) = -\kappa^2(k)\,Y_k(\mathbf{x})    (15)

for Y_k(x) the appropriate hyperspherical Bessel functions [43, 44] that give the eigenspectrum of the spatially-curved Laplacian. Thus we identify D² with −κ²(k) + 3K, where

    \kappa^2(k) = k^2, \quad k \in \mathbb{R},\ k > 0 \quad : \quad K = 0, -1,
    \kappa^2(k) = k(k + 2), \quad k \in \mathbb{Z},\ k > 2 \quad : \quad K = +1.    (16)

Define the wavenumber-dependent Z as

    Z = z\sqrt{\frac{D^2}{D^2 - KE}} = a\sqrt{\frac{2D^2 E}{D^2 - KE}}.    (17)

Equation (13) can then be recast as

    (Z\mathcal{R}_k)'' + \left[\kappa^2 - \frac{Z''}{Z} - 2K - \frac{2KZ'}{\mathcal{H}Z}\right](Z\mathcal{R}_k) = 0,    (18)

where R_k(η) is the Fourier component of R(η, x) with wavenumber of magnitude k.

To connect this theoretical framework to observation, we must compute the primordial power spectrum from R at horizon crossing. This is an initial value problem: R can be numerically computed according to its equations of motion eqs.
(13) and (18) from some initial conditions R(η₀), R′(η₀), or equivalently, a vacuum state for the corresponding quantised variable. The correct theoretical choice for initial conditions in this case is far from clear [10, 45, 46]; we will now discuss possible vacuum choices.

C. Vacuum Choice

For quantum fields on a curved spacetime, the choice of vacuum state is either physically unclear or ambiguous. At zeroth order, the slow-roll inflaton mimics a positive cosmological constant, and thus the background dynamics of an inflationary FLRW spacetime are well described by de Sitter space. This is true for general K. Since de Sitter space is maximally symmetric, there exists a natural choice of vacuum for a scalar field in this spacetime by means of the Bunch-Davies (BD) vacuum [47], which is invariant under all isometries of de Sitter space. This motivates the BD vacuum as a physically reasonable choice.

An alternative choice is setting initial conditions by means of Hamiltonian diagonalisation (HD). However, it is questionable whether this choice of vacuum is physically meaningful in an expanding spacetime, as the time-dependent Hamiltonian will yield infinite particle density after the instant the vacuum is set [45, 46].

In the K = 0 case, the BD vacuum can be painlessly applied to quantise the Mukhanov variable zR, as its equation of motion, eq. (11), is analogous to the Klein-Gordon equation of motion for a scalar field, with a time-dependent mass term given by z. However, applying the BD vacuum to quantise perturbations in inflation requires the spacetime to be quasi-de Sitter at the time of quantisation, which is not possible for theories of finite inflation [23, 37, 41]. This can be seen further in fig. 1, which illustrates a period of kinetic dominance of the inflaton. Further, in the curved case, R no longer possesses a canonically-normalised action nor equation of motion, i.e. it does not behave like a simple harmonic oscillator (SHO) as in eq. (11).
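As a concrete flat-space benchmark for the Bunch-Davies prescription just described, the sketch below integrates the per-mode form of eq. (11) numerically. This is not code from the paper: it assumes pure de Sitter evolution, for which z''/z = 2/η², so that the BD mode function is known in closed form; the wavenumber k and the conformal-time range are arbitrary choices made for this example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch, not code from the paper.  In flat space the
# Mukhanov variable v = zR obeys eq. (11); per Fourier mode, and in the
# de Sitter approximation z''/z = 2/eta^2 (an assumption of this example),
# it reads v'' + (k^2 - 2/eta^2) v = 0, with a closed-form BD solution.
k = 1.0
eta0, eta1 = -50.0, -0.5  # conformal times (arbitrary choices)

def bd_mode(eta):
    # Analytic BD mode: v -> e^{-ik eta}/sqrt(2k) as eta -> -infinity.
    return np.exp(-1j * k * eta) / np.sqrt(2 * k) * (1 - 1j / (k * eta))

def bd_mode_deriv(eta):
    return (np.exp(-1j * k * eta) / np.sqrt(2 * k)
            * (-1j * k * (1 - 1j / (k * eta)) + 1j / (k * eta**2)))

def rhs(eta, y):
    # y packs the real/imaginary parts of (v, v').
    w2 = k**2 - 2.0 / eta**2
    vr, dvr, vi, dvi = y
    return [dvr, -w2 * vr, dvi, -w2 * vi]

v0, dv0 = bd_mode(eta0), bd_mode_deriv(eta0)
sol = solve_ivp(rhs, (eta0, eta1), [v0.real, dv0.real, v0.imag, dv0.imag],
                rtol=1e-10, atol=1e-12)

v_num = sol.y[0, -1] + 1j * sol.y[2, -1]
dv_num = sol.y[1, -1] + 1j * sol.y[3, -1]
# The Wronskian v v*' - v* v' = i encodes the BD normalisation and is
# conserved exactly by the mode equation.
wronskian = v_num * np.conj(dv_num) - np.conj(v_num) * dv_num
```

The Wronskian condition is the normalisation any BD-like initial data must satisfy; the same integration strategy carries over to the curved-space eq. (18) once Z and the background are supplied, which is precisely where the choice of initial data becomes the open issue discussed in the text.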
That R fails to be an SHO can be seen from the k-dependence of Z in eq. (18). As such, we can make no connections to the large body of work on quantum fields in curved spacetime in order to provide insight on what initial conditions to use. Almost all existing literature regarding second quantisation in curved spacetimes deals with a massive scalar field with constant coupling to gravity [45, 48], admitting an SHO-like equation of motion, which allows the scalar field to be quantised by analogy with the time-independent quantum harmonic oscillator. Thus, the first step in generalising inflationary theories to non-flat primordial curvature is to find a curvature perturbation variable that obeys an analogous SHO equation of motion, unlike R.

Additionally, as shown by Ref. [49] for K = 0, neither the BD vacuum nor the vacuum from HD is robust against canonical transformations. Namely, under a canonical transformation of phase space preserving the field's equation of motion, these vacuum-setting procedures yield ambiguous vacuum initial conditions which would be observationally distinguishable.

Another potential vacuum choice is the one proposed by Danielsson [50]. This vacuum is derived in the Heisenberg picture for the field operators, and initial conditions are set by considering the time at which each mode reaches Planckian lengths. Unfortunately, this choice is also not invariant under canonical transformations when K = 0.

However, as proposed in Ref. [27], one can instead set the vacuum by minimising the renormalised stress-energy tensor (RSET). This formulation avoids consideration of the tricky concept of particles. Furthermore, it does not require any assumptions about the asymptotic behaviour of the inflationary spacetime, and so allows for non-eternal theories of inflation. More crucially, this method yields canonically invariant vacuum conditions [49] when K = 0.
We expect that an analogous result will persist even for the case of K ≠ 0, which motivates our calculations of initial conditions resulting from this procedure. We leave it to future work to extend the calculations of Ref. [49] to K ≠ 0 with the help of results from section VI.

As discussed by Fulling [45], computing the correct form of the RSET for a given field subject to a general action is challenging, and in the case of R is virtually intractable due to the convoluted form of its action [26],

    S^{(2)}_{\mathcal{R}} = \frac{1}{2}\int d\eta\,d^3x\,a\sqrt{|c|}\,\frac{(\phi')^2}{\mathcal{H}^2}\left[\mathcal{R}D^2\mathcal{R} + \left(\mathcal{R}' - \frac{K\mathcal{R}}{\mathcal{H}}\right)\frac{D^2}{D^2 - KE}\left(\mathcal{R}' - \frac{K\mathcal{R}}{\mathcal{H}}\right)\right].    (19)

As discussed further in section V, a more feasible task is finding the RSET for a massless, minimally coupled scalar field ψ on a curved spacetime, with resulting equation of motion

    (a\psi)'' - \left[\nabla^2 + \frac{a''}{a}\right](a\psi) = 0,    (20)

for which the renormalised stress-energy tensor has been derived by Birrell and Davies [44]. As proposed and discussed in Ref. [27], when K = 0, an analogy can be drawn between eq. (20) and the equation of motion eq. (11) for R in flat space, by noting that during flat slow-roll inflation

    \frac{a''}{a} \approx \frac{z''}{z}.    (21)

Thus aψ and zR share an equation of motion, and so R can be quantised directly through the minimised-RSET conditions for this arbitrary scalar field ψ.

However, for K ≠ 0, R cannot yet be quantised using the minimised-RSET procedure as above, since it does not have the equation of motion of an SHO. Thus, in what follows, we will motivate and derive a novel perturbation variable which obeys a canonically-normalised wave equation of motion. Finally, we will derive initial conditions for this variable in section V by means of the minimised-RSET procedure of Ref. [27], which we have generalised to curved spacetimes.

III. MOTIVATION FOR A NEW VARIABLE

As discussed above, we aim to construct a scalar perturbation variable obeying an SHO equation of motion analogous to eq.
(11) in order to make connections with the existing literature concerning quantum fields in curved spacetime and their quantisation, and further, to be able to apply minimised-RSET as a well-motivated choice of vacuum selection. Progress towards finding a canonically normalised perturbation variable has been made by Brechet et al. [51], in which the proposed variable obeys an equation of motion that recovers the wave equation seen for R in the K = 0 case. In order to understand the results of this paper as they relate to curved inflation, let us set up again a perturbed spacetime, filled instead by a perfect fluid. Note that, with care, a scalar field can be seen as a special case of a perfect fluid to zeroth order [52]. The stress-energy tensor for this component fluid can be expanded to first order as

T^0_0 = −ρ − δρ,  T^j_i = −(p + δp) δ^j_i.  (22)

Again, we consider a perturbed spacetime in the Newtonian gauge,

ds² = (1 + 2Φ) dt² − a²(1 − 2Ψ) c_ij dx^i dx^j.  (23)

Following Unnikrishnan and Sriramkumar [53], by expanding the Einstein field and conservation equations to first order in the perturbation variables δρ, δp, Φ, Ψ, given the component tensor eq. (22) and metric eq. (23), we find the Bardeen equation of motion for the Bardeen potential Φ to be

Φ″ + 3H(1 + c_a²)Φ′ − c_a²∇²Φ + [(1 + 3c_a²)H² − K(1 + 3c_a²) + 2H′]Φ = 0,  (24)

where

c_a² ≡ p′/ρ′  (25)

is the adiabatic sound speed. Adapted to the notation used above, Brechet et al. [51] then define an alternative comoving curvature perturbation scalar

ζ_PF = R − [2K/(a²(ρ + p))] Φ.  (26)

Note briefly that ζ_PF = R when K = 0, and that c_a² ≈ −1 during slow-roll inflation.
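The statement that c_a² ≈ −1 during slow roll follows directly from eq. (25) for a canonical inflaton, and can be checked symbolically. The sketch below is an illustrative verification assuming the standard expressions ρ = ½φ̇² + V(φ) and p = ½φ̇² − V(φ), which are not restated explicitly in this section.

```python
import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')(t)
V = sp.Function('V')(phi)

# Canonical inflaton energy density and pressure (assumed background forms)
rho = sp.Rational(1, 2) * sp.diff(phi, t)**2 + V
p = sp.Rational(1, 2) * sp.diff(phi, t)**2 - V

# Adiabatic sound speed squared, eq. (25): c_a^2 = p' / rho'
ca2 = sp.simplify(sp.diff(p, t) / sp.diff(rho, t))

# Slow roll: drop the phi'' term, leaving c_a^2 -> -1
ca2_slow_roll = sp.simplify(ca2.subs(sp.diff(phi, t, 2), 0))
```

Substituting φ̈ → 0 leaves the ratio −V′φ̇ / (V′φ̇), which cancels to −1, matching the claim in the text.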
The equation of motion for ζ_PF can then be found simply by writing ζ_PF in terms of Φ as

ζ_PF = Φ + [2H/(a²(ρ + p))](HΦ + Φ′) − [2K/(a²(ρ + p))]Φ.  (27)

Equation (24) then reduces to the wave equation

(z₀ζ_PF)″ − [c_a²∇² + z₀″/z₀](z₀ζ_PF) = 0,  (28)

z₀ = a²√(ρ + p)/(c_a H).  (29)

Thus, given the above canonically-normalised wave equation of motion for ζ_PF, we can canonically quantise curvature perturbations for a perfect fluid on a curved spacetime, and set initial conditions using minimised-RSET as discussed in section V. This framework can then be applied to the specific case of the inflaton, by calculating ρ_φ, p_φ, δρ_φ, δp_φ, δΣ_φ from the stress-energy tensor for a perturbed inflationary scalar field φ(η) + δφ(η, x). For the sake of generality, we consider the following extension of action eq. (4),

S = ∫ d⁴x √(−g) [ (1/2)R − P(X, φ) ],  (30)

where

X = (1/2) g^{μν} ∇_μφ ∇_νφ.  (31)

A more general treatment of inflationary P(X, φ) theories can be seen in Refs. [54, 55]. Applying the results in this paper to alternate-Lagrangian theories of inflation, such as DBI inflation, could be a fruitful future extension of this work. Then, following Garriga and Mukhanov [55], we define the inflationary sound speed, or the effective sound speed of the perturbations,

c_s² ≡ ∂_X P / (∂_X P + 2X ∂_XX P).  (32)

Note that the inflaton is typically described by a standard Lagrangian

P(X, φ) = X − V(φ).  (33)

Thus usually c_s² = 1. Equation (32) allows us to recast the equivalent non-adiabatic pressure perturbation for a scalar field as

δp_en = (2/a²)(c_s² − c_a²) D²Φ.  (34)

Equations (32) and (34) allow us to write the Bardeen equation of motion for a scalar field as

Φ″ + 3H(1 + c_a²)Φ′ − c_a²∇²Φ + [(1 + 3c_a²)H² − K(1 + 3c_a²) + 2H′]Φ = (a²/2) δp_en.  (35)

We highlight that this differs from eq. (24), the Bardeen equation of motion for a spacetime with a component perfect fluid, only by the addition of the entropic pressure term (a²/2)δp_en on the RHS. Given Φ described by eq.
(24), i.e. for the toy universe filled with a perfect fluid, we have

ζ_PF′ = [2Hc_a²/(a²(ρ + p))] ∇²Φ.  (36)

However, for a perturbed FLRW universe filled instead with the inflationary scalar field, where Φ is described by eq. (35),

ζ_PF′ = [2Hc_s²/(a²(ρ + p))] ∇²Φ + [6KH(c_s² − c_a²)/(a²(ρ + p))] Φ.  (37)

The discrepancy between eqs. (36) and (37) suggests that an SHO wave equation of motion is impossible for this variable unless K = 0 or c_a² = c_s². Writing eq. (37) more explicitly as

ζ_PF′ = [2Hc_a²/(a²(ρ + p))] ∇²Φ + [2H/(ρ + p)] δp_en,  (38)

we see that the term preventing the desired equation of motion is proportional to the entropic pressure perturbation, which depends on the spatial curvature through D.

IV. NOVEL CURVATURE VARIABLE

Firstly, the Bardeen eq. (35), which fully describes the evolution of perturbations in a curved FLRW spacetime with a perturbed inflaton, is equivalent to

Φ″ + 3H(1 + c_a²)Φ′ − c_s²∇²Φ + [(1 + 3c_a²)H² − K(1 + 3c_s²) + 2H′]Φ = 0.  (39)

We aim to define a variable whose dynamics simplify to a canonically normalisable wave equation under eq. (39). Thus consider

ζ = g(η)Φ + [2Hg(η)/(a²(ρ + p))](HΦ + Φ′) − [2Kf(η)/(a²(ρ + p))]Φ,  (40)

where we recall

R = Φ + [2H/(a²(ρ + p))](HΦ + Φ′),  (41)
ζ_PF = Φ + [2H/(a²(ρ + p))](HΦ + Φ′) − [2K/(a²(ρ + p))]Φ.  (42)

So far, g(η), f(η) are as yet unspecified functions, and so ζ can represent any linear combination of Φ, Φ′, i.e. any expression of the form A(η)Φ + B(η)Φ′. Note that this is currently the most general form for ζ, since it must be a first-order perturbation, and thus a linear combination of other gauge-invariant quantities. We can see that under eq. (39), the derivative of ζ simplifies as

ζ′ = [2Hg/(a²(ρ + p))] { c_s² D²Φ + Φ [ K/H − (f/g)(K/H) + g′/g ] + Φ′ [ K + (g′/g)H + a²(ρ + p)/(2H) + (K/H)(f/g) 2H′ + (ρ + p)′/(ρ + p) − f′/f ] }.  (43)

Now, by inspection of eq. (43) and by analogy with eq. (36), we will want an ansatz where ζ′ is proportional only to D²Φ, and not to Φ or Φ′.
ζ′ = g [2Hc_s²/(a²(ρ + p))] D²Φ.  (44)

Once this is picked as an ansatz, we are guaranteed to arrive at a wave equation, as long as we can solve the equations specifying the functions f(η), g(η):

f(η) = g′H/K + g,  (45)

f′/f = K + (g′/g)H + a²(ρ + p)/(2H) + (K/H)(f/g) 2H′ + (ρ + p)′/(ρ + p).  (46)

Both of these equations can be simplified by taking G, b such that

G = b′/b,  (47)
G² − G′ = H² − H′ + K,  (48)

and setting

g(η) = aG/(Hb),  (49)
f(η) = g′H/K + g,  (50)

which we will use as the definition of our functions f(η) and g(η). We leave it to future work to motivate ansatz eq. (44) more formally and constructively, and to ascertain whether or not it is a necessary condition (in addition to being sufficient) for an SHO equation for ζ under eq. (39). For ζ defined such that eq. (44) holds, the resulting equation of motion simplifies to

(z_g ζ)″ + [c_s²D² − z_g″/z_g](z_g ζ) = 0,  (51)

z_g(η) = z_g = a²(ρ + p)^(1/2)/(g c_s H),  (52)

as expected from section III. The only thing left is to check whether a solution G satisfying eq. (48) exists. By rewriting eq. (48) using eq. (47), we can see that we have a linear second-order differential equation for 1/b:

b (1/b)″ = a (1/a)″ + K.  (53)

Unfortunately, eq. (53) does not have a closed-form solution unless K = 0. We can, however, solve it numerically. In order to justify a selection of initial conditions for this differential equation, we should note that there is an overall scaling freedom in the definition of b, and therefore only one effective degree of freedom. Furthermore, selecting initial conditions on b and ḃ at some initial time t₀ is equivalent to picking initial values of f and g at time t₀, as can be seen from:

b₀′ = (g₀H₀/a₀) b₀²,  (54)

b₀ = a₀(Ḣ₀ − K/a₀²) / [ (K/a₀²)(f₀ − g₀) + g₀Ḣ₀ ],  (55)

where subscript 0 denotes the value at the initial time t₀. Therefore, picking which initial conditions to consider for the variable ζ is equivalent to asking which variable we want to quantise at t₀.
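To make the numerical solution of eq. (53) concrete, the sketch below integrates its linear form u″ = [a(1/a)″ + K]u for u = 1/b. The power-law background a(η) = η² (for which a(1/a)″ = 6/η²), the curvature K = 1, the integration range, and the initial data are all illustrative placeholder choices, not ones made in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0  # illustrative closed-universe curvature


def coeff(eta):
    """a (1/a)'' + K for the toy background a(eta) = eta**2,
    where (1/a)'' = 6 eta**-4, so a (1/a)'' = 6 / eta**2.
    For a realistic run this would come from the numerical background solution."""
    return 6.0 / eta**2 + K


def rhs(eta, y):
    # u = 1/b obeys the linear ODE u'' = [a (1/a)'' + K] u  (eq. 53)
    u, up = y
    return [up, coeff(eta) * u]


# Arbitrary initial data: the overall scaling of b is a gauge freedom
sol = solve_ivp(rhs, (0.1, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)

u, up = sol.y
b = 1.0 / u    # recover b(eta)
G = -up / u    # G = b'/b = -u'/u, which feeds eqs. (47)-(50)
```

Note that G = b′/b is invariant under the overall rescaling b → λb, consistent with the single effective degree of freedom discussed above.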
We leave it to future work to address the question of the existence of solutions to eq. (53) in a more general setting. With the wave equation in place, we can note a few things about the new variable. First of all, ζ defined via eqs. (40) and (47) to (50) has an overall arbitrary scaling, due to a scaling freedom of both g and f. However, the resultant Mukhanov variable v = z_g ζ does not, and furthermore always collapses to the original flat Mukhanov variable v_flat = zR when K = 0, even when we pick b ∝ a as the solution to eq. (53). We have thus constructed ζ such that its equation of motion has canonically-normalisable wave-equation form, and it can be quantised using the minimised-RSET vacuum conditions derived below in section V.

V. VACUUM CONDITIONS VIA RSET

In what follows, we will demonstrate how to apply minimised-RSET to quantise ζ, with k-space equation of motion

(z_g ζ_k)″ + [c_s²(η)κ_D²(k) − z_g″/z_g](z_g ζ_k) = 0,  (56)

where κ_D gives the wavespace decomposition of the D² operator,

−D² ↔ κ_D²(k) = κ²(k) − 3K.  (57)

We highlight that none of the following calculations is particular to the definitions of z_g, ζ, and c_s, so this procedure is easily applicable to the quantisation of a wider class of variables with comparable equations of motion. Compare eq. (56) with a massless minimally-coupled scalar field given by

S = (1/2) ∫ d⁴x √|g| (g^{μν} ∇_μψ ∇_νψ).  (58)

Note that this is not the inflaton scalar field, but is merely introduced for computational convenience; the RSET for such a ψ has been calculated by Birrell and Davies [44], while the corresponding calculation for ζ is not yet obvious. ψ has equation of motion in k-space given by

(aψ_k)″ + [κ²(k) − a″/a](aψ_k) = 0,  (59)

i.e. another wave equation of motion with mass function given by the scale factor. As discussed in depth in appendix A, and in analogy with Ref.
[49], we introduce four extra degrees of freedom in the form of time redefinitions and field rescalings:

η → η_ζ,  (60)
ζ → χ_ζ = ζ(η_ζ)/h_ζ(η_ζ),  (61)
η → η_ψ,  (62)
ψ → χ_ψ = ψ(η_ψ)/h_ψ(η_ψ).  (63)

We show there exist unique h_ζ, h_ψ, η_ζ, η_ψ such that the redefined fields corresponding to ζ and ψ have identical wave equations of motion. Then, generalising the calculations performed in Ref. [49], we find initial conditions for ζ at η = η₀ to be

|ζ(η₀)|² = 1/(2c_s(η₀) z_g²(η₀) κ_D),  ζ′/ζ(η₀) = −iκ_D + (a′/a)(η₀) − (z_g′/z_g)(η₀) − (1/2)(c_s′/c_s)(η₀),  (64)

or at t = t₀ by

|ζ(t₀)|² = 1/(2c_s(t₀) z_g²(t₀) κ_D),  ζ̇/ζ(t₀) = −iκ_D/a(t₀) + (ȧ/a)(t₀) − (ż_g/z_g)(t₀) − (1/2)(ċ_s/c_s)(t₀).  (65)

VI. POWER SPECTRUM AND RESULTS

In order to ascertain whether the addition of spatial curvature provides a better description of observations, we aim to compute the power spectrum of R. We can write R, R′ in terms of ζ and ζ′ as

R = ζ/g + (K/H) [f/(g²c_s²D²)] ζ′,  (66)
R′ = (1/(gD²)) [ K²f/(gc_s²H²) + (D² − KE) ] ζ′ + (K/(gH)) ζ,  (67)

for E as in eq. (14). Now note that ζ still describes a family of functions, as ζ is formulated in terms of a family of functions f and g, which are defined by the second-order differential eq. (53). For this section, rather than considering ζ as a physically-meaningful perturbation variable in its own right, we shall use it as a means of setting well-motivated initial conditions for R: if one takes f(η₀) = 0 and g(η₀) = 1, then R = ζ at η₀. Thus it is appropriate to define initial conditions for R from the minimised-RSET initial conditions for one such choice in the ζ family, defined at η = η₀ by f(η₀) = 0 and g(η₀) = 1. Equations (64), (66) and (67) together yield

R′/R |_{η₀} = (1 + Kz²/(2a²κ_D²)) [ −iκ_D + a′/a − z′/z − c_s′/(2c_s) − K/H ] + K/H |_{η₀},  |R(η₀)|² = 1/(2c_s κ_D z²) |_{η₀},  (68)

written in terms of the usual mass variable z = g z_g. Of note is the independence of eq.
(68) on b, since b was only introduced as a mathematical tool to construct the desired equation of motion. We highlight that this technique generalises easily: any perturbation scalar can be written in terms of Φ and Φ′, and thus equivalently in terms of ζ, ζ′. Thus, for an appropriate choice of initial f, g values, we can set initial conditions for any perturbation scalar by means of minimised-RSET on ζ. Using an oscillatory solver [56] and approximating slow roll with V(φ) ∝ φ^(4/3), one can numerically evolve R from t₀ to the time of mode re-entry using eq. (68) and the equation of motion eq. (13). We take a parametric form for the primordial power spectrum given by

P_R^{KΛCDM}(k) = A_s (k/k_*)^{n_s − 1}.  (69)

The resulting power spectrum for R using minimised-RSET initial conditions is given in fig. 2. We can compare this with a naive application of the flat-case minimised-RSET conditions, as given in Refs. [23, 26, 27], as well as with the Bunch-Davies vacuum conditions [10, 26]. This is shown in fig. 3, where we see a clear increase in power spectrum oscillations when using the new minimised-RSET conditions derived in this paper, thus hopefully corresponding to better detectability. We also highlight that the differences between vacuum choices are particularly pronounced for larger values of primordial curvature and for lower k modes (where the presence of curvature is more relevant), as expected. We leave a more thorough study of the power spectrum and the effects of the proposed initial conditions in curved space to future work.

VII. CONCLUSION

We began by introducing the relevant setup, highlighting work by Ref. [26] in deriving the K ≠ 0 Mukhanov-Sasaki equation of motion for the gauge-invariant comoving curvature perturbation R. We discussed why this equation of motion prevents setting initial conditions for R in a well-defined and physically-motivated way, and motivated the method of minimised-RSET.
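For reference, the parametric form eq. (69) is straightforward to evaluate. The sketch below uses illustrative Planck-like values for A_s, n_s and k_* (placeholder assumptions, not fits performed in this work).

```python
import numpy as np


def primordial_power(k, A_s=2.1e-9, n_s=0.965, k_star=0.05):
    """Parametric KLambdaCDM primordial power spectrum, eq. (69).

    A_s, n_s and k_star are illustrative Planck-like values (k in Mpc^-1);
    in practice these would be fit against CMB data.
    """
    return A_s * (k / k_star) ** (n_s - 1.0)


k = np.logspace(-4, 0, 200)   # Mpc^-1
P = primordial_power(k)
```

With n_s < 1 the spectrum is red-tilted, so P(k) decreases with k and equals A_s exactly at the pivot scale k_*.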
We then propose a novel perturbation scalar ζ, inspired by the case of a perfect-fluid-filled universe as in Ref. [51], with particular emphasis on eq. (37). Under the Bardeen equation of motion eq. (39), describing the evolution of perturbation scalars in our inflationary universe, the equation of motion for ζ collapses to the desired canonically-normalisable wave equation eq. (51) (i.e. of the same form as the typically-considered equation of motion eq. (11) for R when K = 0). We believe this variable will be an important catalyst for further work on the subject of inflation in curved spacetimes, as its SHO form allows for connection to standard inflationary and QFT literature. Building on calculations in Ref. [49], we generalise the minimised-RSET vacuum-setting procedure proposed in Ref. [27] to curved spacetimes, allowing us to quantise ζ (and any variable with an analogous equation of motion). Finally, we demonstrate how to use the family of ζ variables to construct a well-motivated set of initial conditions for R, allowing us to plot the resulting power spectrum. We see changes in χ² in all cases compared to previously studied BD and flat-RSET conditions, particularly noting an increase in power spectrum oscillatory behaviour. This work, however, leaves several questions open. First, what are the correct initial conditions to choose for b(η) and therefore g(η)? Ideally there would be a unique, well-motivated theoretical choice, though of course, given its impact on the primordial power spectrum, this initial condition is a single degree of freedom which could also be fit for. Second, the link between curvature and non-adiabaticity given by eqs. (34), (35) and (38) should be further explored analytically, both to explain the non-locality of action eq. (19) and to identify the degree of uniqueness of the variable ζ. Third, the calculations made by Ref.
[49] in examining the effects of canonical transformations on vacuum choice should be extended to curved universes. Finally, it remains to be seen to what extent (if at all) these vacuum states can be constrained using modern cosmological data.

FIG. 2. Left: representative best-fit primordial power spectra corresponding to a range of allowed primordial curvatures. Right: corresponding low-ℓ effects on the CMB power spectrum.

FIG. 3. Comparison of the effect of initial conditions on the power spectrum of R. Initial conditions considered are the novel initial conditions eq. (68) for R by means of generalised-RSET on ζ; a naive application of flat-space RSET conditions computed in Ref. [27]; and Bunch-Davies ICs. The power spectrum is computed for the minimum and maximum primordial curvature cases. Note that in the maximum case (right-hand graph) the BD-condition solution numerically coincides with the naive-flat condition.

for this work. Z. Shumaylov was funded through a summer project by DAMTP CMP. F. J. Agocs was supported by STFC. W. J. Handley was supported by a Royal Society University Research Fellowship.

We aim to relate ζ and ψ by means of the time-redefinition and field rescaling defined in eqs. (60) to (63), so that we may quantise ζ by analogy with ψ. Time-redefinition from η to η_ζ gives

where ′ denotes differentiation with respect to conformal time η. Then eq. (A1) becomes the equivalent equation of motion for χ_ζ given by

We aim to have an equation of motion with no ∂_{η_ζ}χ_ζ terms, i.e. we want to choose h_ζ, η_ζ such that

Thus, fix C_ζ² to be constant.
Then

The equivalent formulation for the rescaled and time-redefined ψ field is

As argued in Ref. [49],

Hence the only remaining condition to be fixed is

having shifted the wavenumber for ψ so that

Now promote χ_ψ to an operator, as is standard in canonical quantisation; then ψ may be expanded in terms of creation and annihilation operators as

where Y_k(x) is the eigenfunction of the curved spatial Laplacian, namely

The renormalised T₀₀ component is then given by

where

Crucially, the DeWitt-Schwinger geometrical terms, denoted T̄, are independent of the field variables χ_ψ (or ψ), as shown in Birrell and Davies [44] (Chapter 6, Section 6.4). Thus, we expand

where for brevity h = h_ψ, χ = χ_{ψ,k} and Y = Y_k. Then

⟨0|T₀₀|0⟩_ren = T̄ + (1/2)[...]

The spherical-harmonic-like functions Y_k(x) are normalised as

so that the canonical commutation relation for the redefined field is given by

Finally, minimising eq. (A19) with respect to the field variables χ_{ψ,k}, χ*_{ψ,k} and their derivatives ∂_{η_ψ}χ_{ψ,k}, ∂_{η_ψ}χ*_{ψ,k}, subject to the constraint eq. (A21), gives

The equation of motion for χ_ζ in terms of η_ζ at k_shift(k) is now identical to the equation of motion for χ_ψ in terms of η_ψ at k. Thus we have equivalence of solutions

Substituting χ_ψ(k) = χ_ζ(k_shift), h_ψ for h_ζ, and converting to conformal time gives

Introducing 2 arbitrary time redefinitions and 2 arbitrary field rescalings gives 4 degrees of freedom. Thus far, we still have one remaining degree of freedom: eqs. (A11) and (A12) each fix one degree of freedom, and eqs. (A7) and (A10) together fix another degree of freedom (as the values of C_ζ, C_ψ are arbitrary). Thus we are free to set C_ζ = C_ψ. In conclusion, we can initialise ζ at η = η₀ by eq. (64), or at t = t₀ by eq. (65).

[1] A. A. Starobinskii, Spectrum of relict gravitational radiation and the early state of the universe, Journal of Experimental and Theoretical Physics Letters 30, 682 (1979).
[2] A. H. Guth, Inflationary universe: A possible solution to the horizon and flatness problems, Physical Review D 23, 347 (1981).
[3] A. Linde, A new inflationary universe scenario: A possible solution of the horizon, flatness, homogeneity, isotropy and primordial monopole problems, Physics Letters B 108, 389 (1982).
[4] A. Albrecht and P. J. Steinhardt, Cosmology for Grand Unified Theories with Radiatively Induced Symmetry Breaking, Physical Review Letters 48, 1220 (1982).
[5] Planck Collaboration, Planck 2018 results. I. Overview and the cosmological legacy of Planck, Astronomy & Astrophysics 641, A1 (2020).
[6] Planck Collaboration, Planck 2018 results. V. CMB power spectra and likelihoods, Astronomy & Astrophysics 641, A5 (2020).
[7] Planck Collaboration, Planck 2018 results. VI. Cosmological parameters, Astronomy & Astrophysics 641, A6 (2020).
[8] Planck Collaboration, Planck 2018 results. X. Constraints on inflation, Astronomy & Astrophysics 641, A10 (2020).
[9] V. F. Mukhanov, H. A. Feldman, and R. H. Brandenberger, Theory of cosmological perturbations, Physics Reports 215, 203 (1992).
[10] D. Baumann, TASI Lectures on Inflation, arXiv:0907.5424 [hep-th] (2009).
[11] M. White and D. Scott, Why Not Consider Closed Universes?, The Astrophysical Journal 459, 415 (1996).
[12] A. Lasenby and C. Doran, Closed universes, de Sitter space, and inflation, Physical Review D 71, 063502 (2005).
[13] B. Bonga, B. Gupt, and N. Yokomizo, Inflation in the closed FLRW model and the CMB, Journal of Cosmology and Astroparticle Physics 2016 (10), 031.
[14] B. Bonga, B. Gupt, and N. Yokomizo, Tensor perturbations during inflation in a spatially closed Universe, Journal of Cosmology and Astroparticle Physics 2017 (05), 021.
[15] M. Forconi, W. Giarè, E. Di Valentino, and A. Melchiorri, Cosmological constraints on slow roll inflation: An update, Physical Review D 104, 103528 (2021), arXiv:2110.01695.
[16] E. Di Valentino, A. Melchiorri, and J. Silk, Planck evidence for a closed Universe and a possible crisis for cosmology, Nature Astronomy 4, 196 (2020).
[17] W. Handley, Curvature tension: Evidence for a closed universe, Physical Review D 103, L041301 (2021), arXiv:1908.09139 [astro-ph.CO].
[18] G. Efstathiou and S. Gratton, The evidence for a spatially flat Universe, Monthly Notices of the Royal Astronomical Society: Letters 496, L91 (2020).
[19] S. Vagnozzi, E. Di Valentino, S. Gariazzo, A. Melchiorri, O. Mena, and J. Silk, The galaxy power spectrum take on spatial curvature and cosmic concordance, Physics of the Dark Universe 33, 100851 (2021), arXiv:2010.02230 [astro-ph.CO].
[20] S. Vagnozzi, A. Loeb, and M. Moresco, Eppur è piatto? The Cosmic Chronometers Take on Spatial Curvature and Cosmic Concordance, The Astrophysical Journal 908, 84 (2021), arXiv:2011.11645 [astro-ph.CO].
[21] A. Glanville, C. Howlett, and T. Davis, Full-shape galaxy power spectra and the curvature tension, Monthly Notices of the Royal Astronomical Society 517, 3087 (2022), arXiv:2205.05892 [astro-ph.CO].
[22] M. Kleban and M. Schillo, Spatial curvature falsifies eternal inflation, Journal of Cosmology and Astroparticle Physics 2012 (06), 029.
[23] L. T. Hergt, F. J. Agocs, W. J. Handley, M. P. Hobson, and A. N. Lasenby, Finite inflation in curved space, Physical Review D 106, 063529 (2022), arXiv:2205.07374 [astro-ph.CO].
[24] M. Tegmark, The multiverse hierarchy, in Universe or Multiverse?, edited by B. Carr (Cambridge University Press, 2007), Chap. 7, pp. 99-126, arXiv:0905.1283.
[25] G. Ellis and J. Larena, The case for a closed universe, Astronomy & Geophysics 61, 1.38 (2020).
[26] W. Handley, Primordial power spectra for curved inflating universes, Physical Review D 100, 123517 (2019), arXiv:1907.08524 [astro-ph.CO].
[27] W. J. Handley, A. N. Lasenby, and M. P. Hobson, Novel quantum initial conditions for inflation, Physical Review D 94, 024041 (2016), arXiv:1607.04148 [gr-qc].
[28] A. Linde, Fast-Roll Inflation, Journal of High Energy Physics 2001 (11), 052.
[29] C. R. Contaldi, M. Peloso, L. Kofman, and A. Linde, Suppressing the lower multipoles in the CMB anisotropies, Journal of Cosmology and Astroparticle Physics 2003 (07), 002.
[30] D. Boyanovsky, H. J. de Vega, and N. G. Sanchez, CMB quadrupole suppression. I. Initial conditions of inflationary perturbations, Physical Review D 74, 123006 (2006).
[31] D. Boyanovsky, H. J. de Vega, and N. G. Sanchez, CMB quadrupole suppression. II. The early fast roll stage, Physical Review D 74, 123007 (2006).
[32] C. Destri, H. J. de Vega, and N. G. Sanchez, CMB quadrupole depression produced by early fast-roll inflation: Monte Carlo Markov chains analysis of WMAP and SDSS data, Physical Review D 78, 023013 (2008).
[33] E. Ramirez and D. J. Schwarz, φ⁴ inflation is not excluded, Physical Review D 80, 023525 (2009).
[34] C. Destri, H. J. de Vega, and N. G. Sanchez, Preinflationary and inflationary fast-roll eras and their signatures in the low CMB multipoles, Physical Review D 81, 063520 (2010).
[35] E. Ramirez and D. J. Schwarz, Predictions of just-enough inflation, Physical Review D 85, 103516 (2012).
[36] E. Ramirez, Low power on large scales in just-enough inflation models, Physical Review D 85, 103517 (2012).
[37] W. J. Handley, S. D. Brechet, A. N. Lasenby, and M. P. Hobson, Kinetic initial conditions for inflation, Physical Review D 89, 063505 (2014), arXiv:1401.2253 [astro-ph.CO].
[38] L. Lello and D. Boyanovsky, Tensor to scalar ratio and large scale power suppression from pre-slow roll initial conditions, Journal of Cosmology and Astroparticle Physics 2014 (05), 029.
[39] M. Cicoli, S. Downes, B. Dutta, F. G. Pedro, and A. Westphal, Just enough inflation: power spectrum modifications at large scales, Journal of Cosmology and Astroparticle Physics 2014 (12), 030.
[40] A. Scacco and A. Albrecht, Transients in finite inflation, Physical Review D 92, 083506 (2015).
[41] L. T. Hergt, W. J. Handley, M. P. Hobson, and A. N. Lasenby, Case for kinetically dominated initial conditions for inflation, Physical Review D 100, 023502 (2019), arXiv:1809.07185 [astro-ph.CO].
[42] H. V. Ragavendra, D. Chowdhury, and L. Sriramkumar, Unique Contributions to the Scalar Bispectrum in 'Just Enough Inflation', in Workshop on Frontiers in High Energy Physics 2019, edited by A. Giri and R. Mohanta (Springer, Singapore, 2020), pp. 39-47.
[43] J. Lesgourgues and T. Tram, Fast and accurate CMB computations in non-flat FLRW universes, Journal of Cosmology and Astroparticle Physics 2014 (09), 032, arXiv:1312.2697 [astro-ph.CO].
[44] N. D. Birrell and P. C. W. Davies, Stress-tensor renormalization, in Quantum Fields in Curved Space, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 1982), pp. 150-224.
[45] S. A. Fulling, Aspects of Quantum Field Theory in Curved Spacetime, London Mathematical Society Student Texts (Cambridge University Press, 1989).
[46] S. A. Fulling, Remarks on positive frequency and Hamiltonians in expanding universes, General Relativity and Gravitation 10, 807 (1979).
[47] T. S. Bunch and P. C. W. Davies, Quantum field theory in de Sitter space: Renormalization by point-splitting, Proceedings of the Royal Society of London Series A 360, 117 (1978).
[48] L. Parker and D. Toms, Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2009).
[49] F. J. Agocs, L. T. Hergt, W. J. Handley, A. N. Lasenby, and M. P. Hobson, Quantum initial conditions for inflation and canonical invariance, Physical Review D 102, 023507 (2020), arXiv:2002.07042 [gr-qc].
[50] U. H. Danielsson, Note on inflation and trans-Planckian physics, Physical Review D 66, 023511 (2002), arXiv:hep-th/0203198.
[51] S. D. Brechet, M. P. Hobson, and A. N. Lasenby, First-order adiabatic perturbations of a perfect fluid about a general FLRW background using the 1+3 covariant and gauge-invariant formalism, arXiv:0909.5384 [gr-qc] (2009).
[52] V. Faraoni, Correspondence between a scalar field and an effective perfect fluid, Physical Review D 85, 024040 (2012), arXiv:1201.1448 [gr-qc].
[53] S. Unnikrishnan and L. Sriramkumar, A note on perfect scalar fields, Physical Review D 81, 103511 (2010), arXiv:1002.0820 [astro-ph.CO].
[54] Z. Shumaylov and W. Handley, Primordial power spectra from k-inflation with curvature, Physical Review D 105, 123532 (2022), arXiv:2112.07547 [astro-ph.CO].
[55] J. Garriga and V. F. Mukhanov, Perturbations in k-inflation, Physics Letters B 458, 219 (1999), arXiv:hep-th/9904176.
[56] F. J. Agocs, W. J. Handley, A. N. Lasenby, and M. P. Hobson, Efficient method for solving highly oscillatory ordinary differential equations with applications to physical systems, Physical Review Research 2, 013030 (2020), arXiv:1906.01421 [physics.comp-ph].
Title: In Search for Linear Relations in Sentence Embedding Spaces
Authors: Petra Barančíková and Ondřej Bojar (Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics)
arXiv: 1910.03375, PDF: https://arxiv.org/pdf/1910.03375v1.pdf, corpus ID: 203905870
In Search for Linear Relations in Sentence Embedding Spaces

Petra Barančíková and Ondřej Bojar
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics

Abstract: We present an introductory investigation into continuous-space vector representations of sentences. We acquire pairs of very similar sentences differing only by a small alteration (such as a change of a noun, or the addition of an adjective, a noun or punctuation) from datasets for natural language inference using a simple pattern method. We look into how such a small change within the sentence text affects its representation in the continuous space and how such alterations are reflected by some of the popular sentence embedding models. We found that vector differences of some embeddings actually reflect small changes within a sentence.

Introduction

Continuous-space representations of sentences, so-called sentence embeddings, are becoming an interesting object of study; consider e.g. the BlackBox workshop (https://blackboxnlp.github.io/). Representing sentences in a continuous space, i.e. commonly with a long vector of real numbers, can be useful in multiple ways, analogous to continuous word representations (word embeddings). Word embeddings have provably made downstream processing robust to unimportant input variations or minor errors (sometimes including typos), they have greatly boosted the performance of many tasks in low-data conditions, and they can form the basis of empirically driven lexicographic explanations of word meanings. One notable observation was made by Mikolov et al. (2013), showing that several interesting relations between words have an immediate geometric counterpart in the continuous vector space.
Our aim is to examine existing continuous representations of whole sentences, looking for analogous behaviour. The idea of what we are hoping for is illustrated in Figure 1. As with words, we would like to learn if and to what extent some simple geometric operations in the continuous space correspond to simple semantic operations on the sentence strings. Similarly to Mikolov et al. (2013), we deliberately do not include this aspect in the training objective of the sentence representations, but instead search for properties that are learned in an unsupervised way, as a side-effect of the original training objective, data and setup. This approach has the potential to explain the good or bad performance of the examined types of representations in various tasks.

The paper is structured as follows: Section "Related Work" reviews the closest related work. Sections "Examined Sentences" and "Sentence Embeddings" describe, respectively, the dataset of sentences and the sentence embedding methods we use. Section "Choosing Vector Operations" presents the selection of operations on the sentence vectors. Section "Experiments" provides the main experimental results of our work. We conclude in the final section.

Figure 1: An illustration of a continuous multidimensional vector space representing individual sentences, a 'space of sentences' (upper plot), where each sentence is represented as a dot. Pairs of related sentences are connected with arrows; dashing indicates various relation types. The lower plot illustrates a possible 'space of operations' (here vector difference, so all arrows are simply moved to start at a common origin). The hope is that similar operations (e.g. all vector transformations extracted from sentence pairs differing in the speed of travel, "running instead of walking") would be represented close to each other in the space of operations, i.e. form a more or less compact cluster.

Related Work

A series of tests measuring how well word embeddings capture semantic and syntactic information was defined by Mikolov et al. (2013). These tests include, for example, the declination of adjectives ("easy" → "easier" → "easiest"), changing the tense of a verb ("walking" → "walk"), or retrieving the capital ("Athens" → "Greece") or the currency of a state ("Angola" → "kwanza"). Bojanowski et al. (2016) and Kocmi and Bojar (2018) have further refined the support of sub-word units, leading to considerable improvements in representing morpho-syntactic properties of words. Vylomova et al. (2015) largely extended the set of considered semantic relations of words.

Sentence embeddings are most commonly evaluated extrinsically in so-called 'transfer tasks', i.e. by comparing the evaluated representations based on their performance in sentence sentiment analysis, question type prediction, natural language inference and other assignments. Conneau et al. (2018) introduce 'probing tasks' for the intrinsic evaluation of sentence embeddings. These measure to what extent linguistic features like sentence length, word order, or the depth of the syntactic tree are available in a sentence embedding. This work was extended into SentEval (Conneau and Kiela, 2018), a toolkit for evaluating the quality of sentence embeddings both intrinsically and extrinsically; it contains 17 transfer tasks and 10 probing tasks. SentEval has been applied to many recent sentence embedding techniques, showing that no method performs consistently well across all tasks (Perone et al., 2018).

Voleti et al. (2018) examine how errors in a sentence (such as incorrect word substitutions caused by automatic speech recognition) affect its embedding. The embeddings of corrupted sentences are then used in textual similarity tasks and the performance is compared with that of the original embeddings. The results suggest that pretrained neural sentence encoders are much more robust to introduced errors than bag-of-words embeddings.
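The word-level vector-offset behaviour that this work looks for at the sentence level can be illustrated with a toy, hand-crafted space; the numbers below are invented for illustration only and do not come from any trained model:

```python
import numpy as np

# Toy 3-dimensional "word" embeddings, hand-crafted so that the
# gender relation corresponds to a constant offset vector.
emb = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0]),
    "king":  np.array([2.0, 0.0, 1.0]),
    "queen": np.array([2.0, 1.0, 1.0]),
}

# The offset man -> woman equals the offset king -> queen ...
offset = emb["woman"] - emb["man"]
assert np.allclose(emb["king"] + offset, emb["queen"])

# ... so the classic analogy query "king - man + woman" lands on "queen".
query = emb["king"] - emb["man"] + emb["woman"]
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - query))
print(nearest)  # -> queen
```

The paper asks whether an analogous regularity holds when the dots are whole sentences and the offsets are small string edits.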
Examined Sentences

Because manual creation of sentence variations is costly, we reuse existing data from SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018). Both collections consist of pairs of sentences, a premise and a hypothesis, together with their relationship (entailment/contradiction/neutral). The two datasets together contain 982k unique sentence pairs. All sentences were lowercased and tokenized using NLTK (Loper and Bird, 2002).

From all the available sentence pairs, we select only the subset where the difference between the sentences in a pair can be described with a simple pattern. Our method goes as follows: given two sentences, a premise p and the corresponding hypothesis h, we find the longest common substring consisting of whole words and replace it with a variable. This is repeated once more, so our sentence patterns can have up to two variables. In the last step, we make sure the pattern is in a canonical form by switching the variables so that they are alphabetically sorted in p. The process is illustrated in Figure 2. The ten most common patterns for each NLI relation are shown in Figure 3.

Many of the obtained patterns clearly match the sentence-pair label. For instance, pattern no. 2 ("X man Y → X person Y") can be expected to lead to a sentence pair illustrating entailment: if a man appears in a story, we can infer that a person appeared in the story. The contradictions illustrate typical oppositions like man-woman or dog-cat. The neutrals are various refinements of the content described by the sentences, probably in part due to the original instruction in SNLI that the hypothesis "might be true" given the premise in the neutral relation.

We kept only patterns appearing with at least 20 different sentence pairs, in order to have large and variable sets of sentence pairs in the subsequent experiments.
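The extraction procedure described above can be sketched in a few lines of Python. The helper names (`longest_common_tokens`, `substitute`, `extract_pattern`) are hypothetical, and the string-level substitution is a simplification of whatever the authors actually implemented:

```python
def longest_common_tokens(p_toks, h_toks):
    """Longest common contiguous token sequence (simple O(n*m) DP)."""
    best, best_end = 0, 0
    table = [[0] * (len(h_toks) + 1) for _ in range(len(p_toks) + 1)]
    for i in range(1, len(p_toks) + 1):
        for j in range(1, len(h_toks) + 1):
            if p_toks[i - 1] == h_toks[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best:
                    best, best_end = table[i][j], i
    return p_toks[best_end - best:best_end]

def substitute(toks, sub, var):
    """Replace the first occurrence of the token sequence `sub` by `var`."""
    joined, pattern = " ".join(toks), " ".join(sub)
    return joined.replace(pattern, var, 1).split()

def extract_pattern(premise, hypothesis):
    p, h = premise.split(), hypothesis.split()
    for var in ("X", "Y"):  # patterns may have up to two variables
        common = longest_common_tokens(p, h)
        if not common:
            break
        p, h = substitute(p, common, var), substitute(h, common, var)
    # canonical form: variables must appear alphabetically sorted in p
    if "X" in p and "Y" in p and p.index("Y") < p.index("X"):
        swap = {"X": "Y", "Y": "X"}
        p = [swap.get(t, t) for t in p]
        h = [swap.get(t, t) for t in h]
    return " ".join(p), " ".join(h)

# The worked example from Figure 2:
pat = extract_pattern(
    "a man with a tattoo behind his ear is playing a guitar .",
    "a woman with a tattoo behind her ear is playing a guitar .")
print(pat)  # -> ('a man X his Y', 'a woman X her Y')
```

Run on the Figure 2 sentence pair, the sketch reproduces the four extraction steps described there, including the final variable swap.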
We also ignored the overall most common pattern, namely the identity, because it actually does not alter the sentence at all. Strangely enough, identity was observed not just among entailment pairs (693 cases) but also in neutral (41 cases) and contradiction (22 cases) pairs. Altogether, we collected 4.2k unique sentence pairs in 60 patterns. Only 10% of this data comes from MultiNLI; the majority is from SNLI.

Figure 2: Example of our pattern extraction method. In the first step, the longest common subsequence of tokens ("ear is playing a guitar .") is found and replaced with the variable X. In the second step, "with a tattoo behind" is substituted with the variable Y. As the variables are not listed alphabetically in the premise, they are switched in the last step.

step 1: premise "a man with a tattoo behind his ear is playing a guitar ." / hypothesis "a woman with a tattoo behind her ear is playing a guitar ."
step 2: premise "a man with a tattoo behind his X" / hypothesis "a woman with a tattoo behind her X"
step 3: premise "a man Y his X" / hypothesis "a woman Y her X"
step 4: premise "a man X his Y" / hypothesis "a woman X her Y"

Figure 3: Top 10 patterns extracted from sentence pairs labelled as entailments, contradictions and neutrals, respectively. Note the "X → X" pattern indicating no change in the sentence string at all.

entailments (premise → hypothesis, count):
1. X → X (693)
2. X man Y → X person Y (224)
3. X . → X (207)
4. X woman Y → X person Y (118)
5. X boy Y → X person Y (65)
6. X Y → Y , X . (61)
7. X men Y → X people Y (56)
8. two X → X (56)
9. X girl Y → X person Y (55)
10. X , Y → Y X . (53)

contradictions (premise → hypothesis, count):
1. X man Y → X woman Y (413)
2. X woman Y → X man Y (196)
3. X men Y → X women Y (111)
4. X boy Y → X girl Y (109)
5. X dog Y → X cat Y (98)
6. X girl Y → X boy Y (97)
7. X women Y → X men Y (64)
8. X Y → X not Y (56)
9. two X → three X (46)
10. X child Y → X man Y (44)

neutrals (premise → hypothesis, count):
1. X Y → X sad Y (701)
2. X Y → X big Y (119)
3. X Y → X fat Y (69)
4. X young Y → X sad Y (68)
5. X people Y → X men Y (60)
6. X → sad X (51)
7. X → X (41)
8. X person Y → X man Y (34)
9. X Y → X red Y (30)
10. X Y → X busy Y (28)

Sentence Embeddings

We experiment with several popular pretrained sentence embeddings.
Table 1: Quality of the pattern clustering in terms of the three cluster evaluation measures in the space of operations. For all the scores, a value of 1 represents a perfect assignment and 0 corresponds to random label assignment. All numbers were computed using the Scikit-learn library (Pedregosa et al., 2011). The best operation according to each cluster score across the various embeddings is in bold.

InferSent (Conneau et al., 2017; https://github.com/facebookresearch/InferSent) is the first embedding model that used supervised learning to compute sentence representations. It was trained to predict inference labels on the SNLI dataset. The authors tested 7 different architectures, and a BiLSTM encoder with max pooling achieved the best results. InferSent comes in two versions: InferSent 1 is trained with GloVe embeddings (Pennington et al., 2014) and InferSent 2 with fastText (Bojanowski et al., 2016). InferSent representations are by far the largest, with a dimensionality of 4096 in both versions.

Similarly to InferSent, the Universal Sentence Encoder (Cer et al., 2018) uses unsupervised learning augmented with training on supervised data from SNLI. Two models are available. USE-T (https://tfhub.dev/google/universal-sentence-encoder-large/3) is a transformer network (Vaswani et al., 2017) designed for higher accuracy at the cost of larger memory use and computation time. USE-D (https://tfhub.dev/google/universal-sentence-encoder/2) is a deep averaging network (Iyyer et al., 2015), where word and bi-gram embeddings are averaged and used as input to a deep neural network that computes the final sentence embedding. The second model is faster and more efficient, but its accuracy is lower. Both models output representations with 512 dimensions.

Unlike the previous models, BERT (Bidirectional Encoder Representations from Transformers; Devlin et al., 2018; https://github.com/google-research/bert) is a deep unsupervised language representation, pre-trained using only unlabeled text. It has two self-supervised training objectives: masked language modelling and next-sentence classification.
It is considered bidirectional because the Transformer encoder reads the entire sequence of words at once. We use the pre-trained BERT-Large model with Whole Word Masking. BERT gives an embedding for every (sub)word unit; as the sentence embedding we take the [CLS] token, which is inserted at the beginning of every sentence. BERT embeddings have 1,024 dimensions.

ELMo (Embeddings from Language Models; Che et al., 2018; https://github.com/HIT-SCIR/ELMoForManyLangs) uses representations from a biLSTM trained with the language-model objective on a large text dataset. Its embeddings are a function of the internal layers of the bi-directional language model (biLM), which should capture not only semantics and syntax but also the different meanings a word can take in different contexts (polysemy). Similarly to BERT, each token representation of ELMo is a function of the entire input sentence: one word gets different embeddings in different contexts. ELMo computes an embedding for every token, and we compute the final sentence embedding as the average over all tokens. It has dimensionality 1,024.

LASER (Language-Agnostic SEntence Representations; Artetxe and Schwenk, 2018; https://github.com/facebookresearch/LASER) is a five-layer bi-directional LSTM (BiLSTM) network. The 1,024-dimensional vectors are obtained by max-pooling over its last states. It was trained to translate from more than 90 languages into English or Spanish at the same time; the source language was selected randomly in each batch.

Mikolov et al. (2013) used a simple vector difference as the operation that relates two word embeddings. For sentence embeddings, we experiment a little and consider four simple operations: addition, subtraction, multiplication and division, all applied elementwise. More operations could also be considered as long as they are reversible, so that we can isolate the vector change for a particular sentence alternation and apply it to the embedding of any other sentence.
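A minimal NumPy sketch of the four candidate elementwise operations and of the reversibility requirement; the random vectors are stand-ins for real sentence embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # real models use 512 to 4096 dimensions
emb_premise = rng.normal(size=dim)     # stand-in for embed(premise)
emb_hypothesis = rng.normal(size=dim)  # stand-in for embed(hypothesis)

# Four elementwise candidate operations mapping premise -> hypothesis.
ops = {
    "add": emb_hypothesis + emb_premise,
    "sub": emb_hypothesis - emb_premise,
    "mul": emb_hypothesis * emb_premise,
    "div": emb_hypothesis / emb_premise,
}

# Reversibility: applying the stored "sub" point to the premise embedding
# should land exactly on the hypothesis embedding.
assert np.allclose(emb_premise + ops["sub"], emb_hypothesis)

# Likewise for elementwise division (assuming no zero components):
assert np.allclose(emb_premise * ops["div"], emb_hypothesis)
```

Each such per-pair result is one point in the "space of operations"; collecting the points of all pairs sharing an edit pattern is what the clustering below operates on.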
Hopefully, we would then land in the area where the correspondingly altered sentence is embedded.

Choosing Vector Operations

The underlying idea of our analysis was already sketched in Figure 1. From every sentence pair in our dataset, we extract the pattern, i.e. the string edit between the sentences. The arithmetic operation needed to move from the embedding of the first sentence to the embedding of the second sentence (in the continuous space of sentences) can be represented as a point in what we call the space of operations. Considering all sentence pairs that share the same edit pattern, we obtain many points in the space of operations. If the space of sentences reflects the particular edit pattern in an accessible way, all the corresponding points in the space of operations will lie close together, forming a cluster.

To select which of the arithmetic operations best suits the data, we test pattern clustering with three common clustering performance evaluation measures:

• The Adjusted Rand Index (Hubert and Arabie, 1985) is a measure of the similarity between two cluster assignments, adjusted with chance normalization. The score ranges from -1 to +1, with +1 being the perfect match and values around 0 meaning random label assignment. Negative numbers show worse agreement than expected from a random result.

• The V-measure (Rosenberg and Hirschberg, 2007) is the harmonic mean of homogeneity (each cluster should contain only members of one class) and completeness (all members of one class should be assigned to the same cluster). The score ranges from 0 (the worst) to 1 (a perfect score).

• Adjusted Mutual Information (Strehl and Ghosh, 2002) measures the agreement of the two clusterings with a correction for agreement by chance. Random label assignment gets a score close to 0, while two identical clusterings get a score of 1.
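All three scores are available in scikit-learn, which the paper also uses. In this sketch, synthetic Gaussian clusters stand in for the real per-pattern difference vectors; well-separated clusters should score close to 1 on all three measures:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score, v_measure_score,
                             adjusted_mutual_info_score)

rng = np.random.default_rng(1)
# Synthetic "space of operations": 3 patterns, 30 difference vectors each,
# drawn around well-separated centers (stand-ins for real embedding diffs).
centers = rng.normal(scale=5.0, size=(3, 8))
points = np.vstack([c + rng.normal(scale=0.3, size=(30, 8)) for c in centers])
patterns = np.repeat([0, 1, 2], 30)  # gold pattern labels

# k-means with many random restarts, as in the paper (n_init=100).
pred = KMeans(n_clusters=3, n_init=100, random_state=0).fit_predict(points)

print("ARI:", adjusted_rand_score(patterns, pred))
print("V-measure:", v_measure_score(patterns, pred))
print("AMI:", adjusted_mutual_info_score(patterns, pred))
```

All three scores are invariant to how the predicted cluster IDs are numbered, which is why they can be compared directly against the gold pattern labels.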
As a detailed description of these measures is beyond the scope of this article, we refer readers to the related literature (e.g. Vinh and Epps, 2009). We use these scores to compare the patterns with the labels predicted by k-means (best result of 100 random initialisations). The results are presented in Table 1. It is apparent that the best distribution by far is achieved using the most intuitive operation, vector subtraction.

There seems to be a weak correlation between the size of the embeddings and the scores. The smallest embeddings, USE-D and USE-T, get the worst scores, while the largest embeddings, InferSent 1, score best. However, InferSent 2, with the same dimensionality of 4096, performs poorly. The fact that several of the embeddings were trained on SNLI does not seem to benefit them: among the three top-scoring embeddings, only InferSent 1 was trained on the data that we use for the evaluation.

Experiments

For the following exploration of the continuous space of operations, we focus only on the ELMo embeddings. They scored second best on all measures, but unlike the best-scoring InferSent 1, ELMo was not trained on SNLI, which is the major source of our sentence pairs.

The t-SNE (van der Maaten and Hinton, 2008) visualisation of the subtractions of ELMo vectors is presented in Figure 4. The visualisation is constructed automatically and, of course, without knowledge of the pattern labels. It shows that the patterns are generally grouped into compact clusters, with the exception of a 'chaos cloud' in the middle and several outliers. There are also several patterns that seem inseparable, e.g. "two X → X" and "three X → X", or "X white Y → X Y" and "X black Y → X Y". We identified the patterns responsible for the noisy center and the outliers by computing the weighted inertia of each pattern (the sum of squared distances of the samples to their cluster center, divided by the sample size).
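The weighted inertia can be written directly from its definition; `weighted_inertia` is an illustrative name, not taken from the paper's code:

```python
import numpy as np

def weighted_inertia(points):
    """Sum of squared distances of a pattern's difference vectors to
    their own center, divided by the number of samples."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    return np.sum((points - center) ** 2) / len(points)

# A tight pattern cluster scores low, a scattered one scores high.
tight = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
scattered = [[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]]
assert weighted_inertia(tight) < weighted_inertia(scattered)
```

Dividing by the sample size keeps frequent patterns from being penalised merely for contributing more points.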
The clusters with the highest inertia consist of patterns representing a change of word order and/or the addition or removal of punctuation. These patterns are:

X is Y . → Y is X
X Y . → Y X .
X → X .
X , Y . → Y X .
X , Y . → Y , X .
X Y . → Y , X .
X . → X

To see whether the space of operations can also be interpreted automatically, i.e. whether the sentence relations are generalizable, we remove the noisy patterns above and apply fully unsupervised clustering: we do not even disclose the expected number of patterns, i.e. clusters. We try two metrics for finding the optimal number of clusters: the Davies-Bouldin index (Davies and Bouldin, 1979) and the Silhouette Coefficient (Rousseeuw, 1987). Both are designed to measure the compactness and separation of the clusters, i.e. they reward dense clusters that are far from each other. Both the Davies-Bouldin index and the Silhouette Coefficient agree that the best separation is achieved at 9 clusters. Running k-means with 9 clusters, we get the result plotted in Figure 5.

Manually inspecting the contents of the automatically identified clusters, we see that many clusters are meaningful in some way. For instance, Cluster 1 captures 90% (altogether 264 out of 292) of the sentence pairs exerting the pattern of generalizing women, boys or girls to people. The counterpart for men belonging to people is spread into a further cluster (218 out of 227 pairs) for the singular case and into the not-so-clean Cluster 7, which contains 57/57 of the plural pairs "X men Y → X people Y" together with various oppositions. Cluster 2 covers all sentence pairs where a person is replaced with a dog. Cluster 3 is primarily connected with sentence pairs introducing bad mood. Cluster 4 unites patterns that represent omitting a numeral or a group. Cluster 6 covers gender oppositions in one direction and Cluster 9 adds the other direction (with some noise for child/man, person/man and similar), etc.

Conclusion and Future Work

We examined vector spaces of sentence representations as inferred automatically by sentence embedding methods such as InferSent or ELMo. Our goal was to find out whether some simple arithmetic operations in the vector space correspond to meaningful edit operations on the sentence strings.

Our first explorations of 60 sentence edit patterns document that this is indeed the case. Automatically identified frequent patterns with 20 or more occurrences in the SNLI and MultiNLI datasets correspond to simple vector differences. The ELMo space (and others such as InferSent 1, LASER and USE-T, which are omitted due to paper length requirements) exhibits this property very well. Unfortunately, choosing ELMo as the example might not have been the best option: we compute ELMo embeddings by averaging contextualized word embeddings, and the majority of the patterns just remove, add or change a single word. The difference between two such sentence embeddings may thus be a simple difference between the embeddings of the substituted words, depending on the effect of the contextualization. In that case, the differences in the vector space would reflect the word embeddings rather than the sentence embeddings.

It should be noted that our search made use of only about 0.5% of the sentence pairs available in SNLI and MultiNLI. The remaining sentence pairs differ beyond what was extractable automatically using our simple pattern method. A different approach to a fine-grained description of the semantic relation between two sentences would have to be taken to better exploit the available data.

In the long term, we plan to further verify these observations using a more diverse set of vector operations and a larger set of sentence alternations, primarily by extending the set of alternation types. We also plan to examine the possibilities of generating sentence strings back from the sentence embedding space. If successful, our method could lead to controlled paraphrasing via the continuous space: take an input sentence, embed it, modify the embedding using a vector operation, and generate the target sentence in the standard textual form.

Figure 4: t-SNE representation of the patterns. The points in the operation space are obtained by subtracting the ELMo embedding of the hypothesis from the ELMo embedding of the premise. Best viewed in color. Colors correspond to the sentence patterns.

Figure 5: t-SNE representation of the patterns as in Figure 4, with colors now coding the fully automatic clusters. Each cluster is labelled with the set of patterns extracted from the sentence pairs assigned to it. The numbers in parentheses indicate how many sentence pairs belong to the given pattern within this cluster and overall, respectively. For instance, the line "two X → X (52/56)" says that of the 56 sentence pairs differing in the prefix "two", 52 were automatically clustered together based on the subtraction of their ELMo embeddings. The in-figure cluster labels include patterns such as "X Y → X sad Y", "X young Y → X sad Y", "X woman Y → X person Y", "X girl Y → X person Y", "X children Y → X men Y", "X child Y → X man Y", "X child Y → X person Y", "X boy Y → X person Y", "X red Y → X Y", "X blue Y → X Y", "X boy Y → X girl Y", "X boys Y → X girls Y", "X people Y → X men Y", "X person Y → X man Y", "X lady Y → X man Y", "X woman Y → X man Y", "X women Y → X men Y", "X girl Y → X boy Y", "X man Y → X woman Y", "X man Y → X person Y", "X → X .", "X . → X", "X → there is X", "X Y → X is Y", "a group of X → X", "two X → X", "three X → X", "X men Y → X people Y", "X men Y → X women Y", "man X → woman X", "X white Y → X Y", "X black Y → X Y", "X Y → X fat Y", "X Y → X busy Y", "X people Y → X dogs Y" and "X little Y → X sad Y".

Figure 1 (in-figure examples): "A little boy is walking." / "A sad boy is walking." / "A little boy is running." / "Look at my little cat!" / "Look how sad my cat is." / "A dog is walking past a field." / "There is a dog running past the field." / "A man is walking in the field."; arrow labels: "...being sad, not little...", "...running instead of walking...", "...a man instead of a dog...", "...a grown-up instead of a child...".

Acknowledgment

This work has been supported by grant No. 18-24210S of the Czech Science Foundation. It has been using language resources and tools stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).

References

Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. CoRR, abs/1812.10464.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.

Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. CoRR, abs/1807.03121.

Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. CoRR, abs/1705.02364.

Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. CoRR, abs/1805.01070.

David L. Davies and Donald W. Bouldin. 1979. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1(2):224-227.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.

Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of Classification, 2(1):193-218.

Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China. Association for Computational Linguistics.

Tom Kocmi and Ondřej Bojar. 2018. SubGram: Extending skip-gram word representation with substrings. CoRR, abs/1806.06571.

Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.

Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.

Christian S. Perone, Roberto Silveira, and Thomas S. Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. CoRR, abs/1806.06259.

Andrew Rosenberg and Julia Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
Prague, Czech RepublicAssociation for Computational LinguisticsJoint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410- 420, Prague, Czech Republic. Association for Com- putational Linguistics. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Peter Rousseeuw, 10.1016/0377-0427(87)90125-7J. Comput. Appl. Math. 201Peter Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math., 20(1):53-65. Cluster ensembles: A knowledge reuse framework for combining partitionings. Alexander Strehl, Joydeep Ghosh, Eighteenth National Conference on Artificial Intelligence. Menlo Park, CA, USAAlexander Strehl and Joydeep Ghosh. 2002. Cluster ensembles: A knowledge reuse framework for com- bining partitionings. In Eighteenth National Confer- ence on Artificial Intelligence, pages 93-98, Menlo Park, CA, USA. American Association for Artificial Intelligence. Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. A novel approach for automatic number of clusters detection in microarray data based on consensus clustering. N X Vinh, J Epps, 10.1109/BIBE.2009.19Ninth IEEE International Conference on Bioinformatics and BioEngineering. N. X. Vinh and J. Epps. 2009. A novel approach for automatic number of clusters detection in microar- ray data based on consensus clustering. In 2009 Ninth IEEE International Conference on Bioinfor- matics and BioEngineering, pages 84-91. Investigating the effects of word substitution errors on sentence embeddings. Rohit Voleti, Julie M Liss, Visar Berisha, abs/1811.07021CoRRRohit Voleti, Julie M. 
Liss, and Visar Berisha. 2018. Investigating the effects of word substitution errors on sentence embeddings. CoRR, abs/1811.07021. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. Ekaterina Vylomova, Laura Rimell, Trevor Cohn, Timothy Baldwin, abs/1509.01692CoRREkaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2015. Take and took, gaggle and goose, book and read: Evaluating the utility of vec- tor differences for lexical relation learning. CoRR, abs/1509.01692. A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel R , NAACL-HLT. BowmanAdina Williams, Nikita Nangia, and Samuel R. Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT.
[ "https://github.com/HIT-SCIR/", "https://github.com/facebookresearch/", "https://github.com/facebookresearch/", "https://github.com/google-research/" ]
Probing quantum scars and weak ergodicity breaking through quantum complexity

Budhaditya Bhattacharjee (Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, 560012 Bangalore, India)
Samudra Sur (Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, 560012 Bangalore, India)
Pratik Nandy (Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, 560012 Bangalore, India; Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawa Oiwakecho, Sakyo-ku, 606-8502 Kyoto, Japan)

arXiv:2208.05503v3 [quant-ph], 29 Nov 2022 · doi:10.1103/physrevb.106.205150

Abstract: Scar states are special many-body eigenstates that weakly violate the eigenstate thermalization hypothesis (ETH). Using the explicit formalism of the Lanczos algorithm, usually known as the forward scattering approximation in this context, we compute the Krylov state (spread) complexity of typical states generated by the time evolution of the PXP Hamiltonian, which hosts such states. We show that the complexity for the Néel state revives in an approximate sense, while the complexity for a generic ETH-obeying state always increases. This can be attributed to the approximate SU(2) structure of the corresponding generators of the Hamiltonian. We quantify such "closeness" by the q-deformed SU(2) algebra and provide an analytic expression for the Lanczos coefficients of the Néel state within the approximate Krylov subspace. We intuitively explain the results in terms of a tight-binding model. We further consider a deformation of the PXP Hamiltonian and compute the corresponding Lanczos coefficients and the complexity. We find that the complexity for the Néel state shows nearly perfect revival, while the same does not hold for a generic ETH-obeying state.
* [email protected][email protected][email protected]

I. INTRODUCTION

Thermalization is a well-known fact for a generic, isolated quantum system, where a statistical description emerges at late times in the thermodynamic limit [1-5]. The system relaxes locally, with energy being the only conserved quantity [6]. This is often attributed to the eigenstate thermalization hypothesis (ETH), which states that the highly excited eigenstates of generic many-body systems are thermal [7-10]. However, integrable systems [11] and many-body localized states [12-18] are known to strongly violate ETH. Thus, for a quantum-chaotic system, local observables are expected to reach their thermal values irrespective of the choice of the initial state. Recently, a weak violation of ETH has been observed for a few specific states in quantum many-body systems, although most states still follow ETH and show fast relaxation [19]. The full system is still non-integrable, as can be verified using the level statistics [19]. These weak ergodicity-breaking states are commonly referred to as "scar states" [20]; they manifested themselves in the experimentally observed perfect revival of some special initial states in Rydberg atom chains [21] and optical lattices [22,23], and led to a flurry of theoretical studies in recent times [19, …]. Time evolution of these specific initial states shows a finite overlap with themselves even after a sufficiently long time [45]. This weak breaking of ergodicity is also evident in the behavior of the bipartite entanglement entropy, which fails to respect the volume-law scaling [46,47]. The slow thermalization is often attributed to an approximate division of the Hilbert space into a thermalizing and a non-thermalizing part, H ≈ H_nonth ⊕ H_th [48]. In this space, scar states are represented in terms of the Krylov basis vectors. The evolution focuses inside the Krylov space, usually termed "Krylov-restricted thermalization" [49].
In this paper, we aim to examine this weak ergodicity breaking in terms of quantum complexity. Complexity, a concept primarily borrowed from computer science, has recently emerged as a tool for diagnosing quantum chaos [50-54] and scrambling [55,56] in many-body systems. The term refers to the cost of implementing a given task in the minimum number of steps. For our purposes, we consider the difficulty of spreading an initial state in the Hilbert space through the time evolution generated by a Hamiltonian. This difficulty is naturally quantified by a complexity dubbed the "spread complexity" [57]. Its definition is straightforward compared to its theoretical cousin, circuit complexity [58,59]. Recently, it has been shown to detect topological and non-topological phases in many-body systems [60]. The formulation is based upon the iterative process of the Lanczos algorithm [61], often known in this context as the forward scattering approximation [26]. Although the formulation works in the generic case, symmetry greatly simplifies the problem, and the Lanczos coefficients can then be extracted analytically. We should mention that this formulation is conceptually different from studying operator growth, where the behavior of the Lanczos coefficients is conjectured by the universal operator growth hypothesis [50]. We begin by studying the time evolution of the |Z_2⟩ state (i.e., |1010101010101010⟩ for lattice size N = 16) under the simple paramagnetic spin-chain Hamiltonian H_p = Σ_{n=1}^N σ^x_n, where σ^x_n denotes the Pauli X matrix at site n. This state is the lowest-weight state in the j = N/2 representation of the SU(2) symmetry of the Hamiltonian. By this, we mean that the Hamiltonian can be separated into two parts H_±, where H_± and H_z follow the SU(2) algebra (H_z is obtained from the commutator [H_+, H_−]).
We then study the evolution of the |Z_2⟩ state and of a generic |0⟩ state (i.e., an initial state without any Z symmetry, which does not fall into a representation of the symmetry) under the PXP Hamiltonian. For numerical consistency (lattice size N = 16), we fix the choice |0⟩ = |0010100100100010⟩ (in the σ^z basis). It is known that the paramagnetic Hamiltonian (more precisely, H_± and H_z) satisfies the SU(2) symmetry relations; therefore, its Lanczos coefficients and complexity can be computed analytically. The PXP Hamiltonian, on the other hand, is known to break the SU(2) symmetry algebra. For this case, we implement the Lanczos algorithm numerically to compute the Lanczos coefficients, the Krylov wave functions, and the complexity. We find that the complexity for the Néel state (i.e., the |Z_2⟩ state) exhibits an oscillatory component, while that of the generic |0⟩ state does not. Furthermore, the growth of the spread complexity for the Néel state appears to be slower than that of the generic state. We explain these observations in terms of the weak ergodicity breaking observed in the PXP Hamiltonian. We then investigate the Lanczos coefficients of the PXP Hamiltonian in detail. This allows us to study the breaking of the SU(2) algebra, since the underlying symmetry algebra controls the Lanczos coefficients of any system. We find that scaling the PXP Hamiltonian by a factor (which we determine numerically) gives rise to a Hamiltonian that approximately follows a q-deformed SU(2) algebra, denoted SU_q(2). We study the system for sizes N = 12 to 30 and find a system-size-dependent value of q. Extrapolation to 1/N → 0 gives the value of q in the thermodynamic limit. We explicitly write down the algebra and determine the algebra-breaking terms. Finally, we consider a first-order perturbative correction to the PXP Hamiltonian. Here, we show that a scaled, perturbed PXP Hamiltonian is very well approximated by an SU(2) algebra for the Néel state, with some explicit algebra-breaking terms.
We then evaluate the Krylov basis wave functions and the complexity for the Néel state and the |0⟩ state. The Néel state is found to demonstrate strong revival, with nearly fully oscillatory behavior of the Krylov wave functions and the complexity. The growth rate of the complexity is much slower [characteristic of the SU(2) algebra] than the growth without the perturbation. Therefore, adding the first-order perturbation moves the approximate algebra from q < 1 to q ≈ 1. It also initiates a stronger ergodicity breaking and a nearly exact division of the Hilbert space into thermalizing and non-thermalizing parts.

The paper is organized as follows. In Section II, we provide a brief review of the Krylov complexity for states (spread complexity) and describe the Lanczos algorithm. In Section III, we briefly review the q-deformed SU(2) algebra. In Section IV, we provide our analytical and numerical results for the paramagnetic Hamiltonian and the PXP Hamiltonian. In Section V, we present the results for both the PXP Hamiltonian and the perturbed PXP Hamiltonian and describe a physical picture that explains the behavior of the Lanczos coefficients and the complexity. Section VI summarizes our results and proposes future directions.

II. KRYLOV (SPREAD) COMPLEXITY FOR STATES

Here we introduce the notion of Krylov complexity for quantum states [57]. This is a somewhat different formalism from that of Krylov complexity for describing operator growth [50]. The notion of Krylov complexity finds various applications in the study of chaos, scrambling, and integrability in many-body quantum and semiclassical systems [51,52,54-56,62-78]. To describe it for states, we consider the evolution of a quantum state under a time-independent Hamiltonian,

|Ψ(t)⟩ = e^{−iHt} |Ψ(0)⟩ ,   (1)

where |Ψ(0)⟩ is not an eigenstate of the Hamiltonian H. This is a well-known quantum quench protocol [79].
This time evolution can be visualized (by writing a series expansion for e^{−iHt}) as the successive application of H on the state |Ψ(0)⟩. This generates a basis {H^n |Ψ(0)⟩, n ∈ N} into which the time-evolved state |Ψ(t)⟩ can be expanded as a Taylor series in t. From a physical perspective, it quantifies the spreading of the state |Ψ(0)⟩ in the Hilbert space H. Generally, there exist an infinite number of possible bases to describe the time evolution of |Ψ(0)⟩. The question, then, is to find the optimal basis in the sense of minimizing some cost function. Such an optimal basis is constructed by orthonormalizing the basis above, in a way similar to the Gram-Schmidt orthonormalization process. One performs a recursive procedure known as the Lanczos algorithm [57,61],

|A_{n+1}⟩ = (H − a_n)|K_n⟩ − b_n|K_{n−1}⟩ ,   |K_n⟩ = b_n^{−1}|A_n⟩ ,   (2)

where one starts from an initial state |K_0⟩ and recursively orthonormalizes the states |A_n⟩. The constants a_n and b_n (with b_0 = 0), known as the Lanczos coefficients, are obtained from the orthonormalization procedure as

a_n = ⟨K_n|H|K_n⟩ ,   b_n = ⟨A_n|A_n⟩^{1/2} .   (3)

The basis thus generated is known as the Krylov basis |K_n⟩, with n = 0, 1, . . .. The action of the Hamiltonian on this basis is given by

H|K_n⟩ = a_n|K_n⟩ + b_n|K_{n−1}⟩ + b_{n+1}|K_{n+1}⟩ ,   (4)

i.e., on this basis the Hamiltonian takes a symmetric tridiagonal form: the main diagonal elements are the a_n's, while the subdiagonal and superdiagonal elements consist of the b_n's. Having performed the above procedure, we readily express the time evolution of the state in the Krylov basis as

|Ψ(t)⟩ = Σ_n ψ_n(t)|K_n⟩ ,   (5)

where the ψ_n(t)'s, known as the Krylov basis functions, are complex in general. They are obtained by solving the recursive differential equation

i ∂_t ψ_n(t) = a_n ψ_n(t) + b_{n+1} ψ_{n+1}(t) + b_n ψ_{n−1}(t) ,   (6)

with b_0 = 0. The conservation of probability implies Σ_n |ψ_n(t)|² = 1.
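Equations (2)-(6) can be illustrated on a generic Hermitian matrix (a self-contained numerical sketch, not code from the paper; the matrix, initial state, and tolerances are arbitrary choices): the Lanczos vectors come out orthonormal, the Hamiltonian becomes symmetric tridiagonal in that basis, and the Krylov wave functions conserve probability, Σ_n |ψ_n(t)|² = 1.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 40                                       # Hilbert-space dimension (arbitrary)
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (A + A.conj().T) / 2                     # generic Hermitian "Hamiltonian"
psi0 = rng.normal(size=D) + 1j * rng.normal(size=D)
psi0 /= np.linalg.norm(psi0)                 # normalized |Psi(0)> = |K_0>

# Lanczos recursion, Eqs. (2)-(3), with full reorthogonalization for stability
K, a_n, b_n = [psi0], [], []
for n in range(D):
    w = H @ K[-1]
    a_n.append(np.real(K[-1].conj() @ w))    # a_n = <K_n|H|K_n>
    for u in K:                              # project out all previous Krylov vectors
        w = w - (u.conj() @ w) * u
    bn = np.linalg.norm(w)                   # b_{n+1} = <A_{n+1}|A_{n+1}>^{1/2}
    if bn < 1e-8:                            # Krylov space exhausted
        break
    b_n.append(bn)
    K.append(w / bn)

Kmat = np.array(K).T                         # columns = Krylov basis vectors
ortho_err = np.max(np.abs(Kmat.conj().T @ Kmat - np.eye(len(K))))

# Eq. (4): in the Krylov basis, H is symmetric tridiagonal
T = Kmat.conj().T @ H @ Kmat
mask = np.abs(np.arange(len(K))[:, None] - np.arange(len(K))[None, :]) > 1
tridiag_err = np.max(np.abs(T[mask]))

# Krylov wave functions psi_n(t) = <K_n|Psi(t)>, Eq. (5); norm conservation
w_eig, V = np.linalg.eigh(H)
t = 1.3
psi_t = V @ (np.exp(-1j * w_eig * t) * (V.conj().T @ psi0))
psi_n = Kmat.conj().T @ psi_t
norm_dev = abs(np.sum(np.abs(psi_n) ** 2) - 1.0)
```

Since the tridiagonal matrix T generates the evolution of the ψ_n's, solving Eq. (6) is equivalent to exponentiating T, which is how the coefficients are used in the later sections.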
The first element, ψ_0(t) = ⟨Ψ(t)|Ψ(0)⟩ ≡ S(t), is often called the "return amplitude," or the autocorrelation function in the operator context [50]. Further, this motivates the definition of the complexity

C(t) = Σ_n n |ψ_n(t)|² ,   (7)

which is minimized by the above choice of Krylov basis [57]. In this sense, the complexity acts as a natural "cost functional." For practical purposes, it has to be computed in case-by-case examples. However, if the evolution Hamiltonian possesses some symmetry, then one can directly obtain the corresponding Lanczos coefficients and compute the complexity (see Appendix A). In later sections, we will see how this works in a case with SU(2) symmetry.

III. q-DEFORMED SU(2) ALGEBRA AND LANCZOS COEFFICIENTS

This section briefly reviews the q-deformed SU(2) algebra. The notion of q-deformations has been studied since the late 1990s; extensive studies have included spin chains, Dirac oscillators, conformal quantum mechanics, and many other systems [80-84]. We focus on q-deformations of SU(2)-like algebras. For a better understanding of q-deformations and their various applications, we direct the reader's attention to Refs. [85-95]. Canonically, q-numbers are defined as a version of ordinary numbers parametrized by q, with the correspondence

[x]_q := (q^x − q^{−x}) / (q − q^{−1}) ,   lim_{q→1} [x]_q = x ,   (8)

i.e., the ordinary numbers are recovered in the limit q → 1. The parameter q can be a real number, or it can be a complex phase. In the real case we write q = e^τ, while q = e^{iτ} for a complex phase; in either case, τ ∈ R. In this paper, however, we take q to be a positive real number. Some examples of q-numbers are [0]_q = 0, [1]_q = 1, [2]_q = q + q^{−1}, and [3]_q = 1 + q² + q^{−2}. It is important to note that q-numbers can alternatively be expressed in terms of Chebyshev polynomials of the second kind [96,97],

[n]_q = U_{n−1}(x) ,   where x = (q + q^{−1})/2 .   (9)
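A quick numerical check of the identity (9) (an illustrative sketch, not part of the paper; the value q = 0.9 is arbitrary): compute [n]_q from the definition (8) and compare with U_{n−1}(x) evaluated by the standard Chebyshev recurrence U_{k+1}(x) = 2x U_k(x) − U_{k−1}(x).

```python
def q_number(n, q):
    """[n]_q = (q**n - q**-n) / (q - q**-1), Eq. (8)."""
    return (q**n - q**(-n)) / (q - 1.0 / q)

def cheb_U(k, x):
    """Chebyshev polynomial of the second kind U_k(x), via the recurrence."""
    if k < 0:
        return 0.0        # U_{-1} = 0
    u_prev, u = 0.0, 1.0  # U_{-1}, U_0
    for _ in range(k):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

q = 0.9
x = 0.5 * (q + 1.0 / q)
# pairs ([n]_q, U_{n-1}(x)) for n = 1..9; they should agree, Eq. (9)
vals = [(q_number(n, q), cheb_U(n - 1, x)) for n in range(1, 10)]
```

The small-n examples quoted above ([2]_q = q + q^{−1}, [3]_q = 1 + q² + q^{−2}) come out of the same function.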
The above identity can easily be verified by the parametrization of q, using the identity U_{n−1}(cos θ) = sin(nθ)/sin θ. It is also evident from the definition that q-numbers are symmetric under the transformation q → 1/q. Therefore, in this work we restrict ourselves to the region 0 < q ≤ 1; any value of q outside this region can be mapped into it by q → 1/q. The q-deformed SU(2) algebra, known as SU_q(2) [98], is generated by three generators J_0 and J_±, with the algebra [80,81]

[J_0, J_±] = ±J_± ,   [J_+, J_−] = [2J_0]_q ,   (10)

where [2J_0]_q has to be understood as a q-deformed operator. For the two cases where q is real or a complex phase, we expand the second commutation relation as [86]

[J_+, J_−] = (1/sinh τ) Σ_{n=0}^∞ (2τ J_0)^{2n+1}/(2n+1)!   for q = e^τ ,
[J_+, J_−] = (1/sin τ) Σ_{n=0}^∞ (−1)^n (2τ J_0)^{2n+1}/(2n+1)!   for q = e^{iτ} .   (11)

Hence, the algebra SU_q(2) is a nonlinear generalization of the standard SU(2). The limit q → 1 implies τ → 0, where we recover the usual SU(2) algebra. The natural basis of this algebra is given by |j, n⟩, where −j ≤ n ≤ j. The states can be formed by repeatedly acting with the annihilation ladder operator J_− on the highest-weight state |j, j⟩, or with the creation ladder operator J_+ on the lowest-weight state |j, −j⟩. Here we choose the latter, with a slight abuse of notation n → j + n, to make it consistent with [57,70]. We have the states

|j, −j + n⟩ = √( [Γ(2j − n + 1)]_q / ([n]_q! [Γ(2j + 1)]_q) ) J_+^n |j, −j⟩ ,   (12)

where n = 0, · · · , 2j. The action of the generators is defined as [91]

J_0 |j, −j + n⟩ = (−j + n) |j, −j + n⟩ ,
J_+ |j, −j + n⟩ = √([n + 1]_q [2j − n]_q) |j, −j + n + 1⟩ ,
J_− |j, −j + n⟩ = √([n]_q [2j − n + 1]_q) |j, −j + n − 1⟩ .

Here the ordinary numbers are replaced by q-numbers. We consider a Hamiltonian of the form

H = α(J_+ + J_−) + η_0 J_0 + δ·1 ,   (13)

where α, η_0, and δ are some model-dependent numbers. The Krylov basis is formed by the basis vectors of SU_q(2), i.e., |K_n⟩ = |j, −j + n⟩.
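As a sanity check (a sketch under the conventions stated above, not code from the paper; j = 3 and q = 0.8 are arbitrary test values), the ladder-operator matrix elements can be assembled into finite matrices for a given spin j, and the defining relations (10) verified numerically: [J_0, J_±] = ±J_± holds exactly, and [J_+, J_−] equals the diagonal matrix with entries [2(−j+n)]_q.

```python
import numpy as np

def q_num(x, q):
    """q-number [x]_q = (q**x - q**-x)/(q - q**-1)."""
    return (q**x - q**(-x)) / (q - 1.0 / q)

def suq2_generators(j, q):
    """Matrices of J0, J+, J- in the basis |j, -j+n>, n = 0..2j."""
    dim = int(2 * j) + 1
    n = np.arange(dim)
    J0 = np.diag(-j + n)
    Jp = np.zeros((dim, dim))
    for m in range(dim - 1):
        # J+ |j,-j+m> = sqrt([m+1]_q [2j-m]_q) |j,-j+m+1>
        Jp[m + 1, m] = np.sqrt(q_num(m + 1, q) * q_num(2 * j - m, q))
    Jm = Jp.T
    return J0, Jp, Jm

j, q = 3.0, 0.8
J0, Jp, Jm = suq2_generators(j, q)

comm_pm = Jp @ Jm - Jm @ Jp        # should equal diag([2(-j+n)]_q), Eq. (10)
target = np.diag(q_num(2 * (-j + np.arange(int(2 * j) + 1)), q))
err_algebra = np.max(np.abs(comm_pm - target))
err_ladder = np.max(np.abs((J0 @ Jp - Jp @ J0) - Jp))   # [J0, J+] = +J+
```

The check rests on the q-number identity [n]_q[2j−n+1]_q − [n+1]_q[2j−n]_q = [2(n−j)]_q, which follows from the hyperbolic product formulas.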
It is straightforward to compute the Lanczos coefficients as

a_n^{(q)} = η_0(−j + n) + δ ,   b_n^{(q)} = α √([n]_q [2j − n + 1]_q) .   (14)

Using (9), we write them in terms of Chebyshev polynomials as

b_n^{(q)} = α √( U_{n−1}(x) U_{2j−n}(x) ) = α ( Σ_{k=0}^{n−1} U_{2j−2n+2k+1}(x) )^{1/2} ,   (15)

where in the last equality we have used the product formulas for Chebyshev polynomials. It is tempting to call them q-Lanczos coefficients, generated from the q-deformed algebra; however, for simplicity, we continue to call them Lanczos coefficients and denote them by b_n. The Lanczos coefficients for different values of q are shown in Fig. 1.

IV. NUMERICAL RESULTS

A. Paramagnetic Hamiltonian

To begin with, we consider the simple paramagnetic Hamiltonian H_p = Σ_{n=1}^N σ^x_n [26] with periodic boundary conditions. Since we are interested in the time evolution of the Z_2 symmetry-broken Néel state, we take N to be even. The Hamiltonian can be separated into two parts, H_p = H_+ + H_−, where

H_± = Σ_n (σ^±_{2n} + σ^∓_{2n−1}) .   (16)

Here σ^±_n = (σ^x_n ± iσ^y_n)/2. The two parts follow the exact SU(2) algebra, namely [H_z, H_±] = ±H_± and [H_+, H_−] = 2H_z with H_z = Σ_n (σ^z_{2n} − σ^z_{2n−1})/2, furnishing |Z_2⟩ and its translated partner |Z̄_2⟩ as the lowest- and highest-weight states, respectively, in the spin j = N/2 representation [24]. Choosing the initial state as the Néel state |Z_2⟩, the time-evolved state |Ψ(t)⟩ = e^{−iH_p t}|Z_2⟩ = Σ_{n=0} ψ_n(t)|K_n⟩ spreads over the Krylov basis, constructed from the basis functions ψ_n(t) and the basis vectors |K_n⟩. As noted above, the lowest- and highest-weight states are |K_0⟩ = |Z_2⟩ and |K_N⟩ = |Z̄_2⟩, respectively. The fidelity for the |Z_2⟩ state shows exact revival; in fact, for any other initial state such as |Z̄_2⟩, |Z_3⟩, |Z_4⟩, etc., we also see perfect revival in the fidelity, due to the integrable nature of the Hamiltonian. The exact representation allows us to infer the Lanczos coefficients (see Appendix A) directly as

a_n = 0 ,   b_n = √(n(N − n + 1)) .   (17)
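The SU(2) structure and the coefficients (17) can be checked directly for a small chain (an illustrative sketch, not the paper's code; N = 6 and the sign conventions σ^z = |1⟩⟨1| − |0⟩⟨0|, σ^+ = |1⟩⟨0| are my choices): build H_± on the full 2^N-dimensional space, verify [H_+, H_−] = 2H_z, and run the Lanczos recursion (2)-(3) from the Néel state.

```python
import numpy as np

N = 6  # small even chain, sites 1..N
I2 = np.eye(2)
sz = np.diag([-1.0, 1.0])               # sigma^z|1> = +|1> convention
sp = np.array([[0.0, 0.0], [1.0, 0.0]]) # sigma^+ = |1><0|
sm = sp.T                               # sigma^- = |0><1|

def site_op(op, m):
    """Embed a single-site operator at site m (1-based)."""
    mats = [I2] * N
    mats[m - 1] = op
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

# Eq. (16): sigma^+ on even sites, sigma^- on odd sites
H_plus = sum(site_op(sp, m) for m in range(2, N + 1, 2)) + \
         sum(site_op(sm, m) for m in range(1, N + 1, 2))
H_minus = H_plus.T
Hz = (sum(site_op(sz, m) for m in range(2, N + 1, 2)) -
      sum(site_op(sz, m) for m in range(1, N + 1, 2))) / 2

comm = H_plus @ H_minus - H_minus @ H_plus
err_su2 = np.max(np.abs(comm - 2 * Hz))         # exact SU(2): should vanish

# Lanczos recursion, Eqs. (2)-(3), from |Z2> = |1010...>
H = H_plus + H_minus                             # equals sum_n sigma^x_n
neel = np.zeros(2**N)
neel[int('10' * (N // 2), 2)] = 1.0
K, a, b = [neel], [], []
for n in range(N):
    w = H @ K[-1]
    an = K[-1] @ w
    w = w - an * K[-1]
    if n > 0:
        w = w - b[-1] * K[-2]
    bn = np.linalg.norm(w)
    a.append(an)
    b.append(bn)
    K.append(w / bn)

b_exact = [np.sqrt(n * (N - n + 1)) for n in range(1, N + 1)]  # Eq. (17)
```

The recursion terminates after N + 1 basis vectors, reproducing the finite Krylov dimension quoted below.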
The Lanczos coefficients b_n reach their maximum at n = (N + 1)/2 and terminate at n = N + 1 [see Fig. 2(a)], which is the dimension of the Krylov space. The wave functions are given by ψ_n(t) = (^N C_n)^{1/2} (i cot t)^{−n} cos^N t, and they are related by the recursion (6). The complexity is computed straightforwardly using (7) as

C(t) = Σ_{n=0}^N n |ψ_n(t)|² = N sin² t ,   (18)

which is periodic with time period T = π, the same as the revival period of the survival amplitude S(t) = ψ_0 = ⟨Ψ(t)|Z_2⟩ = cos^N t. Moreover, it reaches its maximum value N at the half-period T = π/2. This is expected, since |Z_2⟩ does not decohere and the complexity should not decay with time. The numerical plot is shown in Fig. 2(b). The long-time average of the complexity is C̄ = N/2, which is extensive in the system size. All the analytic expressions are perfectly consistent with the numerical results in Figs. 2(a) and 2(b), obtained by direct implementation of the Lanczos algorithm, Eqs. (2)-(3).

B. PXP Hamiltonian

Now we turn to the more complicated PXP Hamiltonian [19],

H_PXP = Σ_{m=1}^N P_{m−1} σ^x_m P_{m+1} ,   (19)

where P = |0⟩⟨0| is the projector and σ^x = |0⟩⟨1| + |1⟩⟨0| is the Pauli X matrix. We consider a system of N sites with periodic boundary conditions, for which P_0 = P_N and P_{N+1} = P_1. The dimension of the Hilbert space can be shown to be D_N = F_{N+1} + F_{N−1} ∼ φ^N for large N, where F_N is the N-th Fibonacci number and φ = 1.61803··· is the golden ratio. The Hamiltonian possesses translational and inversion symmetries [29]. The presence of the projectors in the PXP Hamiltonian ensures that no two adjacent sites are in the excited state |1⟩ (see [99] for a similar hard-boson model), and therefore this Hamiltonian is non-integrable and thermalizing. However, its thermalizing nature is sensitive to the choice of the initial state. It is observed that the |Z_2⟩ state, written as |010101···⟩, shows weak thermalization [19]. States without any Z symmetries are known to thermalize much faster.
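The closed form (18) can be cross-checked numerically (a minimal sketch, not from the paper; N = 8 and the sample times are arbitrary): exponentiate the (N+1)×(N+1) tridiagonal matrix built from a_n = 0, b_n = √(n(N−n+1)) of Eq. (17), and evaluate C(t) = Σ_n n|ψ_n(t)|².

```python
import numpy as np

N = 8                                   # spin j = N/2, Krylov dimension N + 1
n = np.arange(1, N + 1)
b = np.sqrt(n * (N - n + 1))            # Eq. (17); a_n = 0
T = np.diag(b, 1) + np.diag(b, -1)      # tridiagonal Krylov-space Hamiltonian

w, V = np.linalg.eigh(T)                # exact exponentiation via the spectrum

def spread_complexity(t):
    """C(t) = sum_n n |psi_n(t)|^2 with psi(t) = exp(-iTt) e_0, Eq. (7)."""
    psi = V @ (np.exp(-1j * w * t) * V[0, :])
    return float(np.sum(np.arange(N + 1) * np.abs(psi) ** 2))

# compare with the closed form C(t) = N sin^2 t, Eq. (18)
ts = np.linspace(0.0, np.pi, 25)
max_dev = max(abs(spread_complexity(t) - N * np.sin(t) ** 2) for t in ts)

# survival amplitude psi_0(t) should equal cos^N t
psi = V @ (np.exp(-1j * w * 0.6) * V[0, :])
surv_dev = abs(psi[0] - np.cos(0.6) ** N)
```

This closes the loop on Section IV.A: the tridiagonal data (17) alone reproduce both the revival of S(t) and the periodic complexity (18).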
Even states with larger Z symmetries, like |Z_3⟩, also thermalize more slowly than a generic state in the system, though their thermalizing nature is stronger than that of the |Z_2⟩ state. These states are related to the many-body scar states (i.e., the weakly entangled eigenstates) of the PXP Hilbert space; specifically, such Z states are known to be superpositions of the weakly entangled eigenstates [24]. Similar to the paramagnetic case, we split the Hamiltonian as H_PXP = H_+ + H_−, with [26]

H_± = Σ_{m∈odd} P_{m−1} σ^∓_m P_{m+1} + Σ_{m∈even} P_{m−1} σ^±_m P_{m+1} .   (20)

However, in this case the generators H_± satisfy [H_+, H_−] = Σ_m (−1)^m P_{m−1} σ^z_m P_{m+1} and [H_z, H_±] ≈ ±H_±, i.e., they do not obey the exact SU(2) algebra; they obey it only approximately, marked by the notation "≈" [26]. It is indeed possible to associate H_± with a broken SU(2) algebra via an appropriate scaling of the H_± generators. Because the algebra is only such a "broken" one, the fragmentation of the PXP Hamiltonian is also only approximate, H_PXP ≈ H_nonth ⊕ H_th. Hence, we cannot apply the above analytic tools as we did in the previous section.

[Fig. 3 caption (opening truncated in the source): "… (19). The standard SU(2) result is given for comparison (dashed line). The b_n's for a generic state (the |0⟩ state, without any Z symmetry) are also plotted (circles). Here we choose the system size N = 16 and K ∼ 20 Krylov basis vectors. Around K ∼ N + 1, the state is driven out of the approximate Krylov subspace. This is in contrast with the paramagnetic Hamiltonian in Fig. 2(a), where the Krylov subspace was exact and is shown by the dashed line in this figure."]

We start from two different initial states, |Z_2⟩ and a generic state |0⟩ without any Z symmetry, and evolve them with the PXP Hamiltonian. Our results for b_n are presented in Fig. 3 for N = 16. Interestingly, we find that the a_n's turn out to be exactly zero numerically, which provides additional motivation for breaking the Hamiltonian into H_± as an approximation of the SU(2) algebra for PXP [see Eq.
(43) and (45)]. We approximate the Lanczos coefficients b_n for the |Z_2⟩ initial state by the q-deformed SU(2) result (15). We find that a good approximation to the observed b_n (PXP) is given by α in the range {0.400, 0.442}, with a q-value that depends on the system size due to finite-size effects. One may note that the value of α varies with system size; however, the variation does not follow any obvious pattern, which leads us to suspect that it originates in finite-size effects and/or inherent numerical inaccuracies of the Lanczos algorithm. The q-value (and α) is determined individually for each system size via a least-squares fit of the Lanczos coefficients: the function Σ_{n=0}^{N+1} |b_n^{PXP} − b_n^{(q)}|² is minimized over q and α. The asymptotic value of q at N → ∞ is obtained by a linear fit of the q versus 1/N plot (Fig. 4). Extrapolation of the fit gives the value of q_∞ as 1.0053 ± 0.0044. However, since we have chosen the convention that q is restricted between 0 and 1, and since the quantity b_n^{(q)} remains the same under q → 1/q, we choose q_∞ = 1/1.0053 = 0.9947. To see the extent of the difference between q = 0.9947 and q = 1, it is instructive to parametrize q = e^τ and perform a series expansion of b_n^{(q)} near τ = 0. The first, O(1), term is the usual SU(2) expression. The next, O(τ), term turns out to be proportional to N^{5/2}. Therefore, one can infer that, despite the thermodynamic result, the deviation of the Lanczos coefficients from the SU(2) result is infinitely large. The same can be seen numerically by comparing the difference between the SU_q(2) (or even the PXP) Lanczos coefficients and the SU(2) Lanczos coefficients for different values of N. While the value of q for increasing N does increase towards unity, the difference between the q-deformed and pure SU(2) Lanczos coefficients also increases.
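The fitting procedure just described can be sketched on synthetic data (since the PXP b_n's themselves require the full many-body computation; the grid-search minimizer and all parameter values below are my own choices, not the paper's): generate b_n^{(q)} for a known pair (q_0, α_0) via Eq. (14), then recover the pair by minimizing Σ_n |b_n − b_n^{(q)}|².

```python
import numpy as np

def q_num(x, q):
    """q-number [x]_q."""
    return (q**x - q**(-x)) / (q - 1.0 / q)

def b_q(N, q, alpha):
    """b_n^(q) = alpha * sqrt([n]_q [2j-n+1]_q), Eq. (14), with 2j = N."""
    n = np.arange(1, N + 1)
    return alpha * np.sqrt(q_num(n, q) * q_num(N - n + 1, q))

N, q0, alpha0 = 16, 0.93, 0.42
data = b_q(N, q0, alpha0)           # stand-in for the measured PXP b_n

def cost(q, alpha):
    """Least-squares objective sum_n |b_n - b_n^(q)|^2."""
    return np.sum((data - b_q(N, q, alpha)) ** 2)

# two-stage grid search: coarse scan, then refine around the best point
qs = np.linspace(0.85, 0.999, 150)
als = np.linspace(0.30, 0.50, 100)
q_fit, a_fit = min(((q, a) for q in qs for a in als), key=lambda p: cost(*p))
qs = np.linspace(q_fit - 0.002, q_fit + 0.002, 100)
als = np.linspace(a_fit - 0.005, a_fit + 0.005, 100)
q_fit, a_fit = min(((q, a) for q in qs for a in als), key=lambda p: cost(*p))
```

A gradient-based minimizer would work equally well; the grid search is used here only because it is deterministic and easy to verify.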
Thus, the Lanczos coefficients tell us that even though the thermodynamic limit is close to SU(2) in terms of the parameter q, in terms of measurable quantities it is indeed very distant from pure SU(2). The "broken" SU(2) algebra satisfies the commutation relations

  [J_+, J_-] = 2J_0 ,  (21)
  [J_0, J_±] = ±J_± + extra terms ,  (22)

where the PXP Hamiltonian is written as H = α(J_+ + J_-). The Lanczos coefficients and the wave functions of the Krylov basis expansion for an unbroken SU(2) algebra are derived in Appendix A; the Lanczos coefficients read

  b_n = α √( n(2j − n + 1) ) .  (23)

A q-deformation of this SU(2) algebra leads to the expression (15). As mentioned, we numerically obtain α ∈ {0.400, 0.442}. The additional terms in (22) reflect the deviation from the SU(2) algebra. We find that a q-deformation of this algebra is a good approximation to Eqs. (21)-(22); however, it is not exact, and additional terms are still present, as we will see shortly. Therefore, the algebra satisfied by the PXP ladder operators is a "broken" q-deformed SU(2). The algebra in Eqs. (21) and (22) can be seen explicitly as follows [cf. [27] and Eq. (20)]:

  H_± = Σ_n ( σ̃^∓_{2n+1} + σ̃^±_{2n} ) ,  (24)

where σ̃^{(*)}_m ≡ P_{m−1} σ^{(*)}_m P_{m+1}. We have the commutator

  [H_+, H_-] = Σ_n ( σ̃^z_{2n} − σ̃^z_{2n+1} ) ≡ 2H_0 .  (25)

It is straightforward to see that the following commutator holds:

  [H_0, H_±] = ± Σ_n ( σ̃^∓_{2n+1} + σ̃^±_{2n} ) ± X_± ,  (26)

where the "extra terms" X_± are

  −X_± = Σ_{m∈odd} ( P_{m−2} P_{m−1} σ^∓_m P_{m+1} + P_{m−1} σ^∓_m P_{m+1} P_{m+2} )
       + Σ_{m∈even} ( P_{m−2} P_{m−1} σ^±_m P_{m+1} + P_{m−1} σ^±_m P_{m+1} P_{m+2} ) .

As mentioned in [27], this algebra is readily identified as a broken SU(2), where the algebra-breaking terms are X_±. However, interpreting this algebra as SU(2) implies that we must have α = 1, which is not the case, as seen from Fig. 3. Rather, we find that the α values mentioned in Table
I represent a better approximation to the PXP Hamiltonian's Lanczos coefficients. To write the algebra given above as a broken SU(2), we begin by writing the PXP Hamiltonian as H_PXP = α(J_+ + J_-) (for some constant α), where

  J_± = (1/α) Σ_n ( σ̃^∓_{2n+1} + σ̃^±_{2n} ) .  (27)

For the rest of this discussion, we assume that α is a real number. The commutation relation between these ladder operators is

  [J_+, J_-] = (1/α²) [H_+, H_-] = (2/α²) H_0 ≡ 2J_0 ,  (28)

which gives us the relation

  J_0 = (1/α²) H_0 .  (29)

We write the commutator between J_0 and J_± as

  [J_0, J_±] = ± (1/α²) J_± ± (1/α³) X_± .  (30)

Comparing with Eqs. (21)-(22), we note that this is not exactly a broken SU(2) algebra. To cast it into that form, we absorb a term ((1−α²)/α²) J_± into X_±; the commutator in Eq. (30) is then better written as

  [J_0, J_±] = ± J_± ± X̃_± ,  (31)

where X̃_± = (1/α³) X_± + ((1−α²)/α²) J_±.

C. Broken q-deformed SU(2) algebra for the PXP Hamiltonian

Interpreting the algebra of the PXP Hamiltonian as a version of the q-deformed algebra, we have

  [J_+, J_-] = [2J_0]_q ,  (32)
  [J_0, J_±] = ± J_± ,  (33)

where [2J_0]_q = sinh(2τJ_0)/sinh(τ), using q = e^τ. This relation can, in principle, be inverted, provided the inversion is treated as a series:

  J_0 = (1/2τ) arcsinh( [2J_0]_q sinh τ )
      = (1/2τ) Σ_{n=0}^{∞} (−1)^n (2n)! / ( 2^{2n} (n!)² (2n+1) ) (sinh τ)^{2n+1} ( [J_+, J_-] )^{2n+1} .

We evaluate [J_0, J_±] and demonstrate that it is close to ±J_±. This can be done by considering the series expansion above term by term. Keeping only the first-order correction to J_0,

  J_0 = (sinh τ / 2τ) [J_+, J_-] − ( (sinh τ)³ / 12τ ) [J_+, J_-]³ + ⋯ .  (34)

Note that in the τ → 0 limit, this reduces to the usual commutation relation.
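Since Eq. (15) itself lies outside this excerpt, the following sketch encodes our reading of the q-deformed coefficients: each integer in b_n = α√(n(2j − n + 1)) is replaced by the symmetric q-number [m]_q = (q^m − q^{−m})/(q − q^{−1}), which reproduces the invariance under q → 1/q noted in the text and reduces to the SU(2) result as q → 1. The function names are ours:

```python
import numpy as np

def q_number(m, q):
    """Symmetric q-number [m]_q = (q^m - q^-m)/(q - q^-1); tends to m as q -> 1."""
    if np.isclose(q, 1.0):
        return float(m)
    return (q**m - q**(-m)) / (q - 1.0 / q)

def b_q(n, N, alpha, q):
    """Conjectured q-deformed Lanczos coefficient (our reading of Eq. (15)):
    b_n^(q) = alpha * sqrt([n]_q [N - n + 1]_q), with 2j = N."""
    return alpha * np.sqrt(q_number(n, q) * q_number(N - n + 1, q))
```

With the fitted values for N = 16 (q ≈ 0.8324, α ≈ 0.4097), b_q(n, 16, 0.4097, 0.8324) gives the thick curve of Fig. 3; by construction b_q is unchanged under q → 1/q.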
Splitting the Hamiltonian as before, H_PXP = α(J_+ + J_-), we know that the ladder operators follow the commutation relations (21)-(22), so that

  (1/2) [[J_+, J_-], J_±] = ±(J_± + X̃_±) .  (35)

We introduce the notation Q = [J_+, J_-] and use the relation [Q, J_±] = ±2(J_± + X̃_±). Evaluating the commutator [J_0, J_±] gives us

  [J_0, J_±] = ± (sinh τ / τ)(J_± + X̃_±) ∓ ( (sinh τ)³ / 6τ ) ( Q²(J_± + X̃_±) + Q(J_± + X̃_±)Q + (J_± + X̃_±)Q² ) + ⋯ .

From here, we can read off the extra terms in the commutator as

  [J_0, J_±]_extra = ± (sinh τ / τ) X̃_± ∓ ( (sinh τ)³ / 6τ ) ( Q²(J_± + X̃_±) + Q(J_± + X̃_±)Q + (J_± + X̃_±)Q² ) ∓ (1 − sinh τ / τ) J_± + ⋯ .  (36)

Therefore, the PXP Hamiltonian, when written in terms of the ladder operators J_±, corresponds to a broken version of the q-deformed algebra given in (32) and (33), with the algebra-breaking terms given in (36).

D. Perturbative correction

In the previous section, we showed that the q-deformed algebra enables us to find an analytic form of the Lanczos coefficients within an excellent approximation. The analytical expression holds only within the approximate Krylov subspace; this approximation occurs because the symmetry algebra of the PXP Hamiltonian is not an exact SU_q(2). However, the SU(2) structure can be recovered by adding a suitable perturbation,

  H_pert = λ Σ_{m=1}^{N} ( PXPP + PPXP ) ,  (37)

with λ = 0.108 [27] (we assume the same perturbation strength for system sizes N = 12, …, 26 and find excellent agreement). Here, we use the compact notation PXPP to denote P_{m−1} σ^x_m P_{m+1} P_{m+2}, and similarly PPXP denotes P_{m−2} P_{m−1} σ^x_m P_{m+1}. Consider the full Hamiltonian

  H^{(1)}_PXP = H_PXP + H_pert .

We plot the Lanczos coefficients (see Fig. 5) and the first few wave functions ψ_i(t), with i = 1, …, 4 [see Fig. 8(b)], of the Krylov basis expansion for the Hamiltonian H^{(1)}_PXP, initialized with the same Néel state. We see that the Lanczos coefficients closely follow a "broken" SU(2), and the revival of the complexity is also enhanced compared to the unperturbed PXP Hamiltonian.
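The perturbed Hamiltonian can be sketched directly from the compact notation above (periodic boundary conditions and N = 8 are our assumptions for illustration). A simple consequence, easy to check numerically, is that the perturbation annihilates the Néel state, so the first Lanczos coefficient b_1 is unchanged by it:

```python
import numpy as np

N, lam = 8, 0.108
sx = np.array([[0., 1.], [1., 0.]])
P = np.array([[1., 0.], [0., 0.]])      # projector onto the local |0> state

def site_op(op, m):
    mats = [np.eye(2)] * N
    mats[m % N] = op
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

def H_pxp():
    return sum(site_op(P, m - 1) @ site_op(sx, m) @ site_op(P, m + 1)
               for m in range(N))

def H_pert():
    # PXPP = P_{m-1} sx_m P_{m+1} P_{m+2};  PPXP = P_{m-2} P_{m-1} sx_m P_{m+1}
    H = np.zeros((2**N, 2**N))
    for m in range(N):
        H += site_op(P, m - 1) @ site_op(sx, m) @ site_op(P, m + 1) @ site_op(P, m + 2)
        H += site_op(P, m - 2) @ site_op(P, m - 1) @ site_op(sx, m) @ site_op(P, m + 1)
    return lam * H

H1 = H_pxp() + H_pert()                  # full Hamiltonian H^(1)_PXP
neel = np.zeros(2**N)
neel[int("10" * (N // 2), 2)] = 1.0      # |Z_2> = |1010...>
```

The annihilation follows because every term of H_pert carries a projector two sites away from the flipped spin, and in the Néel pattern that site is always occupied.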
It is worth mentioning that a q-deformation of the "broken" SU(2) algebra does not yield a better approximation: attempting to q-deform this algebra and numerically determine the q-value gives q = 1 for all system sizes considered. First, we consider the Lanczos coefficients. We take system sizes from N = 12 to 26 and evolve the |Z_2⟩ state. The Lanczos coefficients show excellent agreement with the "broken" SU(2) result (see Fig. 5),

  b_n = α √( n(N − n + 1) ) ,  (38)

with α ≈ 0.7025 for all system sizes considered. The value of α again suggests that, writing H^{(1)}_PXP = α(J^{(1)}_+ + J^{(1)}_-), the ladder operators must satisfy the commutation relations in Eqs. (21) and (22) with numerically fixed α. Following the discussion of the unperturbed PXP model, we write α J^{(1)}_± = H^{(1)}_±, where

  H^{(1)}_± = Σ_n ( σ̃^∓_{2n+1} + σ̃^±_{2n} )
            + λ Σ_n ( σ̃^∓_{2n+1} P_{2n+3} + σ̃^±_{2n} P_{2n+2} )
            + λ Σ_n ( P_{2n−1} σ̃^∓_{2n+1} + P_{2n−2} σ̃^±_{2n} ) .  (39)

The commutator of these ladder operators can be evaluated to give

  (1/2) [H^{(1)}_+, H^{(1)}_-] = Σ_n ( σ̃^z_{2n} − σ̃^z_{2n+1} )
    − 2λ Σ_n ( P_{2n−2} σ̃^z_{2n} − P_{2n−1} σ̃^z_{2n+1} )
    + 2λ Σ_n ( σ̃^z_{2n} P_{2n+2} − σ̃^z_{2n+1} P_{2n+3} )
    + λ Σ_n ( σ̃^+_{2n} σ̃^−_{2n+2} P_{2n+3} − σ̃^+_{2n+1} σ̃^−_{2n+3} P_{2n+4} )
    + λ Σ_n ( σ̃^−_{2n} σ̃^+_{2n+2} P_{2n+3} − σ̃^−_{2n+1} σ̃^+_{2n+3} P_{2n+4} )
    + λ² Y^{(1)} ,  (40)

where we have only written down the O(λ⁰) and O(λ) terms; the O(λ²) terms are collected in the operator Y^{(1)}. The right-hand side of Eq. (40) can be identified with α² J^{(1)}_0, and the commutator with J^{(1)}_0 takes the form

  [J^{(1)}_0, J^{(1)}_±] = ± J^{(1)}_± ± X̃^{(1)}_± ,  (41)

where X̃^{(1)}_± contains terms up to order λ³; here we have again absorbed a ((1−α²)/α²) J^{(1)}_± term into X̃^{(1)}_±. The perturbation strength λ is canonically fixed by studying the fidelity and complexity and ensuring nearly perfect revival at a periodic time interval. We provide an intuitive explanation of the behavior of the wave functions ψ_n(t) and the complexity C(t) in the next section. In Appendix A, we study the wave functions in the Krylov basis expansion of the |Z_2⟩ state.
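The algebraic statements underlying this construction can be checked directly on a small chain. The sketch below (our illustration, with periodic boundary conditions and N = 8 assumed) verifies that for the unperturbed generators of Eq. (24) the relation [H_+, H_-] = 2H_0 of Eq. (25) is exact, that H_+ + H_- = H_PXP, and that the extra terms in [H_0, H_±] are genuinely nonzero:

```python
import numpy as np

N = 8
P = np.array([[1., 0.], [0., 0.]])     # projector onto |0>
sp = np.array([[0., 0.], [1., 0.]])    # sigma^+ = |1><0|
sm = sp.T                               # sigma^-
sz = np.array([[-1., 0.], [0., 1.]])    # satisfies [sigma^+, sigma^-] = sigma^z

def site_op(op, m):
    mats = [np.eye(2)] * N
    mats[m % N] = op
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

def dressed(op, m):
    """sigma-tilde_m = P_{m-1} op_m P_{m+1}."""
    return site_op(P, m - 1) @ site_op(op, m) @ site_op(P, m + 1)

# Eq. (24): sigma^- on odd sites, sigma^+ on even sites
Hplus = sum(dressed(sm, m) for m in range(1, N, 2)) + \
        sum(dressed(sp, m) for m in range(0, N, 2))
Hminus = Hplus.T.copy()
H0 = 0.5 * sum((-1) ** m * dressed(sz, m) for m in range(N))

comm_pm = Hplus @ Hminus - Hminus @ Hplus
broken = (H0 @ Hplus - Hplus @ H0) - Hplus   # the X_+ piece of Eq. (26)
```

The same-site commutators produce the staggered σ̃^z sum, while all cross terms cancel because σ^∓ composed with the neighboring projector vanishes; the broken piece X_± survives, as Eq. (26) states.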
For Hamiltonians of the type α(H_+ + H_-), the wave functions ψ_n depend parametrically only on α. The "zeroth" wave function, ψ_0, is nothing but the fidelity. We plot it for the |Z_2⟩ state and for a generic state and fit it to (58); the results are given in Fig. 8(a). As is clear, there is strong agreement between the perturbed PXP fidelity and (58) (for n = 0). We also evaluate several other ψ_n(t) (n = 1, 2, 3, 4) numerically and find good agreement with (58) in all cases. In summary, the PXP Hamiltonian with the first-order perturbation, H^{(1)}_PXP, corresponds to a broken SU(2) algebra; we find very strong agreement with the Lanczos coefficients and Krylov basis wave functions of unbroken SU(2), so the algebra-breaking terms have only a subleading contribution.

V. COMPLEXITY OF THE |Z_2⟩ STATE: RESULTS AND A PHYSICAL PICTURE

The application of the Hamiltonian to any initial state, when expressed in the Krylov basis via Eq. (4), can be thought of as a single-particle tight-binding problem on a lattice, where the n-th Krylov basis state is interpreted as the n-th lattice site. The Lanczos coefficients b_n and b_{n+1} denote the hopping amplitudes from the n-th site to the (n−1)-th and (n+1)-th sites, respectively, under application of H. Further, the square of the Krylov basis wave function, |ψ_n(t)|², corresponds to the probability of finding the particle at the n-th site. If, for a given Hamiltonian, the time evolution of the initial state (achieved by repeated application of the Hamiltonian) shows a perfect revival, then the number of Krylov basis states must be finite (and small compared to the size of the Hilbert space). In other words, the effective tight-binding model is defined over only a finite number of lattice sites: the particle starts from the 0-th site and "hops" among this finite set of sites under time evolution.
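The tight-binding picture above can be made concrete: the Krylov-space Hamiltonian is the tridiagonal "hopping" matrix built from the b_n, and the complexity is the mean position of a particle launched from site 0. A minimal sketch (our illustration) with the SU(2) coefficients b_n = α√(n(N − n + 1)), for which the complexity oscillates as N sin²(αt), consistent with the bounded, periodic behavior described for the paramagnetic model:

```python
import numpy as np
from scipy.linalg import expm

def krylov_chain(b):
    """Tridiagonal hopping Hamiltonian on the Krylov lattice: T[n, n±1] = b_n."""
    K = len(b) + 1
    T = np.zeros((K, K))
    for n, bn in enumerate(b, start=1):
        T[n - 1, n] = T[n, n - 1] = bn
    return T

def complexity(b, t):
    """C(t) = sum_n n |psi_n(t)|^2 for a particle starting at Krylov site 0."""
    T = krylov_chain(b)
    psi = expm(-1j * T * t)[:, 0]       # column 0 is the evolved initial site
    return np.sum(np.arange(len(psi)) * np.abs(psi) ** 2)

# SU(2) (paramagnetic-type) coefficients with 2j = N
N, alpha = 16, 1.0
b = [alpha * np.sqrt(n * (N - n + 1)) for n in range(1, N + 1)]
```

Feeding in the numerically extracted PXP b_n instead would reproduce the slow leakage beyond site N + 1 discussed below, since those coefficients do not terminate.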
This can happen if the Hamiltonian has a perfect SU(2) symmetry, which can be used to generate the finite set of Krylov basis states from the initial state, as explained in Sec. IV. For the |Z_2⟩ state evolved by the integrable paramagnetic Hamiltonian, the initial |Z_2⟩ state corresponds to the 0-th site and the |Z̄_2⟩ state to the last site of the finite tight-binding lattice. For a system of size N = 16, the Hilbert-space dimension is ∼ 6.5 × 10⁴, yet we only need 17 Krylov basis states to describe the evolution of |Z_2⟩ in the paramagnetic model. In Fig. 2(a) we observe that b_n is exactly 0 for n = 0 and n = 17, implying that under time evolution the state can never go beyond the 17-th Krylov basis state, as the hopping there is entirely suppressed. Hence, the time evolution is bounded within the non-thermal sector of the Hilbert space spanned by these Krylov basis states. The structure of the b_n also tells us that the hopping rates are highest in the middle of the lattice and are gradually suppressed towards either end. From a kinetic point of view, we may say that the particle, on average, spends most of its time near the 0-th and 17-th sites; however, it periodically moves back and forth between the two ends. The complexity, defined as C(t) = Σ_n n |ψ_n(t)|², can now be easily interpreted as the average position of the particle on the lattice under time evolution. In terms of Krylov basis states, we can rephrase this by saying that under time evolution the state is most likely to be "close" to |Z_2⟩ or |Z̄_2⟩, oscillating periodically between them, which is nicely captured by the oscillatory behavior of the complexity shown in Fig. 2(b). The extrema of C(t) occur at values close to 0 and 16, the positions of |Z_2⟩ and |Z̄_2⟩ in the Krylov basis, where the state is most likely to be found if observed over some interval of time. Regarding the evolution of the |Z_2⟩ state under the non-integrable PXP model, we observe in Fig.
3 that the b_n do not become zero at any finite n, although they tend to come close to zero (at n = 17 for N = 16) before shooting off to larger values. This initial tendency is well understood from the weakly broken SU(2) structure of the Hamiltonian and the presence of the scar states in the Hilbert space. In this case, in the tight-binding picture, the particle is not bound inside the first 17 sites, although there is a tendency to stay localized near the 0-th and 17-th sites due to the suppressed hopping; once it reaches the 17-th site, it can hop out of this region with the larger amplitude given by the next b_n. In the Krylov basis picture, we understand this as the initial |Z_2⟩ state, after sufficient time, "leaking" into the thermal part of the Hilbert space, thereby deviating from exact revival over time, as shown in the plot of the fidelity |ψ_0(t)| = |⟨Z_2|Ψ(t)⟩| in Fig. 6(a). We also show the fidelity of a generic product state, which quickly goes to zero, showing no sign of periodic revival at all; specifically, we consider the state |0010100100100010⟩ (in terms of the σ^z configuration) for system size N = 16. In Fig. 6(b) we show the behavior of the absolute values of the Krylov wave functions ψ_0, ψ_1, ψ_2, ψ_3, and ψ_4, whose squares are the probabilities of finding the particle at the 0, 1, 2, 3, 4-th Krylov-basis "lattice sites" as functions of time. We observe that as time progresses, the particle gradually moves from the 0-th site to the subsequent sites². Before the revival period, we also see that the particle slowly returns to the 0-th site from the later sites and then, after a reflection at the 0-th site, again moves towards the subsequent sites. The reflection can be understood from the presence of the double peaks of |ψ_n(t)| for n ≠ 0 and the single peaks of |ψ_0(t)| around the revival period.
However, the decreasing amplitude of all the wave functions at revival is due to the broken SU(2) structure and the "leaking" of the state into the thermal part of the Hilbert space, which has already been explained. The complexity plot in Fig. 7 for the |Z_2⟩ state in the PXP model also has interesting signatures of weak ergodicity breaking, discussed further below. Turning to the perturbed model, Fig. 5 shows the bounded nature of b_n for different system sizes. This behavior is almost exactly like that of the SU(2) paramagnetic model, implying that the division of the Hilbert space into non-thermal and thermal parts is nearly exact. The perturbation strength is fixed by ensuring that the time-periodic revival of the state is almost perfect, without any noticeable reduction of amplitude. This is seen in the plot of the fidelity |ψ_0(t)| of the |Z_2⟩ state in Fig. 8(a). We have also shown the fidelity for the same generic state as before, |0010100100100010⟩ in the σ^z basis, which decays to zero after a small time interval. The wave-function plots in Fig. 8(b) have features similar to the case without perturbation; however, in this case there is only a negligible decay of the wave functions at revival, implying nearly "lossless" dynamics of the initial state within the finite Krylov basis. The complexity also regains its fully oscillatory behavior, very much like the exact SU(2) case; however, a slight increase in complexity can be noticed after a number of periods. To contrast the two cases, we show the complexity before and after adding the perturbation term in a single plot in Fig. 9. Thus, the behavior of b_n, the Krylov wave functions, and the periodic and bounded nature of the complexity can be invoked to understand the periodic revival of certain initial states for a non-integrable Hamiltonian, which can happen due to the presence of weakly entangled scar states. VI.
CONCLUSION

In this work, we have extended the study of Krylov state (spread) complexity to the time evolution governed by the PXP Hamiltonian. While an exact SU(2) algebra allows one to infer closed-form expressions for the Lanczos coefficients and the corresponding complexity, this is not the case for the PXP Hamiltonian: its algebra is not exactly SU(2), yet it remains close to SU(2). We have quantified this closeness by mapping the algebra to the well-known q-deformed algebra SU_q(2), where the deformation generically encodes the algebra-breaking terms. Still, as we have shown, PXP cannot be written as an exact SU_q(2) for any q, but the approximation is much better than the usual SU(2) algebra. The crucial point is that expressing the algebra as SU_q(2) allows us to compute the Lanczos coefficients for the initial Néel state in terms of Chebyshev polynomials. This was missing in previous studies, and we aim to fill this gap by providing an analytic expression that fits the numerical results to a good approximation. The complexity for the Néel state fails to become completely periodic and grows much more slowly than that of a generic state. Complete periodicity can be recovered by adding a perturbation to the PXP Hamiltonian, which primarily restricts the generators to a closed algebra in an approximate sense; this, however, no longer holds for a generic state. Previous studies [60] focused on the complexity using the spin-1/2 representation of the SU(2) algebra. That representation admits only two Krylov basis states, so the complexity can be written in terms of the fidelity; complexity and fidelity then carry the same information. We, on the other hand, have worked with an effective spin-N/2 representation, which allows a larger number of Krylov basis vectors, so that in our case the complexity in general carries more information than the fidelity.
Furthermore, the physical interpretation of the Lanczos coefficients, the Krylov basis wave functions, and the complexity is general and not specific to the system discussed in this paper. This picture will help in understanding the behavior of the complexity of any initial state evolving under a generic Hamiltonian. Specifically, this work opens up the possibility of extending the analysis to other Z-symmetric states (e.g., |Z_3⟩, |Z_4⟩), with and without perturbation. Another interesting future direction is to study the behavior of complexity beyond the PXP model, such as for higher spin [102, 103], periodically driven systems [104-107], and hypercube models [108]. For the PXP model itself, it will be interesting to consider other possibilities, such as a (p, q)-deformation or a general Φ-deformation of SU(2) [95] satisfied by the Hamiltonian. For the q-deformation studied in this work, it would be worth attempting to derive exact analytic expressions for the Krylov basis wave functions, as well as the complexity and the K-entropy, if possible. Additionally, understanding the reason behind the system-size dependence of the q-value might shed further light on the true nature of the algebra describing the PXP Hamiltonian. Finally, it would be interesting to study the effect of a transverse magnetic field on the PXP Hamiltonian [25],

  H = Σ_m ( P_{m−1} σ^x_m P_{m+1} − χ σ^z_m ) ,  (42)

with χ > 0. This Hamiltonian not only possesses scar states but also contains critical states near a critical value χ = χ_c ≈ 0.655 [25], where an Ising-type phase transition occurs. For such a Hamiltonian, the |Z_2⟩ state thermalizes near the critical regime while it fails to thermalize off-criticality. It is tempting to think that, in such a case, the complexity might show unbounded growth even for the |Z_2⟩ state. We hope to return to some of these questions in future studies.

Appendix A

Here we briefly discuss the Krylov basis construction from symmetry, applicable when the Hamiltonian has a particular symmetry [57].
Here we primarily focus on the SU(2) algebra for spin j, given by [J_0, J_±] = ±J_± and [J_+, J_-] = 2J_0. The corresponding Hamiltonian can be explicitly expressed in terms of the SU(2) generators as

  H = α(J_+ + J_-) + η_0 J_0 + δ 1 .  (43)

The associated Krylov basis vectors are finite in number and given by |K_n⟩ = |j, −j + n⟩, where n = 0, …, 2j. The lowest-weight state corresponds to n = 0, and we take it as the initial state. It is time evolved according to

  |Ψ(t)⟩ = e^{−iHt} |K_0⟩ = Σ_{n=0}^{2j} ψ_n(t) |K_n⟩ ,  (44)

where the ψ_n(t) are the Krylov basis coefficients. Note that the initial state is not an eigenstate of the Hamiltonian (43). The symmetry allows us to directly extract the associated Lanczos coefficients as [57]

  a_n = η_0 (−j + n) + δ ,  b_n = α √( n(2j − n + 1) ) .  (45)

Let us now note the following relations [111]. For an operator given by

  Ĝ = e^{λ_+ T_+ + λ_- T_- + λ_0 T_0 + ω 1} ,  (46)

where T_0, T_± satisfy the commutation relations

  [T_+, T_-] = 2T_0 ,  [T_0, T_±] = ±T_± ,  (47)

we have the following expression for Ĝ:

  Ĝ = e^{ω} e^{Λ_+ T_+} e^{log(Λ_0) T_0} e^{Λ_- T_-} ,  (48)

where the functions Λ_±, Λ_0 are given by

  Λ_0 = ( cosh ν − (λ_0 / 2ν) sinh ν )^{−2} ,  (49)
  Λ_± = 2λ_± sinh ν / ( 2ν cosh ν − λ_0 sinh ν ) ,  (50)
  ν² = (λ_0 / 2)² + λ_+ λ_- .  (51)

Therefore, the time evolution operator is given as

  e^{−iHt} = e^{−iωt} e^{A T_+} e^{B T_0} e^{C T_-} .  (52)

We find the following expressions for A, B, and C (noting that λ_± = −iαt and λ_0 = −iη_0 t) for the Hamiltonian

  H = α(T_+ + T_-) + η_0 T_0 + ω 1 :  (53)

  ν = (it/2) √(4α² + η_0²) ,  (54)
  A = C = 2α / ( i √(4α² + η_0²) cot( (t/2) √(4α² + η_0²) ) − η_0 ) ,  (55)

with B = log Λ_0. Therefore, the time-evolved state is given by

  |ψ(t)⟩ = e^{−iHt} |j, −j⟩ = e^{−iωt} e^{−Bj} Σ_{n=0}^{2j} A^n √( Γ(2j+1) / ( n! Γ(2j − n + 1) ) ) |j, −j + n⟩ .  (57)

From here, the wave functions can be easily read off:

  ψ_n(t) = e^{−iωt} e^{−Bj} A^n √( Γ(2j+1) / ( n! Γ(2j − n + 1) ) ) .  (58)
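The disentangling formulas (48)-(55) can be verified numerically for a small representation. The sketch below (our illustration, using spin-1 matrices and the reading λ_± = −iαt, λ_0 = −iη_0 t for e^{−iHt} with ω = 0) compares the factored form with the direct matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# spin-1 SU(2) generators in the |j, m> basis, ordered m = +1, 0, -1
Jp = np.sqrt(2) * np.array([[0., 1., 0.],
                            [0., 0., 1.],
                            [0., 0., 0.]])
Jm = Jp.T
J0 = np.diag([1., 0., -1.])

alpha, eta0, t = 0.7, 0.3, 0.9
H = alpha * (Jp + Jm) + eta0 * J0

s = np.sqrt(4 * alpha**2 + eta0**2)
# Eq. (55): A = C; Eq. (49) with nu = (i t / 2) s and lambda_0 = -i eta0 t
A = 2 * alpha / (1j * s / np.tan(t * s / 2) - eta0)
Lam0 = (np.cos(t * s / 2) + 1j * (eta0 / s) * np.sin(t * s / 2)) ** (-2)
B = np.log(Lam0)

U_direct = expm(-1j * H * t)                              # exact evolution
U_factor = expm(A * Jp) @ expm(B * J0) @ expm(A * Jm)     # Eq. (52), omega = 0
```

Acting with either operator on the lowest-weight state |1, −1⟩ then reproduces the wave functions of Eq. (58) for j = 1.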
As one can see, the expression for ψ_n(t) with η_0 = 0 and ω = 0 depends solely on α, as is reflected in our numerical results. From Eq. (58), one can see that the basis functions are non-zero only up to n = 2j and vanish for n ≥ 2j + 1; this is due to the reciprocal Gamma function in (58), since 1/Γ(−m) = 0 for m = 0, 1, 2, … [112]. Thus, we get a simpler expression for the complexity: using Eq. (58), the |ψ_n(t)|² form a binomial distribution in n, and the sum evaluates to

  C(t) = Σ_{n=0}^{2j} n |ψ_n(t)|² = 2j |A|² / (1 + |A|²) = ( 8jα² / (4α² + η_0²) ) sin²( (t/2) √(4α² + η_0²) ) .

Note that the evolution of the complexity depends on α and η_0, whereas the spin j only sets its amplitude. Note also the periodic nature of C(t). We can also compute the entropy S_K(t) = − Σ_{n=0}^{N} |ψ_n(t)|² log |ψ_n(t)|². The plot of the entropy is shown in Fig. 10(b).

Appendix B: Some commutation relations

During our discussion of the algebra satisfied by the PXP Hamiltonian with the first-order perturbation, we omitted the full expressions for the commutators of the ladder operators. In this appendix, we list them explicitly. In (40), we denote the O(λ²) terms collectively as Y^{(1)}. The explicit expression for it is as follows (we again resort to the notation σ̃^{(*)}_m ≡ P_{m−1} σ^{(*)}_m P_{m+1}):

  Y^{(1)} = Σ_n ( P_{2n−1} σ̃^z_{2n+1} − P_{2n−2} σ̃^z_{2n} )
   + Σ_n ( σ̃^z_{2n} P_{2n+2} − σ̃^z_{2n+1} P_{2n+3} )
   + (1/2) Σ_n ( σ̃^+_{2n} σ^−_{2n+2} P_{2n+3} − σ̃^+_{2n+1} σ^−_{2n+3} P_{2n+4} )
   + (1/2) Σ_n ( σ̃^−_{2n} σ^+_{2n+2} P_{2n+3} − σ̃^−_{2n+1} σ^+_{2n+3} P_{2n+4} )
   + 2 Σ_n ( P_{2n−1} σ̃^z_{2n+1} P_{2n+3} − P_{2n−2} σ̃^z_{2n} P_{2n+2} )
   − (1/2) Σ_n ( P_{2n−2} σ̃^+_{2n} σ^−_{2n+2} P_{2n+3} − P_{2n−1} σ̃^+_{2n+1} σ^−_{2n+3} P_{2n+4} )
   − (1/2) Σ_n ( P_{2n−2} σ̃^−_{2n} σ^+_{2n+2} P_{2n+3} − P_{2n−1} σ̃^−_{2n+1} σ^+_{2n+3} P_{2n+4} )
   + (1/2) Σ_n ( σ̃^+_{2n} σ^−_{2n+2} P_{2n+3} P_{2n+4} − σ̃^+_{2n+1} σ^−_{2n+3} P_{2n+4} P_{2n+5} )
   + (1/2) Σ_n ( σ̃^−_{2n} σ^+_{2n+2} P_{2n+3} P_{2n+4} − σ̃^−_{2n+1} σ^+_{2n+3} P_{2n+4} P_{2n+5} ) .

Similarly, during our discussion of the commutator (41), we did not write the explicit expression for X̃^{(1)}_±. To do so, it is prudent to first consider the commutator [H^{(1)}_0, H^{(1)}_±], which can be written as a sum of terms X_{n,±} containing multiple P and σ operators [100].
There are 16 such terms in this expression, and they are weighted by the functions f_n(λ). These functions are polynomials in λ, of order up to λ³. Thus, we rewrite the above expression in terms of the J^{(1)}_± operators.

FIG. 1. Growth of b_n [from Eq. (9)] for j = 1 for various values of q. Although n takes discrete values, we use a continuous n for the plotting. This visualization will be useful for understanding the behavior of the Lanczos coefficients in later sections.

FIG. 2. (a) Growth of b_n and (b) evolution of the complexity C(t) for the paramagnetic Hamiltonian H_p = Σ_{n=1}^{N} σ^x_n, initialized in the |Z_2⟩ state for a system of lattice size N = 16. The expressions for the Lanczos coefficients and the complexity are given by Eq. (17) and Eq. (18), respectively. The Lanczos coefficients terminate exactly at n = N + 1 = 17, which implies that the dimension of the Krylov subspace is K = 17.

FIG. 3. Growth of b_n (disks) for the |Z_2⟩ state versus the q-deformed SU(2) result (thick line) for the PXP Hamiltonian, for N = 16.

FIG. 4. q-values versus 1/N for the PXP model. Possibly due to finite-size effects, the q-values are different for different system sizes N. The asymptotic value turns out to be q_∞ = 0.9947 ± 0.0044, which is close to the SU(2) value (q = 1). The system sizes considered are N = 12, 14, …, 30. The linear regression fit has an R² value of 0.992 and a standard error of 0.0040.

TABLE I:
N = 12: q = 0.78047, α = 0.40059
N = 14: q = 0.81093, α = 0.40759
N = 16: q = 0.83240, α = 0.40971
N = 18: q = 0.84775, α = 0.40762
N = 20: q = 0.87539, α = 0.44092
N = 22: q = 0.88607, α = 0.44219
N = 24: q = 0.89463, α = 0.44212
N = 26: q = 0.90152, α = 0.44069
N = 28: q = 0.90698, α = 0.43765
N = 30: q = 0.91115, α = 0.43256

FIG. 5. Plot of b_n for the |Z_2⟩ state, after adding the perturbation (37) to the PXP model, for different lattice sizes.
FIG. 5 (continued). The dots indicate the numerical results, while the line indicates the expression (38) (α ≈ 0.7025). Both are in excellent agreement for all system sizes considered.

The SU(2) structure can be recovered by adding a suitable term to the PXP Hamiltonian, H_pert = λ Σ_{m=1}^{N} (PXPP + PPXP). The commutator [H^{(1)}_0, H^{(1)}_±] can be evaluated; one can see that it is possible to write this commutator explicitly.¹

FIG. 6. (a) Behavior of ψ_0(t) for the |Z_2⟩ state (red disks) in the PXP model. The brown circles represent ψ_0(t) for an arbitrary state that does not possess Z symmetry (the |0⟩ state). (b) ψ_0(t), ψ_1(t), ψ_2(t), ψ_3(t), and ψ_4(t) for the |Z_2⟩ state in the PXP model, showing the evolution of the initial state in the Krylov basis with a slight decay at revival due to the broken SU(2) symmetry of the model. In all cases, we choose N = 16.

FIG. 7. Evolution of the complexity C(t) for the |Z_2⟩ state (in red) and for a generic state without Z symmetry (the |0⟩ state, in blue) in the PXP model for system size N = 16. The complexity for the generic state grows without any constraint, bounded only by the finite number of Krylov basis states considered in our computation; this is reflected by the sudden dip in the complexity (shown in blue). Subsequent time evolution only makes it hop back towards lower Krylov basis states.

² See [101] for a similar situation, but with slightly different motivation.

FIG. 8. (a) Behavior of ψ_0(t) for λ = 0.108 for the |Z_2⟩ state (red disks) compared with the analytical result (58) (dashed red line, for n = 0). The brown circles represent ψ_0(t) for an arbitrary state that does not possess Z symmetry (the |0⟩ state). (b) ψ_0(t), ψ_1(t), ψ_2(t), ψ_3(t), and ψ_4(t) for the |Z_2⟩ state in the perturbed PXP model, showing the evolution of the initial state in the Krylov basis.
FIG. 8 (continued). The amplitudes are the same at the revival periods because of the SU(2) structure recovered by the addition of the perturbation. In all cases, we choose N = 16.

Like the case of the integrable, SU(2)-symmetric paramagnetic model, C(t) has an oscillatory behavior. However, it is not bounded within any finite portion of the Krylov basis: along with the oscillatory part, it also grows slowly into higher-order Krylov basis states, implying that under time evolution the state never fully comes back to |Z_2⟩. On the other hand, a generic state starts spreading into the Krylov space of basis states right away and is bounded only because we have taken a finite number of Krylov basis states for computational convenience; the subsequent application of the Hamiltonian then only makes it come back towards the lower Krylov basis vectors. The approximate recovery of the SU(2) algebra after the addition of a suitable perturbation, for the |Z_2⟩ initial state, can be observed directly from the behavior of the Lanczos coefficients, the Krylov wave functions, and the complexity.

FIG. 9. Evolution of the complexity C(t) for the |Z_2⟩ state in the PXP model (in blue) and the perturbed PXP model (in red) for system size N = 16.

FIG. 10. (a) Evolution of the ψ_n(t) and (b) evolution of the entropy S_K(t) for the paramagnetic model. We choose lattice size N = 16.

TABLE I. Table outlining the values of q and α obtained numerically via least-squares fitting for system sizes N = 12 to N = 30.

¹ While these commutators are very tedious to evaluate by hand, it is possible to evaluate them in a few seconds using [100].

VII. ACKNOWLEDGEMENTS

We would like to thank Aranya Bhattacharya, Hugo A. Camargo, Pawel Caputa, Chethan Krishnan, Silvia Pappalardi, Tanay Pathak, Diptiman Sen, and Masaki Tezuka for useful discussions and comments on the draft. We especially thank Tanay Pathak for collaboration in the early stages of the work, and Aninda Sinha for making us aware of the q-deformed algebra through Refs. [85, 86, 97].
Some numerical computations were done using QuSpin [109, 110]. BB would like to thank the organizers and participants of CHAHOL22 and the Max-Planck-Institut für Physik komplexer Systeme, Dresden, for helpful discussions. P.N. wishes to thank NITheCS and the University of Cape Town for the hospitality during the final stages of the work, where parts of the results were presented. B.B. and S. BB and SS contributed equally to this work.

References

[1] Pasquale Calabrese and John L. Cardy, "Evolution of entanglement entropy in one-dimensional systems," J. Stat. Mech. 0504, P04010 (2005), arXiv:cond-mat/0503393.
[2] Lea F. Santos, Anatoli Polkovnikov, and Marcos Rigol, "Weak and strong typicality in quantum systems," Phys. Rev. E 86, 010102 (2012).
[3] J. M. Deutsch, Haibin Li, and Auditya Sharma, "Microscopic origin of thermodynamic entropy in isolated systems," Phys. Rev. E 87, 042135 (2013).
[4] Mario Collura, Márton Kormos, and Pasquale Calabrese, "Stationary entanglement entropies following an interaction quench in 1d Bose gas," J. Stat. Mech. 2014, P01009 (2014).
[5] "Quantum thermalization through entanglement in an isolated many-body system,"
A. M. Kaufman, M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, and M. Greiner, Science 353, 794-800 (2016).
[6] Fabian H. L. Essler and Maurizio Fagotti, "Quench dynamics and relaxation in isolated integrable quantum spin chains," J. Stat. Mech. 1606, 064002 (2016), arXiv:1603.06452 [cond-mat.quant-gas].
[7] J. M. Deutsch, "Quantum statistical mechanics in a closed system," Phys. Rev. A 43, 2046-2049 (1991).
[8] Mark Srednicki, "Chaos and quantum thermalization," Phys. Rev. E 50, 888-901 (1994).
[9] Marcos Rigol, Vanja Dunjko, Vladimir Yurovsky, and Maxim Olshanii, "Relaxation in a completely integrable many-body quantum system: An ab initio study of the dynamics of the highly excited states of 1d lattice hard-core bosons," Phys. Rev. Lett. 98, 050405 (2007), arXiv:cond-mat/0604476 [cond-mat.other].
452cond-mat.stat-mechMarcos Rigol, Vanja Dunjko, and Maxim Olshanii, "Thermalization and its mechanism for generic iso- lated quantum systems," Nature 452, 854-858 (2008), arXiv:0708.1324 [cond-mat.stat-mech]. Remarks on the notion of quantum integrability. Jean-Sebastien Caux, Jorn Mossel, 10.1088/1742-5468/2011/02/P02023arXiv:1012.3587J. Stat. Mech. 11022023cond-mat.str-elJean-Sebastien Caux and Jorn Mossel, "Remarks on the notion of quantum integrability," J. Stat. Mech. 1102, P02023 (2011), arXiv:1012.3587 [cond-mat.str-el]. Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states. D M Basko, I L Aleiner, B L Altshuler, 10.1016/j.aop.2005.11.014Annals of Physics. 321D.M. Basko, I.L. Aleiner, and B.L. Altshuler, "Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states," Annals of Physics 321, 1126-1205 (2006). Local conservation laws and the structure of the manybody localized states. Z Maksym Serbyn, Dmitry A Papić, Abanin, 10.1103/PhysRevLett.111.127201arXiv:1305.5554Phys. Rev. Lett. 111127201cond-mat.dis-nnMaksym Serbyn, Z. Papić, and Dmitry A. Abanin, "Lo- cal conservation laws and the structure of the many- body localized states," Phys. Rev. Lett. 111, 127201 (2013), arXiv:1305.5554 [cond-mat.dis-nn]. Localization of interacting fermions at high temperature. Vadim Oganesyan, David A Huse, 10.1103/PhysRevB.75.155111arXiv:cond-mat/0610854Phys. Rev. B. 75155111condmat.str-elVadim Oganesyan and David A. Huse, "Localization of interacting fermions at high temperature," Phys. Rev. B 75, 155111 (2007), arXiv:cond-mat/0610854 [cond- mat.str-el]. Many-body localization phase transition. Arijeet Pal, David A Huse, 10.1103/PhysRevB.82.174411arXiv:1010.1992Phys. Rev. B. 82174411cond-mat.dis-nnArijeet Pal and David A. Huse, "Many-body localiza- tion phase transition," Phys. Rev. B 82, 174411 (2010), arXiv:1010.1992 [cond-mat.dis-nn]. 
Phenomenology of fully many-bodylocalized systems. David A Huse, Rahul Nandkishore, Vadim Oganesyan, 10.1103/PhysRevB.90.174202arXiv:1408.4297Phys. Rev. B. 90174202cond-mat.stat-mechDavid A. Huse, Rahul Nandkishore, and Vadim Oganesyan, "Phenomenology of fully many-body- localized systems," Phys. Rev. B 90, 174202 (2014), arXiv:1408.4297 [cond-mat.stat-mech]. Many-body localization and thermalization in quantum statistical mechanics. Rahul Nandkishore, David A Huse, 10.1146/annurev-conmatphys-031214-014726arXiv:1404.0686Annual Review of Condensed Matter Physics. 6cond-mat.stat-mechRahul Nandkishore and David A. Huse, "Many-body lo- calization and thermalization in quantum statistical me- chanics," Annual Review of Condensed Matter Physics 6, 15-38 (2015), arXiv:1404.0686 [cond-mat.stat-mech]. Colloquium: Many-body localization, thermalization, and entanglement. A Dmitry, Ehud Abanin, Immanuel Altman, Maksym Bloch, Serbyn, 10.1103/RevModPhys.91.021001arXiv:1804.11065Rev. Mod. Phys. 9121001condmat.dis-nnDmitry A. Abanin, Ehud Altman, Immanuel Bloch, and Maksym Serbyn, "Colloquium: Many-body lo- calization, thermalization, and entanglement," Rev. Mod. Phys. 91, 021001 (2019), arXiv:1804.11065 [cond- mat.dis-nn]. Weak ergodicity breaking from quantum many-body scars. C J Turner, A A Michailidis, D A Abanin, M Serbyn, Z Papić, 10.1038/s41567-018-0137-5Nature Physics. 14C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Ser- byn, and Z. Papić, "Weak ergodicity breaking from quantum many-body scars," Nature Physics 14, 745- 749 (2018). Bound-state eigenfunctions of classically chaotic hamiltonian systems: Scars of periodic orbits. Eric J Heller, 10.1103/PhysRevLett.53.1515Phys. Rev. Lett. 53Eric J. Heller, "Bound-state eigenfunctions of classically chaotic hamiltonian systems: Scars of periodic orbits," Phys. Rev. Lett. 53, 1515-1518 (1984). Probing many-body dynamics on a 51-atom quantum simulator. 
Hannes Bernien, Sylvain Schwartz, Alexander Keesling, Harry Levine, Ahmed Omran, Hannes Pichler, Soonwon Choi, Alexander S Zibrov, Manuel Endres, Markus Greiner, Vladan Vuletić, Mikhail D Lukin, 10.1038/nature24622Nature. 551Hannes Bernien, Sylvain Schwartz, Alexander Keesling, Harry Levine, Ahmed Omran, Hannes Pichler, Soon- won Choi, Alexander S. Zibrov, Manuel Endres, Markus Greiner, Vladan Vuletić, and Mikhail D. Lukin, "Prob- ing many-body dynamics on a 51-atom quantum simu- lator," Nature 551, 579-584 (2017). Observing nonergodicity due to kinetic constraints in tilted Fermi-Hubbard chains. Sebastian Scherg, Thomas Kohlert, Pablo Sala, Frank Pollmann, 10.1038/s41467-021-24726-0arXiv:2010.12965Nature Commun. 12Bharath Hebbe Madhusudhana, Immanuel Bloch, and Monika Aidelsburger. cond-mat.quant-gasSebastian Scherg, Thomas Kohlert, Pablo Sala, Frank Pollmann, Bharath Hebbe Madhusudhana, Immanuel Bloch, and Monika Aidelsburger, "Observing non- ergodicity due to kinetic constraints in tilted Fermi- Hubbard chains," Nature Commun. 12, 4490 (2021), arXiv:2010.12965 [cond-mat.quant-gas]. Quantum many-body scars in optical lattices. Hongzheng Zhao, Joseph Vovrosh, Florian Mintert, Johannes Knolle, 10.1103/PhysRevLett.124.160604arXiv:2002.01746Phys. Rev. Lett. 124160604cond-mat.quant-gasHongzheng Zhao, Joseph Vovrosh, Florian Mintert, and Johannes Knolle, "Quantum many-body scars in op- tical lattices," Phys. Rev. Lett. 124, 160604 (2020), arXiv:2002.01746 [cond-mat.quant-gas]. Emergent su(2) dynamics and perfect quantum many-body scars. Soonwon Choi, Christopher J Turner, Hannes Pichler, Wen Wei Ho, Alexios A Michailidis, Zlatko Papić, Maksym Serbyn, Mikhail D Lukin, Dmitry A Abanin, 10.1103/PhysRevLett.122.220603arXiv:1812.05561Phys. Rev. Lett. 122220603quant-phSoonwon Choi, Christopher J. Turner, Hannes Pich- ler, Wen Wei Ho, Alexios A. Michailidis, Zlatko Papić, Maksym Serbyn, Mikhail D. Lukin, and Dmitry A. 
Abanin, "Emergent su(2) dynamics and perfect quan- tum many-body scars," Phys. Rev. Lett. 122, 220603 (2019), arXiv:1812.05561 [quant-ph]. Quantum many-body scars and quantum criticality. Zhiyuan Yao, Lei Pan, Shang Liu, Hui Zhai, 10.1103/PhysRevB.105.125123arXiv:2108.05113Phys. Rev. B. 105125123cond-mat.quant-gasZhiyuan Yao, Lei Pan, Shang Liu, and Hui Zhai, "Quantum many-body scars and quantum criticality," Phys. Rev. B 105, 125123 (2022), arXiv:2108.05113 [cond-mat.quant-gas]. Quantum scarred eigenstates in a rydberg atom chain: Entanglement, breakdown of thermalization, and stability to perturbations. C J Turner, A A Michailidis, D A Abanin, M Serbyn, Z Papić, 10.1103/PhysRevB.98.155134arXiv:1806.10933Phys. Rev. B. 98155134condmat.quant-gasC. J. Turner, A. A. Michailidis, D. A. Abanin, M. Ser- byn, and Z. Papić, "Quantum scarred eigenstates in a rydberg atom chain: Entanglement, breakdown of thermalization, and stability to perturbations," Phys. Rev. B 98, 155134 (2018), arXiv:1806.10933 [cond- mat.quant-gas]. Quantum scars as embeddings of weakly broken lie algebra representations. Kieran Bull, Jean-Yves Desaules, Zlatko Papić, 10.1103/PhysRevB.101.165139arXiv:2001.08232Phys. Rev. B. 101165139cond-mat.str-elKieran Bull, Jean-Yves Desaules, and Zlatko Papić, "Quantum scars as embeddings of weakly broken lie algebra representations," Phys. Rev. B 101, 165139 (2020), arXiv:2001.08232 [cond-mat.str-el]. Signatures of integrability in the dynamics of rydberg-blockaded chains. Vedika Khemani, Chris R Laumann, Anushya Chandran, 10.1103/PhysRevB.99.161101arXiv:1807.02108Phys. Rev. B. 99161101cond-mat.str-elVedika Khemani, Chris R. Laumann, and Anushya Chandran, "Signatures of integrability in the dynamics of rydberg-blockaded chains," Phys. Rev. B 99, 161101 (2019), arXiv:1807.02108 [cond-mat.str-el]. Exact quantum many-body scar states in the rydberg-blockaded atom chain. Cheng-Ju Lin, Olexei I Motrunich, 10.1103/PhysRevLett.122.173401arXiv:1810.00888Phys. Rev. Lett. 
122173401cond-mat.quant-gasCheng-Ju Lin and Olexei I. Motrunich, "Exact quan- tum many-body scar states in the rydberg-blockaded atom chain," Phys. Rev. Lett. 122, 173401 (2019), arXiv:1810.00888 [cond-mat.quant-gas]. Orthogonal quantum manybody scars. Hongzheng Zhao, Adam Smith, Florian Mintert, Johannes Knolle, 10.1103/PhysRevLett.127.150601arXiv:2102.07672Phys. Rev. Lett. 127150601cond-mat.stat-mechHongzheng Zhao, Adam Smith, Florian Mintert, and Johannes Knolle, "Orthogonal quantum many- body scars," Phys. Rev. Lett. 127, 150601 (2021), arXiv:2102.07672 [cond-mat.stat-mech]. Extensive Multipartite Entanglement from su(2) Quantum Many-Body Scars. Jean-Yves Desaules, Francesca Pietracaprina, Zlatko Papić, John Goold, Silvia Pappalardi, 10.1103/PhysRevLett.129.020601arXiv:2109.09724Phys. Rev. Lett. 12920601quant-phJean-Yves Desaules, Francesca Pietracaprina, Zlatko Papić, John Goold, and Silvia Pappalardi, "Extensive Multipartite Entanglement from su(2) Quantum Many- Body Scars," Phys. Rev. Lett. 129, 020601 (2022), arXiv:2109.09724 [quant-ph]. Rainbow scars: From area to volume law. Christopher M Langlett, Zhi-Cheng Yang, Julia Wildeboer, Alexey V Gorshkov, Thomas Iadecola, Shenglong Xu, 10.1103/PhysRevB.105.L060301arXiv:2107.03416Phys. Rev. B. 10560301cond-mat.str-elChristopher M. Langlett, Zhi-Cheng Yang, Julia Wilde- boer, Alexey V. Gorshkov, Thomas Iadecola, and Shen- glong Xu, "Rainbow scars: From area to volume law," Phys. Rev. B 105, L060301 (2022), arXiv:2107.03416 [cond-mat.str-el]. From tunnels to towers: quantum scars from Lie Algebras and q-deformed Lie Algebras. Fiona Nicholas O&apos;dea, Anushya Burnell, Vedika Chandran, Khemani, 10.1103/PhysRevResearch.2.043305arXiv:2007.16207Phys. Rev. Res. 243305cond-mat.stat-mechNicholas O'Dea, Fiona Burnell, Anushya Chandran, and Vedika Khemani, "From tunnels to towers: quantum scars from Lie Algebras and q-deformed Lie Algebras," Phys. Rev. Res. 2, 043305 (2020), arXiv:2007.16207 [cond-mat.stat-mech]. 
Quantum Many-Body Scars: A Quasiparticle Perspective. Anushya Chandran, Thomas Iadecola, Vedika Khemani, Roderich Moessner, arXiv:2206.11528cond-mat.str-elAnushya Chandran, Thomas Iadecola, Vedika Khemani, and Roderich Moessner, "Quantum Many-Body Scars: A Quasiparticle Perspective," (2022), arXiv:2206.11528 [cond-mat.str-el]. Multimagnon quantum many-body scars from tensor operators. Long-Hin Tang, O&apos; Nicholas, Anushya Dea, Chandran, 10.1103/PhysRevResearch.4.043006arXiv:2110.11448Phys. Rev. Research. 443006cond-mat.str-elLong-Hin Tang, Nicholas O'Dea, and Anushya Chan- dran, "Multimagnon quantum many-body scars from tensor operators," Phys. Rev. Research 4, 043006 (2022), arXiv:2110.11448 [cond-mat.str-el]. Onsager's scars in disordered spin chains. Naoyuki Shibata, Nobuyuki Yoshioka, Hosho Katsura, 10.1103/PhysRevLett.124.180604arXiv:1912.13399Phys. Rev. Lett. 124180604quant-phNaoyuki Shibata, Nobuyuki Yoshioka, and Hosho Katsura, "Onsager's scars in disordered spin chains," Phys. Rev. Lett. 124, 180604 (2020), arXiv:1912.13399 [quant-ph]. Many Body Scars as a Group Invariant Sector of Hilbert Space. Kiryl Pakrouski, N Preethi, Fedor K Pallegar, Igor R Popov, Klebanov, 10.1103/PhysRevLett.125.230602arXiv:2007.00845Phys. Rev. Lett. 125230602cond-mat.str-elKiryl Pakrouski, Preethi N. Pallegar, Fedor K. Popov, and Igor R. Klebanov, "Many Body Scars as a Group In- variant Sector of Hilbert Space," Phys. Rev. Lett. 125, 230602 (2020), arXiv:2007.00845 [cond-mat.str-el]. Group theoretic approach to many-body scar states in fermionic lattice models. Kiryl Pakrouski, N Preethi, Fedor K Pallegar, Igor R Popov, Klebanov, 10.1103/PhysRevResearch.3.043156arXiv:2106.10300Phys. Rev. Res. 343156cond-mat.str-elKiryl Pakrouski, Preethi N. Pallegar, Fedor K. Popov, and Igor R. Klebanov, "Group theoretic approach to many-body scar states in fermionic lattice models," Phys. Rev. Res. 3, 043156 (2021), arXiv:2106.10300 [cond-mat.str-el]. 
Quantum many-body scar states with emergent kinetic constraints and finite-entanglement revivals. Thomas Iadecola, Michael Schecter, 10.1103/PhysRevB.101.024306arXiv:1910.11350Phys. Rev. B. 10124306cond-mat.str-elThomas Iadecola and Michael Schecter, "Quantum many-body scar states with emergent kinetic constraints and finite-entanglement revivals," Phys. Rev. B 101, 024306 (2020), arXiv:1910.11350 [cond-mat.str-el]. Entanglement enhanced metrology with quantum manybody scars. Shane Dooley, Silvia Pappalardi, John Goold, arXiv:2207.13521quant-phShane Dooley, Silvia Pappalardi, and John Goold, "En- tanglement enhanced metrology with quantum many- body scars," (2022), arXiv:2207.13521 [quant-ph]. Weak Ergodicity Breaking in the Schwinger Model. Jean-Yves Desaules, Debasish Banerjee, Ana Hudomal, Zlatko Papić, Arnab Sen, Jad C Halimeh, arXiv:2203.08830cond-mat.str-elJean-Yves Desaules, Debasish Banerjee, Ana Hudomal, Zlatko Papić, Arnab Sen, and Jad C. Halimeh, "Weak Ergodicity Breaking in the Schwinger Model," (2022), arXiv:2203.08830 [cond-mat.str-el]. Observation of unconventional many-body scarring in a quantum simulator. Guo-Xian Su, Hui Sun, Ana Hudomal, Jean-Yves Desaules, Zhao-Yu Zhou, Bing Yang, Jad C Halimeh, Zhen-Sheng Yuan, Zlatko Papić, Jian-Wei Pan, arXiv:2201.00821condmat.quant-gasGuo-Xian Su, Hui Sun, Ana Hudomal, Jean-Yves De- saules, Zhao-Yu Zhou, Bing Yang, Jad C. Halimeh, Zhen-Sheng Yuan, Zlatko Papić, and Jian-Wei Pan, "Observation of unconventional many-body scarring in a quantum simulator," (2022), arXiv:2201.00821 [cond- mat.quant-gas]. Prominent quantum many-body scars in a truncated Schwinger model. Jean-Yves Desaules, Ana Hudomal, Debasish Banerjee, Arnab Sen, Zlatko Papić, Jad C Halimeh, arXiv:2204.01745condmat.quant-gasJean-Yves Desaules, Ana Hudomal, Debasish Baner- jee, Arnab Sen, Zlatko Papić, and Jad C. Halimeh, "Prominent quantum many-body scars in a truncated Schwinger model," (2022), arXiv:2204.01745 [cond- mat.quant-gas]. 
Real Time Dynamics and Confinement in the Zn Schwinger-Weyl lattice model for 1+1 QED. Giuseppe Magnifico, Marcello Dalmonte, Paolo Facchi, Saverio Pascazio, Francesco V Pepe, Elisa Ercolessi, 10.22331/q-2020-06-15-281arXiv:1909.048214quant-phGiuseppe Magnifico, Marcello Dalmonte, Paolo Facchi, Saverio Pascazio, Francesco V. Pepe, and Elisa Erco- lessi, "Real Time Dynamics and Confinement in the Zn Schwinger-Weyl lattice model for 1+1 QED," Quantum 4, 281 (2020), arXiv:1909.04821 [quant-ph]. Quantum many-body scars and weak breaking of ergodicity. Maksym Serbyn, A Dmitry, Zlatko Abanin, Papić, 10.1038/s41567-021-01230-2arXiv:2011.09486Nature Phys. 17quant-phMaksym Serbyn, Dmitry A. Abanin, and Zlatko Papić, "Quantum many-body scars and weak break- ing of ergodicity," Nature Phys. 17, 675-685 (2021), arXiv:2011.09486 [quant-ph]. Entanglement of exact excited states of Affleck-Kennedy-Lieb-Tasaki models: Exact results, many-body scars, and violation of the strong eigenstate thermalization hypothesis. Sanjay Moudgalya, Nicolas Regnault, B Andrei Bernevig, 10.1103/PhysRevB.98.235156arXiv:1806.09624Phys. Rev. B. 98235156cond-mat.str-elSanjay Moudgalya, Nicolas Regnault, and B. An- drei Bernevig, "Entanglement of exact excited states of Affleck-Kennedy-Lieb-Tasaki models: Exact results, many-body scars, and violation of the strong eigenstate thermalization hypothesis," Phys. Rev. B 98, 235156 (2018), arXiv:1806.09624 [cond-mat.str-el]. Weak ergodicity breaking through the lens of quantum entanglement. Zlatko Papić, arXiv:2108.03460cond-mat.quant-gasZlatko Papić, "Weak ergodicity breaking through the lens of quantum entanglement," arXiv:2108.03460 [cond-mat.quant-gas]. Hilbert Space Fragmentation and Commutant Algebras. Sanjay Moudgalya, Olexei I Motrunich, 10.1103/PhysRevX.12.011050arXiv:2108.10324Phys. Rev. X. 1211050condmat.stat-mechSanjay Moudgalya and Olexei I. Motrunich, "Hilbert Space Fragmentation and Commutant Algebras," Phys. Rev. 
X 12, 011050 (2022), arXiv:2108.10324 [cond- mat.stat-mech]. Thermalization and its absence within Krylov subspaces of a constrained Hamiltonian. Sanjay Moudgalya, Abhinav Prem, Rahul Nandkishore, Nicolas Regnault, Bernevig, arXiv:1910.14048condmat.str-elSanjay Moudgalya, Abhinav Prem, Rahul Nandkishore, Nicolas Regnault, and B Andrei Bernevig, "Thermal- ization and its absence within Krylov subspaces of a con- strained Hamiltonian," (2019), arXiv:1910.14048 [cond- mat.str-el]. A Universal Operator Growth Hypothesis. Daniel E Parker, Xiangyu Cao, Alexander Avdoshkin, Thomas Scaffidi, Ehud Altman, 10.1103/PhysRevX.9.041017arXiv:1812.08657Phys. Rev. X. 941017cond-mat.stat-mechDaniel E. Parker, Xiangyu Cao, Alexander Avdoshkin, Thomas Scaffidi, and Ehud Altman, "A Universal Op- erator Growth Hypothesis," Phys. Rev. X 9, 041017 (2019), arXiv:1812.08657 [cond-mat.stat-mech]. Euclidean operator growth and quantum chaos. Alexander Avdoshkin, Anatoly Dymarsky, 10.1103/PhysRevResearch.2.043234arXiv:1911.09672Phys. Rev. Res. 243234condmat.stat-mechAlexander Avdoshkin and Anatoly Dymarsky, "Eu- clidean operator growth and quantum chaos," Phys. Rev. Res. 2, 043234 (2020), arXiv:1911.09672 [cond- mat.stat-mech]. Quantum chaos as delocalization in Krylov space. Anatoly Dymarsky, Alexander Gorsky, 10.1103/PhysRevB.102.085137arXiv:1912.12227Phys. Rev. B. 10285137cond-mat.statmechAnatoly Dymarsky and Alexander Gorsky, "Quantum chaos as delocalization in Krylov space," Phys. Rev. B 102, 085137 (2020), arXiv:1912.12227 [cond-mat.stat- mech]. The Multi-faceted Inverted Harmonic Oscillator: Chaos and Complexity. Arpan Bhattacharyya, Wissam Chemissany, S Haque, Jeff Murugan, Bin Yan, 10.21468/SciPostPhysCore.4.1.002arXiv:2007.01232SciPost Phys. Core. 42hep-thArpan Bhattacharyya, Wissam Chemissany, S. Shajidul Haque, Jeff Murugan, and Bin Yan, "The Multi-faceted Inverted Harmonic Oscillator: Chaos and Complex- ity," SciPost Phys. Core 4, 002 (2021), arXiv:2007.01232 [hep-th]. 
Krylov localization and suppression of complexity. E Rabinovici, A Sánchez-Garrido, R Shir, J Sonner, 10.1007/JHEP03(2022)211arXiv:2112.12128JHEP. 03211hep-thE. Rabinovici, A. Sánchez-Garrido, R. Shir, and J. Son- ner, "Krylov localization and suppression of complex- ity," JHEP 03, 211 (2022), arXiv:2112.12128 [hep-th]. On The Evolution Of Operator Complexity Beyond Scrambling. J L F Barbón, E Rabinovici, R Shir, R Sinha, 10.1007/JHEP10(2019)264arXiv:1907.05393JHEP. 10hep-thJ. L. F. Barbón, E. Rabinovici, R. Shir, and R. Sinha, "On The Evolution Of Operator Complexity Beyond Scrambling," JHEP 10, 264 (2019), arXiv:1907.05393 [hep-th]. Krylov complexity in saddle-dominated scrambling. Budhaditya Bhattacharjee, Xiangyu Cao, Pratik Nandy, Tanay Pathak, 10.1007/JHEP05(2022)174arXiv:2203.03534JHEP. 05174quant-phBudhaditya Bhattacharjee, Xiangyu Cao, Pratik Nandy, and Tanay Pathak, "Krylov complexity in saddle-dominated scrambling," JHEP 05, 174 (2022), arXiv:2203.03534 [quant-ph]. Quantum chaos and the complexity of spread of states. Vijay Balasubramanian, Pawel Caputa, Javier M Magan, Qingyue Wu, 10.1103/PhysRevD.106.046007arXiv:2202.06957Phys. Rev. D. 10646007hep-thVijay Balasubramanian, Pawel Caputa, Javier M. Ma- gan, and Qingyue Wu, "Quantum chaos and the com- plexity of spread of states," Phys. Rev. D 106, 046007 (2022), arXiv:2202.06957 [hep-th]. Circuit complexity in quantum field theory. Ro Jefferson, Robert C Myers, 10.1007/JHEP10(2017)107arXiv:1707.08570JHEP. 10107hep-thRo Jefferson and Robert C. Myers, "Circuit complex- ity in quantum field theory," JHEP 10, 107 (2017), arXiv:1707.08570 [hep-th]. Toward a Definition of Complexity for Quantum Field Theory States. Shira Chapman, Michal P Heller, Hugo Marrochio, Fernando Pastawski, 10.1103/PhysRevLett.120.121602arXiv:1707.08582Phys. Rev. Lett. 120121602hep-thShira Chapman, Michal P. Heller, Hugo Marrochio, and Fernando Pastawski, "Toward a Definition of Complex- ity for Quantum Field Theory States," Phys. Rev. 
Lett. 120, 121602 (2018), arXiv:1707.08582 [hep-th]. Quantum complexity and topological phases of matter. Pawel Caputa, Sinong Liu, arXiv:2205.05688hep-thPawel Caputa and Sinong Liu, "Quantum complex- ity and topological phases of matter," (2022), arXiv:2205.05688 [hep-th]. The Recursion Method: Application to Many Body Dynamics. V S Viswanath, G Müller, Lecture Notes in Physics Monographs. SpringerV.S. Viswanath and G. Müller, The Recursion Method: Application to Many Body Dynamics, Lecture Notes in Physics Monographs (Springer Berlin Heidelberg, 1994). Complexity growth of operators in the SYK model and in JT gravity. Brian Shao-Kai Jian, Zhuo-Yu Swingle, Xian, 10.1007/JHEP03(2021)014arXiv:2008.12274JHEP. 0314hep-thShao-Kai Jian, Brian Swingle, and Zhuo-Yu Xian, "Complexity growth of operators in the SYK model and in JT gravity," JHEP 03, 014 (2021), arXiv:2008.12274 [hep-th]. Operator complexity: a journey to the edge of Krylov space. E Rabinovici, A Sánchez-Garrido, R Shir, J Sonner, 10.1007/JHEP06(2021)062arXiv:2009.01862JHEP. 0662hep-thE. Rabinovici, A. Sánchez-Garrido, R. Shir, and J. Son- ner, "Operator complexity: a journey to the edge of Krylov space," JHEP 06, 062 (2021), arXiv:2009.01862 [hep-th]. Operator growth in the transversefield ising spin chain with integrability-breaking longitudinal field. Jae Dong Noh, 10.1103/PhysRevE.104.034112arXiv:2107.08287Phys. Rev. E. 10434112quant-phJae Dong Noh, "Operator growth in the transverse- field ising spin chain with integrability-breaking lon- gitudinal field," Phys. Rev. E 104, 034112 (2021), arXiv:2107.08287 [quant-ph]. A statistical mechanism for operator growth. Xiangyu Cao, 10.1088/1751-8121/abe77carXiv:2012.06544J. Phys. A. 54144001cond-mat.stat-mechXiangyu Cao, "A statistical mechanism for op- erator growth," J. Phys. A 54, 144001 (2021), arXiv:2012.06544 [cond-mat.stat-mech]. Strong and almost strong modes of Floquet spin chains in Krylov subspaces. 
J Daniel, Aditi Yates, Mitra, 10.1103/PhysRevB.104.195121arXiv:2105.13246Phys. Rev. B. 104195121cond-mat.str-elDaniel J. Yates and Aditi Mitra, "Strong and almost strong modes of Floquet spin chains in Krylov subspaces," Phys. Rev. B 104, 195121 (2021), arXiv:2105.13246 [cond-mat.str-el]. Operator delocalization in quantum networks. Joonho Kim, Jeff Murugan, Jan Olle, Dario Rosa, 10.1103/PhysRevA.105.L010201arXiv:2109.05301Phys. Rev. A. 10510201quantphJoonho Kim, Jeff Murugan, Jan Olle, and Dario Rosa, "Operator delocalization in quantum networks," Phys. Rev. A 105, L010201 (2022), arXiv:2109.05301 [quant- ph]. Operator growth in 2d CFT. Pawel Caputa, Shouvik Datta, 10.1007/JHEP12(2021)188arXiv:2110.10519JHEP. 12188hepthPawel Caputa and Shouvik Datta, "Operator growth in 2d CFT," JHEP 12, 188 (2021), arXiv:2110.10519 [hep- th]. Krylov complexity of many-body localization: Operator localization in Krylov basis. Fabian Ballar Trigueros, Cheng-Ju Lin, 10.21468/SciPostPhys.13.2.037arXiv:2112.04722SciPost Phys. 1337cond-mat.dis-nnFabian Ballar Trigueros and Cheng-Ju Lin, "Krylov complexity of many-body localization: Operator local- ization in Krylov basis," SciPost Phys. 13, 037 (2022), arXiv:2112.04722 [cond-mat.dis-nn]. Geometry of Krylov complexity. Pawel Caputa, Javier M Magan, Dimitrios Patramanis, 10.1103/PhysRevResearch.4.013041arXiv:2109.03824Phys. Rev. Res. 413041hep-thPawel Caputa, Javier M. Magan, and Dimitrios Patra- manis, "Geometry of Krylov complexity," Phys. Rev. Res. 4, 013041 (2022), arXiv:2109.03824 [hep-th]. Probing the entanglement of operator growth. Dimitrios Patramanis, 10.1093/ptep/ptac081arXiv:2111.03424063A01 (2022). 2022hep-thDimitrios Patramanis, "Probing the entanglement of operator growth," PTEP 2022, 063A01 (2022), arXiv:2111.03424 [hep-th]. Ultimate Physical Limits to the Growth of Operator Complexity. Niklas Hörnedal, Nicoletta Carabba, Apollonas S Matsoukas-Roubeas, Adolfo Del Campo, 10.1038/s42005-022-00985-1arXiv:2202.05006Commun. 
Phys. 5207quant-phNiklas Hörnedal, Nicoletta Carabba, Apollonas S. Matsoukas-Roubeas, and Adolfo del Campo, "Ultimate Physical Limits to the Growth of Operator Complex- ity," Commun. Phys. 5, 207 (2022), arXiv:2202.05006 [quant-ph]. Numerically probing the universal operator growth hypothesis. Robin Heveling, Jiaozi Wang, Jochen Gemmer, 10.1103/PhysRevE.106.014152arXiv:2203.00533Phys. Rev. E. 10614152cond-mat.stat-mechRobin Heveling, Jiaozi Wang, and Jochen Gem- mer, "Numerically probing the universal operator growth hypothesis," Phys. Rev. E 106, 014152 (2022), arXiv:2203.00533 [cond-mat.stat-mech]. Stability of Exponentially Damped Oscillations under Perturbations of the Mori-Chain. Robin Heveling, Jiaozi Wang, Christian Bartsch, Jochen Gemmer, 10.1088/2399-6528/ac863barXiv:2204.06903J. Phys. Comm. 685009quant-phRobin Heveling, Jiaozi Wang, Christian Bartsch, and Jochen Gemmer, "Stability of Exponentially Damped Oscillations under Perturbations of the Mori-Chain," J. Phys. Comm. 6, 085009 (2022), arXiv:2204.06903 [quant-ph]. Operator growth and Krylov construction in dissipative open quantum systems. Aranya Bhattacharya, Pratik Nandy, Pratyush Nath, Himanshu Sahu, arXiv:2207.05347quant-phAranya Bhattacharya, Pratik Nandy, Pingal Pratyush Nath, and Himanshu Sahu, "Operator growth and Krylov construction in dissipative open quantum sys- tems," (2022), arXiv:2207.05347 [quant-ph]. . Aritra Banerjee, Arpan Bhattacharyya, Priya Drashni, Srinidhi Pawar, arXiv:2205.15338CFT to BMS: Complexity and OTOC. hep-thAritra Banerjee, Arpan Bhattacharyya, Priya Drashni, and Srinidhi Pawar, "CFT to BMS: Complexity and OTOC," (2022), arXiv:2205.15338 [hep-th]. Krylov complexity from integrability to chaos. E Rabinovici, A Sánchez-Garrido, R Shir, J Sonner, 10.1007/JHEP07(2022)151arXiv:2207.07701JHEP. 07151hep-thE. Rabinovici, A. Sánchez-Garrido, R. Shir, and J. Son- ner, "Krylov complexity from integrability to chaos," JHEP 07, 151 (2022), arXiv:2207.07701 [hep-th]. . 
Chang Liu, Haifeng Tang, Hui Zhai, arXiv:2207.13603Krylov Complexity in Open Quantum Systems. cond-mat.str-elChang Liu, Haifeng Tang, and Hui Zhai, "Krylov Complexity in Open Quantum Systems," (2022), arXiv:2207.13603 [cond-mat.str-el]. Colloquium: Nonequilibrium dynamics of closed interacting quantum systems. Anatoli Polkovnikov, Krishnendu Sengupta, Alessandro Silva, Mukund Vengalattore, 10.1103/RevModPhys.83.863arXiv:1007.5331Rev. Mod. Phys. 83cond-mat.stat-mechAnatoli Polkovnikov, Krishnendu Sengupta, Alessan- dro Silva, and Mukund Vengalattore, "Colloquium: Nonequilibrium dynamics of closed interacting quan- tum systems," Rev. Mod. Phys. 83, 863-883 (2011), arXiv:1007.5331 [cond-mat.stat-mech]. The quantum group suq(2) and a qanalogue of the boson operators. L C Biedenharn, 10.1088/0305-4470/22/18/004Journal of Physics A: Mathematical and General. 22L C Biedenharn, "The quantum group suq(2) and a q- analogue of the boson operators," Journal of Physics A: Mathematical and General 22, L873-L878 (1989). On q-analogues of the quantum harmonic oscillator and the quantum group suq(2). A J Macfarlane, 10.1088/0305-4470/22/21/020Journal of Physics A: Mathematical and General. 22A J Macfarlane, "On q-analogues of the quantum har- monic oscillator and the quantum group suq(2)," Jour- nal of Physics A: Mathematical and General 22, 4581- 4588 (1989). q-deformations of the o(3) symmetric spin-1 heisenberg chain. M T Batchelor, L Mezincescu, R I Nepomechie, V Rittenberg, 10.1088/0305-4470/23/4/003Journal of Physics A: Mathematical and General. 23M T Batchelor, L Mezincescu, R I Nepomechie, and V Rittenberg, "q-deformations of the o(3) symmetric spin-1 heisenberg chain," Journal of Physics A: Mathe- matical and General 23, L141-L144 (1990). Classical and quantum q-deformed physical systems. A Lavagno, A M Scarfone, P Narayana, Swamy, 10.1140/epjc/s2006-02557-yarXiv:quant-ph/0605026Eur. Phys. J. C. 47A. Lavagno, A. M. Scarfone, and P. 
Narayana Swamy, "Classical and quantum q-deformed physical systems," Eur. Phys. J. C 47, 253-261 (2006), arXiv:quant- ph/0605026. Q-deformed conformal quantum mechanics. Donam Youm, 10.1103/PhysRevD.62.095009arXiv:hep-th/0007114Phys. Rev. D. 6295009Donam Youm, "Q-deformed conformal quantum me- chanics," Phys. Rev. D 62, 095009 (2000), arXiv:hep- th/0007114. Introduction To Quantum Groups. M Chaichian, A Demichev, World Scientific Publishing CompanyM. Chaichian and A. Demichev, Introduction To Quan- tum Groups (World Scientific Publishing Company, 1996). Quantum groups and their applications in nuclear physics. D Bonatsos, C Daskaloyannis, 10.1016/S0146-6410(99)00100-3Progress in Particle and Nuclear Physics. 43D. Bonatsos and C. Daskaloyannis, "Quantum groups and their applications in nuclear physics," Progress in Particle and Nuclear Physics 43, 537-618 (1999). q-deformed quantum lie algebras. Alexander Schmidt, Hartmut Wachter, Journal of Geometry and Physics. 56Alexander Schmidt and Hartmut Wachter, "q-deformed quantum lie algebras," Journal of Geometry and Physics 56, 2289-2325 (2006). q-deformed heisenberg algebras. Julius Wess, Geometry and Quantum Physics. SpringerJulius Wess, "q-deformed heisenberg algebras," in Ge- ometry and Quantum Physics (Springer, 2000) pp. 311- 382. On q-deformed infinite-dimensional n-algebra. Lu Ding, Xiao-Yu Jia, Ke Wu, Zhao-Wen Yan, Wei-Zhong Zhao, 10.1016/j.nuclphysb.2016.01.003arXiv:1404.0464Nuclear Physics B. 904hep-thLu Ding, Xiao-Yu Jia, Ke Wu, Zhao-Wen Yan, and Wei-Zhong Zhao, "On q-deformed infinite-dimensional n-algebra," Nuclear Physics B 904, 18-38 (2016), arXiv:1404.0464 [hep-th]. Phase operator problem and an index theorem for Q deformed oscillator. Kazuo Fujikawa, arXiv:hep-th/9603130Frontiers in Quantum Field Theory in Honor of the 60th Birthday of. Prof. K. 
[ "X-RAY STUDIES OF BLAZAR 1ES 1959+650 USING SWIFT & XMM-NEWTON SATELLITE", "X-RAY STUDIES OF BLAZAR 1ES 1959+650 USING SWIFT & XMM-NEWTON SATELLITE" ]
[ "Kiran A Wani ", "Haritma Gaur ", "M K Patil " ]
[]
[]
High synchrotron energy peaked blazar 1ES 1959+650 is studied with Swift and XMM-Newton satellite in total 127 observations during the period June 2018−December 2020. We extensively studied its flux and spectral variability on intra-day and long-term timescales. Discrete correlation function analysis between soft and hard X-ray bands indicates soft as well as hard lags. The results are used to constrain the magnetic field of the emitting region which is found to be 0.64±0.05 Gauss. On long-term timescales, distribution of fluxes shows lognormality behaviour which could be attributed to minijets-in-a-jet model or might be due to the propagation of relativistic shocks down the jet. The spectral energy distribution around the synchrotron peak is well described by the log parabola model. Spectral parameters like peak energy E p , curvature β and the peak luminosity L p are derived from spectral analysis. Their correlations are studied to constrain the acceleration processes of the emitting particles. E p shows strong correlation with L p during the high state of the source which indicates spectral changes might be caused by the variations of the average electron energy. Low values of curvature parameter β and a weak correlation between E p and β indicates co-existence of stochastic/statistical acceleration of electrons in the emitting region. Implications of other results are also discussed.
null
[ "https://export.arxiv.org/pdf/2305.03246v1.pdf" ]
258,547,116
2305.03246
d831fe703074e8f61e2cb11f1eb5e3048ac465f0
X-RAY STUDIES OF BLAZAR 1ES 1959+650 USING SWIFT & XMM-NEWTON SATELLITE May 8, 2023 Draft version May 8, 2023 Kiran A Wani Haritma Gaur M K Patil X-RAY STUDIES OF BLAZAR 1ES 1959+650 USING SWIFT & XMM-NEWTON SATELLITE May 8, 2023 Draft version May 8, 2023Draft version Preprint typeset using L A T E X style emulateapj v. 12/16/11Subject headings: radiation mechanisms: non-thermal -galaxies: active -galaxies High synchrotron energy peaked blazar 1ES 1959+650 is studied with Swift and XMM-Newton satellite in total 127 observations during the period June 2018−December 2020. We extensively studied its flux and spectral variability on intra-day and long-term timescales. Discrete correlation function analysis between soft and hard X-ray bands indicates soft as well as hard lags. The results are used to constrain the magnetic field of the emitting region which is found to be 0.64±0.05 Gauss. On long-term timescales, distribution of fluxes shows lognormality behaviour which could be attributed to minijets-in-a-jet model or might be due to the propagation of relativistic shocks down the jet. The spectral energy distribution around the synchrotron peak is well described by the log parabola model. Spectral parameters like peak energy E p , curvature β and the peak luminosity L p are derived from spectral analysis. Their correlations are studied to constrain the acceleration processes of the emitting particles. E p shows strong correlation with L p during the high state of the source which indicates spectral changes might be caused by the variations of the average electron energy. Low values of curvature parameter β and a weak correlation between E p and β indicates co-existence of stochastic/statistical acceleration of electrons in the emitting region. Implications of other results are also discussed. INTRODUCTION Blazar, a subclass of AGN, constitutes BL Lacertae objects (BL Lacs) and flat spectrum radio quasars (FS-RQs). 
Blazars are characterized by rapid flux variability on time scales ranging from a few minutes to years across the entire electromagnetic spectrum, by high optical polarisation, and by emission that is predominantly non-thermal in nature, emanating from a relativistic jet streaming along or aligned very close to our line of sight (Blandford & Rees 1978). The spectral energy distribution (SED) of blazars consists of a double-peaked hump, with the first component peaking in the sub-mm to soft X-rays, whereas the second hump peaks at MeV to TeV energies (Urry & Padovani 1995). The low energy component of the SED is mostly due to synchrotron emission from the relativistic electrons. However, the physical mechanisms behind the high energy emission are not well understood; it is thought to originate from Compton upscattering of synchrotron photons by the same population of relativistic electrons (synchrotron self-Compton, SSC; e.g. Ghisellini & Maraschi 1989; Mastichiadis & Kirk 1997) or of external seed photons originating from the accretion disc, broad line region (BLR) and torus components of a blazar (external Compton, EC; e.g. Dermer et al. 1992; Ghisellini et al. 1998). Other models which appear to be viable mechanisms to explain the X-ray through γ-ray emission are hadronic models, where the high energy emission is produced by relativistic protons through proton synchrotron radiation and photo-pion production, followed by pion decay and electromagnetic cascades (e.g. Mannheim & Biermann 1992; Böttcher et al. 2013, and references therein). Blazars are divided into three classes based on the position of the synchrotron peak in their SED (Padovani & Giommi 1995).

1 Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital - 263 002, India; [email protected]; [email protected]
2 School of Physical Sciences, SRTM University, Nanded, 431 606, India
Low frequency peaked BL Lac objects exhibit their synchrotron peak in the IR-optical band, intermediate frequency peaked BL Lac objects have their synchrotron peak at optical-UV frequencies, and high frequency peaked BL Lac (HBL) objects show a synchrotron peak in the UV to X-ray band. 1ES 1959+650, at a redshift of z = 0.048 (Perlman et al. 1996), is a prominent high synchrotron peaked blazar in which the synchrotron peak of the SED appears in the UV-X-ray band (Krawczynski et al. 2004; Kapanadze et al. 2016b; Abdo et al. 2010). In X-rays, it was first detected by the Slew Survey with the Einstein Imaging Proportional Counter (IPC) (Elvis et al. 1992), followed by BeppoSAX (Beckmann et al. 2002), RXTE, Swift, XMM-Newton (Tagliaferri et al. 2003; Massaro et al. 2008), and, in later years, by the Nuclear Spectroscopic Telescope Array (NuSTAR) (Pandey et al. 2017). It has shown strong flux variability in the optical, X-ray and TeV energy bands (e.g. Krawczynski et al. 2004; Kapanadze et al. 2016b, 2018a; Kaur et al. 2017; Patel et al. 2018; Wang et al. 2018, 2019; MAGIC Collaboration et al. 2020). High X-ray flaring activity of the source has been reported by Perlman et al. (2005) and Krawczynski et al. (2004). 1ES 1959+650 underwent an unprecedented X-ray flaring activity during August 2015-January 2016, during which it varied by a factor of ∼5.7, with a maximum value above 20 counts/sec, along with high flux activity in the TeV energy band (e.g. Kapanadze et al. 2016b; Kaur et al. 2017; Patel et al. 2018). However, in several multi-wavelength campaigns, orphan flares in γ-rays (not simultaneous with X-ray flares) have been found, e.g. in June 2002 (Krawczynski et al. 2004), and it was found that the orphan γ-ray flare cannot be explained with conventional one-zone SSC models.
They invoked multiple-component SSC models; external Compton models, where variations of the external photon intensity in the jet frame can cause orphan γ-ray flares; and a magnetic field aligned along the jet axis, in which case the observer would not see the synchrotron flare but the electrons would scatter SSC γ-rays in our direction, so that the inverse Compton flare is seen. Bottcher (2005) proposed that orphan flares are difficult to reconcile with the standard leptonic SSC model and suggested that they may originate from relativistic protons interacting with an external photon field supplied by electron synchrotron radiation reflected off a dilute reflector. Chandra et al. (2021) modelled the SED of this blazar using a single zone time dependent SSC model, which could explain the flares successfully. Patel et al. (2018) studied this source during the observation period June-July 2016 and explained its broadband SED using a two zone SSC model, where the inner zone is mainly responsible for producing the synchrotron peak and the high energy γ-ray part, whereas the second zone explains the less variable optical-UV and low energy γ-ray emission. The X-ray spectral index hardens with increasing flux level over the long-term duration (Patel et al. 2018; MAGIC Collaboration et al. 2020) and also during a number of flares (e.g. Wang et al. 2018). Kapanadze et al. (2016b, 2018b) also showed the 'harder-when-brighter' trend in blazar 1ES 1959+650. Recently, Chandra et al. (2021) presented an extensive analysis of 1ES 1959+650 during the period 2016-2017, until February 2021, using the X-ray data from AstroSAT and Swift and found that the synchrotron peak shifts significantly with different flux states. Shah et al. (2021) also used AstroSAT observations to study the anti-correlation between the photon index and the X-ray flux using a broken power law. Around the synchrotron peak, the SED is curved and can be well described by a log parabolic model (e.g. Landau et al.
1986; Massaro et al. 2004; Tramacere et al. 2007; Chen 2014; Gaur et al. 2018; Pandey et al. 2018). It is characterized by the peak energy (E_p), the peak luminosity (L_p) and the curvature parameter β. The log parabolic spectral distribution arises when the acceleration probability is a decreasing function of the electron energy. By analyzing large X-ray observations, Massaro et al. (2004) and Tramacere et al. (2007, 2011) suggested that the observed anti-correlation expected between E_p and β could be used as a signature of a stochastic component in the acceleration process. For the blazar population, an apparent anti-correlation is expected between E_p and L_p (e.g. Tramacere et al. 2007; Massaro et al. 2008; Kapanadze et al. 2016b, 2017, 2018a), which might be associated with changes in the average electron energy, beaming factor or magnetic field (e.g. Tramacere et al. 2009). In the present work, we analyzed Swift observations of 1ES 1959+650 during the period June 2018-December 2020, taken on 125 nights. We also studied two XMM-Newton satellite observations available during this period. Our aim is to study the temporal/spectral variability of this source on timescales from minutes to years, covering different flux states. Studying flux and spectral variability during different flux states is important, as distinct physical processes may play a dominant role in different flux states. Flux and spectral variations of blazars on diverse timescales arise either from purely intrinsic phenomena, such as the interaction of relativistic shocks with particle density or magnetic field irregularities in the jet (e.g. Marscher 2014) or the production of minijets-in-a-jet (e.g. Giannios et al. 2009), or from extrinsic mechanisms. Extrinsic mechanisms involve geometrical effects that result from bending of the jets, either through instabilities (e.g. Pollack et al. 2016) or through orbital motion (e.g. Fromm et al. 2013; Larionov et al.
2020; Valtonen & Wiik 2012). Long term variability is generally attributed to a mixture of intrinsic as well as extrinsic mechanisms, which include shocks propagating down twisted jets or relativistic plasma blobs moving downstream along a helical structure in the magnetized jets (e.g. Marscher et al. 2008). Spectral energy distributions of blazars are explained by leptonic and hadronic models (Böttcher et al. 2013), and flux and spectral variability can be explained by merely changing the SED parameters while adopting a common set of physical mechanisms (e.g. Patel et al. 2018; Prince et al. 2019; Ghosal et al. 2022). We fit the spectra using the log parabolic model and derived the spectral parameters, i.e. E_p, L_p and β, which vary with the different flux states of the source. In this work, we analyzed the correlations between these spectral parameters during different flux states of the source, which could provide tight observational constraints upon the acceleration and injection processes of the emitting electrons. The paper is structured as follows. Section 2 describes the observation and data analysis procedures for the Swift and XMM-Newton satellites. Section 3 provides the analysis techniques used to quantify variability, variability timescales and power spectral density. Section 4 provides the spectral analysis and the various models used in the studies. In section 5, we describe the results and their interpretation. Results are discussed and summarized in section 6.

2. DATA REDUCTION AND ANALYSIS

2.1. SWIFT

The Neil Gehrels Swift observatory is a multi-wavelength facility equipped with the X-ray Telescope (XRT), the Burst Alert Telescope (BAT) and the Ultraviolet/Optical Telescope (UVOT) (Gehrels et al. 2004). We retrieved the Swift-XRT (Burrows et al. 2005) data from the publicly available HEASARC data archive. The data reduction is performed with the XRT Data Analysis Software (v.3.6.0), which is a part of the HEASoft package (v.6.28).
Level 1 unscreened event files were reduced, calibrated and cleaned with the use of the XRTPIPELINE script (version 0.13.5). The latest calibration files of the Swift CALDB are used with remote access 3. All the observations are in Windowed Timing (WT) mode. Events are selected with the standard filtering criterion of grades 0-2 for WT observations. Sources whose centre pixels lie inside a two pixel radius of bad pixels are not used in the analysis. The source region is extracted with a circular region of 20 pixel radius. In Windowed Timing mode the source appears in the middle of the 1-D image, so the background can be selected in regions on either side of the source. XRTPRODUCTS is used to obtain the light curves and spectra of the source and background. The obtained source light curve is corrected for the resultant loss of effective area, bad/hot pixels and vignetting with the use of the XRTLCCORR task. The ancillary response files (ARFs) were created using the XRTMKARF task. Source spectra were binned to ensure at least 20 counts per bin in order to use the χ^2 fitting method.

2.2. XMM-Newton

Blazar 1ES 1959+650 was observed on 5th July 2019 and 16th July 2020 by the XMM-Newton satellite (Jansen et al. 2001). We used the European Photon Imaging Camera (EPIC) pn instrument data. EPIC-pn is the most sensitive camera and is less affected by photon pile-up effects (Strüder et al. 2001). Data reduction is performed with the use of the XMM-Newton Science Analysis System (SAS) for the light curve (LC) extraction. We extracted the high energy (10 keV < E < 12 keV) light curve for the full frame of the exposed CCD in order to identify flaring particle background. We restrict our analysis to the 0.3-10 keV energy range, as data below 0.3 keV are markedly contaminated by noise events and data above 10 keV are usually dominated by background flares. The source region is extracted using a circle of 40 arcsec radius centered on the source.
The background light curve is obtained from a region corresponding to a circular annulus centered on the source, with inner and outer radii of 50 arcsec and 60 arcsec, respectively. The pile-up effect is examined using the SAS task EPATPLOT. We found pile-up in our observations. In order to remove it, the central 7.5 arcsec radius region is excluded while extracting the LC. Source LCs are obtained for the 0.3-10 keV energy band (corrected for background flux and given in units of counts s^-1), sampled with a fixed bin size of 0.5 ks.

3 https://heasarc.gsfc.nasa.gov/FTP/caldb

3. ANALYSIS TECHNIQUES

3.1. Excess Variance

Excess variance (σ^2_XS) is a measure of the source-intrinsic variance (Edelson et al. 2002; Vaughan et al. 2003a), evaluated by subtracting the variance arising from measurement errors (σ^2_err) from the total variance of the observed LC (S^2). If an LC consists of N measured flux values x_i with corresponding finite uncertainties σ_err,i arising from measurement errors, the normalised excess variance (σ^2_NXS) is calculated as

    σ^2_NXS = (S^2 − σ̄^2_err) / x̄^2 ,    (1)

where x̄ is the arithmetic mean of x_i, σ̄^2_err = (1/N) Σ_i σ^2_err,i is the mean square error, and S^2 is the sample variance of the LC, given by

    S^2 = [1/(N−1)] Σ_i (x_i − x̄)^2 .    (2)

The fractional rms variability amplitude F_var (Edelson et al. 1990; Rodríguez-Pascual et al. 1997), which is the square root of σ^2_NXS, is thus

    F_var = sqrt[ (S^2 − σ̄^2_err) / x̄^2 ] .    (3)

The uncertainty on F_var is given by (Vaughan et al. 2003a)

    err(F_var) = sqrt{ [ sqrt(1/(2N)) · σ̄^2_err / (x̄^2 F_var) ]^2 + [ sqrt(σ̄^2_err / N) · (1/x̄) ]^2 } .    (4)

3.2. Hardness Ratio

The hardness ratio (HR) is useful to characterise spectral changes over a broad X-ray energy range (e.g. Park et al. 2006; Sivakoff et al. 2004). The energy ranges 0.3-2 keV and 2-10 keV are used here as the soft and hard bands, respectively.
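Equations 1-4 translate directly into code. The following is a minimal numpy sketch of the F_var computation (an illustration of the formulas, not the analysis pipeline actually used):

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional rms variability amplitude F_var and its uncertainty,
    following eqs. 1-4 (Vaughan et al. 2003)."""
    n = len(flux)
    mean = np.mean(flux)
    s2 = np.var(flux, ddof=1)          # sample variance S^2 (eq. 2)
    mse = np.mean(flux_err ** 2)       # mean square error sigma_err^2
    nxv = (s2 - mse) / mean ** 2       # normalised excess variance (eq. 1)
    fvar = np.sqrt(nxv)                # eq. 3
    # uncertainty on F_var (eq. 4)
    err = np.sqrt((np.sqrt(1.0 / (2 * n)) * mse / (mean ** 2 * fvar)) ** 2
                  + (np.sqrt(mse / n) / mean) ** 2)
    return fvar, err
```

For error-free data the result reduces to the ordinary rms/mean of the light curve, which is a quick sanity check on the implementation.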
The hardness ratio is then defined as

    HR = (H − S) / (H + S) ,    (5)

and the error on HR (σ_HR) is calculated as

    σ_HR = [2 / (H + S)^2] · sqrt( S^2 σ^2_H + H^2 σ^2_S ) ,    (6)

where S and H are the net count rates in the soft (0.3-2 keV) and hard (2-10 keV) bands, respectively, while σ_S and σ_H are their respective errors (e.g. Pandey et al. 2017).

3.3. Doubling/Halving Timescales

The characteristic halving/doubling timescale τ_var, depending on whether the flux decreases or increases, is the shortest flux variability time, calculated from

    F(t_1) = F(t_2) · 2^[(t_1 − t_2)/τ_var] ,    (7)

where F(t_1) and F(t_2) are the fluxes of the LC at times t_1 and t_2, respectively. We consider only timescales for which the difference in flux is significant at the 3σ level (e.g. Foschini et al. 2011; Dhiman et al. 2021).

3.4. Discrete Correlation Function

The discrete correlation function (DCF) is calculated for light curves in two different energy bands; here we use 0.3-2 keV and 2-10 keV. It is used to investigate a correlation between two unevenly sampled time series. For two such discrete data sets a_i and b_j, the unbinned discrete correlation is defined as (Edelson & Krolik 1988)

    UDCF_ij = (a_i − ā)(b_j − b̄) / sqrt[ (σ^2_a − e^2_a)(σ^2_b − e^2_b) ] ,    (8)

for all measured pairs (a_i, b_j) with pairwise lag ΔT_ij = t_j − t_i, where e_a and e_b, ā and b̄, σ_a and σ_b are the measurement errors, means and standard deviations of the data sets a_i and b_j, respectively. DCF(τ) is obtained by averaging the N pairs lying in the range τ − Δτ/2 < ΔT_ij < τ + Δτ/2, with Δτ = 500 s:

    DCF(τ) = (1/N) Σ UDCF_ij .    (9)

The DCF evaluates the cross-correlation and possible time lags between the soft and hard energy bands. The obtained DCF is fitted with a Gaussian function (Edelson & Krolik 1988) of the form

    DCF(τ) = a · exp[ −(τ − τ_lag)^2 / (2σ^2) ] ,    (10)

where τ_lag is the time lag at which the DCF peaks and σ is the width of the Gaussian function (as used in Gaur et al. 2015).

3.5.
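The binned DCF of equations 8-9 can be sketched in a few lines of numpy. This is a minimal illustration on hypothetical data (the sign convention here takes lag = t_b − t_a, so a positive peak lag means the second series lags the first; the Gaussian fit of equation 10 is omitted):

```python
import numpy as np

def dcf(t_a, a, e_a, t_b, b, e_b, lags, dtau=500.0):
    """Discrete correlation function of Edelson & Krolik (1988).
    Lag convention: lag = t_b - t_a, so a peak at positive lag means
    series b lags series a."""
    va = np.var(a) - np.mean(e_a ** 2)   # sigma_a^2 - e_a^2
    vb = np.var(b) - np.mean(e_b ** 2)
    norm = np.sqrt(va * vb)
    # all pairwise lags and unbinned correlations (eq. 8)
    dt = t_b[None, :] - t_a[:, None]
    udcf = (a[:, None] - a.mean()) * (b[None, :] - b.mean()) / norm
    out = np.empty(len(lags))
    for k, tau in enumerate(lags):
        sel = (dt >= tau - dtau / 2) & (dt < tau + dtau / 2)
        out[k] = udcf[sel].mean() if sel.any() else np.nan   # eq. 9
    return out
```

Feeding in two sinusoids with a known offset recovers that offset as the lag of the DCF peak, which is a useful check before applying the estimator to real light curves.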
Power Spectral Density (PSD)

The power spectral density is a measure of the variability power as a function of temporal frequency. It is used to characterize the temporal variations in flux and the underlying noise processes. The periodogram is a tool to find hidden periodicities, including any quasi-periodic oscillations (QPOs); it is defined as the modulus-squared of the Discrete Fourier Transform (DFT) of the data (Vaughan 2005). The obtained power spectrum consists of red noise, which dominates over the measurement error (i.e. the Poisson noise) at lower frequencies, while at higher frequencies white noise dominates the red noise and becomes equivalent to the Poisson noise level of the data. The red noise part of the power spectrum is then fitted with a model of the form P(f) = N f^(−m), where N is the normalisation and m is the power-law spectral index (m > 0) (van der Klis 1989; González-Martín & Vaughan 2012). The equations used in determining the PSDs are provided in the Appendix.

3.6. Spectral Analysis

The XSPEC software package version 12.11.1 is used for spectral fitting. The Galactic absorption n_H is fixed to 1.0 × 10^21 cm^-2 (Lockman & Savage 1995), and the XSPEC routine "cflux" is used to obtain the unabsorbed flux and its error. Massaro et al. (2004, 2008) found that blazar spectra are curved, which arises from log parabolic electron distributions. Therefore, they are well described by the log parabola model (e.g. Tramacere et al. 2007, 2009). We fit each spectrum using models which are defined as follows:

1. The power law model, defined by k E^(−α). It is characterized by the photon index α, redshift z, and normalization k.

2. The log-parabolic model logpar, characterized by the photon index α, curvature β, and normalization k.

3. Another form of the log-parabolic model, the eplogpar model, used to calculate the synchrotron peak E_p.

Details of the equations are provided in the Appendix.
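The periodogram definition and the power-law fit P(f) = N f^(−m) can be sketched as follows (a minimal illustration on evenly sampled data; the rms-squared normalisation and the fitting machinery of the Appendix are omitted):

```python
import numpy as np

def periodogram(flux, dt):
    """Periodogram as the modulus-squared DFT of the (mean-subtracted)
    light curve; returns frequencies and powers, dropping f = 0."""
    dft = np.fft.rfft(flux - np.mean(flux))
    freq = np.fft.rfftfreq(len(flux), d=dt)
    return freq[1:], np.abs(dft[1:]) ** 2

def psd_slope(freq, power):
    """Least-squares estimate of m in P(f) = N f^-m via a log-log fit."""
    m, _ = np.polyfit(np.log10(freq), np.log10(power), 1)
    return -m
```

A sinusoidal light curve yields a periodogram peak at the input frequency, and a synthetic f^-2 spectrum returns m = 2, confirming both pieces behave as the text describes.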
4. RESULTS

We studied 125 archived observations of the Swift satellite of the TeV blazar 1ES 1959+650 during the period June 2018-December 2020. We also analyzed two publicly archived XMM-Newton observations of this blazar, taken on 5th July 2019 and 16th July 2020. These observations are used to study the flux and spectral variability of this blazar on intra-day as well as long term timescales.

4.1. Intraday flux and spectral variability

Blazar 1ES 1959+650 is studied with the XMM-Newton observations held on 5th July 2019 and 16th July 2020. The obtained light curves are shown in figure 1. Flux variability is quantified using the excess variance, and the variability amplitude is calculated to be 1.95 and 3.12%, respectively. We also analyzed Swift-XRT observations of this source during the period June 2018-December 2020, in a total of 125 nights. A sample of the light curves is shown in figure 2. We found significant flux variability (i.e. F_var > 3σ) in five of these observations, with amplitudes varying between 4.11-7.34%, which are presented in Table 1. In order to investigate the spectral variability of these observations, HR analysis is performed; the results are presented in Table 5. The HR analysis yields no spectral variability for any individual light curve, which can be seen from the plots of HR versus counts/s (0.3-10 keV) shown in figure 2.

4.2. Variability Timescale

The doubling/halving timescale τ_var defined in equation 7 is used to calculate the shortest variability timescale, which is found to be t_var = 15.28 ks. The emission region size is constrained with the relation

    R ≤ [δ / (1 + z)] c τ_var ,    (11)

which, with τ_var = 15.27 ks and Doppler factor δ = 15 (Patel et al. 2018), gives R ≤ 6.56 × 10^15 cm, consistent with values found in previous studies (MAGIC Collaboration et al. 2020; Shah et al. 2021).

4.3.
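The size constraint of equation 11 is simple arithmetic; plugging in the quoted numbers reproduces the value in the text:

```python
# Emission-region size from equation 11, using the values quoted in
# the text (tau_var = 15.27 ks, delta = 15, z = 0.048).
c = 2.998e10                                # speed of light in cm/s
delta, z, tau_var = 15.0, 0.048, 15.27e3    # Doppler factor, redshift, lag (s)

R = delta / (1.0 + z) * c * tau_var
print(f"R <= {R:.2e} cm")                   # ~6.55e15 cm, matching the quoted 6.56e15 cm
```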
Cross correlated variability and power spectral density

Cross correlation studies are performed using the XMM-Newton data, as these are long observations with high cadence. The cross-correlation between the soft band (0.3-2 keV) and the hard band (2-10 keV) is performed using the DCF, and the corresponding DCF plots are shown in the right panel of figure 1. The DCF plots are fitted with a Gaussian function (as described in section 3.4), and we obtained time lags of -0.94 and 0.36 ks for the observations performed on 5th July 2019 and 16th July 2020, respectively. The results of the DCF are provided in Table 2. A significant correlation at positive/negative lags means that the soft/hard variations are leading the variations in the hard/soft band, respectively. During the observation performed on 5th July 2019, the lag is negative, i.e. the variations in the 2-10 keV band are leading those in the 0.3-2 keV band; in this case, the variations at lower energies are slower than those at higher energies. The reverse situation is observed during the second observation, where we found a hard lag, i.e. the 2-10 keV band lags behind the soft 0.3-2 keV band.

The X-ray emission of HSPs lies at the top of the synchrotron hump and is characterised by energy dependent acceleration and cooling mechanisms. Following Zhang et al. (2002), the acceleration timescale t_acc and the cooling timescale t_cool of the relativistic electrons in the observed frame can be expressed as functions of the observed photon energy E (in keV):

    t_acc(E) = 9.65 × 10^-2 (1 + z)^(3/2) ξ B^(-3/2) δ^(-3/2) E^(1/2) s ,    (12)

    t_cool(E) = 3.04 × 10^3 (1 + z)^(1/2) B^(-3/2) δ^(-1/2) E^(-1/2) s ,    (13)

where z is the source redshift, B is the magnetic field in Gauss, δ is the Doppler factor of the emitting region, and ξ is a parameter describing how fast the electrons can be accelerated. Note that t_acc and t_cool depend on the photon energy in opposite ways: higher energy electrons cool faster than lower energy electrons, but are accelerated more slowly. Therefore, if t_cool is greater than t_acc, the cooling process dominates (Kirk et al. 1998). In such a case, variations at higher photon energies will lead those at lower energies, and a soft lag is expected:

    τ_soft = t_cool(E_l) − t_cool(E_h) .    (14)
The higher energy electrons cool faster as compared to lower energy electrons but accelerated slower than lower energy electrons. Therefore, if t cool is greater than t acc , cooling process dominates (Kirk et al. 1998). In such a case, higher energy photons will lead to lower energy photons and soft lag is expected. τ soft = t cool (E l ) − t cool (E h ),(14) If t acc is comparable to t cool in the observed energy range, acceleration processes dominates in the emitting region and hard lag is expected. The time lag in an acceleration dominated system is expressed as: Using the observed lags, one can discern the physical parameters of the emitting region as follows: τ hard = t acc (E h ) − t acc (E l ),(15)Bδξ −2/3 = 0.21×(1+z)E 1/3 h 1 − (E l /E h ) 1/2 τ hard 2/3 Gauss,(16)Bδ 1/3 = 209.91× 1 + z E l 1/3 1 − (E l /E h ) 1/2 τ soft 2/3 Gauss. (17) where τ hard and τ soft refer to the observed hard and soft lags (in second) between the low E l and high E h energy bands (in keV), respectively, (E l and E h are logarithmic averaged energies of the given energy bands). Equation 17 is used to calculate the magnetic field with redshift z=0.048, τ soft =940 sec and Doppler factor δ=15 (Patel et al. 2018). Magnetic field, B of the emitting region is found to be 0.64±0.05 Gauss which is found to be consistent with the values provided in the literature. In period 1, source exhibit two flares with flux reaches upto 128.82 × 10 −11 erg cm −2 s −1 on 2018-09-16. Fractional variability amplitude F var is found to be ∼29.96% and significant spectral variability is also found during this period. Photon index has hardened with increase in flux of the source which varies between 2.17-1.58. Lower values of curvature parameter are found which ranges between 0.84-0.31. Peak energy shows positive correlation with respect to flux of the source and reached upto 2.45 keV. 
During Period 2, the source exhibits small flares, with the mean flux level reaching up to 52.93 × 10^-11 erg cm^-2 s^-1. F_var is found to be 23.12%, and significant spectral variability is also found during this period, as can be seen in Fig 4. The photon index hardens with increasing flux of the source and varies between 2.25-1.56. Similar to Period 1, β has lower values, ranging between 0.82-0.23. During Period 3, the source exhibits a relatively low flux state, with the highest flux reaching up to 61.66 × 10^-11 erg cm^-2 s^-1. F_var is found to be 21.43%, and significant spectral variability is found during this period. The photon index hardens with increasing flux of the source and varies between 2.14-1.34. β has lower values, ranging between 0.99-0.24. E_p is positively correlated with the flux and reaches up to 2.88 keV.

4.5. Log Normality of flux distributions

The nature of the variability over a long term period can be quantified or explained with the flux distribution of the variable source (Uttley et al. 2005). Blazars exhibit a log-normal flux distribution rather than a normal distribution (Uttley et al. 2005; Chakraborty 2020; Shah et al. 2020), which could be an indication of the variability imprint of the accretion disk onto the jet. The flux distribution is expected to be Gaussian for a linear stochastic process, while it is expected to be lognormal for multiplicative processes that originate in the accretion disk (Uttley et al. 2005). The fluctuations of lognormal fluxes are proportional to the flux itself, indicating underlying multiplicative physical processes. The lognormality of blazars on different time scales and in different spectral ranges has been studied many times in the literature (e.g. Kushwaha et al. 2016; Sinha et al. 2017; Chevalier et al. 2019; Bhatta & Dhital 2020). However, the dominance of Doppler boosted jet emission in blazars restricts our understanding of the accretion disc-jet connection. Kushwaha et al.
(2016) performed the first extensive study of the lognormality of blazar flux distributions. Variations on minute/hour timescales, in particular, should be independent of accretion-disc perturbations and favour an origin in instabilities in the relativistic jets (Gaidos et al. 1996; Albert et al. 2007; Narayan & Piran 2012). As blazars have strongly magnetized jets pointing towards us, flux distributions in the minijets-in-a-jet model (Giannios et al. 2009) were studied by Biteau & Giebels (2012), who found that the flux from a single randomly oriented minijet follows a Pareto distribution. The flux integrated over many isotropically oriented minijets leads to an α-stable distribution, which converges to a log-normal distribution when subjected to experimental uncertainties. Therefore, non-linear flux distributions can arise from small Gaussian perturbations, which could explain the lognormal flux distributions of blazars during flaring states. In this scenario, the flux distribution has been found to obey the rms-flux relation (Biteau & Giebels 2012). An alternative interpretation of the non-Gaussian distribution of blazar variability light curves is provided by Sinha et al. (2018): a small perturbation in the acceleration timescale can produce variability in the particle number density that is a linear combination of Gaussian and lognormal processes. The dominant shape of the resultant flux distribution is determined by the relative weight of these processes (Bhatta & Dhital 2020). They also demonstrated that a perturbation in the acceleration timescale leads to a Gaussian distribution of the photon index, whereas a perturbation in the particle cooling rate produces neither of these distributions (Shah et al. 2018, 2020; Khatoon et al. 2020, 2022, and references therein). The flux distribution of 1ES 1959+650 was studied by Patel et al.
(2018) using radio to γ-ray data, and by Bhatta & Dhital (2020) using decade-long Fermi/LAT observations. Duda & Bhatta (2021) used maximum likelihood estimation (MLE) methods to study flux distributions. To investigate the lognormality of the flux distribution in our observations, we fit the histograms of the X-ray data observed by Swift-XRT between June 2018 and December 2020 with Gaussian and lognormal distributions. Figure 6 shows the log-normal and normal flux distributions of this blazar for the total epoch used in this analysis. We used the Anderson-Darling (AD) test (e.g. Anderson & Darling 1952, 1954; Jäntschi & Bolboacȃ 2018; Stephens 1977; D'Agostino & Stephens 1986; Shah et al. 2018) to quantify the nature of the flux distribution of blazar 1ES 1959+650. We find that 1ES 1959+650 follows a log-normal behaviour over the total epoch, with a p-value of 0.20. We also found a significant linear correlation between rms and flux for the total epoch, which indicates that the variability might arise from the minijets-in-a-jet model. However, as recently shown by Scargle (2020), a linear rms-flux relationship can also be obtained from intrinsically additive processes, so this result should be used with caution. We also calculated the photon index distribution for the total epoch and found it to be well fitted by both lognormal and normal distributions. As discussed in Sinha et al. (2018), small temporal fluctuations in the intrinsic timescales of the acceleration region are capable of producing particle distributions with non-Gaussian signatures and significant flux-rms correlations. Therefore, we cannot rule out an acceleration-due-to-shock scenario.

4.6. Relation between spectral parameters

The spectra of blazar 1ES 1959+650 are fitted with log-parabolic and power-law models; the results of both models are presented in Table 3. We used the F-test to compare the fitting results of these two models.
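The AD step above can be sketched as follows. The flux sample here is a deterministic synthetic stand-in (log-fluxes placed on Gaussian quantiles around the observed mean level of ~14.5 counts/s), not the actual Swift-XRT rates:

```python
import numpy as np
from scipy import stats

# Deterministic stand-in sample: fluxes whose logarithms follow the
# Gaussian quantiles exactly, mimicking a lognormal flux distribution
# (mean level ~14.5 counts/s as in the paper; illustrative only).
n = 140
q = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)
flux = np.exp(np.log(14.5) + 0.35 * q)

# Anderson-Darling normality test applied to log(flux): if log(flux)
# is consistent with a Gaussian, the fluxes themselves are lognormal.
res = stats.anderson(np.log(flux), dist='norm')
crit_5pc = res.critical_values[2]     # critical value at 5% significance
print(res.statistic < crit_5pc)       # True: lognormality not rejected
```

scipy's `anderson` returns the test statistic and critical values at fixed significance levels rather than a p-value, so the comparison against the 5 per cent critical value stands in for the p-value quoted in the text.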
We found that all the spectra are well fitted by the log-parabolic model. We then derived the spectral fitting parameters, i.e. the location of the synchrotron peak (E_p) and the peak luminosity (L_p), from the log-parabolic model. The results show the variation in these parameters.

A positive correlation is expected between the photon index α and the curvature β in the first-order Fermi acceleration scenario. This correlation is predicted for the energy-dependent acceleration probability (EDAP) process (e.g. Massaro et al. 2004), in which the probability p_i that a particle with energy γ_i undergoes an acceleration step i, with energy gain ε, is given by p_i = g/γ_i^q, where g and q are positive constants. Thus, as the energy of the particle increases, its probability of acceleration decreases. According to Massaro et al. (2004), a linear relationship is expected between the spectral index s and the curvature r:

s = −r (2/q) log(g/γ_0) − (q − 2)/2.

The synchrotron emission produced by a differential energy spectrum of the form N(γ) ∝ (γ/γ_0)^(−s − r log(γ/γ_0)) is P_S(ν) ∝ (ν/ν_0)^(−(a + b log(ν/ν_0))), with a = (s − 1)/2 and b = r/4 (e.g. Massaro et al. 2004). In our observations we found a weak negative correlation between α and β, which is expected when g > γ_0. This implies the existence of an electron population with a very low initial energy γ_0 in the emission zone. A negative correlation between these quantities was also found for Mrk 421 during parts of the 2015 December to 2018 April observing period. The co-existence of second-order Fermi (stochastic) acceleration could also weaken the α-β correlation. Katarzyński et al. (2006) have shown via simulations that electrons can be accelerated at the shock front via EDAP but can gain energy via the stochastic mechanism after escaping the shock front. The combined effect of both processes could result in a weak or absent correlation between α and β (Kapanadze et al. 2016b; Kapanadze 2018).
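As a quick consistency check, the synchrotron peak of the log-parabolic model, E_p = E_1 10^((2−α)/(2β)) with E_1 = 1 keV, can be evaluated with the XMM-Newton fit parameters quoted in Table 3 (small offsets from the tabulated values come from rounding of α and β):

```python
# Synchrotron peak energy of the log-parabolic model,
# E_p = E_1 * 10**((2 - alpha) / (2 * beta)), with E_1 = 1 keV.
def peak_energy(alpha, beta, e1=1.0):
    return e1 * 10 ** ((2 - alpha) / (2 * beta))

ep_2019 = peak_energy(2.06, 0.22)   # 2019-07-05 fit -> ~0.73 keV (Table 3: 0.74)
ep_2020 = peak_energy(1.89, 0.24)   # 2020-07-16 fit -> ~1.70 keV (Table 3: 1.66)
print(round(ep_2019, 2), round(ep_2020, 2))
```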
The synchrotron peak energy (E_p) and peak luminosity (L_p) of a source follow a power-law relation of the form L_p ∝ E_p^a (e.g. Rybicki & Lightman 1979). If the electron distribution in the emitting region follows a log-parabolic distribution, the peak luminosity is L_p ∝ N γ_p² B² δ⁴ and the peak energy follows E_p ∝ γ_p² B δ (e.g. Tramacere et al. 2009). Here γ_p is the peak of n(γ)γ, where γ is the electron Lorentz factor; N ∼ n(γ_p)γ_p is the total electron number, B is the magnetic field and δ is the Doppler beaming factor. If a = 1, the spectral changes are mainly caused by variations of the average electron energy with the total electron number remaining constant; if a = 1.5, they are mainly caused by variations of the average electron energy with the total electron number also changing; if a = 2, they are correlated with changes of the magnetic field B; and if a = 4, they might be dominated by variations of the beaming factor δ. We found significant correlations between E_p and L_p during our observations. We fit our data with the equation log L_p = a log E_p + b and found a < 1 over the whole observational period. During Period 1, a ∼ 1, which suggests that the spectral changes are caused by variations of the average electron energy.

The correlation between E_p and β provides clues about the acceleration mechanism (e.g. Massaro et al. 2004; Tramacere et al. 2011), i.e. whether it is statistical or stochastic. Both mechanisms produce a log-parabolic electron distribution and hence a log-parabolic SED. In the statistical acceleration process, the electron energy distribution follows the log-parabolic law and the acceleration efficiency of the emitting electrons is inversely proportional to their energy (Massaro et al. 2004). In this process, E_p and β follow a correlation of the form log E_p ≈ Const.
+ 2/(5β), under the assumption β = r/4, where r is the curvature of the electron energy distribution (Chen 2014). For fluctuations of the fractional acceleration gain, the electron energies follow a log-normal law, with the energy-gain fluctuations being a random variable around the systematic energy gain (Tramacere et al. 2011). In that case, E_p and β follow log E_p ≈ Const. + 3/(10β), again with β = r/4 (Chen 2014). A further scenario is the stochastic acceleration process, in which a momentum-diffusion term is included in the kinetic equation, leading to energy-gain fluctuations in the diffusive shock acceleration process (Tramacere et al. 2011). In this process, E_p and β follow log E_p ≈ Const. + 1/(2β), where β = r/4 (Chen 2014). The theoretically expected values of C are therefore 10/3, 5/2 and 2 for the fractional-acceleration-gain-fluctuation, energy-dependent-acceleration-probability and stochastic acceleration processes, respectively. We found a significant negative correlation between E_p and 1/β in our observations. We fit our data with the equation 1/β = C log E_p + D but did not obtain a value of C close to any of the above values; hence we cannot explain our observations with any single one of these acceleration mechanisms. However, the co-existence of the above mechanisms is possible (Wang et al. 2019), which could lead to an overall weakening of the correlation. A positive correlation between flux and E_p is found during our observations, indicating a shift of the synchrotron peak to higher energies. We also found a significant correlation between flux and S_p, as expected, since the peak height increases as the flux increases (Holder et al. 2003; Kapanadze et al. 2016a,b, 2018a; Wang et al. 2019; Chandra et al. 2021). Near the peak energy of the emission, the cooling timescale shortens and can compete with the acceleration timescales (Tramacere et al.
2009), which leads to an anti-correlation between E_p and β if the cooling timescale is shorter than that of EDAP or stochastic acceleration. We found a significant correlation between flux and α, indicating that the spectrum hardens as the flux increases, which is very common for HSP-type blazars. It indicates either that the hard X-rays vary more rapidly than the soft X-rays (Cui 2004; Giebels et al. 2002; Tagliaferri et al. 2003; Holder et al. 2003), or that fresh electrons are injected with an energy distribution harder than that of the previously cooled electrons (Mastichiadis & Kirk 2002).

DISCUSSION

We observed the blazar 1ES 1959+650 during June 2018 to December 2020, when the source showed different (high/low) flux states, to study its flux and spectral variability on intra-day and long-term timescales with the Swift satellite. We searched for intra-day flux variability on 125 nights in total and found significant variability on only five of them; we did not find any spectral variability during these observations. The source was also studied with two XMM-Newton observations, taken on 5 July 2019 and 16 July 2020, both of which show significant flux variability with low amplitudes of 1.95 and 3.12 per cent, respectively. The observations favour the shock-in-jet model, in which IDV is triggered by the interaction of the shock front with jet inhomogeneities, turbulence behind a shock front, or the smallest-scale turbulent jet structures (e.g. Marscher & Gear 1985; Wagner & Witzel 1995; Sokolov et al. 2004). The smallest-scale jet structures can produce very rapidly variable emission on light-travel-time grounds. We found a flux-doubling timescale of 15.27 ks (between MJD 58670.07 and 58670.13), which yields an emitting-region size of 6.56 × 10^15 cm and a black hole mass estimate of 2.95 × 10^8 M_⊙, obtained from

M_BH = c³ t_var / [10 G (1 + z)],   (18)

where G is the gravitational constant (e.g. Zhang et al. 2021). Yuan et al.
(2015) reported optical timescales in the range from 23 minutes to 3.72 hr, with Kerr black hole masses in the range (0.42-4.09) × 10^8 M_⊙. Falomo et al. (2003) used the host-galaxy luminosity relation, and Kurtanidze et al. (2009) used variability timescales, to estimate a black hole mass of 3.16 × 10^8 M_⊙.

The two XMM-Newton observations of 1ES 1959+650 suggest the presence of both soft and hard lags, implying that t_acc and t_cool of the emitting region change from epoch to epoch, consistent with previous studies. Hard lags suggest that the flux evolution is dominated by acceleration processes, while soft lags suggest that the emission region is dominated by cooling mechanisms. The first-order Fermi acceleration process (e.g. Kirk et al. 1998, and references therein) and statistical/stochastic processes involving second-order Fermi acceleration (Katarzyński et al. 2006; Becker et al. 2006) are the most widely accepted models for particle acceleration; hard delays are therefore modulated by variations of the acceleration parameter ξ. We estimated the magnetic field to be 0.64 ± 0.05 Gauss using the soft lags, close to the values reported in the literature for this source. PSD analysis of the two XMM-Newton observations is used to characterize the variability on intra-day timescales. Accretion disc models typically produce PSD slopes in the range between −

On long-term timescales, the source exhibits a high state during Period 1, with two flares and the flux reaching up to 34.82 counts/s. During Period 2 the flux decreases to 10.35 counts/s, and during Period 3 the flux is lowest, reaching only 7.43 counts/s. The source showed significant spectral variability throughout our observations. We studied the flux distributions of the source during the different observational periods and found that it follows a log-normal behaviour over the total epoch.
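The size and mass estimates above follow directly from the flux-doubling timescale; a minimal sketch, combining the standard light-travel-time relation R ≤ c δ t_var/(1+z) with Eq. (18):

```python
# Emitting-region size and black hole mass from the flux-doubling
# timescale: R <= c * delta * t_var / (1 + z) and Eq. (18),
# M_BH = c^3 * t_var / (10 * G * (1 + z)).
C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

t_var = 15.27e3    # s, flux-doubling timescale
z, delta = 0.048, 15.0

r_cm = C * delta * t_var / (1 + z) * 100          # region size in cm
m_bh = C ** 3 * t_var / (10 * G * (1 + z)) / M_SUN

print(f"R = {r_cm:.2e} cm, M_BH = {m_bh:.2e} M_sun")
# ~6.6e15 cm and ~3.0e8 solar masses, matching the quoted estimates
```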
The lognormality of Mrk 421 has similarly been studied with multi-wavelength data, with log-normal fits preferred over normal fits for most of the dataset. The flux variability of the source is attributed to the propagation of shocks down the relativistic jet (Sokolov et al. 2004). The formation of these shocks might be related to turbulence or inhomogeneities occurring in the accretion disc (Kushwaha et al. 2016; Sinha et al. 2017; Kushwaha & Pal 2020). However, this is not always the case: many observations, including flaring events, show deviations from a log-normal distribution, indicating that those flares might instead be triggered by the interaction of shocks with jet inhomogeneities, possibly related to jet instabilities (e.g. Marscher 2014).

All the spectra are well described by the log-parabolic model, yielding a spectral curvature ranging between 0.23 and 0.99 and a photon index varying between 1.34 and 2.25. During our observations, the peak energy E_p varies such that it shifts to higher energies as the flux increases. We studied the correlations between the spectral parameters derived from the log-parabolic model. We found a weak negative correlation between α and β, which may be due to the co-existence of stochastic and statistical acceleration processes (Kapanadze et al. 2018b; Kapanadze 2018). The spectral hysteresis analysis of 1ES 1959+650 showed an interplay between the acceleration and cooling timescales of the emitting particles and the flux variability timescale (Kapanadze et al. 2018a,b). The correlation between E_p and L_p also follows a power law, as described in Section 4.6. During Period 1, we found the spectral changes to be caused by variations of the average electron energy. The anti-correlation between E_p and β expected for efficient stochastic acceleration of electrons by magnetic turbulence is not seen in our observations.
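The E_p-L_p slope determination reduces to a straight-line fit in log-log space; a sketch on synthetic points generated with the Period 1 slope a = 0.85 (illustrative values, not the measured data):

```python
import numpy as np

# Least-squares fit of log L_p = a * log E_p + b, used to classify the
# driver of spectral changes (a~1: electron energy; a~2: magnetic
# field; a~4: Doppler factor).  Synthetic points with a = 0.85, the
# slope found for Period 1.
rng = np.random.default_rng(7)
log_ep = np.linspace(np.log10(0.6), np.log10(2.5), 40)   # E_p range in keV
log_lp = 0.85 * log_ep + 0.06 + rng.normal(0, 0.02, log_ep.size)

a, b = np.polyfit(log_ep, log_lp, 1)
print(f"a = {a:.2f}")   # recovers ~0.85 -> electron-energy-driven changes
```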
The weak correlation between these parameters implies the co-existence of stochastic and statistical acceleration processes in the emitting region.

CONCLUSION

The main findings of this work are summarized as follows:

1. Swift-XRT and XMM-Newton EPIC-pn observations have been used to study the HSP 1ES 1959+650 during June 2018 to December 2020, over 127 nights of observations in total. Significant variability is detected on 7 of the nights, with flux variability amplitudes between 1.95 and 3.12 per cent. Hardness-ratio analysis shows no significant spectral variability on any of the nights. The flux-doubling timescale is 15.27 ks and the black hole mass is estimated to be 2.95 × 10^8 M_⊙.

2. Using the XMM-Newton observations, cross-correlations between the soft (0.3-2 keV) and hard (2-10 keV) bands were performed with the DCF method. Both DCF plots show correlated bands; a hard lag of 360 s and a soft lag of 940 s are found, yielding a magnetic field strength of ∼0.64 ± 0.05 Gauss in the jet.

3. PSD analysis of the XMM-Newton observations gives power-law slopes of −2.41 and −2.15, which favour a jet-based model (as discussed in Section 5).

4. The intra-day light curves are well modelled by both normal and lognormal distributions. As suggested by Gaidos et al. (1996) and Narayan & Piran (2012), variations on minute/hour timescales are independent of accretion-disc fluctuations and could be attributed to linear or non-linear perturbations in the physical parameters used to model the relativistic jets of blazars. The most plausible model for the short-term variability is turbulence in the jet behind the reconfinement shock, containing multiple synchrotron-emitting cells of different sizes within a single emitting region, which reproduces the observed light curves, PSDs and polarization variations (e.g. Pollack et al. 2016; Marscher 2014).

5.
The source exhibits log-normal behaviour on long-term timescales. The long-term variations are explained by the superposition of small flares on long-term trends. As blazar jets are highly magnetized, the variability may arise from the minijets-in-a-jet model, since we find a linear rms-flux relation. However, as the photon index distribution is well fitted by both normal and lognormal distributions, we cannot rule out the propagation and evolution of relativistic shocks through the jet as the origin of the X-ray variability. Such shocks could be related to an abrupt increase of the plasma injection rate at the jet base caused by instabilities in the accretion disc.

6. On long timescales, the source showed both high and low flux states. A log-parabolic model is required to describe the X-ray spectra, yielding spectral curvature values between 0.23 and 0.99 and photon indices between 1.34 and 2.25. The position of the synchrotron SED peak E_p varies between 0.46 and 2.88 keV. E_p correlates strongly with flux, implying that E_p increases as the flux increases. Hardness-ratio analysis on long-term timescales indicates that the source follows the 'harder-when-brighter' trend.

7. The source shows a weak correlation between photon index α and curvature β, which could be due to the combined effect of statistical and stochastic processes: electrons can be accelerated at the shock front via EDAP but gain energy via the stochastic mechanism after escaping the shock front (Katarzyński et al. 2006).

8. 1ES 1959+650 shows low spectral curvature (β ∼ 0.23-0.99), and an anti-correlation between E_p and 1/β is found, which might be due to the co-existence of stochastic and statistical acceleration processes.

at N/2 evenly spaced frequencies f_j = j/(N ΔT) (where j = 1, 2, ..., N/2); f_{N/2} = 1/(2ΔT) is the Nyquist frequency f_Nyq, which denotes the maximal frequency that can be meaningfully inferred.
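The periodogram recipe of the appendix (mean-subtracted DFT with the rms-squared normalisation of Eq. 2) can be sketched numerically. The light curve below is synthetic red noise with an input PSD slope of −2.2, an arbitrary choice near the measured −2.41 and −2.15:

```python
import numpy as np

def psd(flux, dt):
    """Periodogram with the rms-squared normalisation of Eq. (2):
    P(f_j) = 2*dt / (N * mean^2) * |DFT(f_j)|^2, after subtracting
    the mean flux to suppress the zero-frequency power."""
    n = flux.size
    dft = np.fft.rfft(flux - flux.mean())
    freqs = np.fft.rfftfreq(n, d=dt)
    power = 2.0 * dt / (n * flux.mean() ** 2) * np.abs(dft) ** 2
    return freqs[1:], power[1:]            # drop the f = 0 bin

# Synthetic red-noise light curve built in the Fourier domain
rng = np.random.default_rng(3)
n, dt = 4096, 100.0
f = np.fft.rfftfreq(n, d=dt)[1:]
amp = f ** (-2.2 / 2)                      # amplitudes for P(f) ~ f^-2.2
phases = rng.uniform(0, 2 * np.pi, f.size)
lc = np.fft.irfft(np.concatenate(([0], amp * np.exp(1j * phases))), n)
lc = 10.0 + lc / lc.std()                  # positive mean flux level

freqs, power = psd(lc, dt)
slope = np.polyfit(np.log10(freqs), np.log10(power), 1)[0]
print(f"PSD slope ~ {slope:.1f}")          # close to the input -2.2
```

Fitting log P against log f with a straight line, as above, is the same slope estimate quoted for the XMM-Newton PSDs.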
To eliminate the zero-frequency power, it is important to subtract the mean flux from the light curve before calculating the DFT (Vaughan et al. 2003b). The PSD is then defined as (Vaughan 2005):

P(f_j) = (2ΔT / (N x̄²)) |DFT(f_j)|²,   (2)

The log-parabolic model is given by

F(E) = K (E/E_1)^(−α − β log(E/E_1)),   (3)

in units of photons cm^−2 s^−1 keV^−1 (e.g. Massaro et al. 2004). E_1 is the reference energy, generally fixed to 1 keV; α is the spectral index at the energy E_1, β is the curvature parameter around the peak, and K is the normalization constant. The location of the synchrotron peak is calculated as

E_p,logpar = E_1 10^((2−α)/(2β)) keV.   (4)

Another form of the log-parabolic model, the eplogpar model, is used to calculate the synchrotron peak E_p. It is defined as

F(E) = K 10^(−β [log(E/E_p)]²) / E²,   (5)

in units of photons cm^−2 s^−1 keV^−1 (e.g. Tramacere et al. 2007, 2009). E_p is the synchrotron peak in keV, β is the same curvature parameter as in the log-parabolic model above, and K is the flux in νF_ν units at the energy E_p.

... using XMM-Newton and RXTE-PCA observations in 2002-2003. Kapanadze et al. (2016b) reported frequent X-ray flares of this source.

arXiv:2305.03246v1 [astro-ph.HE]

Fig. 1. XMM-Newton light curves of blazar 1ES 1959+650 with their corresponding PSD and DCF. The DCF is computed between the 0.3-2 keV (soft) and 2-10 keV (hard) bands; the PSD is computed over 0.3-10 keV.

Fig. 2. Sample of Swift-XRT light curves and the corresponding plots of HR versus counts/s. The hardness ratio (HR) is calculated between the 0.3-2 keV (soft) and 2-10 keV (hard) bands. The complete set of light curves of all Swift observations (139 images) appears in a figure set in the online journal.

Fig. 3. Long-term light curve of blazar 1ES 1959+650 using Swift-XRT observations from June 2018 to December 2020.
Vertical green dashed lines separate the three periods considered for the analysis: Period 1 corresponds to MJD 58301.75-58577.86, Period 2 to MJD 58581.37-58920.97 and Period 3 to MJD 59001.42-59208.88.

Fig. 4. Temporal variations of various spectral parameters of blazar 1ES 1959+650 during the different intervals. Best-fitting parameters are obtained with 2.7σ confidence intervals.

Fig. 5. Correlations between various spectral parameters of blazar 1ES 1959+650.

Fig. 6. Fits of the log-normal (red dashed line) and normal (green dash-dotted line) distributions to the count-rate histogram of the total epoch.

Tagliaferri et al. (2003) and MAGIC Collaboration et al. (2020) performed SED modelling and obtained magnetic field values close to the one we have obtained.

4.4. Long-term flux and spectral variability

We study the long-term flux and spectral variability of blazar 1ES 1959+650, observed between June 2018 and December 2020. Depending on the flux state of the blazar, we have divided the total epoch into three periods, the last of which covers June-December 2020, with mean count rates of 18.28, 14.82 and 10.90 counts/s, respectively. The long-term light curve of the blazar observed during this period is shown in Figure 3, and the light curves for the individual periods are shown in Figure 4. On long-term timescales, the source showed significant spectral variability: the HR, α, β and E_p all show long-term variations, visible in Figure 4 for the different periods. E_p varies in the range 0.46-2.88 keV, whereas L_p varies in the range 0.51-2.50 × 10^45 erg s^−1. The photon index varies between 1.34 and 2.25 and the curvature β between 0.21 and 0.99. Correlations between the various spectral parameters are shown in Figure 5 and are quantified with Spearman's rank correlation coefficient ρ_s and the corresponding p-values. The results are presented in Table 4.
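The correlations in Table 4 are plain Spearman rank tests; a sketch on synthetic, positively correlated flux-E_p points (illustrative values only, standing in for the measured pairs):

```python
import numpy as np
from scipy import stats

# Spearman rank correlation, as used for Table 4: monotonic
# association between flux and peak energy on synthetic data.
rng = np.random.default_rng(11)
flux = rng.uniform(8.0, 35.0, 48)                   # counts/s
ep = 0.6 + 0.05 * flux + rng.normal(0, 0.15, 48)    # keV, noisy trend

rho, pval = stats.spearmanr(flux, ep)
print(f"rho_s = {rho:.2f}, p = {pval:.1e}")   # strong positive correlation
```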
TABLE 1
Observation log of XMM-Newton and Swift-XRT data for blazar 1ES 1959+650.

Satellite (Obs. ID)        Date (yyyy-mm-dd)   Exposure time (s)   F_var (%)
XMM-Newton
  (0850980101)             2019-07-05          42139.94            1.95 ± 0.07
  (0870210101)             2020-07-16          31239.34            3.12 ± 0.09
Swift-XRT
  (00094153007)            2018-06-03          1007.94             -
  (00094153008)            2018-06-10          988.12              1.80 ± 2.03
  (00034588142)            2018-06-10          1002.94             1.39 ± 2.74
  (00034588143)            2018-06-14          1003.10             -

The complete table of all observations appears in the online supplementary material.

TABLE 2
Results of DCF and PSD analysis of 1ES 1959+650.

Observation date   τ_lag (ks)    σ (ks)       m             log(N)
2019-07-05         -0.94±0.10    7.53±0.12    -2.41±0.10    -10.13
2020-07-16          0.36±0.07    5.06±0.08    -2.15±0.03     -8.39

τ_lag and σ are the time lag at which the DCF peaks and the width of the Gaussian fitted to the DCF, respectively; m and N are the slope and normalization constant of the power law fitted to the PSD.

TABLE 3
Best spectral fit parameters for the power-law and log-parabolic models of blazar 1ES 1959+650 from the XMM-Newton and Swift-XRT observations of June 2018 to December 2020. PL: power law; LP: log-parabolic; DoF: degrees of freedom; E_p: peak energy; L_p: peak luminosity.
The complete table of all observations appears in the online supplementary material.

Obs. ID      Model  α                   β                   log10 Flux (erg cm^-2 s^-1)  χ²_red  DoF  F-test    p-value    E_p (keV)   L_p (10^45 erg/s)
XMM-Newton
0850980101   PL     2.13 ± 0.002        -                   -9.16 ± 0.001                41.47   175  -         -          -           -
             LP     2.06 ± 0.002        0.22 ± 0.005        -9.18 ± 0.001                 5.57   174  1129.34   5.37e-78   0.74(0.01)  1.20(0.002)
0870210101   PL     1.99 ± 0.002        -                   -9.27 ± 0.001                29.67   175  -         -          -           -
             LP     1.89 ± 0.003        0.24 ± 0.006        -9.29 ± 0.001                 2.79   174  1687.02   1.83e-91   1.66(0.02)  0.89(0.002)
Swift-XRT
00094153007  PL     2.20 ± 0.021        -                   -9.21 ± 0.006                 1.36   288  -         -          -           -
             LP     2.09 +0.032/-0.033  0.31 +0.068/-0.067  -9.24 ± 0.009                 1.14   287  56.59     6.93e-13   0.71(0.13)  1.12(0.02)
00094153008  PL     2.18 ± 0.022        -                   -9.25 +0.006/-0.007           1.15   280  -         -          -           -
             LP     2.06 +0.034/-0.035  0.36 +0.073/-0.071  -9.29 ± 0.009                 0.88   279  86.25     4.66e-18   0.83(0.12)  1.00(0.02)
00034588142  PL     2.21 ± 0.022        -                   -9.28 ± 0.007                 1.29   265  -         -          -           -
             LP     2.07 +0.035/-0.036  0.44 +0.078/-0.075  -9.32 ± 0.010                 0.90   264  116.88    8.55e-23   0.84(0.10)  0.96(0.02)
00034588143  PL     2.25 +0.026/-0.025  -                   -9.31 ± 0.007                 1.07   241  -         -          -           -
             LP     2.15 +0.037/-0.038  0.32 +0.085/-0.082  -9.34 ± 0.011                 0.89   240  50.52     1.34e-11   0.58(0.15)  0.94(0.03)

TABLE 4
Summary of Swift-XRT observations of 1ES 1959+650 during the different periods. Columns: Total Epoch | Period 1 (MJD 58301.75-58577.86) | Period 2 (MJD 58581.37-58920.97) | Period 3 (MJD 59001.42-59208.88).

Mean counts/s:                       14.51 | 18.28 | 14.82 | 10.90
Maximum flux (counts/s):             34.82 | 34.82 | 23.15 | 16.31
F_var (%):                           33.97 (0.09) | 29.96 (0.14) | 23.12 (0.18) | 21.43 (0.20)
HR (max-min):                        -0.42 to -0.64 | -0.42 to -0.63 | -0.46 to -0.64 | -0.43 to -0.61
α (max-min):                         2.25-1.34 | 2.17-1.58 | 2.25-1.56 | 2.14-1.34
β (max-min):                         0.99-0.21 | 0.84-0.31 | 0.82-0.23 | 0.99-0.24
E_p (keV) (max-min):                 2.88-0.46 | 2.45-0.62 | 2.28-0.46 | 2.88-0.67
Slope / intercept (E_p vs. L_p):     0.47(0.08) / -0.08(0.04) | 0.85(0.08) / 0.06(0.04) | 0.38(0.13) / -0.06(0.06) | 0.45(0.08) / -0.41(0.04)
Slope / intercept (E_p vs. 1/β):     -0.56(0.16) / 2.35(0.07) | -0.34(0.20) / 2.03(0.08) | -0.90(0.41) / 2.50(0.16) | -0.41(0.30) / 2.50(0.14)

Spearman's correlation coefficient ρ_s (p-value):
soft vs. hard counts/s:              0.90 (7.13e-53) | 0.97 (7.95e-28) | 0.94 (1.79e-18) | 0.95 (3.50e-24)
Flux(0.3-10 keV) vs. HR:             0.41 (5.53e-07) | 0.86 (1.04e-14) | 0.56 (2.98e-04) | 0.64 (8.59e-07)
Flux(0.3-10 keV) vs. α:              -0.38 (3.08e-06) | -0.73 (6.60e-09) | -0.40 (1.35e-02) | -0.52 (1.51e-04)
Flux(0.3-10 keV) vs. E_p:            0.39 (1.51e-06) | 0.79 (5.30e-11) | 0.44 (6.14e-03) | 0.60 (6.10e-06)
α vs. β:                             -0.48 (2.00e-09) | -0.41 (4.18e-03) | -0.59 (1.05e-04) | -0.48 (6.38e-04)
E_p vs. α:                           -0.97 (1.73e-87) | -0.99 (6.74e-37) | -0.95 (3.22e-19) | -0.96 (2.15e-27)
E_p vs. 1/β:                         -0.31 (2.04e-04) | -0.30 (3.84e-02) | -0.37 (2.36e-02) | -0.26 (7.09e-02)
E_p vs. L_p:                         0.35 (2.63e-05) | 0.79 (3.08e-11) | 0.41 (1.04e-02) | 0.62 (2.45e-06)

TABLE 5
Results of the χ² test for the hardness-ratio analysis.

Obs. ID       Obs. date     DoF   χ²     χ²_0.90
XMM-Newton
0850980101    2019-07-05    81    53.8   97.7
0870210101    2020-07-16    62    37.8   76.6
Swift-XRT
00094153010   2018-07-02    16    3.4    23.5
00095332009   2019-06-17    33    12.4   43.7
00034588209   2020-06-21    21    7.0    29.6
00013906004   2020-12-10    30    5.1    40.2

The complete table of all observations appears in the online supplementary material.

1.30 and −2.10 (Zhang & Bao 1991; Mangalam & Wiita 1993; Kelly et al. 2011), while jet-based models yield steeper slopes in the range −1.70 to −2.90 (e.g. Calafut & Wiita 2015; Pollack et al. 2016; Wehrle et al. 2019). The observed PSD slopes of −2.41 and −2.15 for the intra-day light curves are steeper than those predicted by accretion-disc models and are more consistent with jet-based models. However, the small number of PSDs used here provides only a tentative hint favouring fluctuations originating in the jets.
ACKNOWLEDGEMENTS

We would like to thank the anonymous reviewer for the constructive comments that helped us to improve the paper scientifically. We acknowledge the use of public data from the Swift data archive. This research is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. KAW and HG acknowledge financial support from the Department of Science and Technology, India, through INSPIRE Faculty award IFA17-PH197 at ARIES, Nainital.

APPENDIX

For an evenly sampled light curve (with a sampling period ΔT) comprising a series of fluxes x_i measured at discrete times t_i (i = 1, 2, ..., N):

REFERENCES

Abdo, A. A., Ackermann, M., Agudo, I., et al. 2010, ApJ, 716, 30, doi: 10.1088/0004-637X/716/1/30
Albert, J., Aliu, E., Anderhub, H., et al. 2007, ApJ, 669, 862, doi: 10.1086/521382
Anderson, T. W., & Darling, D. A. 1952, The Annals of Mathematical Statistics, 23, 193, doi: 10.1214/aoms/1177729437
Anderson, T. W., & Darling, D. A. 1954, Journal of the American Statistical Association, 49, 765, doi: 10.1080/01621459.1954.10501232
Becker, P. A., Le, T., & Dermer, C. D. 2006, ApJ, 647, 539, doi: 10.1086/505319
Beckmann, V., Wolter, A., Celotti, A., et al. 2002, A&A, 383, 410, doi: 10.1051/0004-6361:20011752
Bhatta, G., & Dhital, N. 2020, ApJ, 891, 120, doi: 10.3847/1538-4357/ab7455
Biteau, J., & Giebels, B. 2012, A&A, 548, A123, doi: 10.1051/0004-6361/201220056
Maximally-stable Local Optima in Random Graphs and Spin Glasses: Phase Transitions and Universality

Yatin Dandi (Statistical Physics of Computation Laboratory, École Polytechnique Fédérale de Lausanne (EPFL))
David Gamarnik (Operations Research Center, Sloan School of Management, MIT, Cambridge, MA 02139)
Lenka Zdeborová (Statistical Physics of Computation Laboratory, École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

We provide a unified analysis of stable local optima of Ising spins with Hamiltonians having pairwise interactions and partitions in random weighted graphs where a large number of vertices possess sufficient single-spin-flip stability. For graphs, we consider partitions on random graphs where almost all vertices possess sufficient, appropriately defined friendliness/unfriendliness. For spin glasses, we characterize approximate local optima having almost all local magnetic fields of sufficiently large magnitude. For n nodes, as n → ∞, we prove that the maximum number of vertices possessing such stability undergoes a phase transition from n − o(n) to n − Θ(n) around a certain value of the stability, proving a conjecture from Behrens et al. [2022]. Through a universality argument, we further prove that such a phase transition occurs around the same value of the stability for different choices of interactions, specifically ferromagnetic and anti-ferromagnetic, for sparse graphs, as n → ∞ in the large-degree limit. Furthermore, we show that after appropriate re-scaling, the same value of the threshold characterises such a phase transition for the case of fully connected spin-glass models. Our results also allow the characterization of possible energy values of maximally stable approximate local optima. Our work extends and proves seminal results in statistical physics related to metastable states, in particular, the work of Bray and Moore [1981].

arXiv:2305.03591v1 [math.PR] 5 May 2023

References

The mpmath development team. mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 1.3.0), 2023. http://mpmath.org/.
Subhabrata Sen. Optimization on sparse random hypergraphs and spin glasses. Random Structures & Algorithms, 53(3):504-536, 2018.
David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1792, 1975.
F. Tanaka and S. F. Edwards. Analytic theory of the ground state properties of a spin glass. I. Ising spin glass. 1980.
Lenka Zdeborová and Stefan Boettcher. A conjecture on the maximum cut and bisection width in random regular graphs. Journal of Statistical Mechanics: Theory and Experiment, 2010(02):P02020, 2010.
Introduction

In combinatorics and theoretical computer science, the properties of partitions of vertices of random graphs have been analyzed in the context of the well-known NP-hard problems of maximum cut and minimum bisection [Gamarnik and Li, 2018, Dembo et al., 2017, Alaoui et al., 2021]. In statistical physics, such partitions arise naturally due to the association of the vertices with spin values in {−1, 1}. This leads to equivalences between notions in different disciplines, such as maximum cuts and minimum bisections of a weighted graph corresponding to minimum-energy configurations in Ising models and spin glasses. In this work, we consider the property of a partition being h-assortative/disassortative, as recently studied in [Behrens et al., 2022], i.e. partitions such that, for each vertex, the difference between the number of neighbours within the same partition and in the opposite partition exceeds/does not exceed a given threshold. This property generalizes the related notions of friendly and unfriendly partitions [Ferber et al., 2021] and satisfactory/co-satisfactory graph partitions [Bazgan et al., 2010], which only require the differences to be non-negative or non-positive, respectively.
Similar notions of assortative/disassortative partitions have been considered in the context of game theory as alliances [Kristiansen et al., 2004], in contagion as cohesive subsets [Morris, 2000], as local minimum bisections and local maximum cuts in combinatorial optimization [Angel et al., 2017], and as d-cuts in generalizations of the matching-cut problem [Gomes and Sau, 2021].

For the sake of generality, we describe our results for weighted graphs, denoted by G = (V, W), where V is the set of vertices with n = |V| nodes, and W is a symmetric matrix in R^{n×n} representing the edge weights for each possible pair of vertices. Concretely, we associate each pair of vertices (i, j) ∈ [n] × [n] with a weighted edge w_{ij} ∈ R. This includes the case of sparse graphs, with w_{ij} = 0 representing the absence of an edge. We interchangeably refer to partitions of the graph as configurations, obtained by an assignment of spins to each vertex: σ ∈ {+1, −1}^n. We denote the subsets of vertices having positive and negative spins by V_+ and V_−, respectively. We define bisections as the partitions/configurations satisfying |V_+| = |V_−| when n = |V| is even and |V_+| − |V_−| = ±1 for odd n. We denote by d the average degree of the graph.

We associate each partition (V_+, V_−) of a graph with a measure of single-spin-flip stability for its vertices. Consider a weighted graph G = (V, W) with n nodes and weights w_{ij} associated with edges (i, j) ∈ E, indexed by a parameter d; here d will usually stand for the average degree of the graph. We define the following single-spin-flip stability function for each vertex v ∈ [n]:

$$s_\sigma(v, W) = \frac{1}{\sqrt{d}} \left( \sum_{i:\, \sigma_i = \sigma_v} w_{vi} \;-\; \sum_{i:\, \sigma_i \neq \sigma_v} w_{vi} \right) \tag{1.1}$$

$$s_\sigma(v, W) = \frac{1}{\sqrt{d}}\, \sigma_v \sum_{j} w_{vj} \sigma_j . \tag{1.2}$$

The above function allows us to characterize spins and configurations satisfying a given minimum stability requirement.
We say that a vertex v in a given configuration/partition represented by σ is h-stable if:

$$s_\sigma(v, W) \ge h \tag{1.3}$$

We further define a configuration σ to be h-stable if each of its vertices is h-stable. The term "stability" is derived from physics terminology. For general w_{ij} ∈ R, s_σ(v, W) quantifies the change in the Hamiltonian

$$H_d(\sigma) := -\frac{1}{\sqrt{d}} \sum_{1 \le i < j \le n} \sigma_i \sigma_j w_{ij} \tag{1.4}$$

upon switching the spin of the v-th vertex. In particular, the condition s_σ(v, W) ≥ 0 is equivalent to the local stability of the configuration σ with respect to changes in the value of the v-th spin. s_σ(v, W) can also be interpreted as the product of the v-th spin with the local field $\sum_{i=1}^{n} w_{vi} \sigma_i$. In general, let σ^{(v)} denote the configuration obtained by switching the value of the v-th spin. We have:

$$H_d(\sigma^{(v)}) - H_d(\sigma) = 2\, s_\sigma(v, W) .$$

Configurations σ satisfying s_σ(v, W) ≥ 0 for all v correspond to local optima, alternatively termed "metastable states" of the Hamiltonian. Such states have been extensively studied in physics (e.g. [Bray and Moore, 1981, Tanaka and Edwards, 1980, De Dominicis et al., 1980] for spin glass models). For notational convenience, we further define the following normalized energy function:

$$E(\sigma) = \frac{1}{n} H_d(\sigma) = -\frac{1}{n\sqrt{d}} \sum_{1 \le i < j \le n} \sigma_i \sigma_j w_{ij} . \tag{1.5}$$

The above formulation covers several related notions of interest:

• When the w_{ij} take values in [0, ∞), we obtain the case of ferromagnetic interactions. In particular, suppose that the weights w_{ij} are sampled independently such that w_{ij} ∈ {0, 1} with p(w_{ij} = 1) = d/n. Consider the graph with edges {(i, j) : w_{ij} = 1}. From Equation 1.1, we have that s_σ(v, W) is proportional to the difference between the number of neighbours of v belonging to the same partition as v and those belonging to the opposite partition. Following Ferber et al. [2021], we refer to this difference as the "friendliness" of the v-th vertex. Therefore, s_σ(v, W) measures the friendliness/assortativeness of the v-th vertex in Erdős–Rényi graphs G(n, p = d/n).
• When the random edge weights w_{ij} take values in (−∞, 0], we obtain the case of anti-ferromagnetic interactions. In particular, for w_{ij} ∈ {0, −1} with p(w_{ij} = −1) = d/n, s_σ(v, W) measures the unfriendliness/disassortativeness of the v-th vertex in G(n, p = d/n), defined as the difference between the number of neighbours of v belonging to the opposite partition and those belonging to the same partition.

Summary of the established phase transitions: Our work proves the existence of phase transitions related to the following aspects of single-spin-flip stability for a wide range of measures over the edge weights w_{ij}:

1. The maximum number of h-stable vertices across all bisections at different h: For the class of sparse random graphs in the limit n → ∞ and degree d → ∞, as well as for dense graphs, we prove that this maximum goes from n − o(n) to n − Θ(n) around a value of the threshold h* ≈ 0.3513. This is illustrated in Figure 1.

2. The interdependence between the energies of configurations and the maximum number of h-stable vertices: We further seek to understand how the maximum number of h-stable vertices changes as the energy range of the configurations under consideration is varied. Our results establish the occurrence of phase transitions in the maximal number of h-stable vertices across bisections as one varies the maximum or minimum allowed energy of the configurations. We prove that for a fixed h, the maximum number of h-stable vertices amongst bisections having energy at least E goes from n − o(n) for E < E_max(h) to Θ(n) for E > E_max(h). Furthermore, we show that there exists a range of threshold values [h_cor, h*) for which there is another phase transition at a minimum value of energy E_min: the maximum number of h-stable vertices amongst configurations having energy at most E changes from n − o(n) for E > E_min to n − Θ(n) for E < E_min.
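As a concrete illustration of the quantities behind these transitions, the sketch below (our own minimal example, assuming NumPy; the function names `stability` and `count_h_stable` are ours) samples a ferromagnetic Erdős–Rényi instance, draws a uniformly random partition, and counts its h-stable vertices via Equations 1.1–1.3:

```python
import numpy as np

def stability(sigma, W, d):
    """Per-vertex single-spin-flip stability of Eq. (1.2):
    s_sigma(v, W) = (1 / sqrt(d)) * sigma_v * sum_j w_vj * sigma_j."""
    return sigma * (W @ sigma) / np.sqrt(d)

def count_h_stable(sigma, W, d, h):
    """Number of vertices v with s_sigma(v, W) >= h (Eq. 1.3)."""
    return int(np.sum(stability(sigma, W, d) >= h))

rng = np.random.default_rng(0)
n, d = 2000, 20
# Ferromagnetic Erdos-Renyi weights: w_ij in {0, 1} with p(w_ij = 1) = d / n,
# symmetric with a zero diagonal.
A = np.triu(rng.random((n, n)) < d / n, k=1).astype(float)
W = A + A.T

sigma = rng.choice([-1.0, 1.0], size=n)    # a uniformly random partition
print(count_h_stable(sigma, W, d, h=0.0))  # number of "friendly" vertices
```

For a random partition, roughly half of the vertices are friendly; the phase transitions above concern how close the best partition can get to making all n vertices h-stable.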
Illustrations of the above transitions can be found in Figure 2. The numerical value of E_max at h = 0, E_max(0) ≈ −0.2865, obtained through our analysis matches computations from the seminal work of Bray and Moore [1981] on metastable states in spin glass models. We prove the above phase transitions first for the particular case of sparse anti-ferromagnetic Erdős–Rényi graphs, using the first- and second-moment methods, and subsequently extend the results to other distributions and to dense graphs using a universality-type argument. For h < h_cor, we prove that the maximum number of h-stable vertices amongst configurations having energy at most E is n − o(n) for E > E_cor and is Θ(n) for E < E_min, for some E_max > E_cor > E_min. Our technique is, however, inconclusive for E < E_cor due to the failure of the second-moment method. This is illustrated in Figure 3. Concretely, E_cor is the energy at which the expected number of pairs of h-stable configurations having non-zero overlap dominates the number of orthogonal pairs of h-stable configurations. This corroborates the non-rigorous analysis of Bray and Moore [1981], who reported the energy value −0.6725 as the onset of correlations between local minima in the Sherrington–Kirkpatrick model; this matches the value E_cor(0) obtained through our analysis. Our proofs are conditional on the validity of numerical solutions to certain systems of non-linear equations. All the numerical solutions in the paper are obtained using the mpmath library [mpmath development team, 2023] in Python.

Notation

We use o_n, O_n, o_d, O_d to denote standard asymptotic bounds with respect to the variable in the subscript. We denote by capital letters (e.g. W) the random variables defining the weighted adjacency matrices of the graph, with the corresponding small letters indexed by edges (e.g. w_{ij}) denoting the edge weights.
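The spin-flip identity $H_d(\sigma^{(v)}) - H_d(\sigma) = 2\, s_\sigma(v, W)$ stated in the previous section can be checked numerically. The following is our own sanity-check sketch (assuming NumPy; all function names are ours, not from the paper):

```python
import numpy as np

def hamiltonian(sigma, W, d):
    # H_d(sigma) = -(1 / sqrt(d)) * sum_{i < j} sigma_i sigma_j w_ij  (Eq. 1.4);
    # with W symmetric and zero on the diagonal, the sum over i < j equals
    # sigma^T W sigma / 2.
    return -float(sigma @ W @ sigma) / (2.0 * np.sqrt(d))

def stability(sigma, W, d):
    # s_sigma(v, W) = (1 / sqrt(d)) * sigma_v * sum_j w_vj sigma_j  (Eq. 1.2)
    return sigma * (W @ sigma) / np.sqrt(d)

rng = np.random.default_rng(1)
n, d = 200, 10
A = np.triu(rng.random((n, n)) < d / n, k=1).astype(float)
W = A + A.T
sigma = rng.choice([-1.0, 1.0], size=n)

v = 7
flipped = sigma.copy()
flipped[v] *= -1.0
lhs = hamiltonian(flipped, W, d) - hamiltonian(sigma, W, d)
rhs = 2.0 * stability(sigma, W, d)[v]
print(bool(np.isclose(lhs, rhs)))  # prints True: the spin-flip identity holds
```

The same check passes at every vertex, which is exactly why configurations with all stabilities non-negative are local optima: no single flip can lower the Hamiltonian.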
For simplicity, whenever possible, we suppress the dependence of random variables and parameters of graphs on the degree d and the number of nodes n. For example, given a sequence r_n ∈ R satisfying r_n · n ∈ N and lim_{n→∞} r_n = r, we simply denote r_n by r.

Related work

In recent work, [Behrens et al., 2022] utilized the cavity method from statistical physics to characterize the space of h-stable partitions of sparse regular graphs with ferromagnetic as well as anti-ferromagnetic interactions. They conjectured the existence of a threshold h* such that for h > h* no h-stable partitions exist with high probability, while for h < h* such partitions do exist with high probability as n → ∞. They obtained the numerical value h* ≈ 0.355 + o_d(1). We prove a similar phase transition, with the same numerical value of h*, in the sequential limit n → ∞ and d → ∞ for Erdős–Rényi graphs for nearly h-stable partitions, defined as partitions having n − o_d(1)n − o(n) h-stable vertices. Furthermore, we show that a phase transition occurs not only in the occurrence of nearly h-stable partitions but also in the maximum number of h-stable vertices in any partition. Concretely, we prove that for h > h*, every partition contains Θ_d(1)n − o(n) vertices violating h-stability w.h.p. as n → ∞, while for h < h*, partitions with n(1 − o_d(1)) − o(n) h-stable vertices exist w.h.p. as n → ∞. This is reminiscent of a similar phenomenon in other problems, e.g. MAX-SAT, where the decision and optimization problems simultaneously undergo a phase transition. [Ferber et al., 2021] proved that in Erdős–Rényi graphs drawn from G(n, 1/2), there exist bisections with n − o(n) friendly vertices with high probability as n → ∞. Their result is covered as a special case of our work, corresponding to the h = 0 case of Theorem 3 for dense graphs.
Since for any fixed d, Erdős–Rényi graphs contain Θ(n) vertices having no neighbours w.h.p. as n → ∞, the sublinear o_d(1)n + o(n) terms in the above bounds on the maximum number of h-stable vertices are unavoidable. For regular graphs, h = 0, and the fully connected Sherrington–Kirkpatrick model [Sherrington and Kirkpatrick, 1975], stronger results implying the existence of partitions with all vertices h-stable may be possible to derive. We leave this to future work.

The case h = 0 was considered in [Gamarnik and Li, 2018] for Erdős–Rényi graphs with a fixed number of edges. They aimed to compute tight bounds on the max-cut of sparse random graphs. Earlier work by [Coppersmith et al., 2004] had computed such bounds by considering the expected number of partitions having a given cut size. To obtain a tighter upper bound on the max-cut size, [Gamarnik and Li, 2018] restricted the count of partitions to those satisfying local optimality w.r.t. the max-cut problem. Analogously, to obtain a lower bound, they considered the second moment of the number of partitions having a given cut size. These locally optimal partitions exactly correspond to h-stable partitions for h = 0. This allows us to utilize the techniques in [Gamarnik and Li, 2018] for the computation of first and second moments for arbitrary h. Similarly, [Addario-Berry et al., 2019] considered the case h = 0 for the Sherrington–Kirkpatrick (SK) model to count the number of local minima.

The threshold h* characterizes a sharp phase transition in the optimization problem of maximizing the number of h-stable vertices. Remarkably, the value of the threshold is the same for both the friendly (assortative) and unfriendly (disassortative) cases, as well as for spin glass models. Furthermore, we prove that with appropriate rescaling, the same threshold applies to the local optima of the dense Sherrington–Kirkpatrick model.
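The locally optimal (0-stable) partitions discussed above can be produced by a simple greedy single-spin-flip descent: each flip of a vertex with negative stability strictly lowers $H_d$ by $2|s_\sigma(v, W)|$, so the procedure terminates at a configuration with every vertex 0-stable. A minimal sketch, our own illustration assuming NumPy (not the proof technique of the cited works):

```python
import numpy as np

def greedy_local_opt(W, d, rng):
    """Flip vertices with negative stability until every vertex is 0-stable,
    i.e. until sigma is a single-spin-flip local optimum of H_d."""
    n = W.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=n)
    while True:
        s = sigma * (W @ sigma) / np.sqrt(d)  # per-vertex stability, Eq. (1.2)
        unstable = np.flatnonzero(s < 0)
        if unstable.size == 0:
            return sigma                      # all vertices are 0-stable
        # flipping such a vertex lowers H_d by 2|s_v|; H_d is bounded below
        # on finitely many configurations, so the loop terminates
        sigma[np.min(unstable)] *= -1.0

rng = np.random.default_rng(2)
n, d = 500, 10
A = np.triu(rng.random((n, n)) < d / n, k=1).astype(float)
W = A + A.T                                   # ferromagnetic Erdos-Renyi weights
sigma = greedy_local_opt(W, d, rng)
s = sigma * (W @ sigma) / np.sqrt(d)
print(bool((s >= 0).all()))                   # prints True: a 0-stable partition
```

The same routine applies verbatim to anti-ferromagnetic or Gaussian weights; only the sampling of W changes, which is one way to see why h = 0 local optima arise in all of these models.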
Such an equivalence is analogous to the relation between the values of the max-cut and min-bisection problems, first conjectured in [Zdeborová and Boettcher, 2010] and subsequently proven in [Dembo et al., 2017] for large degrees using the interpolation method. The h-stability constraint in Equation 1.3 can be viewed as an instance of a random constraint satisfaction problem. However, unlike related problems in this class, such as the binary perceptron [Gardner, 1987, Aubin et al., 2019], the h-stability constraint involves interactions between the spins. Furthermore, the symmetric nature of the interactions induces correlations between the different constraints, since the weight w_ij appears in the constraints for both the i-th and the j-th vertices. Under such correlations, the rigorous evaluation of even the first moment is non-trivial and was only recently carried out for the Sherrington-Kirkpatrick model with Gaussian weights in [Addario-Berry et al., 2019]. The value of the threshold h is related to the robustness of metastable states to perturbations. The robustness constraint can be incorporated into the dynamics, as in the case of Hopfield networks. Hopfield networks aim to model associative memory [Hopfield, 1982] through the convergence of configurations to stable planted attractors. For such networks, the maximum value of the threshold on the local field such that metastable states exist was considered in [Treves and Amit, 1988]. When the ratio of the number of stored patterns to the number of spins approaches infinity, they obtained the maximum value of the threshold as h* ≈ 0.3513, which matches the threshold we obtain for spin glasses and random graphs.

Concurrent work: In concurrent work, Minzer, Sah, and Sawhney prove the existence of exactly friendly bisections in G(n, 1/2) with high probability, i.e. bisections with all n vertices friendly.
They also prove a phase transition in the existence and absence of bisections in G(n, 1/2) with all n vertices having friendliness h√d, around a value h* matching the threshold in our results. Their proof relies on a fine-grained application of the second moment method, combining the analysis of Gamarnik [2021] with switching and enumeration techniques, along with the use of isoperimetry in graphs. Based on our universality results, we conjecture that the existence of bisections with all n vertices h-stable for h < h* should also hold for other related models such as the Sherrington-Kirkpatrick (SK) model. Furthermore, we believe that our results on the energy of h-stable configurations (Theorems 4, 5 and 6) should apply to such exactly h-stable configurations.

Organization of the paper

The rest of the paper is organized as follows. In Section 2, we summarize the main results concerning the existence of a threshold h* and its universality across a wide range of measures. In Section 3, we prove the occurrence of a sharp transition in the maximum number of h-stable vertices in a partition around the value h = h*, where h* is defined as a root of the non-linear Equation 3.9. Our proof relies on the second-moment method combined with concentration arguments. For the computation of the first and second moments up to sub-leading exponential terms, we borrow the notation and several results from [Gamarnik and Li, 2018], who computed these when h = 0 and all the vertices are required to be locally stable. Our result includes the computation of the first moment of the number of partitions with at least a constant fraction of the vertices h-stable. This requires generalizations of the large deviation results in [Gamarnik and Li, 2018]. For the second moment, the expression is obtained directly through appropriate modifications of the proof in [Gamarnik and Li, 2018].
Therefore, we only provide a list of the required modifications in Section B.2. In Section 3.6, we utilize the first and second-moment computations to prove the existence of partitions having n − o(n) h-stable vertices with high probability. This is not a direct consequence of the Paley-Zygmund inequality and requires proving the concentration of certain auxiliary functions, along with a technique of perturbing the threshold. In Section 4, we utilize similar techniques to prove the phase transitions in the maximum number of h-stable vertices amongst configurations satisfying certain energy constraints. In Section 5, we prove the universality of the threshold for sparse graphs through an application of Lindeberg's method to carefully chosen functions of the edge weights in different regimes of h. In Section 6, we extend this result to dense graphs. In Section 7, we prove analogous universality results for the transitions obtained upon the imposition of energy constraints. Finally, we conclude in Section 8 with some open directions.

Main Results

In this section, we summarize and present our main theorems. Our main results concern the existence of a threshold h* for different choices of weighted random graphs defined on n nodes and indexed by a parameter d, such that the maximum number of h-stable vertices in any partition undergoes a sharp phase transition as n → ∞. We first define such a threshold for sparse graphs indexed by the corresponding degree parameter d below:

Definition 1. We say that a value h* > 0 is a maximal stability threshold for a family of random variables W^(d,n) denoting weighted graphs indexed by the number of nodes n and a parameter d if, as n → ∞, the following hold: 1. For any h > h*, there exist an ε(h) with 0 < ε(h) < 1 and a d(h), such that for d > d(h), with high probability as n → ∞, all partitions in W^(d,n) have at least ε(h)n vertices violating h-stability. 2.
For any h < h*, and any ε > 0, there exists a d(h, ε), such that for all d > d(h, ε), with high probability as n → ∞, there exists a bisection in W^(d,n) with at most εn vertices violating h-stability.

The above definition states that for h > h*, w.h.p. as n → ∞, every partition has at least Θ_d(1)n − o(n) vertices violating h-stability, while for h < h*, there exist partitions with (1 − o_d(1))n − o(n) h-stable vertices. When such a threshold exists while restricting to bisections, we term it a "maximal stability bisection threshold". Note that h-stable partitions trivially exist for ferromagnetic interactions if all vertices belong to one of the groups. This is analogous to the distinction between max-cut and min-bisection, where the problem of minimizing the cut size is non-trivial only upon imposing limitations on the imbalance between the partitions. In such cases, we restrict ourselves to the set of bisections, i.e. configurations having magnetization 0. We refer to an h* satisfying the above conditions restricted to the set of bisections as a maximal bisection threshold for W^(d,n). To state our results, we define the following function:

w(h) = sup_x (−x² + log(1 + erf(3x − h/√2))).   (2.1)

We observe that the above function is decreasing in h and therefore possesses at most one root. It is plotted in Figure 4. Our first result proves the existence of such a threshold for anti-ferromagnetic sparse Erdős-Rényi graphs.

Theorem 1. (Threshold for sparse anti-ferromagnets) Let h* denote the unique root of the function w(h). Then for sparse Erdős-Rényi graphs G(n, p = d/n) with average degree d, corresponding to weight matrices with independent edge weights w_ij ∈ {0, 1} with p(w_ij = 1) = d/n, h* is a maximal stability threshold according to Definition 1.

The above result also applies to the other models of Erdős-Rényi graphs defined in Section 3.1. Our proof relies on first and second-moment computations up to sub-leading exponential terms.
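To make the threshold concrete, here is a small numerical sketch of how it can be located: maximize the bracketed expression over x on a dense grid, then bisect on h using the fact that w is decreasing. Note that the erf argument used below, erf(x − h/√2), is an assumption on our part: the argument displayed in Equation 2.1 appears to have lost formatting in extraction, and this variant is the one that reproduces both values quoted later in the paper, w(0) ≈ 0.1992 and the root h* ≈ 0.3513.

```python
import math

def w(h, x_lo=-1.0, x_hi=2.0, n_grid=4001):
    # First-moment entropy density: sup over x of -x^2 + log(1 + erf(x - h/sqrt(2))),
    # evaluated by dense grid search.  NOTE: the erf argument is our assumption
    # (the displayed Eq. 2.1 appears garbled); it reproduces w(0) ~ 0.1992.
    shift = h / math.sqrt(2.0)
    best = -float("inf")
    for i in range(n_grid):
        x = x_lo + (x_hi - x_lo) * i / (n_grid - 1)
        best = max(best, -x * x + math.log(1.0 + math.erf(x - shift)))
    return best

def threshold(lo=0.0, hi=1.0, iters=60):
    # w(h) is decreasing in h, so its unique root is bracketed by [lo, hi]
    # (w(0) > 0 > w(1)) and can be found by bisection.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if w(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

With this variant, `threshold()` returns a value close to the 0.3513 reported in the paper and in [Treves and Amit, 1988].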
These rely on existing results in [Gamarnik and Li, 2018], who computed these terms for the case h = 0, i.e. for local optima of the max-cut problem. By numerically solving the resulting non-linear systems of equations, we confirm that the first and second moments match up to sub-leading exponential terms for h ≤ h*. This, however, only yields an exponentially decaying lower bound on the probability of the existence of almost h-stable partitions through the Paley-Zygmund inequality. To boost the probability of existence, we utilize concentration results along with a technique of perturbing the stability threshold, as discussed in Section 3.6. Next, through the use of Lindeberg's method, we extend the above result to a wide range of weighted sparse graphs, including ferromagnetic (w_ij ∈ {0, 1}), anti-ferromagnetic (w_ij ∈ {0, −1}), and dilute spin glass (w_ij ∈ {0, 1, −1}) interactions, under the restriction of zero magnetization (bisections). This results in the following theorem:

Theorem 2. (Stability threshold for sparse graphs) Let W^(d,n) ∈ R^{n×n} be a family of random variables over weighted graphs indexed by the number of nodes n and a parameter d, with W = W^(d,n) satisfying: 1. w_ij are independent and identically distributed with mean µ. 2. E[|w_ij − µ|²] = (d/n)(1 − d/n). 3. E[|w_ij − µ|³] = O_n(d/n). Under the above assumptions, the value of h* from Theorem 1 is a maximal stability bisection threshold for W^(d,n). In particular, the above result holds for sparse Erdős-Rényi graphs with independent edges, average degree d, and anti-ferromagnetic, ferromagnetic, and spin glass interactions.

We then extend the above universality result to dense graphs, including the Sherrington-Kirkpatrick model with w_ij ∼ N(0, 1). For such graphs, the parameter d in the definition of h-stability is replaced by n, and we obtain the following result: Theorem 3.
(Stability threshold for dense graphs) Let W^(n) ∈ R^{n×n} be a sequence of random variables of weighted graphs on n nodes, with W = W^(n) satisfying: 1. w_ij are independent and identically distributed with mean µ. 2. E[|w_ij − µ|²] = 1. 3. E[|w_ij − µ|³] = O_n(1). Under the above assumptions, for h* from Theorem 1, the following statements hold: 1. For any h > h*, there exists an ε(h) such that with high probability as n → ∞, all bisections in W^(n) have at least ε(h)n vertices violating h-stability. 2. For any h < h*, and any ε > 0, with high probability as n → ∞, there exists a bisection in W^(n) with at most εn vertices violating h-stability. In particular, the above result holds for dense graphs with independent weights on the edges sampled from the Gaussian w_ij ∼ N(0, 1).

Remark. We note that s_σ(v, kW) = k s_σ(v, W) for k ∈ R. Therefore, the above result can be generalized to W such that E[|w_ij − µ|²] = k² and E[|w_ij − µ|³] = O_n(1) for some constant k², by scaling h by k. For example, applying the above result to Erdős-Rényi graphs with edge probability 1/2 and h = 0, we obtain Theorem 1.1 in Ferber et al. [2021], also known as Füredi's conjecture.

Our analysis not only characterizes the existence of nearly h-stable states but also the possible values of their energy. We introduce the following energy threshold:

Definition 2. Given a threshold value h ∈ R, we say that a value E_max(h) > 0 is a maximal energy threshold for a family of random variables W^(d,n) denoting weighted graphs indexed by the number of nodes n and a parameter d if, as n → ∞, the following statements hold: 1. For any energy E > E_max(h), there exist an ε(h, E) with 0 < ε(h, E) < 1 and a d(h, E), such that for degree d > d(h, E), with high probability as n → ∞, all partitions in W^(d,n) with energy greater than or equal to E have at least ε(h, E)n vertices violating h-stability. 2.
For any energy E < E_max(h), and any ε > 0, there exists a d(h, E, ε), such that for all d > d(h, E, ε), with high probability as n → ∞, there exists a partition in W^(d,n) with at most εn vertices violating h-stability and energy greater than or equal to E.

Analogously, we define the minimal energy threshold E_min for configurations having energy less than or equal to given values E. Again, when restricting to bisections, we call the above thresholds the "maximal energy bisection threshold" and the "minimal energy bisection threshold". Like Theorem 1, our next result proves the existence of the above thresholds under certain conditions on h:

Theorem 4. (Energy thresholds for sparse anti-ferromagnetic graphs) Let E_max(h) denote the largest root of the function

w̄(E, h) = w(−E/√2, h) = −log 2 − E² − sup_{θ∈R} (−θ² − log(1 + erf(−√2 E + θ − h/√2)))

(plotted in Figure 6 for h = 0). Similarly, let E_min(h) denote the smallest root of the above function. Suppose that h ∈ [−0.1, h*), where h* is the threshold defined in Theorem 1. Then for sparse Erdős-Rényi graphs G with average degree d and anti-ferromagnetic interactions, E_max(h) is a maximal energy threshold. Furthermore, there exists a value h_cor ≈ 0.2856 such that for any h ∈ [h_cor, h*), E_min(h) is a minimal energy bisection threshold.

Remark. The value −0.1 in the above theorem corresponds to the lower limit of h up to which we numerically verify certain conditions to hold. We expect, however, the result to hold for all h ∈ (−∞, h*). For h ∈ [−0.1, h_cor], we still obtain that all bisections in W^(d,n) with energy at most E have Θ_d(1)n − o(n) vertices violating h-stability, for any E < E_min(h). However, our results only imply the existence of bisections with o_d(1)n + o(n) vertices violating h-stability and energy at most E for E > E_cor(h), for some E_cor(h) with E_min(h) < E_cor(h) < E_max(h), due to the failure of the second moment method.
This is illustrated in Figure 2 and explained further in Section 4. We extend the above result to sparse graphs satisfying the assumptions in Theorem 2; namely, we establish the following universality results:

Theorem 5. (Energy thresholds for sparse graphs) Let W^(d,n) ∈ R^{n×n} be a family of random variables satisfying the assumptions in Theorem 2. Let E_max(h), E_min(h) be as defined in Theorem 4. Suppose that h ∈ [−0.1, h*), where h* is the threshold defined in Theorem 1. Then E_max(h) is a maximal energy bisection threshold. Let h_cor be as defined in Theorem 4; then for h ∈ [h_cor, h*), E_min(h) is a minimal energy bisection threshold.

Theorem 6. (Energy thresholds for dense graphs) Let W^(n) ∈ R^{n×n} be a family of random variables satisfying the assumptions in Theorem 3. Suppose that h ∈ [−0.1, h*), where h* is the threshold defined in Theorem 1. Let E_max(h), E_min(h) be as defined in Theorem 4. 1. For any energy E > E_max(h), there exists an ε(h, E) such that with high probability as n → ∞, all bisections in W^(n) with energy greater than or equal to E have at least ε(h, E)n vertices violating h-stability. 2. For any energy E < E_max(h), and any ε > 0, with high probability as n → ∞, there exists a bisection in W^(n) with at most εn vertices violating h-stability and energy greater than or equal to E. Analogously, for h ∈ [h_cor, h*), with h_cor as defined in Theorem 4, E_min(h) acts as a minimal energy threshold.

Proof of Theorem 1: Sparse Graphs with Large Degrees

We consider the setup of sparse Erdős-Rényi graphs having n nodes with anti-ferromagnetic interactions. Concretely, we associate each graph G = (V, E), with n vertices V and edges E, with a weighted graph G = (V, W) having anti-ferromagnetic interactions, i.e. w_ij = −1 for (i, j) ∈ E. As discussed in Section 1, the stability constraint in Equation 1.3 for a vertex v and partition σ then imposes a lower bound on the unfriendliness of the vertex v. We denote by d the average degree of the random graphs.
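As a concrete illustration of the stability constraint, the following sketch computes the normalized field (1/√d) Σ_j w_ij σ_i σ_j for each vertex and counts the h-stable ones. The 4-cycle instance with anti-ferromagnetic weights and an alternating partition is our own toy example, not one from the paper, and the function names are ours.

```python
import math

def stability_field(W, sigma, v, d):
    # Normalized field s_sigma(v, W) = (1/sqrt(d)) * sum_j W[v][j] * sigma[v] * sigma[j].
    return sum(W[v][j] * sigma[v] * sigma[j] for j in range(len(sigma))) / math.sqrt(d)

def count_h_stable(W, sigma, h, d):
    # Number of vertices meeting the h-stability constraint s_sigma(v, W) >= h.
    return sum(1 for v in range(len(sigma)) if stability_field(W, sigma, v, d) >= h)

# Toy instance: a 4-cycle with anti-ferromagnetic weights (w_ij = -1 on edges)
# and the alternating bisection sigma = (+1, -1, +1, -1); every vertex then has
# field 2/sqrt(2) = sqrt(2), so all four vertices are h-stable iff h <= sqrt(2).
W = [[0, -1, 0, -1],
     [-1, 0, -1, 0],
     [0, -1, 0, -1],
     [-1, 0, -1, 0]]
sigma = [1, -1, 1, -1]
```

For anti-ferromagnetic interactions, an edge crossing the partition contributes w_ij σ_i σ_j = (−1)(−1) = +1 to the field, which is why the bipartition above makes every vertex maximally stable.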
Preliminaries: Equivalent Models of Random Graphs

We consider the following different models for generating such graphs:
1. G(n, p = d/n): Erdős-Rényi graphs with edges sampled independently with probability p = d/n, without loops and parallel edges.
2. G(n, m = dn/2): Erdős-Rényi graphs sampled uniformly from the set of all graphs having a total of dn/2 edges, without loops and parallel edges.
3. G′(n, p = d/n): Erdős-Rényi graphs with edges sampled independently with probability p = d/n, including loops and parallel edges.
4. G′(n, m = dn/2): Erdős-Rényi graphs sampled uniformly from the set of all graphs having a total of dn/2 edges, including loops and parallel edges.
5. CM(n, m = dn/2) (configuration model): For a fixed number of dn/2 edges, each end of an edge is independently and uniformly assigned to one of the n vertices. Unlike the first two models, this allows for loops and parallel edges. Equivalently, the graph can be considered as being generated through an assignment of dn balls (ends of edges) to n bins (vertices).
At different stages of the proof, we shall utilize one of the above models. For instance, the configuration model allows us to decouple the assignment of edges across and within different partitions, leading to simplified first and second-moment computations. It further allows us to utilize stronger concentration results due to a fixed total number of edges being independently assigned to vertices. In later sections, while proving universality using Lindeberg's method, we will switch to the G(n, p = d/n) Erdős-Rényi model, which allows us to exploit the independence of the edge weights across different pairs of vertices. Therefore, we require a reduction from the existence result for the configuration model to G(n, p = d/n). We first note that under the configuration model, the number of loops and multi-edges is O_n(1) w.r.t. n with high probability as n → ∞.
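The balls-into-bins description of the configuration model can be sketched as follows (the function name is ours): both endpoints of each of the m = dn/2 edges are assigned independently and uniformly, so loops and parallel edges are possible but, as noted above, rare.

```python
import random
from collections import Counter

def sample_configuration_model(n, d, rng):
    # Sample CM(n, m = d*n/2): assign both ends of each of the d*n/2 edges
    # independently and uniformly to one of the n vertices
    # (loops and multi-edges are allowed).
    m = (d * n) // 2
    return [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]

rng = random.Random(0)
edges = sample_configuration_model(1000, 4, rng)

# Loops and multi-edges: expected O(1) counts (about m/n loops on average).
loops = sum(1 for (u, v) in edges if u == v)
multi = sum(c - 1 for c in
            Counter(frozenset((u, v)) for (u, v) in edges if u != v).values())

# Every edge contributes 2 to the total degree, regardless of loops.
degrees = Counter()
for (u, v) in edges:
    degrees[u] += 1
    degrees[v] += 1
```

Conditioning the sampler above on producing a simple graph recovers the uniform model G(n, m = dn/2), which is the reduction used in the text.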
Furthermore, with probability bounded away from 0 as n → ∞, G ∼ CM(n, m = dn/2) is a simple graph, i.e. contains no loops or multi-edges. The distribution of graphs under the configuration model conditioned on being simple is simply G(n, m = dn/2). Therefore, for a given h and d, the existence or non-existence of h-stable partitions with high probability for CM(n, m = dn/2) implies the corresponding property for G(n, m = dn/2). Similarly, we obtain a reduction from CM(n, m = dn/2) to G′(n, m = dn/2) by conditioning on the graph having no multi-edges while allowing loops. To obtain the reduction from G(n, m = dn/2) to G(n, p = d/n), we rely on the following asymptotic equivalence property for the first two models (see, for example, [Janson et al., 2011]).

Lemma 1. Let P_n be any property of graphs on n nodes. If for any sequence m(n) = dn/2 + O(√(nd(1 − d/n))), P_n holds for G ∼ G(n, m(n)) with high probability as n → ∞, then P_n also holds with high probability for G ∼ G(n, p = d/n). Here, a property P_n is defined as the event that G lies in a fixed subset of the graphs on n nodes.

Similarly, we obtain a reduction from G′(n, m = dn/2) to G′(n, p = d/n). In our case, the property of interest corresponds to the existence of almost h-stable partitions with high probability.

Proof Technique and Setup

In light of the equivalence described in the previous section, it suffices to prove Theorem 1 for the configuration model with a fixed number of dn/2 edges (including loops and multi-edges). We now summarize the major elements of our proof, adapted from Gamarnik and Li [2018]: 1. First moment conditioned on the violation: A direct application of the first-moment method to the number of h-stable partitions would only allow us to prove the absence of h-stable partitions in the regime h > h* with high probability.
This is insufficient to prove the stronger result in Theorem 1 that, with high probability, for h > h* at least ε(h)n vertices violate h-stability. Moreover, the absence of h-stable partitions already follows from the fact that for Erdős-Rényi graphs of any average degree d, there exist Θ(n) vertices having fewer than h√d neighbours with high probability as n → ∞. Therefore, we instead compute the first moment of the number of partitions with at least r_n n vertices satisfying h-stability for some 0 < r_n ≤ 1, where r_n is a sequence of positive rational numbers such that r_n n ∈ N for all n and lim_{n→∞} r_n = r. Let X(h, r_n) denote the total number of partitions with at least r_n n vertices satisfying h-stability. Subsequently, considering all possible subsets of r_n n vertices allows us to bound the probability of any partition having at least rn h-stable vertices. 2. Conditioning on the cut size and the configuration model: As mentioned earlier, to simplify the analysis of various terms, we make use of the configuration model. Furthermore, to decouple the number of edges of vertices belonging to different partitions, it will be convenient to consider the first and second moments conditioned on the cut size, defined as the number of edges with endpoints belonging to different partitions. We denote by X(z, h, r) the number of such partitions having cut size zn. For consistency with [Gamarnik and Li, 2018], we also represent the cut size through a variable x defined such that

z = d/4 + x√d/2.   (3.1)

We note that since the cut size only ranges from 0 to n(n − 1)/2, the first moment satisfies the following bounds:

max_z E[X(z, h, r)] ≤ E[X(h, r)] ≤ (n(n − 1)/2) max_z E[X(z, h, r)].   (3.2)

3. Large deviation theory and Poisson and normal approximations: Exact, non-asymptotic closed-form expressions for the required first and second moments appear intractable.
Therefore, we borrow and adapt results from Gamarnik and Li [2018] to obtain asymptotic expressions for the leading terms in the exponents of the first and second moments conditioned on the cut sizes. We denote by w_r(x, h) the first-moment entropy density at a given fraction of constrained vertices r, threshold h, and cut size zn, defined as:

w_r(x, h) = lim sup_{d→∞} lim sup_{n→∞} (1/n) log E[X(z, h, r)],   (3.3)

where z = d/4 + x√d/2. For r = 1, our results further imply the existence of the above limits (i.e. the lim sup above can be replaced by lim); w_1(x, h) then corresponds to the first-moment entropy density of the number of h-stable configurations with the given cut size, which we denote by w(x, h):

w(x, h) = lim_{d→∞} lim_{n→∞} (1/n) log E[X(z, h, 1)].   (3.4)

The form of the limit implies that w(x, h) is convex in h (Lemma 32 in the Appendix). We denote by x*(h) the unique maximizer of w(x, h), and z*(d) = d/4 + x*√d/2. We further define:

w_r(h) = sup_x w_r(x, h),   (3.5)

with w_1(h) denoted as w(h). From Equation 3.1, we have:

w_r(h) = lim sup_{d→∞} lim sup_{n→∞} (1/n) log E[X(h, r)].   (3.6)

For the second moment, instead of considering all possible partitions with arbitrary cut sizes, we restrict to bisections having a given cut size zn. For even n, we consider exact bisections, i.e. Σ_{i=1}^n σ_i = 0, while for odd n, we allow Σ_{i=1}^n σ_i = ±1. Let M_0 denote the set of such configurations. We define X_0(z, h, r) to be the number of configurations in M_0 having at least rn h-stable vertices. For r = 1, we denote X_0(z, h, 1) by X_0(z, h). We further set r = 1, i.e. consider only the partitions with all vertices satisfying h-stability. We denote by W(x, h) the second-moment entropy density at the given value of x with z = d/4 + x√d/2, and threshold h, defined as:

W(x, h) = lim_{d→∞} lim_{n→∞} (1/n) log E[X_0²(z, h, 1)],   (3.7)

where the existence of the above limit is an implication of our results.
The second-moment entropy density can be further divided into contributions from pairs of configurations having a fixed overlap. For two configurations σ, σ′ satisfying the above constraints, we quantify the overlap through the parameter ω(σ, σ′) = (Σ_{i=1}^n σ_i σ′_i)/n. Let E_opt(σ, h) be the event that the configuration σ is h-stable. We have:

E[X_0²(z, h, 1)] = Σ_ω Σ_{σ,σ′ ∈ M_0, ω(σ,σ′)=ω} P[E_opt(σ, h) ∩ E_opt(σ′, h)],

where the sum is over ω ∈ {−1, −1 + 1/n, · · · , 1}. We define the second-moment entropy density conditioned on the overlap as:

W(x, ω, h) = lim_{d→∞} lim_{n→∞} (1/n) log [ Σ_{σ,σ′ ∈ M_0, ω(σ,σ′)=ω} P[E_opt(σ, h) ∩ E_opt(σ′, h)] ].   (3.8)

4. Boosting the probability of existence: Results from the theory of large deviations, however, only allow us to obtain the first and second moments up to the leading exponential terms. An application of the Paley-Zygmund inequality then only yields a lower bound decaying exponentially with n. We boost this lower bound through the use of the concentration of a suitably chosen random variable. This is described in Section 3.6.

First Moment

In this section, we calculate, up to leading exponential terms, the first moment of the number of partitions containing at least rn h-stable vertices, denoted as E[X(h, r)]. The proof is based on the analysis presented in Gamarnik and Li [2018], which concentrated on the setup of local maxima of the max-cut problem, corresponding to the case of r = 1 and h = 0. The primary novel contributions of this section are: 1.
The generalization of the first moment computation in Gamarnik and Li [2018] to accommodate configurations with the h-stability constraint imposed on a fraction r < 1 of the vertices, rather than on all vertices: this requires proving a large deviation result for mixtures of distributions and controlling the effect of possibly different fractions of vertices satisfying h-stability in the two partitions. 2. Utilizing the first moment E[X(h, r)] to show that with high probability at least Θ_d(1)n − o(n) vertices violate h-stability for any h > h*: this is based on showing that w(h) < 0 for h > h*, along with an argument based on the continuity of w_r(h) w.r.t. r to show that w_r(h) must also fall below 0 for some r < 1.

The first-moment entropy density w(h) for r = 1 is given by the function defined in Equation A.3 and plotted in Figure 4:

w(h) = sup_x (−x² + log(1 + erf(3x − h/√2))).   (3.9)

Since the remaining arguments closely follow Gamarnik and Li [2018], we relegate the full technical details to Appendix A and describe only the main result of this section below:

Proposition 1. Let h* denote the unique root of the function w(h) defined in Equation A.3. For any h > h*, there exists r(h) < 1 such that:

w_{r(h)}(h) < 0.   (3.10)

Proof of Theorem 1 for h > h*

Proposition 1 implies that for r = r(h), there exists a constant C(h) such that for large enough d, n:

E[X(h, r)] ≤ exp(−C(h)n).   (3.11)

Therefore, using Markov's inequality, we obtain that for any h > h* and large enough d, with high probability any partition contains at least (1 − r(h))n vertices violating h-stability. Since r(h) is independent of d, this completes the proof of Theorem 1 for the case h > h*. Numerically, we obtain the value of the root as h* ≈ 0.3513, which matches the one in [Behrens et al., 2022].

Remark. We note in Figure 4 that at h = 0, the numerical value of the first-moment entropy density w(0) ≈ 0.1992 matches the one obtained in Addario-Berry et al. [2019].
Furthermore, it matches the value of the entropy density, i.e. of (1/n)E[log X], reported in Bray and Moore [1981]. This indicates that, similar to the high-temperature regime of the SK model, the entropy density of the number of local optima equals the corresponding first-moment entropy density. Furthermore, we conjecture that the value of the entropy density is universal. We leave the rigorous confirmation of this result to future work. The threshold h* ≈ 0.3513 marks the point where the entropy becomes negative, while the entropy θ*(0, 1) ≈ 0.1992 at h = 0 is well known from Bray and Moore [1981].

Second Moment

Unlike the proof for the first moment in Section 3.3, here we do not consider the h-stability of a fraction r < 1 of the vertices, since it suffices to prove the existence of partitions for r = 1. Therefore, the proof of the second moment does not require substantial modifications to the arguments in Gamarnik and Li [2018], who obtained the second moment for local maxima of max-cut, i.e. the case h = 0. In Appendix B, we provide a proof sketch of the main arguments of Gamarnik and Li [2018], along with references to the specific results that require the substitution of h = 0. We obtain the following expression for the second-moment entropy density:

Proposition 2. Let the second-moment entropy density W(x, ω, h) be defined as in Equation 3.8.
Then for any h ∈ R, W(x, ω, h) is described through the unique solution of the following set of non-linear equations:

W(x, ω, h) = −2β log β − 2(1/2 − β) log(1/2 − β) − (1/2) t*(x, β)²/β² − (1/2) (x − t*(x, β))²/(1/2 − β)²
  + 2β log P(θ₁*(x, β), √((1/2 − β)/β), t*(x, β)/β^{3/2} − h/√(2β))
  + 2(1/2 − β) log P(θ₂*(x, β), √(β/(1 − 2β)), (x − t*(x, β))/(1/2 − β)^{3/2} − h/√(1 − 2β)),   (3.12)

where β = (ω + 1)/4,

P(θ, a₁, a₂) = (1/π) exp(θ²/2) ∫₀^∞ ∫_{a₁z₂}^∞ exp(−((z₁ − θ − a₂)² + z₂²)/2) dz₁ dz₂,

and θ₁*(x, β), θ₂*(x, β) satisfy the following consistency equations:

θ₁ Q(θ₁, √((1/2 − β)/β), t/β^{3/2} − h/√(2β))
  + ∫₀^∞ exp(−z²/2 − (1/2)(√((1/2 − β)/β) z − θ₁ − t/β^{3/2} + h/√(2β))²) dz = 0,   (3.13)

θ₂ Q(θ₂, √(β/(1/2 − β)), (x − t)/(1/2 − β)^{3/2} − h/√(1 − 2β))
  + ∫₀^∞ exp(−z²/2 − (1/2)(√(β/(1/2 − β)) z − θ₂ − (x − t)/(1/2 − β)^{3/2} + h/√(1 − 2β))²) dz = 0,   (3.15)

−t/β² + (x − t)/(1/2 − β)² − 2θ₁/β^{1/2} + 2θ₂/(1/2 − β)^{1/2} = 0,   (3.17)

where

Q(θ, a₁, a₂) = ∫₀^∞ ∫_{a₁z₂}^∞ exp(−((z₁ − θ − a₂)² + z₂²)/2) dz₁ dz₂.   (3.18)

The uniqueness of critical points for the above set of non-linear equations follows from arguments in [Gamarnik and Li, 2018], relying on the strict convexity of the logarithmic moment generating function. For h = h* and x = x*(h*), i.e. the unique maximizer of w(x, h*), W(x, ω, h) is plotted in Figure 5. This leads to the following result:

Lemma 2. Let h* denote the root of w(h) defined in Equation A.3; then:

(E[X_0(z*(h*), h*)])² ≥ exp(−o_d(1)n − o(n)) E[X_0²(z*(h*), h*)].   (3.19)

Proof. Through the proof of Lemma 5.1 in [Gamarnik and Li, 2018], we have that when ω = 0, the second-moment entropy density W(x, ω, h) equals twice the first-moment entropy density w(x, h). Furthermore, through Lemma 31 in the Appendix, we have that for all x, h ∈ R, w(x, h) equals the first-moment entropy density when restricting to bisections, i.e.:

w(x, h) = lim_{d→∞} lim_{n→∞} (1/n) log E[X_0(z, h)].
(3.20)

From Figure 5, we observe that W(x*(h*), ω, h*) is maximized at ω = 0. Therefore, we have W(x*(h*), h*) = W(x*(h*), 0, h*) = 2w(x*(h*), h*). Thus, we obtain:

(1/n) log E[X_0²(z*(h*), h*)] = (2/n) log E[X_0(z*(h*), h*)] + o_d(1) + o_n(1).   (3.21)

Exponentiating both sides completes the proof.

Existence of partitions with high probability

Equipped with the first and second moments up to the leading terms in the exponent, one could hope to apply the Paley-Zygmund inequality to obtain a lower bound on the probability of the existence of h-locally optimal configurations. However, this only leads to a sub-exponential lower bound. Concretely, using the Paley-Zygmund inequality and Lemma 2, we have:

P(X_0(z*(h*), h*) ≥ 1) ≥ (E[X_0(z*(h*), h*)])² / E[X_0²(z*(h*), h*)] ≥ exp(−(o_d(1)n + o(n))).   (3.22)

To boost the above weak lower bound, we introduce a global function measuring the total deficit in h-stability, defined as follows:

Definition 3. For each h and σ ∈ B_n = {±1}^n, let

D(W, h, σ) = Σ_i ( h − (1/√d) Σ_j w_ij σ_i σ_j )_+,

where (·)_+ denotes the linear threshold function defined as (x)_+ = x for x > 0 and 0 otherwise. We say that D(W, h, σ) is the total h-deficit associated with partition σ. Let D*(W, h) = min_σ D(W, h, σ).

Using Equation 3.22, we have for every h < h*:

P(D*(W, h) = 0) ≥ exp(−(o_d(1)n + o(n))).   (3.23)

We start by proving the concentration of D*(W, h) at an exponential rate. Our proof is a consequence of McDiarmid's inequality, which we state here for convenience:

Lemma 3. (McDiarmid's inequality) Let f : X^k → R be a real-valued function of k independent random variables taking values in a space X. Suppose f satisfies the following bounded difference property for all x₁, · · · , x_k ∈ X and 1 ≤ i ≤ k:

sup_{x_i′ ∈ X} |f(x₁, · · · , x_i, · · · , x_k) − f(x₁, · · · , x_i′, · · · , x_k)| ≤ c_i.
(3.24)

Then, for any ε > 0:

P(|f(x_1, ..., x_k) − E[f(x_1, ..., x_k)]| ≥ ε) ≤ 2 exp(−2ε² / Σ_{i=1}^k c_i²).   (3.25)

Lemma 4. Let W be a weight matrix of a graph sampled from the anti-ferromagnetic configuration model with average degree d. Then, for any h < h*, the minimum deficit D*(W, h) satisfies:

P(|D*(W, h) − E[D*(W, h)]| ≥ εn) ≤ 2 exp(−ε²n).

Proof. We represent the generative process under the configuration model with a fixed number of (d/2)n edges through (d/2)n variables e_1, e_2, ..., e_{(d/2)n}, where e_i denotes the pair of vertices independently assigned to the i-th edge. Let W and W' denote the weight matrices for two vertex assignments e_1, ..., e_i, ..., e_{(d/2)n} and e_1, ..., e'_i, ..., e_{(d/2)n} differing only at the i-th edge. Due to the 1-Lipschitzness of the threshold function (·)_+, we have |D*(W, h) − D*(W', h)| ≤ 2/√d. Therefore, an application of McDiarmid's inequality (Lemma 3) yields:

P(|D*(W, h) − E[D*(W, h)]| ≥ εn) ≤ 2 exp(−2ε²n² / ((d/2)n × 4/d)) = 2 exp(−ε²n).   (3.26)

The above lemma allows us to bound the expectation E[D*(W, h)]:

Corollary 1. Let W be a weight matrix as in Lemma 4. Then, for any h < h*:

E[D*(W, h)] = o_d(1)n + o(n).   (3.27)

Proof. Suppose that, on the contrary, there exist ε, d' ∈ R_+ and n' ∈ N such that E[D*(W, h)] ≥ εn for all d > d' and n > n'. Then Lemma 4 implies that P[D*(W, h) = 0] is at most 2 exp(−ε²n) for all d > d', n > n'. However, Equation 3.23 implies that for large enough d, n, P[D*(W, h) = 0] ≥ exp(−kn) for any constant k > 0. Therefore, we obtain a contradiction.

Applying Lemma 4 again, we obtain the following high-probability bound on D*(W, h):

Corollary 2. For every ε > 0 and h < h*, for large enough d, n we have:

P(D*(W, h) ≥ εn) ≤ 2 exp(−ε²n/4).   (3.28)

Therefore, D*(W, h) = o_d(1)n + o(n) with high probability.

Proof. For large enough d, n, we have by Corollary 1 that E[D*(W, h)] ≤ εn/2.
Therefore, for large enough d, n:

P(D*(W, h) ≥ εn) ≤ P(|D*(W, h) − E[D*(W, h)]| ≥ εn/2) ≤ 2 exp(−ε²n/4),   (3.29)

where the last inequality follows from Lemma 4 applied with deviation εn/2.

While the existence of h-stable partitions implies a minimal deficit of 0, a small minimal deficit does not imply the existence of nearly h-stable partitions, because the deficit can be distributed amongst a large number of vertices. Therefore, to relate the above bound on the total deficit to the maximum number of h-stable vertices, we introduce a technique of perturbing the stability threshold h. Let

N(W, h, σ) = Σ_i 1{(1/√d) Σ_j W_ij σ_i σ_j − h ≥ 0}

denote the count of h-stable vertices, and define N*(W, h) = max_σ N(W, h, σ). We obtain the following result, whose proof illustrates the use of our perturbation technique:

Lemma 5. For every h < h* and ε > 0, P(N*(W, h) ≤ n − εn) ≤ exp(−Θ_d(1)n − o(n)).

Proof. Let E_n = E_n(h) be the event N*(W, h) ≤ n − εn. Fix W such that this event holds. Fix any σ and let S(σ) be the set of i ∈ [n] such that h − (1/√d) Σ_j w_ij σ_i σ_j ≥ 0. Suppose E_n(h) occurs. Then |S(σ)| ≥ εn. Fix h̃ with h < h̃ < h*. We have

D(W, h̃, σ) = Σ_i (h̃ − (1/√d) Σ_j w_ij σ_i σ_j)_+
= Σ_i (h̃ − h + h − (1/√d) Σ_j w_ij σ_i σ_j)_+
≥ Σ_{i∈S(σ)} (h̃ − h + h − (1/√d) Σ_j w_ij σ_i σ_j)_+
= Σ_{i∈S(σ)} (h̃ − h + h − (1/√d) Σ_j w_ij σ_i σ_j)
≥ Σ_{i∈S(σ)} (h̃ − h)
≥ (h̃ − h)εn.

Since σ was arbitrary, we conclude D*(W, h̃) ≥ (h̃ − h)εn, which occurs with probability at most exp(−Cn) for some constant C and large enough d, n due to Corollary 2. Thus the event E_n occurs with probability at most exp(−Θ_d(1)n + o(n)).

Proof of Theorem 4: Energy of h-stable configurations

A by-product of the above first and second-moment computations is that they allow the analysis of the maximum number of h-stable vertices at particular levels of energy. Similar to the previous section, we first demonstrate this for the case of anti-ferromagnetic interactions (i.e., local max-cut) in sparse Erdős-Rényi graphs. This constitutes Theorem 4.
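To make Definition 3 and the perturbation/boosting discussion concrete, here is a small self-contained Python sketch (ours, not part of the formal development; all helper names are invented). It samples a sparse anti-ferromagnetic Erdős-Rényi instance, computes the total h-deficit D(W, h, σ) and the h-stable vertex count N(W, h, σ), and runs a greedy dynamics that accepts only deficit-decreasing single-spin flips:

```python
import math
import random

def sample_er_antiferro(n, d, rng):
    """Erdos-Renyi G(n, p = d/n) with anti-ferromagnetic couplings w_ij = -1."""
    p = d / n
    return {(i, j): -1.0
            for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def local_fields(W, sigma, n, d):
    """f_i = (1/sqrt(d)) * sum_j w_ij * sigma_i * sigma_j."""
    f = [0.0] * n
    for (i, j), w in W.items():
        c = w * sigma[i] * sigma[j] / math.sqrt(d)
        f[i] += c
        f[j] += c
    return f

def deficit_and_stable(W, sigma, n, d, h):
    """Total h-deficit D(W, h, sigma) of Definition 3 and h-stable count N(W, h, sigma)."""
    f = local_fields(W, sigma, n, d)
    return sum(max(h - fi, 0.0) for fi in f), sum(1 for fi in f if fi >= h)

def greedy(W, sigma, n, d, h, sweeps=20):
    """Accept a single-spin flip only if it strictly decreases the total deficit."""
    for _ in range(sweeps):
        improved = False
        for i in range(n):
            before, _ = deficit_and_stable(W, sigma, n, d, h)
            sigma[i] = -sigma[i]
            after, _ = deficit_and_stable(W, sigma, n, d, h)
            if after < before:
                improved = True
            else:
                sigma[i] = -sigma[i]  # revert the flip
        if not improved:
            break
    return sigma

rng = random.Random(0)
n, d, h = 100, 4, 0.0
W = sample_er_antiferro(n, d, rng)
sigma = [rng.choice([-1, 1]) for _ in range(n)]
D_init, _ = deficit_and_stable(W, sigma, n, d, h)
greedy(W, sigma, n, d, h)
D_final, N_final = deficit_and_stable(W, sigma, n, d, h)
```

At h = 0 the deficit vanishes exactly when every vertex is 0-stable, i.e., when σ is a local max-cut; since the dynamics only accepts improving flips, D_final ≤ D_init by construction.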
In Section 7, we extend these results through a universality argument to other distributions over sparse and dense graphs. Let H_d(σ) denote the energy of the configuration σ defined by the Hamiltonian 1.4. Following Equation 3.1, we denote the cut size by z = d/4 + x√(d/2). For sparse graphs with anti-ferromagnetic interactions and a fixed number of edges (d/2)n, the energy E defined in Equation 1.5 is related to the cut size for a graph G as follows:

E_d(σ) = −(1/(n√d)) Σ_{1≤i<j≤n} σ_i σ_j W_ij = (1/(n√d)) Σ_{(i,j)∈E} σ_i σ_j
= (1/(n√d)) ((d/4)n − xn√(d/2) − ((d/4)n + xn√(d/2))) = −√2 x.   (4.1)

Therefore, one can apply the first and second-moment analysis of the previous section to analyze the maximum number of h-stable vertices at different values of the energy E. Concretely, from Lemma 31 in the Appendix, we have:

w(x, h) = −log 2 − 2x² − sup_{θ∈R} (−θ² − log(1 + erf(2x + θ − h/√2))).

This defines a first-moment entropy density function w̄ at a fixed value of energy E:

w̄(E, h) = w(−E/√2, h) = −log 2 − E² − sup_{θ∈R} (−θ² − log(1 + erf(−√2 E + θ − h/√2))).   (4.3)

Analogously, we define the second-moment entropy density W̄(E, ω, h) as W(−E/√2, ω, h), where W is defined through Proposition 2.

4.1 Local max-cut (h = 0).

First, let us consider the case h = 0, i.e., the configurations corresponding to the local optima of H_d(σ). We plot the first-moment entropy density defined in Equation 4.3 as a function of the energy level in Figure 6. By concavity of w(x, h) in x (Lemma 32 in the Appendix), w̄(E, 0) has two roots, labelled E_min(0) and E_max(0) respectively. The first-moment method then implies the absence of local optima having energies greater than E_max(0) or less than E_min(0). One may naturally expect a phase transition analogous to Theorem 1 around E_max(0) and E_min(0). However, this requires the second-moment entropy density to be maximized at overlap 0 at both these points. This turns out to be true only at E_max(0).
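The bookkeeping behind Equation 4.1 (cut edges and uncut edges contribute with opposite signs, so larger cuts mean lower energy) can be verified mechanically. The following sketch (ours) computes the energy of a random configuration both directly and through the cut size:

```python
import math
import random

rng = random.Random(0)
n, d = 200, 6
p = d / n
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]
sigma = [rng.choice([-1, 1]) for _ in range(n)]

# Energy as in Equation 1.5 with anti-ferromagnetic weights w_ij = -1 on edges:
# E(sigma) = -(1/(n sqrt(d))) sum_{i<j} w_ij s_i s_j = (1/(n sqrt(d))) sum_{edges} s_i s_j.
E_direct = sum(sigma[i] * sigma[j] for i, j in edges) / (n * math.sqrt(d))

# Each cut edge contributes -1 and each uncut edge +1, so the sum over edges
# equals m - 2*cut with m = |edges|, which is the content of Equation 4.1.
cut = sum(1 for i, j in edges if sigma[i] != sigma[j])
E_via_cut = (len(edges) - 2 * cut) / (n * math.sqrt(d))
```

The two expressions agree exactly, and maximizing the cut is the same as minimizing E.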
We observe that as E decreases from E_max(0), there exists a value of the energy E_cor(0) > E_min at which the overlap ω = 0 stops being a maximum of W̄(E, ω, h). This is illustrated in Figure 7. Numerically, we observe that E_cor(0) ≈ −0.6725. The value E_cor(0) matches the value described in Bray and Moore [1981] as the onset of the "correlation of local minima" for the Sherrington-Kirkpatrick model.

General h

The arguments in the previous section can be applied to any h < h*. Based on a numerical search, we conjecture that for each value of h ∈ [−0.1, h*), there exists a unique E_cor(h) ∈ R such that W̄(E, ω, h) is maximized at ω = 0 for E > E_cor(h), while it is maximized at some non-zero overlap ω ≠ 0 for E < E_cor(h). We define E_cor(h) as the following value of energy:

E_cor(h) = inf{E : sup_{ω∈[−1,1]} W̄(E, ω, h) = W̄(E, 0, h)}.   (4.4)

Therefore, there are up to three values of energy leading to transitions in the first and second moments of h-stable configurations:

• The two roots of the first-moment entropy density: Since w(x, h) is concave in x (Lemma 32 in the Appendix), it has at most two roots. For h < h*, the maximum of the first moment w.r.t. x is strictly positive, while w(x, h) → −∞ as x → ±∞. Therefore, the two roots exist. We denote the smaller and larger roots by E_min and E_max respectively. We note that for E_max, the second-moment entropy density is maximized at ω = 0 for all 0 ≤ h ≤ h*. This leads to the phase transition in the existence of nearly h-stable local optima around E_max, as described in Theorem 4.

• At a small enough value of the energy, E_cor, the overlap ω = 0 turns from a local maximum into a local minimum. This corresponds to the point where the second derivative of W(x, ω, h) at ω = 0 vanishes. In statistical physics, such a transition usually precedes or corresponds to the onset of replica symmetry breaking.

These points are illustrated in Figure 8 for h ∈ [−0.1, h*]. At h = 0, we have that E_min < E_cor < E_max.
However, as h increases, we observe that E_cor and E_min intersect at an intermediate value h_cor < h*. Numerically, we obtain h_cor ≈ 0.2856. As illustrated in Figure 9, at h = h_cor, the second-moment entropy density W̄(E, ω, h) at E = E_cor(h) is maximized at ω = 0 and vanishes. For h < h_cor, the quantity E_min(h) is deemed unphysical, given that it may not represent the minimum energy of nearly h-stable configurations. For instance, when h = 0, the minimum value of the energy has been proven to be E ≈ −0.7632 [Dembo et al., 2017], while E_min(0) ≈ −0.7915.

Maximal energies of h-stable configurations

To analyze the phase transition around E_max, we introduce the following random variable:

X_{≥E}(h, r) = Σ_{z : z ≤ dn − En√d} X(z, h, r).   (4.5)

The above term equals the number of configurations having at least rn h-stable vertices and normalized energy greater than or equal to E. Similarly, we define the corresponding first-moment entropy density:

w_{≥E,r}(h) = lim sup_{d→∞} lim sup_{n→∞} (1/n) log E[X_{≥E}(h, r)].   (4.6)

Since z only ranges from 0 to n(n−1)/4, we have:

max_{z : z ≤ dn − En√d} E[X(z, h, r)] ≤ E[Σ_{z : z ≤ dn − En√d} X(z, h, r)] ≤ (n(n−1)/4) · max_{z : z ≤ dn − En√d} E[X(z, h, r)].   (4.7)

Therefore, since the polynomial factor in (4.7) does not contribute at exponential scale:

w_{≥E,r}(h) = lim sup_{d→∞} lim sup_{n→∞} (1/n) log max_{z : z ≤ dn − En√d} E[X(z, h, r)].   (4.8)

We recall that the first and second-moment computations in Sections 3.3 and 3.5 involved fixing a sequence of cut sizes z_n n and associated parameters x_n such that z_n n ∈ N, z_n = d/4 + x_n√(d/2), and lim_{n→∞} z_n = z, lim_{n→∞} x_n = x. We exploit this partitioning of the moments into different cut sizes to establish the following consequences of our computations:

Lemma 6. Suppose that h ∈ [−0.1, h*) and E > E_max(h). Then there exists r*(E, h) such that for all r > r*(E, h) and sufficiently large d, we have:

w_{≥E,r}(h) < 0.   (4.9)

Proof. The proof follows that of Proposition 1. Specifically, we utilize Equation A.37 and, instead of maximizing over all x, we restrict ourselves to x ≥ E/√2.
The concavity of w(x, h) w.r.t. x implies that w̄(E, h) is decreasing in E for E > E_max(h). Therefore, for any E > E_max(h), we have:

w_{≥E,r}(h) = w(−E/√2, h).   (4.10)

Lemma 7. Suppose that h ∈ [−0.1, h*), and let E = E_max(h). Then:

(E[X_{0,≥E}(h)])² ≥ exp(−o_d(1)n − o(n)) · E[X²_{0,≥E}(h)].   (4.11)

Proof. We constrain ourselves to sequences of cut sizes x_n such that lim_{n→∞} x_n = x = E/√2. We numerically verify, as illustrated in Figure 8, that for all −0.1 ≤ h ≤ h*, we have E_max(h) > E_cor(h). Therefore, W̄(E, ω, h) = W(x, ω, h), with W(x, ω, h) defined by Proposition 2, and W(x, ω, h) is maximized at ω = 0 for E = E_max(h). From the proof of Lemma 2, we further have that W(x_min, 0, h) = 2w(x_min, h). Let z_min = d/4 + x_min√(d/2). Similar to Equation 3.21, we obtain:

(1/n) log E[X_0²(z_min, h)] = (2/n) log E[X_0(z_min, h)] + o_d(1) + o(1).   (4.12)

Exponentiating both sides completes the proof.

We next define the maximal energy deficit function:

Definition 4. For each h, σ ∈ B_N = {±1}^N, and energy E, define the maximal energy deficit function D_{≥E} as follows:

D_{≥E}(W, h, σ) = Σ_i (h − (1/√d) Σ_j w_ij σ_i σ_j)_+ + (nE − H_d(σ))_+.

Similarly, we define D*_{≥E}(W, h) as min_σ D_{≥E}(W, h, σ). We note that D*_{≥E}(W, h) = o_d(1)n + o(n) implies that there exists a configuration σ for which both Σ_{i=1}^n (h − (1/√d) Σ_j w_ij σ_i σ_j)_+ and (nE − H_d(σ))_+ are o_d(1)n + o(n).

By considering the first and second-moment computation for x = E_max/√2, we observe that the second-moment entropy density is maximized at ω = 0 for x = E_max/√2. Therefore, we obtain:

P(D*_{≥E}(W, h) = 0) ≥ exp(−o_d(1)n − o(n)).   (4.13)

As in Lemma 4, using McDiarmid's inequality, we have the following concentration result:

Lemma 8. Let W be a weight matrix of a graph sampled from the anti-ferromagnetic configuration model with average degree d. Then, for any E, h, the minimum maximal energy deficit D*_{≥E}(W, h) satisfies:

P(|D*_{≥E}(W, h) − E[D*_{≥E}(W, h)]| ≥ εn) ≤ 2 exp(−ε²n).
Next, analogous to Corollaries 1 and 2, we obtain:

Corollary 3. Let W be a weight matrix as in Lemma 4. Then, for any h ∈ [−0.1, h*) and E = E_max(h):

E[D*_{≥E}(W, h)] = o_d(1)n + o(n).   (4.14)

Proof. Suppose that, on the contrary, there exist ε, d' ∈ R_+ and n' ∈ N such that E[D*_{≥E}(W, h)] ≥ εn for all d > d', n > n'. Then Lemma 8 implies that P[D*_{≥E}(W, h) = 0] is at most exp(−ε²n), contradicting (4.13).

The above corollary and Lemma 8 then imply the following result:

Corollary 4. For every ε > 0, h ∈ [−0.1, h*) and E = E_max(h), for large enough d, n:

P(D*_{≥E}(W, h) ≥ εn) ≤ 2 exp(−ε²n/4).

Therefore, D*_{≥E}(W, h) = o_d(1)n + o(n) with high probability.

Now, let N*_{≥E}(W, h) denote the maximum number of vertices satisfying h-stability amongst the set of configurations having energy greater than E. Using a similar technique to that of Lemma 5, but with the perturbation applied to E instead of h, we prove the following result:

Lemma 9. Suppose that h ∈ [−0.1, h*). Then, for every E < E_max(h) and ε > 0, P(N*_{≥E}(W, h) ≤ n − εn) ≤ exp(−Θ_d(1)n + o(n)).

Proof. Let E_n = E_n(h, E) be the event N*_{≥E}(W, h) ≤ n − εn. Fix W such that this event holds. Fix any σ and let S(σ) be the set of i ∈ [n] such that h − (1/√d) Σ_j w_ij σ_i σ_j ≥ 0. Suppose E_n(h) occurs. Then |S(σ)| ≥ εn. Fix Ẽ with E < Ẽ < E_max. From the monotonicity of E_max(h) w.r.t. h, we observe that there exists an h̃ with h < h̃ < h* such that Ẽ = E_max(h̃). Then, for any σ, either:

1. E(σ) < E. Therefore Ẽn − E(σ)n ≥ (Ẽ − E)n.

2. N(W, h, σ) ≤ n − εn, i.e., at least εn vertices violate h-stability.

In either case, we obtain:

D_{≥Ẽ}(W, h̃, σ) = Σ_i (h̃ − (1/√d) Σ_j w_ij σ_i σ_j)_+ + (nẼ − H_d(σ))_+
= Σ_i (h̃ − h + h − (1/√d) Σ_j w_ij σ_i σ_j)_+ + (nẼ − nE + nE − H_d(σ))_+
≥ min((h̃ − h)εn, (Ẽ − E)n).

Let min((h̃ − h)εn, (Ẽ − E)n) = ε'n for some constant ε' > 0. Since σ was arbitrary, we conclude D*_{≥Ẽ}(W, h̃) ≥ ε'n, which occurs with probability at most exp(−Cn) for some constant C and large enough d, n due to Corollary 4. Thus the event E_n occurs with probability at most exp(−Θ_d(1)n − o(n)).

The above result proves Theorem 4 for E < E_max(h), while, through Markov's inequality, Lemma 6 provides a proof for E > E_max(h).
Minimal energies of h-stable configurations

Similar to the previous section, the proof of Theorem 4 for the minimal energy threshold for h ∈ [h_cor, h*) follows by numerically checking that E_min > E_cor, as illustrated in Figure 8. This implies that for h ∈ [h_cor, h*) and E = E_min(h), using Paley-Zygmund, we have:

(E[X_{≤E}(h)])² ≥ exp(−o_d(1)n − o(n)) · E[X²_{≤E}(h)],   (4.15)

where X_{≤E} is defined analogously to Equation 4.5. The rest of the proof follows that of Lemma 9, with a deficit function penalizing energy values larger than E, given by:

Definition 5. For each h, σ ∈ B_N = {±1}^N, and energy E, define the minimal energy deficit function D_{≤E} as follows:

D_{≤E}(W, h, σ) = Σ_i (h − (1/√d) Σ_j w_ij σ_i σ_j)_+ + (H_d(σ) − nE)_+.

5 Universality for Sparse graphs: Proof of Theorem 2

In this section, we establish the universality of the threshold h* for families of random variables satisfying the assumptions of Theorem 2. In particular, this includes sparse Erdős-Rényi graphs having ferromagnetic, anti-ferromagnetic or spin glass interactions. Since we do not possess analytic expressions or proofs of the existence of a threshold for arbitrary distributions over graphs, directly relating the thresholds for different distributions seems challenging. Instead, we prove that for a given h < h*, a partition with at most o_d(1)n + o(n) vertices violating h-stability exists with high probability for all distributions. Analogously, for h > h*, we prove that every partition has Θ_d(1)n − o(n) vertices violating h-stability. Throughout the subsequent sections, we restrict ourselves to configurations having zero magnetization, or bisections.

Universality for near h-stable configurations (h < h*)

For a given choice of weights W and a configuration σ, following Definition 3, we denote by D(W, h, σ) the total deficit in h-stability associated with σ. We have:

D(W, h, σ) = Σ_{1≤i≤n} (h − (1/√d) Σ_j w_ij σ_i σ_j)_+.
Let M_0 denote the set of configurations with zero magnetization, i.e.:

M_0 = {σ : Σ_{i=1}^n σ_i = 0}.

Lemma 10. Let h < h̃ and suppose that, for every δ > 0, lim_{n→∞} P[D*(W, h̃) ≥ δn] = 0. Then, for any ε > 0, lim_{n→∞} P[N*(W, h) ≤ (1 − ε)n] = 0.

Proof. We utilize the argument used in the proof of Section 3.6. Assume that N*(W, h) ≤ (1 − ε)n for some ε > 0. Then, for any configuration σ, at least εn vertices violate h-stability. Since h̃ > h, each such vertex contributes at least h̃ − h to the total h̃-deficit for σ, i.e., D(W, h̃, σ) ≥ (h̃ − h)εn. Since σ is arbitrary, we have D*(W, h̃) ≥ (h̃ − h)εn. Therefore, N*(W, h) ≤ (1 − ε)n implies D*(W, h̃) ≥ (h̃ − h)εn. By assumption, we have lim_{n→∞} P[D*(W, h̃) ≥ (h̃ − h)εn] = 0. Thus, we obtain that for any ε > 0, lim_{n→∞} P[N*(W, h) ≤ (1 − ε)n] = 0.

We now show that the value of the threshold exhibits a universality property. For simplicity, and due to the equivalence between the models with and without loops and parallel edges discussed earlier, we consider the models with loops and parallel edges. This allows us to consider weight matrices with i.i.d. entries w_ij. Using a modification of Lindeberg's argument, we obtain the following result:

Proposition 3. Let W_{d,n} be a family of random weighted graphs satisfying the assumptions in Theorem 2. Let h* be the threshold defined in Theorem 2. Then, for every ε and h < h*, there exists a degree d(ε, h) such that for all d ≥ d(ε, h), w.h.p. as n → ∞, we have:

D*(W_{d,n}, h) ≤ εn.   (5.1)

Equivalently, D*(W_{d,n}, h) = o_d(1)n + o(n).

Now, for any h < h*, pick h < h̃ < h*. Using the above proposition, we have D*(W_{d,n}, h̃) = o_d(1)n + o(n) with high probability. Therefore, using Lemma 10, we further obtain the following corollary:

Corollary 5. Let W be as in Proposition 3. Then, for any h < h*, with high probability,

N*(W, h) ≥ n(1 − o_d(1)) − o(n),   (5.2)

where the above equation denotes convergence in probability under the sequential limit n → ∞ and d → ∞.
In particular, the above result covers the cases of sparse graphs with ferromagnetic or anti-ferromagnetic interactions, or equivalently the case of friendly or unfriendly partitions in sparse random graphs. Proof. Let W denote an arbitrary random weight matrix with i.i.d entries w ij with mean µ. Let W denote the matrix with entries W ij = w ij − µ. We observe that for any σ ∈ M 0 , N (W, h, σ) = N (W , h, σ). Therefore, without loss of generality, we restrict to weight distributions having 0 mean. We prove the above result by establishing the universality of D * (W, h) for families of distributions satisfying the given assumptions as d → ∞ and n → ∞. This is expressed through the following result: Lemma 11. Let A, B be arbitrary random weight matrices with i.i.d random entries a ij , b ij satisfying the assumptions in Theorem 2 for parameters d, n with means 0. Then: |D * (A, h) − D * (B, h)| = o d (1)n + o(n), (5.3) with high probability as n → ∞. We first explain how the above result implies proposition 3. Let W (d,n) be an arbitrary family of weighted random graphs on n-nodes satisfying the assumptions in Theorem 2. Note that sparse Erdős-Rényi graphs with anti-ferromagnetic interactions sampled from G (n, p = d/n) correspond to one such family of random graphs. Let W = W (d,n) and W be a weighted-graph sampled from G (n, p = d/n) with anti-ferromagnetic interactions. Corollary 2 implies that for any h < h * , D * (W , h) = o d (1)n + o(n) with high probability as n → ∞. Therefore, applying Lemma 11 yields D * (W, h) = o d (1)n + o(n). Proof of Lemma 11 We prove the universality of D * (A, h) through an application of the Lindeberg's method [Chatterjee, 2006]. 
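Before recalling the method in detail, a toy illustration (ours, not the paper's argument) of why matching the first two moments is enough: a smooth test statistic of a normalized sum barely changes when Rademacher entries are swapped for Gaussians. For the cosine of the normalized sum, both expectations are available in closed form:

```python
import math

def rademacher_expectation(k):
    # E[cos(S_k)] for S_k = (x_1 + ... + x_k)/sqrt(k) with x_i = +/-1 equiprobable:
    # E[exp(i*S_k)] = cos(1/sqrt(k))**k, which is real by symmetry.
    return math.cos(1.0 / math.sqrt(k)) ** k

gauss = math.exp(-0.5)  # E[cos(Z)] for Z ~ N(0, 1)

gap10 = abs(rademacher_expectation(10) - gauss)
gap100 = abs(rademacher_expectation(100) - gauss)
```

The gap vanishes as k grows, consistent with the swap-by-swap error control used in the proof below.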
We first recall the central idea: Suppose we wish to prove that E [f (a 1 , a 2 , · · · , a n )] approximates E [f (b 1 , b 2 , · · · , b n )] for a thrice differentiable function f , and two sequences of independent variables, (a 1 , a 2 , · · · , a n ) and (b 1 , b 2 , · · · , b n ) with identical means and variance. Lindeberg's method involves iteratively swapping a i to b i and utilizing Taylor's theorem and the matching of the first two moments of a i and b i . In our case, the function of interest is D * (W, h). However, D * (W, h) is not-differentiable and involves a minimization over the set of configurations σ. We, therefore, introduce a series of smooth approximations to D * (W, h). A convenient approximation technique leading to simplified derivatives is through the introduction of a Hamiltonian [Sen, 2018]. Fix any function g : R → R. Let H d (W, g, σ) = 1≤i≤n g h − 1 √ d j w ij σ i σ j ,(5.4) for w = (w ij , 1 ≤ i, j ≤ n), σ ∈ B n . We now define the ground state and energy conditioned on the magnetization being 0, and the associated partition function: H * d (W, g) = min σ∈M 0 H d (W, g, σ), Z(W, g) = σ∈M 0 exp (−ρH d (W, g, σ)) , F (W, g) = 1 ρ log Z(W, g) , where we introduced a parameter ρ ∈ R + , commonly referred to as the inverse temperature. We have: − H * d (W, g) ≤ F (W, g) ≤ −H * d (W, g) + log 2 ρ n. (5.5) For any observable f : B n → R we denote by f the associated Gibbs average f = σ∈M 0 f (σ) exp(−ρH d (W, g, σ)) Z . Fix now 1 > 0 and let g 1 be an infinitely differentiable function g 1 : R → R with uniformly bounded first, second, third derivatives such that g 1 that is uniformly 1 close to () + , i.e: sup t∈R |g 1 (t) − (t) + | ≤ 1 . (5.6) Therefore, g acts a smooth approximation of threshold function (·) + . For example, we may choose g to be the following soft-plus function: g 1 (t) = γ ln (1 + exp (t/γ)), (5.7) where γ = 1 ln 2 . 
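The bounds (5.6) and (5.7) can be checked numerically: with γ = ε_1/ln 2, the soft-plus surrogate stays within ε_1 of the threshold function (·)_+, with the worst case at t = 0. A short sketch (ours):

```python
import math

def softplus(t, gamma):
    """Numerically stable gamma * log(1 + exp(t / gamma))."""
    x = t / gamma
    return gamma * (max(x, 0.0) + math.log1p(math.exp(-abs(x))))

eps1 = 0.05
gamma = eps1 / math.log(2.0)

# Deviation from the threshold function (t)_+ over a grid of t values.
dev = max(abs(softplus(t, gamma) - max(t, 0.0))
          for t in (x / 100.0 for x in range(-500, 501)))
```

For t > 0 the deviation is γ·log(1 + exp(−t/γ)) and for t ≤ 0 it is γ·log(1 + exp(t/γ)); both are maximized at t = 0, where the value is γ·ln 2 = ε_1.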
Through a straightforward calculation, one may check that g 1 (0) = 1 and g 1 (t) − (t) + is maximized at t = 0. Furthermore, g 1 (t), g 1 (t), g 1 (t) are uniformly bounded by constants depending on 1 . As per our previous discussion, our goal is to show that |E [H * d (A, (·) + )] − E [H * d (B, (·) + )]| is sufficiently small when h < h * . We have |H d (W, (·) + , σ) − H d (W, g 1 , σ)| ≤ 1 n (5.8) for all σ and thus we will focus on the case g = g 1 . Equation 5.6 along with the definition of H * d further imply that: H * d (W, (·) + ) ≤ H * d (W, g 1 ) + 1 n H * d (W, g 1 ) ≤ H * d (W, (·) + ) + 1 n . Therefore: H * d (W, (·) + ) − H * d (W, g 1 ) ≤ 1 n . (5.9) With the above definitions, we now apply the Lindeberg's argument to the two random weighted graphs A and B. We note that A = A T and B = B T . Both A and B therefore can be represented as sets of m = n(n + 1)/2 i.i.d entries a ij and b ij respectively with i ≤ j. We fix an order on the entries in the set {(i, j) : i ≤ j} arbitrarily by iterating from i = 1, · · · , n and j = 1, · · · , i and switch each element from M to B in this order. Let (k, r) be the t th element of the sequence. Define the matrix J (t) with entries j (t) ij = A ij when (i, j) ≤ (k, r), j (t) ij = B ij when (i, j) > (k, r). Let J (t) /kr = J (t−1) /kr denote the matrix with entries at positions (k, r) and (r, k) set to 0 and all the remaining entries being identical to J (t) . Note that J 0 = A and J m = B. Lindeberg's argument relies on the following telescoping sum: E[F (B, g 1 )] − E[A, g 1 )] = m t=0 E[F (J (t) , g 1 )] − E[F (J (t−1) , g 1 )] . (5.10) We denote by F (l) the partial l-th derivative of F with respect to j kr . Then, using the third order Taylor expansion over the variable j (t) kr at a fixed value of the matrix J (t) /kr , along with the independence of the edge weights, we obtain. 
E[F (J (t) , g 1 )|J (t) /kr ] − F (J (t) /kr , g 1 ) − F (1) (J (t) /kr , g 1 )E[b kr ] − 1 2 F (2) (J (t) /kr , g 1 )E[(b kr ) 2 ] ≤ 1 3! F (3) ∞ E[|b kr | 3 ], E[F (J (t−1) , g 1 )|J (t) /kr ] − F (J (t) /kr , g 1 ) − F (1) (J (t) /kr , g 1 )E[a kr ] − 1 2 F (2) (J (t) /kr , g 1 )E[(a kr ) 2 ] ≤ 1 3! F (3) ∞ E[ a 3 kr ],(5. 11) Recall that a kr and b kr have the same first and second moment. Therefore, an application of the triangle inequality yields: E[F (J (t) , g 1 )|J (t) /kr ] − E[F (J (t−1) , g 1 )|J (t) /kr ] ≤ 1 3! F (3) ∞ E[|b kr | 3 ] + 1 3! F (3) ∞ E[ a 3 kr ] . (5.12) The expected cost of swap" from m kr to b kr is therefore bounded by the 3-d derivative and the absolute centred 3 rd moment of b ij and a ij . By considering the expectation of E[F (J (t) , g 1 )|J (t) /kr ] − E[F (J (t−1) , g 1 )|J (t) /kr ] over J (t) /kr , we obtain: E[F (J (t) , g 1 )] − E[F (J (t−1) , g 1 )] ≤ 1 3! F (3) ∞ E[|b kr | 3 ] + 1 3! F (3) ∞ E[ a 3 kr ]. (5.13) Substituting in Equation 5.10 yields: E[F (J (t) , g 1 )] − E[F (J (t−1) , g 1 )] ≤ n(n + 1) 2 1 3! F (3) ∞ E[|b kr | 3 ] + 1 3! F (3) ∞ E[ a 3 kr ] . (5.14) To bound F (3) ∞ , we now compute the derivatives of F for an arbitrary symmetric weight matrix J with entries j ij for i, j ∈ [n]. We recall that the derivatives in equation 5.11 are w.r.t the variable j kr = j rk . Note that d dj kr l g 1 (h − 1 √ d m j lm σ l σ m ) = −ġ 1 (h − 1 √ d m j km σ k σ m ) 1 √ d σ k σ r −ġ 1 (h − 1 √ d m j rm σ r σ l ) 1 √ d σ k σ r . We thus have: F (1) (J, g 1 ) = 1 ρ Z −1 σ ρġ 1 (h − 1 √ d m j km σ k σ m ) 1 √ d σ k σ r exp(−ρH d (g 1 , J (t) , σ)) + 1 ρ Z −1 σ ρġ 1 (h − 1 √ d l j rl σ r σ l ) 1 √ d σ k σ r exp(−ρH d (g 1 , J (t) , σ)) = 1 √ d ġ 1 (h − 1 √ d m j km σ k σ m )σ k σ r T 1 + 1 √ d ġ 1 (h − 1 √ d l j rl σ r σ l )σ k σ r T 2 . Since · is Gibbs average it is at most the max term. We have, by assumption, that sup tġ 1 is bounded by a constant (independent of n). As σ i = ±1 we obtain a bound order 1/ √ d. 
Consider, the derivative of the first term: d dj kr T 1 = − 1 d g 1 (h − 1 √ d m j km σ k σ m ) T 3 + ρ d (ġ 1 (h − 1 √ d m j km σ k σ m )) 2 T 4 + ρ d (ġ 1 (h − 1 √ d m j km σ k σ m ))(ġ 1 (h − 1 √ d l j rl σ r σ l )) T 5 − ρ d ( (ġ 1 (h − 1 √ d m j km σ k σ m )) ) 2 T 6 − ρ d ( (ġ 1 (h − 1 √ d m j km σ k σ m )) )( (ġ 1 (h − 1 √ d m j km σ k σ m )) ) T 7 = O( 1 d ) . Using similar computations, we show that the derivatives of each of the terms in d dj kr T 3 , d dj kr T 4 , · · · , d dj kr T 7 ). For instance, we have: d dj kr T 3 = − 1 d 3/2 ... g 1 (h − 1 √ d m j km σ k σ m ) + ρ d 3/2 (g 1 (h − 1 √ d l j km σ k σ m )ġ 1 (h − 1 √ d m j km σ k σ m )σ k σ r ) + ρ d 3/2 (g 1 (h − 1 √ d l j km σ k σ m )ġ 1 (h − 1 √ d l j rl σ r σ l )σ k σ r ) − ρ d 3/2 (g 1 (h − 1 √ d m j km σ k σ m ) (ġ 1 (h − 1 √ d m j km σ k σ m )σ k σ j ) − ρ d 3/2 (g 1 (h − 1 √ d m j km σ k σ m ) (ġ 1 (h − 1 √ d l j rl σ r σ l )σ k σ r ) . Similarly, the derivatives of T 2 are obtained by replacing k and m with r and l respectively. We thus obtain: F (2) ∞ ≤ C 2, ,ρ d . (5.15) Now, from repeated applications of the chain and product rule of differentiation, we see that the derivatives of each of the terms T 4 , T 5 , T 6 , T 7 will be O(d 3 2 ). We thus have: F (3) ∞ ≤ C 3, ,ρ d 3 2 (5.16) for some constant C. We recall that by assumption over B and the definition of A, we have, for large enough n: E[|a kr | 3 ] ≤ C a ( d n ), E[|b kr | 3 ] ≤ C b ( d n ) . Substituting the above bounds in Equation B.4, conditioning on each entry in turn, and summing over, we obtain: |E[F (A, g 1 )] − E[F (B, g 1 )]| ≤ n(n + 1) 2 2C 3, ,ρ d 3 2 (C a ( d n ) + C b ( d n )) ≤ C F, 1 ,ρ n √ d for some constant C F, 1 ,ρ dependent on 1 , ρ. Now, let 2 be arbitrary. 
For d ≥ C²_{F,ε_1,ρ}/ε_2², we have:

|E[F(A, g_{ε_1})] − E[F(B, g_{ε_1})]| ≤ ε_2 n.

We note that, using the triangle inequality:

|E[H*_d(A, (·)_+)] − E[H*_d(B, (·)_+)]|
≤ |E[H*_d(A, (·)_+)] − E[H*_d(A, g_{ε_1})]| + |E[H*_d(B, (·)_+)] − E[H*_d(B, g_{ε_1})]|
+ |E[H*_d(A, g_{ε_1})] − E[F(A, g_{ε_1})]| + |E[H*_d(B, g_{ε_1})] − E[F(B, g_{ε_1})]|
+ |E[F(A, g_{ε_1})] − E[F(B, g_{ε_1})]|
≤ (2ε_1 + 2 log 2/ρ + ε_2)n,

where we used Equations 5.8, 5.9 and 5.5. Let ε > 0 be arbitrary. Then, with ε_1 = ε/6, ε_2 = ε/3 and ρ = 6 log 2/ε, we obtain that for d ≥ C²_{F,ε_1,ρ}/ε_2²:

|E[H*_d(A, (·)_+)] − E[H*_d(B, (·)_+)]| ≤ (ε/3 + ε/3 + ε/3)n = εn.   (5.17)

Since ε was arbitrary, we equivalently obtain

|E[H*_d(A, (·)_+)] − E[H*_d(B, (·)_+)]| = o_d(1)n.   (5.18)

Next, we establish the concentration of H*_d(A, (·)_+) and H*_d(B, (·)_+). Let A and A^{(i,j)} be weight matrices sampled from P_A differing only at the edges (i, j), (j, i). Let a_ij and a'_ij denote the weight of the (i, j) edge in A and A^{(i,j)} respectively. We note that, from the 1-Lipschitzness of (·)_+ and the definition of H*_d as a minimum, we have:

|H*_d(A, (·)_+) − H*_d(A^{(i,j)}, (·)_+)| ≤ (2/√d)|a_ij − a'_ij|.   (5.19)

Therefore:

E[(H*_d(A, (·)_+) − H*_d(A^{(i,j)}, (·)_+))²] ≤ (4/d) E[(a_ij − a'_ij)²] = (8/d)(d/n)(1 − d/n) = (8/n)(1 − d/n).

Therefore, using the Efron-Stein inequality, we obtain:

Var(H*_d(A, (·)_+)) ≤ (1/2) · (n(n+1)/2) · (8/n)(1 − d/n) ≤ Cn,   (5.20)

for some constant C and large enough n. Chebyshev's inequality then yields:

Pr(|H*_d(A, (·)_+) − E[H*_d(A, (·)_+)]| ≥ εn) ≤ Cn/(ε²n²) = C/(ε²n) → 0 as n → ∞.

Therefore, we have H*_d(A, (·)_+) = E[H*_d(A, (·)_+)] + o(n) with high probability as n → ∞. This completes the proof of Lemma 11.
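The bounded-difference bound (5.19) can be sanity-checked exactly at small size, where D* is computable by enumerating all bisections. The sketch below (ours; the instance is arbitrary) perturbs one edge weight by δ and confirms that the optimum moves by at most (2/√d)·|δ|:

```python
import itertools
import math
import random

def min_bisection_deficit(W, n, d, h):
    """Exact D*(W, h) over zero-magnetization configurations (small n only)."""
    best = float("inf")
    for plus in itertools.combinations(range(n), n // 2):
        sigma = [-1] * n
        for i in plus:
            sigma[i] = 1
        f = [0.0] * n
        for (i, j), w in W.items():
            c = w * sigma[i] * sigma[j] / math.sqrt(d)
            f[i] += c
            f[j] += c
        best = min(best, sum(max(h - fi, 0.0) for fi in f))
    return best

rng = random.Random(1)
n, d, h = 8, 3, 0.1
W = {(i, j): rng.choice([-1.0, 0.0]) for i in range(n) for j in range(i + 1, n)}

delta = 1.0
W2 = dict(W)
W2[(0, 1)] = W[(0, 1)] + delta  # perturb a single edge weight

D0 = min_bisection_deficit(W, n, d, h)
D1 = min_bisection_deficit(W2, n, d, h)
```

Changing w_01 by δ moves the two affected local fields by at most δ/√d each, and each deficit term is 1-Lipschitz in its field; this gives the 2δ/√d bound for every fixed σ, and hence for the minimum over σ.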
Universality of extensive violation of h-stability (h > h*)

To prove universality in the regime h > h*, we introduce the following truncated deficit function:

T(W, h, σ) = Σ_{1≤i≤n} (h − (1/√d) Σ_j w_ij σ_i σ_j)_+^1,

where the function (·)_+^1 is defined as:

(x)_+^1 = 0 for x < 0; x for 0 ≤ x ≤ 1; and 1 otherwise.   (5.21)

Similar to the setup for h < h*, we restrict the Gibbs measure defined by the above Hamiltonian to be supported on the set of bisections. Therefore, we define N*(W, h) = max_{σ∈M_0} N(W, h, σ).

Lemma 12. Suppose h ∈ R satisfies N*(W, h) = n − Θ_d(1)n − o(n) with high probability as n → ∞. Then, for any h̃ > h, we have T*(W, h̃) ≥ Θ_d(1)n − o(n) with high probability as n → ∞.

Proof. N*(W, h) = n − Θ_d(1)n − o(n) implies that there exists ε, independent of d, such that for every partition, for large enough d, as n → ∞, at least εn vertices violate h-stability. Since h̃ > h, any vertex violating h-stability also violates h̃-stability. By the definition of (·)_+^1, each such vertex contributes at least (h̃ − h)_+^1 to the total truncated deficit at h̃, i.e., T(W, h̃, σ) ≥ (h̃ − h)_+^1 εn for all σ. We thus have T*(W, h̃) ≥ (h̃ − h)_+^1 εn with high probability for large enough d as n → ∞.

Next, we observe that, unlike the original deficit function, a large value of the truncated deficit directly implies the existence of a large number of vertices violating h-stability. This is expressed through the following lemma:

Lemma 13. Suppose h ∈ R satisfies T*(W, h) = Θ_d(1)n − o(n) with high probability as n → ∞. Then, we have N*(W, h) = n − Θ_d(1)n − o(n) with high probability as n → ∞.

We recall that Theorem 1 implies that when W corresponds to an Erdős-Rényi graph with anti-ferromagnetic interactions, then for any h > h*, N*(W, h) = n − Θ_d(1)n − o(n) with high probability as n → ∞. Therefore, Lemma 12 implies that T*(W, h) = Θ_d(1)n − o(n).
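The pointwise mechanism behind Lemma 12 (a vertex that violates h-stability contributes at least (h̃ − h)_+^1 to the truncated deficit at any h̃ > h) can be checked directly. A small sketch (ours, with an arbitrary sparse instance):

```python
import math
import random

def trunc(x):
    """(x)_+^1: clamp to [0, 1], as in Equation 5.21."""
    return min(max(x, 0.0), 1.0)

rng = random.Random(2)
n, d = 60, 4
W = {(i, j): -1.0 for i in range(n) for j in range(i + 1, n)
     if rng.random() < d / n}
sigma = [rng.choice([-1, 1]) for _ in range(n)]

fields = [0.0] * n
for (i, j), w in W.items():
    c = w * sigma[i] * sigma[j] / math.sqrt(d)
    fields[i] += c
    fields[j] += c

h, h_tilde = 0.0, 0.3
violations_at_h = sum(1 for fi in fields if fi < h)
T_at_h_tilde = sum(trunc(h_tilde - fi) for fi in fields)
```

Since trunc is monotone, f_i < h implies trunc(h̃ − f_i) ≥ trunc(h̃ − h), so the truncated deficit at h̃ dominates trunc(h̃ − h) times the number of h-violations.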
Subsequently, replacing the threshold function (·)_+ by the truncated threshold (·)_+^1 in the proof of Lemma 11 results in the following universality result:

Lemma 14. Let W be a weighted graph satisfying the assumptions in Theorem 2. Let h* be the threshold defined in Theorem 2. Then, for all h > h*, there exists an ε = ε(h) such that for large enough d, w.h.p. as n → ∞, we have:

T*(W, h) ≥ εn.   (5.22)

Combining the above proposition with Lemma 13, we obtain the following result:

Corollary 6. Let W and h be as in Lemma 14. Then:

N*(W, h) ≤ n(1 − Θ_d(1)) + o(n),   (5.23)

with high probability.

Proof of Theorem 2

We finally note that Corollaries 5 and 6 together imply Theorem 2.

6 Universality for Dense Graphs: Proof of Theorem 3

In this section, we consider dense weighted graphs whose edge weights are i.i.d. samples from N(0, 1). For simplicity, we include self-interactions, i.e., the diagonal terms w_ii. The proof can be generalized to exclude self-interactions by interpolating with the corresponding sparse model without loops. Let D(W, h, σ) denote the total h-deficit for the dense graph W, i.e.:

D(W, h, σ) = Σ_{1≤i≤n} (h − (1/√n) Σ_j w_ij σ_i σ_j)_+.

Analogous to Section 5, we let D*(W, h) = min_{σ∈M_0} D(W, h, σ), and define the following Hamiltonian for the dense graph:

H(W, g, σ) = Σ_{1≤i≤n} g(h − (1/√n) Σ_j w_ij σ_i σ_j).

We observe that the above dense Hamiltonian can be expressed as the sparse Hamiltonian H_d for the rescaled variables p_ij = √(d/n) w_ij:

H(W, g, σ) = H_d(p, g, σ) = Σ_{1≤i≤n} g(h − (1/√d) Σ_j p_ij σ_i σ_j).   (6.1)

Let A and J be random weight matrices corresponding to edge weights sampled from the sparse anti-ferromagnetic random graph with loops and the standard normal distribution, respectively, and let P = √(d/n) J. Repeating Lindeberg's argument while going from the matrix A to the matrix P, replacing each element in turn as in the previous section, we obtain:

|E[F(J^(t), g_{ε_1}) | J^(t)_{/kr}] − F(J^(t)_{/kr}, g_{ε_1}) − F^(1)(J^(t)_{/kr}, g_{ε_1}) E[p_kr] − (1/2) F^(2)(J^(t)_{/kr}, g_{ε_1}) E[(p_kr)²]| ≤ (1/3!)
F (3) ∞ E[|p kr | 3 ], E[F (J (t−1) , g 1 )|J (t) /kr ] − F (J (t) /kr , g 1 ) − F (1) (J (t) /kr , g 1 )E[a kr ] − 1 2 F (2) (J (t) /kr , g 1 )E[(a kr ) 2 ] ≤ 1 3! F (3) ∞ E[ a 3 kr ], (6.2) where J (t) as before denotes the intermediate matrix. E[F (J (t) , g 1 )|J (t) /kr ] − E[F (J (t−1) , g 1 )|J (t) /kr ] ≤ 1 2 F (2) ∞ (E[(a kr ) 2 ] − E[(p kr ) 2 ]) + 1 3! F (3) ∞ E[|p kr | 3 ] + 1 3! F (3) ∞ E[ a 3 kr ] ≤ 1 2 F (2) ∞ ( d 2 n 2 ) + 1 3! F (3) ∞ E[|p kr | 3 ] + 1 3! F (3) ∞ E[ a 3 kr ]. We have the following bounds on the absolute third moment: E[|a kr | 3 ] ≤ C b ( d n ) E[|p kr | 3 ] ≤ C m ( d n ) 3 2 . (6.3) Utilizing the above bounds, along with the bounds on F (2) ∞ and F (3) ∞ in Equations 5.15,5.16 and taking expectation over J (t) /kr , we obtain: E[F (J (t) , g 1 )] − E[F (J (t−1) , g 1 )|] ≤ C 2, ,ρ d n 2 + C 3, ,ρ C b 3!n √ d + 2C 3, ,ρ (C m ) 3!n 3 2 . Finally, summing over the n(n+1) 2 variables and using n+1 n ≤ 2∀n ∈ N results in the following bound: |E[F (P, g 1 )] − E[F (A, g 1 )]| ≤ C 2, ,ρ d + 2C 3, ,ρ C b n 3! √ d + 4C 3, ,ρ (C m ) 3! √ n (6.4) Now, let 2 > 0 be arbitrary. Suppose: d ≥ ( 3! 2C 3, ,ρ C b 2 ) 2 , n ≥ max ( C 2, ,ρ d 2 , ( 4C 3, ,ρ (C m ) 3! 2) 2 ), (6.5) then we have: |E[F (P, g 1 )] − E[F (A, g 1 )]| ≤ 2 3 n + 2 3 n + 2 3 n = 2 n. Next, similar to the proof of Theorem 2, we use the error bounds in Equations 5.8, 5.9, 5.5 to obtain, for large enough d, n satisfying Equation 6.5: E[H * d (A, (·) + )] − E[H * d (P, (·) + )] ≤ (2 1 + 2 log 2 ρ + 2 )n. (6.6) Then, for any > 0 setting 1 = /6, 2 = /3, ρ = 2 log 2/ and d, d satisfying Equation 6.5, results in: Since H * d (P, (·) + ) = D * (J, h) we obtain the following result: E[H * d (A, (·) + )] − E[H * d (P, (·) + )] ≤ n. (6.7) Therefore, E[H * d (A, (·) + )] − E[H * d (P, (·) + )] = o d (1)n + o(n) . Lemma 15. Let A, J be arbitrary weight matrices with i.i.d random entries a ij , j ij satisfying assumptions in Theorem 2, 3 for parameters d, n and n respectively. 
Then: |D * (A, h) − D * (J, h)| = o d (1)n + o(n), (6.10) with high probability as n → ∞. Since Theorem 2 implies that D * (A, h) = o d (1)n + o(n) for h < h * with high probability as n → ∞, considering the limit d → ∞, we obtain: Lemma 16. Let J be a weighted graph on n nodes satisfying the assumptions in Theorem 3. Let h * be the threshold defined in Theorem 2, then for any h < h * , with high probability as n → ∞: D * (J, h) = o(n). (6.11) We note that Lemma 10 also applies to the deficit D * (J, h) and the maximum number of h-stable vertices N * (J, h) for dense graphs. Therefore, we obtain the following Corollary: Corollary 7. Let J, h * be as in Proposition 16. Then for any h < h * N * (J, h) ≥ o(n). (6.12) Similarly, using the truncated deficit function (·) + 1 instead of (·) + defined in Equation 5.21, we obtain the following results: Lemma 17. Let J, h * be as in Proposition 16 then for any h > h * T * (J, h) = Θ(n). (6.13) Corollary 8. Let J, h * be as in Proposition 16. Then for any h > h * : N * (J, h) ≤ n(1 − Θ(1)) + o(n). (6.14) Proof of Theorem 3 Corollaries 7 and 8 together imply Theorem 3. Extension of the result to all configurations In this paper, we restrict our attention to theorems concerning bisections, as this approach enables us to concurrently establish the universality result for distributions with arbitrary means of the edges w ij . For specific cases where the mean is 0, such as the Proof. Assume that N * ≥E (W, h) ≤ (1 − )n for some > 0 and large enough d, n. Then, for any configuration σ, either E(σ) <Ẽ or at-least n vertices violateh stability. In either case, we obtain: D ≥Ẽ (W,h, σ) ≥ 1≤i≤n h − 1 √ d j w ij σ i σ j + + (nE + 1 √ d i,j≤n w ij σ i σ j ) + = i h −h +h − 1 √ d j w ij σ i σ j + + n(Ẽ − E + E − H d (σ)) + ≥ max n h −h + , n(Ẽ − E) + . 
This contradicts D * ≥E (W, h) = o d (1)n + o(n), proving that N * ≥Ẽ (W,h) = n(1 − o d (1)) − o(n) Let g be a smooth uniform approximation of the threshold function () + as in Equation 5.7. Analogous to 1.4, we define the following Hamiltonian corresponding to the maximal energy deficit D ≥E function defined in Equation 3: H ≥E,d (W, g, σ) = 1≤i≤n g h − 1 √ d j w ij σ i σ j + g(nE + 1 √ d i,j≤n w ij σ i σ j ),(7.1) and the associated partition function: Z ≥E (W, g) = σ∈M 0 exp (−ρH ≥E,d (W, g, σ)) F ≥E (W, g) = 1 ρ log Z ≥E,d (W, g). The rest of the proof follows that of Theorem 2. Let A, B be two random weight matrices satisfying the assumptions in Theorem 2. We again apply the Lindeberg's method to the function F ≥E . The derivatives of F ≥E now involve additional terms due to the addition of the term g( 1 √ d i,j≤n w ij − nE) to the Hamiltonian. However, we recover the same bounds on the derivatives as Equations 5.15, 5.16. For instance, with J as an arbitrary symmetric matrix, and g 1 as defined in Equation 5.7, we have: F (1) ≥E (J, g 1 ) = 1 √ d ġ 1 (h − 1 √ d m j km σ k σ m )σ k σ r + 1 √ d ġ 1 (h − 1 √ d l j rl σ r σ l )σ k σ r + 1 √ d ġ 1 ( 1 √ d i,j≤n w ij − nE) . Proceeding similarly, we obtain that: F (3) ≥E ∞ ≤ C 3, ,E,ρ d 3 2 . (7.2) Now, applying equations 5.12 ,5.14, and summing over all w i,j , we obtain: |E[F ≥E (A, g 1 )] − E[F ≥E (B, g 1 )]| ≤ C F,E, 1 ,ρ n √ d , Where C 3, 1 ,E , C F,E, 1 ,ρ denote constants dependent on E, 1 , ρ. Subsequently, following the proof of Equation 5.18, we obtain: Lemma 19. Let A, B be arbitrary random weight matrices with i.i.d random entries satisfying the assumptions in Theorem 2. For any h, E ∈ R, we have, D * ≥E (A, h) − D * ≥E (B, h) = o d (1)n + o(n), (7.3) with high probability as n → ∞. Proposition 4. Let W d,n be a family of random weighted graphs satisfying the assumptions in 2. Let h * be the threshold defined in Theorem 2. 
Then, for every and h ∈ [−0.1, h * ), for any E ∈ R such that E cor (h) < E < E max (h), N * ≥E (W, h) ≥ n(1 − o d (1)) − o(n). (7.4) Proof. Let W be a random weighted graph from the sparse anti-ferromagnetic model with parameters d, n. Choose E <Ẽ < E max (h). Then, there exists ah satisfying h <h < h * such thatẼ = E max (h). Therefore, Corollary 3 implies that D * ≥Ẽ (W ,h) = o d (1)n + o(n). Applying Lemma 19 then yields D * ≥Ẽ (W,h) = o d (1)n + o(n). Subsequently, we apply Lemma 18 to obtain N * ≥E (W, h) = n(1 − o d (1)) − o(n) Next, we use the truncated () + 1 defined in Equation 5.21, to define the truncated maximal energy deficit function: T ≥Ẽ (W,h, σ) = 1≤i≤n h − 1 √ d j w ij σ i σ j + 1 + n(E + 1 n √ d i,j≤n w ij σ i σ j ) + 1 ,(7.5) The following results relate the truncated maximal energy deficit cut to the maximum number of h-stable vertices amongst configurations having sufficient energy: Lemma 20. Suppose E, h ∈ R satisfy N * ≥E (W, h) = n − Θ d (1)n − o(n) with high probability as n → ∞. Then for anyẼ > E andh < h, we have T * ≥Ẽ (W,h) ≥ (Θ d (1))n + o(n) with high probability as n → ∞. Proof. For any configuration σ, for large enough d, ∃ such that, we have either: 1. E(σ) < E. 2. N ≥E (W, h, σ) ≤ n − n. In either case, we obtain: T ≥E (W,h, σ) ≥ 1≤i≤n h − 1 √ d j w ij σ i σ j + 1 + n(E + 1 n √ d i,j≤n w ij σ i σ j ) + 1 = i h − h + h − 1 √ d j w ij σ i σ j + 1 + n(Ẽ − E + E − H d (σ)) + 1 ≥ max n h − h + 1 , n(Ẽ − E) + 1 Lemma 21. Suppose E, h ∈ R satisfy T * ≥E (W, h) = Θ d (1)n − o(n) with high probability as n → ∞. Then, we have N * ≥E (W, h) = n − (Θ d (1))n − o(n) with high probability as n → ∞. Proof. T * ≥E (W, h) = Θ d (1)n − o(n) implies that there exists an > 0 such that for large enough d, any configuration σ satisfies T E (W, h, σ) ≥ n. Since () + 1 is bounded by 1, we obtain that any configuration with (E + 1 n √ d i,j≤n w ij σ i σ j ) + 1 > 0 must satisfy h − 1 √ d j w ij σ i σ j > 0 for at-least n vertices. Proposition 5. 
Let $W_{d,n}$ be a family of random weighted graphs satisfying the assumptions in Theorem 4. Let $h^*$ be the threshold defined in Theorem 2. Then, for every $h \in [-0.1, h^*)$ and every $E > E_{\max}(h)$,
$$N^*_{\ge E}(W,h) \le n(1 - \Theta_d(1)) + o(n). \qquad (7.6)$$
Propositions 4 and 5, along with Lemma 18, imply Theorem 4.

Similarly, for the proof of Theorem 5, we consider the constrained Hamiltonian for the dense graph:
$$H_{\ge E}(W,g,\sigma) = \sum_{1\le i\le n} g\Big(h - \tfrac{1}{\sqrt n}\sum_j w_{ij}\sigma_i\sigma_j\Big) + g\Big(nE + \tfrac{1}{\sqrt n}\sum_{i,j\le n} w_{ij}\sigma_i\sigma_j\Big). \qquad (7.7)$$
Next, analogous to Equation 6.1, we introduce the rescaled variables $p_{ij} = \sqrt{d/n}\, w_{ij}$ and observe that the above Hamiltonian $H_{\ge E}$ reduces to the constrained Hamiltonian $H_{\ge E,d}$ for sparse graphs:
$$H_{\ge E}(w,g,\sigma) = H_{\ge E,d}(p,g,\sigma) = \sum_{1\le i\le n} g\Big(h - \tfrac{1}{\sqrt d}\sum_j p_{ij}\sigma_i\sigma_j\Big) + g\Big(nE + \tfrac{1}{\sqrt d}\sum_{i,j\le n} p_{ij}\sigma_i\sigma_j\Big). \qquad (7.8)$$
Therefore, analogous to Lemma 16, we utilize the bound in Equation 7.2 to obtain:

Lemma 22. Let $J$ be a weighted graph on $n$ nodes satisfying the assumptions in Theorem 3. Suppose $h \in [-0.1, h^*)$. Let $E_{\max}(h)$ be the threshold defined in Theorem 2. Then for any $E > E_{\max}(h)$, with high probability:
$$D^*_{\ge E}(J,h) = o(n). \qquad (7.9)$$
The rest of the proof of Theorem 5 for $E < E_{\max}(h)$ follows that of Theorem 4; namely, we utilize Lemma 20 to obtain the transition in $N^*_{\ge E}$ around $E_{\max}$. Similarly, from Lemma 6, we have that $N^*_{\ge E}(W,h) = n - \Theta_d(1)n$ for any $E > E_{\max}(h)$ and $h < h^*$. Therefore, we apply the Lindeberg argument to the truncated deficit function, along with Lemmas 20 and 21, to obtain the statement of Theorem 5 for $E > E_{\max}$. The proofs for the minimal energy threshold for $h \in [h_{cor}, h^*)$ rely on identical arguments, with the deficit function replaced by the maximal energy deficit function for minimal energy defined in Definition 5. In particular, this provides a proof of the values $E_{\min}(0) \approx -0.791$ and $E_{\max}(0) \approx -0.2865$ while restricting to bisections for the SK model, validating the theoretical-physics predictions for these values in Bray and Moore [1981].
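As a concrete illustration of the objects appearing in these theorems (again not used in the proofs), the sketch below runs single-spin-flip greedy dynamics on a dense Gaussian instance, using the same sign convention as the deficit function above and, for simplicity, ignoring the bisection constraint. At a 1-flip local optimum every vertex is 0-stable by construction, and the number of $h$-stable vertices can only decrease as $h$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2)        # dense symmetric Gaussian weights
np.fill_diagonal(W, 0)

sigma = rng.choice([-1.0, 1.0], size=n)

# Greedy single-spin-flip dynamics: flip any vertex whose stability
# sigma_i * (W sigma)_i / sqrt(n) is negative. Each flip strictly
# increases sigma^T W sigma, so the loop terminates at a local optimum.
while True:
    stab = sigma * (W @ sigma) / np.sqrt(n)
    k = np.argmin(stab)
    if stab[k] >= 0:
        break
    sigma[k] = -sigma[k]

stab = sigma * (W @ sigma) / np.sqrt(n)
assert np.all(stab >= 0)          # every vertex is 0-stable at a local optimum
n_stable = lambda h: int(np.sum(stab >= h))
```

Counting `n_stable(h)` for increasing `h` gives a crude picture of how the fraction of $h$-stable vertices decays away from $h = 0$ on a single instance.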
Conclusions and Open Problems

In this work, we analyzed several phenomena related to single-spin-flip stability in random graphs and spin glasses. There are several promising directions for future work:

1. Geometry of solutions: Given the existence of nearly $h$-stable solutions, it is natural to ask about the geometry of the solution set. In Figure 5, we observe that the second-moment entropy density at the fixed cut size $x^*$, as a function of the overlap, drops steeply below 0 near overlaps $-1$ and $1$. By the Markov inequality, this implies that for large enough $d$, with high probability as $n\to\infty$, no pair of $h$-stable configurations has an overlap in a certain range. Such a conclusion can also be obtained for the total number of pairs of configurations over all cut sizes by maximizing over the value of $x$ for each value of the overlap $\omega$. However, for small enough $h > 0$, the second-moment entropy density does not fall below 0 for all overlaps $\omega\in[-1,1]$; we illustrate this in Fig. 10 for $h = 0.05$. Using the small-set expansion of regular graphs, [Behrens et al., 2022] proved that, with high probability, any two $h$-stable partitions have Hamming distance at least $C(d)n$ for some constant $C$ possibly dependent on $d$. Such a separation property, also known as the "Overlap Gap Property", has been linked to algorithmic hardness in a recent line of work [Gamarnik, 2021]. We leave to future work the relation between the frozen property and the geometry of approximately $h$-stable optima in Erdős-Rényi graphs, while accounting for the violations. It could also be useful to investigate the universality properties of such gaps.

2. Universality and the existence of fully $h$-stable partitions: Unlike the case of sparse Erdős-Rényi graphs, the SK model might contain configurations with all vertices being $h$-stable.
We leave to future work the extension of our existence and universality results from configurations containing $n - o(n)$ $h$-stable vertices to configurations in which all vertices are $h$-stable. In the SK model, such configurations may even exhibit an overlap gap property similar to that of regular graphs. We believe the analysis of such properties, and of any associated universality, to be a promising research direction. A concurrent work by Minzer, Sah, and Sawhney recently proved such a result for dense Erdős-Rényi graphs from $G(n, 1/2)$.

3. Extension to $p$-spin Ising models and hypergraphs: We believe that our proof techniques can be generalized to single-flip stability in models involving interactions between $p \ge 3$ spins, such as $p$-spin models and hypergraphs.

4. Extensions of universality and the limiting entropy density: As mentioned in Sections 1 and 7, our results corroborate the predictions in Bray and Moore [1981] for $E_{\min}(0)$ and $E_{\max}(0)$. Further inspection of the numerical values reveals that even the values of the first-moment entropy density and $E_{cor}(0)$ obtained through Theorem 4 for sparse graphs with antiferromagnetic interactions match the ones reported in Bray and Moore [1981]. This leads us to conjecture that a large number of properties of the $h$-stable configurations obey universality, including the first- and second-moment entropy densities. Furthermore, the first-moment entropy density we obtain at specific values of the energy matches the quenched entropy density in Bray and Moore [1981], defined as the expectation of the logarithm of the number of local optima at the given energy level. We therefore conjecture that the first-moment entropy density asymptotically matches the quenched one. More precisely, let $X(E_1, E_2, h)$ denote the number of $h$-stable configurations with normalized energy in the range $[E_1, E_2]$ for some $E_1 < E_2$.
We conjecture that:
$$\lim_{n\to\infty}\tfrac1n E[\log(1 + X(E_1,E_2,h))] = \lim_{n\to\infty}\tfrac1n \log(1 + E[X(E_1,E_2,h)]), \qquad (8.1)$$
for $E_1, E_2, h$ such that the RHS is positive. Here we use $\log(1+X(E_1,E_2,h))$ instead of $\log(X(E_1,E_2,h))$ to ensure that the term remains defined when $X(E_1,E_2,h) = 0$.

A Proof of Proposition 1

A.1 Setup and Preliminary Results

We utilize the configuration model with edge repetitions and loops allowed. To compute $E[X(z,h,r)]$, i.e., the first moment conditioned on the cut size, we further subdivide the count of partitions according to the number of edges within each partition. Consider a fixed partition $V_1, V_2$ of the set of vertices. Let $E_{ij}$ denote the set of edges having endpoints in the sets $V_i, V_j$. This partitions the set of edges into the subsets $E_{11}, E_{12}, E_{22}$. Let the cardinalities of $V_1, V_2$ be $\alpha n$ and $(1-\alpha)n$ respectively; equivalently, the average magnetization is given by $2\alpha - 1$. Let $z_1 n, z_2 n, zn$ denote the cardinalities of the sets $E_{11}, E_{22}, E_{12}$ respectively. By assumption, we further have $|E_{12}| = zn = (d/4)n + x\sqrt{d/2}\,n$. Let $X(z, z_1, z_2, \alpha, h, r)$ denote the number of partitions satisfying the cardinality constraints and having at least $rn$ $h$-stable vertices. The first moment $E[X(z,h,r)]$ can therefore be expressed as:
$$E[X(z,h,r)] = \sum_{z_1, z_2, \alpha} E[X(z, z_1, z_2, \alpha, h, r)], \qquad (A.1)$$
where the sum is over $z_1, z_2$ satisfying $(z_1 + z_2 + z)n = (d/2)n$. Again, similar to Equation 3.2, we have:
$$\limsup_{n\to\infty}\tfrac1n \log E[X(z,h,r)] = \sup_{z_1,z_2,\alpha}\limsup_{n\to\infty}\tfrac1n \log E[X(z,z_1,z_2,\alpha,h,r)]. \qquad (A.2)$$
Now, consider the probability distribution over graphs under the configuration model, conditioned on fixed values of $z_1, z_2, z$. The resulting distribution can be described as being obtained through two independent assignments of a fixed number of half-edges to vertices within each partition, corresponding to in-partition and cross-partition edges. Consider the partition $V_1$.
First, 2z 1 half-edges are assigned independently to the vertices in V 1 and subsequently paired with each other. The second assignment matches z cross-partition half-edges independently with vertices in V 1 . Similarly, the corresponding assignment of the remaining half-edges to vertices in V 2 is performed independently. We first note that the first moment at a fixed cut size and a fixed value of r can be expressed as a sum over all partitions of the product of 1) the probability of a random graph having the given number of edges and vertices in the two partitions, and 2) the probability of h-stability being satisfied conditioned on the cardinalities of the vertices and edges within and across partitions. Let E(σ, z, z 1 , z 2 , α) denote the event of the configuration σ having z 1 n and z 2 n edges in the two partitions V 1 and V 2 of size αn and (1 − αn). Similarly, let E opt (σ, h, r) denote the event of configuration σ satisfying h-stability for any subset of r vertices. We have: E[X(z, z 1 , z 2 , α, h, r)] = σ P [E(σ, z, z 1 , z 2 , α)]P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)]. (A.3) Here the first term denotes the probability of a partition satisfying the cardinality constraints z 1 , z 2 , z, α, while the second term is the probability of such a partition having at-least r h-stable vertices. The first term P [E(σ, z, z 1 , z 2 , α)]) is given by Lemma 3.2 in [Gamarnik and Li, 2018]: P [E(σ, z, z 1 , z 2 , α)] = n (1/2 + α)n dn 2z 1 n, 2z 2 n, zn, zn ((1/2 + α)n) (2z 1 +z)n ((1/2 − α)n) (2z 2 +z)n (zn)! (A.4) × F (2z 1 n)F (2z 2 n)n −dn (F (dn)) −1 , where F (m) = m! (m/2)!2 m/2 denotes the number of perfect matchings on a set of m nodes. We now note the following lemma from Gamarnik and Li [2018], characterizing the leading exponential terms in P [E(σ, z, z 1 , z 2 , α)]: Lemma 23 (Equations 35,36 in [Gamarnik and Li, 2018]):). 
1 n log P [E(σ, z, z 1 , z 2 , α)] = −d log 2 + 4(z 1 − z 2 )α − (2 + 2d)α 2 − z log z − z 1 log 2z 1 − z 2 log 2z 2 + d/2 log d + o α (α 2 )d + o(1)O d ( √ d)α − 2dα 2 + o α (α 2 )d. (A.5) 1 n log P [E(σ, z, z 1 , z 2 , α)] ≤ z(− log 4 − 4α 2 ) + 2z 1 (− log 2 + 2α) + 2z 2 (− log 2 − 2α) − z log z − z 1 log 2z 1 − z 2 log 2z 2 + d/2 log d + o(1). (A.6) Let the dominating term on α in the RHS of Equation A.6 be denoted by q(α, d). Then, q(α, d) = O( √ d)α− 2dα 2 + o α (α 2 )d, implying that lim q(α, d) = −∞ as d → ∞ for α = ω( 1 √ d ). Furthermore, the dominating terms in the above bound are maximized when z 1 = z 2 , implying: 1 n log P [E(σ, z, z 1 , z 2 , α)] ≤ log 2 − 2x 2 + o d (1) + q(α, d) + o(1) . (A.7) The above result further allows us to prove throughout the subsequent discussion, we may restrict ourselves to α = O( 1 √ d ). This is shown through the following lemma: Lemma 24. Define g(α, d) = lim sup n→∞ 1 n log P [E(σ, z, z 1 , z 2 , α)]P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)].Then there exists constant C such that lim sup d→∞ sup α≤C/ √ d g(α, d) = lim sup d→∞ sup α g(α, d) Proof. From Lemma 23, we have that: 1 n log P [E(σ, z, z 1 , z 2 , α)] ≤ log 2 − 2x 2 + o d (1) + q(α, d) + o(1), (A.8) where lim sup d→∞ q(α, d) = −∞ if α = ω( 1 √ d ). Since, P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)] < 1, we further have that 1/n log P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)] is bounded uniformly in α by 0. Therefore, we have: 1 n log P [E(σ, z, z 1 , z 2 , α)]P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)] ≤ log 2 − 2x 2 + o d (1) + q(α, d) + o(1). (A.9) Now, for any constant C: lim d→∞ sup α≤C/ √ d g(α, d) ≤ lim d→∞ sup α g(α, d). Therefore, suppose no constant satisfies lim d→∞ sup α≤C/ √ d g(α, d) = lim d→∞ sup α g(α, d) , then considering large enough C yields lim d→∞ sup α g(α, d) = −∞. The term P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)], however, depends on h, r. 
To bound this term, we perform the simpler computation of the probability of a fixed subset of r vertices being h-stable. Let E opt (σ, h, V r ) denote the event of the configuration σ satisfying h-stability for a given subset V r of r n n vertices. For instance, V r = [r] denotes the subset of the first r vertices. We note that, by symmetry, P [E opt (σ, h, V r )|E(σ, z, z 1 , z 2 , α)] is the same for any of the n rn subsets of size rn. Therefore, through a union bound, we obtain the following: P [E opt (σ, h, r)|E(σ, z, z 1 , z 2 , α)] ≤ n rn P [E opt (σ, h, [r])|E(σ, z, z 1 , z 2 , α)] . (A.10) Conditioning on z 1 , z 2 , z under the configuration leads to the edge assignments in V 1 being independent of those in V 2 . Let σ ∈ {−1, 1} n be a fixed configuration. Let r (1) n , r (2) n be such that the number of vertices amongst the first r n vertices in V 1 , V 2 are r (1) n (1/2 + α)n, r n (1/2 − α)n respectively. We have r (1) n (1/2 + α)n + r A.2 Poissonization To compute P[O 1 ], we note that under the configuration model, conditioned on z 1 , z, the distribution of the number of neighbours of vertices within V 1 is independent of the number of neighbours in V 2 . This can be described through the following generative model: Let µ 1 , µ 2 , N ∈ N be fixed non-negative integers. Let E i , 1 ≤ i ≤ N denote random variables generated by assigning µ 1 half-edges to N vertices independently under the configuration model. Equivalently, the distribution can be described as that of µ 1 balls thrown independently into n bins. Similarly, let F i , 1 ≤ i ≤ N be generated through an assignment of µ 2 half-edges to N vertices independent of E i . Analogous to [Gamarnik and Li, 2018], we introduce the following function K(n, µ 1 , µ 2 , h, r) P[E i ≥ F i + h √ d, 1 ≤ i ≤ rn]. 
(A.12) By letting E i and F i be out-partition and in-partition edges of a vertices in V 1 or V 2 , we obtain: P [E opt (σ, h)|E(σ, z, z 1 , z 2 , α)] = K((1/2 + α)n, zn, 2z 1 n, h, r (1) n )K((1/2 − α)n, zn, 2z 2 n, h, r (2) n ) . To compute the probability of optimality, Gamarnik and Li [2018] utilized the poissonization technique by conditioning on the total number of half-edges assigned to a particular partition. This can be formalized through the following lemma: Lemma 25. [Gamarnik and Li, 2018] Consider the configuration model where µ half-edges are assigned independently and uniformly to n vertices. Let E i denote the number of half-edges assigned to the i th vertex. Let (B i ) i∈[n] be a sequence of independent Poisson variables with the mean µ/n. For any sequence of non-negative integers (t i ) i∈ [n] , summing up to µ, the resulting joint distribution of t i can asymptotically be described as follows: P[E i = t i , 1 ≤ i ≤ n] = P[B i = t i , 1 ≤ i ≤ n| n i=1 B i = µ] = Θ µ ( √ µ)P[B i = t i , 1 ≤ i ≤ n]. (A.13) The above lemma expresses the fact that conditioned on the total number of edges in a partition, the joint probability of obtaining a particular distribution of the number of edges for different vertices is approximately factorized. Under the configuration model, the above lemma can be applied to the assignment of in-partition and cross-partition half-edges to vertices in V 1 and V 2 . Now, consider the event of the first r n n vertices simultaneously satisfying h-stability. Namely, let: S(µ 1 , µ 2 , h, r) = {((t i , s i )) 1≤i≤n ∈ (Z ≥0 ) 2n : t i ≥ s i + h √ d, 1 ≤ i ≤ r n n; n i=1 t i = µ 1 ; n i=1 s i = µ 2 }. We have P[E i ≥ F i + h √ d, 1 ≤ i ≤ rn] = S(µ 1 ,µ 2 ,h) P[E i = t i , 1 ≤ i ≤ n]P[F i = s i , 1 ≤ i ≤ n] . Using Lemma 25, the terms P[E i = t i , 1 ≤ i ≤ n] and P[F i = s i , 1 ≤ i ≤ n] can be approximated through sequences of independent Poisson random variables. We therefore have: Lemma 26. 
Let (B i ) i∈[N ] , (C i ) i∈[N ] be sequences of independent Poisson variables with the means µ 1 /n,µ 2 /n. Then, for any h ∈ R, r N , ∈ R following holds: .14) where B i , C i denote independent Poisson variables with means µ 1 n and µ 2 n respectively. We denote the term P[B 1 ≥ C 1 + h √ d] by P 1 (h). Thus the probability of optimality is reduced to computing the large deviation rate of a single variable (number of edges) under independent constraints on each vertex. This is achieved through generalizations of large deviation results in Gamarnik and Li [2018] to the case of r < 1 and h = 0. K(n, µ 1 , µ 2 , h, r) = θ µ 1 ( √ µ 1 )θ µ 2 ( √ µ 2 )P n i=1 B i = µ 1 , n i=1 C i = µ 2 B i ≥ C i + h √ d, 1 ≤ i ≤ rn (P[B 1 ≥ C 1 + h √ d]) rn . (A A.3 Large Deviations The first result provides the large deviations rate for sums of independent variables arising from two distributions supported on lattices: Lemma 27. Consider two Lattices L 1 and L 2 defined as: L 1 = {b (1) 1 + z 1 h (1) 1 , b (1) 2 + z 2 h (1) 2 , · · · , b (1) k + z k h (1) k , z i ∈ Z, 1 ≤ i ≤ k}, L 2 = {b (2) 1 + z 1 h (2) 1 , b (2) 2 + z 2 h (2) 2 , · · · , b (2) k + z k h (2) k , z i ∈ Z, 1 ≤ i ≤ k}, . Let P 1 and P 2 be two sequences of discrete measures supported on L 1 , L 1 defined on the sigma algebra generated by point sets on L 1 , L 2 . Therefore, P 1 , P 2 assign a non-negative probability mass to each point in L 1 , L 2 . : Let X 1 , X 2 , . . . , X n be independent random vectors in R k , distributed as follows: X i ∼ P 1 , 1 ≤ i ≤ r n n, P 2 , otherwise. with parameters b (1) i , h (1) i and b (2) i , h (2) i ∈ R, 1 ≤ i ≤ k denoting the starting points and periods of the two lattices. Let M 1 (θ) = E P 1 [e θ,X ], M 2 (θ) = E P 2 [e θ,X ] be the moment generating functions of X 1 , X 2 respectively with the corresponding rate functions Λ 1 (θ) log M 1 (θ) and Λ 2 (θ) log M 2 (θ). Suppose that M 1 (θ) < ∞ and M 2 (θ) < ∞ for all θ ∈ R d . 
Then, the following large deviations result holds for S n = n i=1 X i : lim n→∞ 1 n log P[S n /n = y] = − θ * , y + rΛ 1 (θ * ) + (1 − r)Λ 2 (θ * ). (A.15) where θ * is the unique solution to the equation y = r∇Λ 1 (θ * ) + (1 − r)∇Λ 2 (θ * ). Proof. For 1 ≤ i ≤ rn, define measuresP 1 for X i : through the following exponential tilting: dP 1 dP 1 (z) = e θ * ,z −Λ 1 (θ * ) . (A.16) Analogously, for X j with rn ≤ j ≤ n, letP 2 be the measure defined by: dP 2 dP 2 (z) = e θ * ,z −Λ 2 (θ * ) . (A.17) Since P 1 , P 2 are discrete, the above tilting corresponds to reweighing the probability mass of each point in the corresponding lattice. Therefore, equations A.16, A.17 define the measuresP 1 ,P 2 uniquely. Furthermore,P 1 ,P 2 are probability measures since: R d dP 1 = R d e θ * ,z −Λ 1 (θ * ) dP 1 = 1 M 1 (θ * ) R d e θ * ,z dP 1 = 1 R d dP 2 = R d e θ * ,z −Λ 1 (θ * ) dP 1 = 1 M 2 (θ * ) R d e θ * ,z dP 2 = 1 (A.18) We note that under the measure changes defined by Equations A.16 and A.17, the means of the distributions can be evaluated as: m 1 = Eμ 1 [X] = 1 M 1 (θ * ) R d ze θ * ,z dP (1) = ∇Λ 1 (θ * ), m 2 = Eμ 2 [X] = 1 M 2 (θ * ) R d ze θ * ,z dP (2) = ∇Λ 2 (θ * ). As expected, we have rm 1 + (1 − r)m 2 = r∇Λ 1 (θ * ) + (1 − r)∇Λ 2 (θ * ) = y. LetX 1 ,X 2 , . . . ,X n be independent random vectors distributed according to the above measure changes: X i ∼ P 1 , 1 ≤ i ≤ rn, P 2 , otherwise. Since an exponential tilting of a measure doesn't affect its support, the measuresμ (1) andμ (2) still possess the biases and periods given by b (1) i , h (1) i and b (2) i , h (2) i respectively. LetS (1) n = rn i=1X i and S (2) n = n i=rn+1X i . LetS (1) n = n i=1X i =S(1) n +S (2) n . We next define the following three measures: P n ({x}) = P[S n /n = x], P (1) n ({x}) = P[(S (1) n − rnm 1 )/ √ rn = x], P (2) n (x) = P[(S (2) n − (1 − r)nm 2 )/ √ n − rn = x]. 
We have: P[S n /n = y] =µ n ({y}) = n i=1 z i =yn rn i=1 P 1 (dz i ) rn j=rn+1 P 2 (dz j ) = n i=1 z i =yn e − n i=1 θ * ,z i +n(rΛ 1 (θ * )+(1−r)Λ 2 (θ * )) rn i=1μ 1 (dz i ) rn j=rn+1μ 2 (dz j ) =e −n θ * ,y +n(rΛ 1 (θ * )+(1−r)Λ 2 (θ * ))μ n ({y}). (A.19) Therefore, we obtain: 1 n log P[S n /n = y] = − θ * , y + rΛ 1 (θ * ) + (1 − r)Λ 2 (θ * ) + 1 n logμ n ({y}). (A.20) Let Σ 1 , Σ 2 be the covariance matrices of P 1 , P 2 respectively. Letp 1 (x),p 2 (x) be the densities of normal variables with means 0 and the same covariancesasS (1) n / √ rn,S(2) n / √ n − rn, respectively. Then applying the local central limit theorem in [Gamarnik andLi, 2018, Durrett, 2019] to the 0-mean variablesX 1 − m 1 , . . . ,X rn − m 1 andX rn+1 − m 2 , . . . ,X rn − m 2 yields sup x∈L 1 n (rn) d 2 d i=1 h (1) ip (1) n (x) −p (1) (x) → 0, sup x∈L (2) n (n − rn) d 2 d i=1 h (2) ip (2) n (x) −p (2) (x) → 0. (A.21) By definition, we have P[S (1) n = rnm 1 ] =p (1) n (0),P[S (2) n = (1−r)nm 2 ] =p (2) n (0) andp (1) (0) = 1 (2π) d 2 √ |Σ 1 | ,p (2) (0) = 1 (2π) d 2 √ |Σ 2 | . The local CLT in Equations A.21 then implies the following limits: lim n→∞ 1 n log P[S (1) n = rnm 1 ] = lim n→∞ log d i=1h (1) i (2π) d 2 |Σ 1 |(rn) d 2 = 0, lim n→∞ 1 n log P[S (2) n = (1 − r)nm 2 ] = lim n→∞ log d i=1h (2) i (2π) d 2 |Σ 1 |(n − rn) d 2 = 0. (A.22) The eventS (1) n = rnm 1 andS (2) n = (1 − r)nm 2 impliesS n /n = y. Therefore, from the independence of S (1) n ,S (2) n we obtain: Substituting the above result in A.20 concludes the proof. P[S (1) n = rnm 1 ]P[S (1) n = (1 − r)nm 2 ] ≤μ n ({x}) ≤ 1. The above lemma, when applied to Poisson random variables in Lemma 26 conditioned on the hstability constraints, allows us to compute the multivariate local large deviation rate (with equality in the rate instead of inequality) through the moment generating functions for the same. 
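The exponential tilting step in the proof above can be checked numerically. The sketch below (an illustration with an assumed concrete base measure, a Poisson distribution truncated to a finite support) reweights a discrete pmf by $e^{\theta z - \Lambda(\theta)}$ and verifies that the tilted measure is a probability measure whose mean equals $\Lambda'(\theta)$, as in Equations A.16 through A.18.

```python
import numpy as np
from math import log

# Base measure P: Poisson(lam) truncated to {0, ..., K} and renormalized.
lam, K, theta = 3.0, 60, 0.4
z = np.arange(K + 1)
log_fact = np.cumsum(np.concatenate(([0.0], np.log(np.arange(1, K + 1)))))
p = np.exp(z * np.log(lam) - lam - log_fact)
p /= p.sum()

# Log-MGF Lambda(theta) = log E[e^{theta Z}] and the tilted measure
# dP~/dP(z) = exp(theta * z - Lambda(theta)).
Lam = log(np.sum(np.exp(theta * z) * p))
p_tilt = np.exp(theta * z - Lam) * p

assert abs(p_tilt.sum() - 1.0) < 1e-10   # tilted measure is a probability measure
mean_tilt = np.sum(z * p_tilt)

# Mean under the tilt equals Lambda'(theta); check via a central difference.
eps = 1e-6
Lam_p = log(np.sum(np.exp((theta + eps) * z) * p))
Lam_m = log(np.sum(np.exp((theta - eps) * z) * p))
assert abs(mean_tilt - (Lam_p - Lam_m) / (2 * eps)) < 1e-3
```

Since $\Lambda$ is convex, tilting with $\theta > 0$ raises the mean, which is exactly how the proof matches the tilted mean to the target value $y$.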
A.4 Derivation of the Rate Function The moment-generating function is characterized through the following slight generalization of Lemma 3.3 in [Gamarnik and Li, 2018]: Lemma 28. Let λ 1 (d), λ 2 (d) be sequence of non-negative real numbers indexed by d ∈ N, such that the limits b = lim d→∞ b d , e = lim d→∞ d λ 1 (d) exist and λ 1 − λ 1 = o( √ λ 1 ). Define a d = λ 2 /λ 1 and b d = (λ 2 − λ 1 )/ √ λ 1 . We have lim d→∞ a d = 1. Let B(d) ∼ Pois(λ 1 (d)) and C(d) ∼ Pois(λ 2 (d)) be sequences of two independent Poisson random variables. Let X 1 and X 2 be two independent standard normal random variables. Define (U (d), V (d)) as: (U (d), V (d)) B(d) − λ 1 (d) λ 1 (d) , C(d) − λ 2 (d) λ 2 (d) . For every fixed θ 1 , θ 2 lim d→∞ E[exp(U d θ 1 + V d θ 2 ) | U (d) ≥ √ a c V (d) + b d + h d λ 1 ] = E[exp(X 1 θ 1 + X 2 θ 2 ) | X 1 ≥ X 2 + b + h √ e], lim d→∞ E[exp(U (d)θ 1 + V (d)θ 2 )] = E[exp(X 1 θ 1 + X 2 θ 2 )]. (A.23) Applying the above lemma to the large deviation term in Lemma 26 with λ 1 (d) = µ 1 n and λ 2 (d) = µ 2 n leads to the following result. Lemma 29. Let µ 1 (n, d), µ 2 (n, d) be positive integer sequences such that: 1. λ j (d) = lim n µ j (n, d)/n, j = 1, 2 exist for every d, and satisfy λ 1 = Θ d (d), λ 2 = λ 1 − Θ d ( √ d). The limits b = lim d→∞ λ 2 (d)−λ 1 (d) √ λ 1 (d) , e = lim d→∞ d √ λ 1 (d) exist. Let B i (d), C i (d), 1 ≤ i ≤ n be sequences of i.i.d. Poisson random variables with means E[B i (d)] = λ 1 (d), E[C i (d)] = λ 2 (d). Then lim N →∞ 1 N log K(n, µ 1 (d, N ), µ 2 (d, N ), h, r) = −r log 2 − sup θ∈R (−θ 2 − r log(1 + erf(θ − (b + h √ e)/2))) + o d (1). (A.24) Proof. Using the notation in Lemma 26, let B i , C i be sequences of N independent Poisson variables with means λ 1 = µ 1 (d,N ) N , λ 2 = µ 2 (d,N ) N respectively. Define a d = λ 2 /λ 1 and b d = (λ 2 − λ 1 )/ √ λ 1 . 
Let U i , V i denote the corresponding sequence of rescaled and shifted variables with means 0 and variance 1, i.e, (U i , V i ) B i −λ 1 (d) √ λ 1 (d) , C i −λ 2 (d) √ λ 2 (d) . Now, letŨ i ,Ṽ i be a sequence of independent random variables having the following probability mass functions: P (Ũ i ,Ṽ i = u, v) = P (U i , V i = u, v | U i ≥ √ a d V d + b d + h d λ 1 ), 1 ≤ i ≤ r n n, P (U i , V i = u, v), otherwise. (A.25) Then, using Lemma 27 with the lattice dimension k = 2, we have: lim n→∞ 1 n log P n i=1 B i = µ 1 , n i=1 C i = µ 2 B i ≥ C i + h √ d, 1 ≤ i ≤ r n n = −I(0, 0), where I(x 1 , x 2 ) = sup (θ 1 ,θ 2 )∈R 2 (θ 1 x 1 + θ 2 x 2 − r log(M 1 (θ 1 , θ 2 )) − (1 − r) log(M 2 (θ 1 , θ 2 )) denote the rate functions, corresponding to the MGFs M 1 (θ 1 , θ 2 ) of (Ũ i ,Ṽ i ) and M 2 (θ 1 , θ 2 ) of (Ũ i ,Ṽ i ). We note that Lemma 24, implies that lim d→∞ a d = 1. Using 28, we obtain: M 1 (θ 1 , θ 2 ) = exp( θ 2 1 +θ 2 2 2 ) P 1 1 + erf θ 1 −θ 2 −b 2 2 + o d (1), (A.26) where P 1 denotes P [X 1 ≥ X 2 + b + h √ 2e] for two independent standard normal variables X 1 , X 2 . While for M 2 , we simply have: M 2 (θ 1 , θ 2 ) = E[exp(θ 1Ũ (2) + θ 2Ṽ (2) )] = E [exp(θ 1 U + θ 2 U )] = R 2 1 2π exp(θ 1 t 1 + θ 2 t 2 ) exp − t 2 1 + t 2 2 2 dt 1 dt 2 + o d (1) = exp( θ 2 1 + θ 2 2 2 ) + o d (1). Using Lemma 26, we have: lim n→∞ 1 n log K(n, µ 1 (d, N ), µ 2 (d, N ), h, r N ) = −r log 2 − sup (θ 1 ,θ 2 )∈R 2 − θ 2 1 + θ 2 2 2 − r log(1 + erf θ 1 − θ 2 − b 2 − r log P 1 + r log P 1 . Through the convergence in distribution of (Ũ i ,Ṽ i ) to standard normal variables, we have: lim d→∞ P 1 = P 1 . (A.27) The proof is completed by noting that the unique critical points of (− θ 2 1 +θ 2 2 2 − r log(1 + erf θ 1 −θ 2 −b 2 ) satisfy θ 1 = −θ 2 A.5 Bounding P [E opt (σ, h, [r])|E(σ, z, z 1 , z 2 , α)] In this section, we combine the large deviation bounds in the earlier sections to obtain a bound on the first moment entropy density w(h, r). 
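The Gaussian conditional MGF with the erf form appearing above (Equation A.26) can be sanity-checked directly. The sketch below is an independent numerical check, with the test points $(\theta_1, \theta_2, c)$ chosen arbitrarily: it compares a closed form for $E[e^{\theta_1 X_1 + \theta_2 X_2} \mid X_1 \ge X_2 + c]$, derived via the Gaussian shift, against brute-force quadrature.

```python
import numpy as np
from math import erf, erfc, exp, sqrt, pi

def cond_mgf_closed(t1, t2, c):
    # E[exp(t1*X1 + t2*X2) | X1 >= X2 + c] for X1, X2 iid N(0,1):
    # tilting shifts (X1, X2) to means (t1, t2), and
    # P[N(t1 - t2, 2) >= c] = (1 + erf((t1 - t2 - c)/2)) / 2.
    num = 0.5 * (1 + erf((t1 - t2 - c) / 2))
    den = 0.5 * (1 + erf(-c / 2))
    return exp((t1 ** 2 + t2 ** 2) / 2) * num / den

def cond_mgf_quad(t1, t2, c, L=10.0, m=4001):
    # Same quantity: integrate X1 analytically, X2 numerically on a grid.
    x2 = np.linspace(-L, L, m)
    dx = x2[1] - x2[0]
    phi = np.exp(-x2 ** 2 / 2) / sqrt(2 * pi)
    tail = np.array([0.5 * erfc((v + c - t1) / sqrt(2)) for v in x2])
    tail0 = np.array([0.5 * erfc((v + c) / sqrt(2)) for v in x2])
    num = exp(t1 ** 2 / 2) * np.sum(tail * np.exp(t2 * x2) * phi) * dx
    den = np.sum(tail0 * phi) * dx
    return num / den

assert abs(cond_mgf_closed(0.3, -0.2, 0.5) - cond_mgf_quad(0.3, -0.2, 0.5)) < 1e-3
```

The closed form exhibits the same structure as $M_1(\theta_1,\theta_2)$ in Equation A.26: a Gaussian factor $e^{(\theta_1^2+\theta_2^2)/2}$ times a ratio of erf terms.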
The above result allows us to bound the first moment entropy density:

Lemma 30. We have:

lim sup_{n→∞} (1/n) log P[E_opt(σ, h, [r]) | E(σ, z, z_1, z_2, α)] ≤ −log 2 − sup_{θ∈R} (−θ² − (2r − 1) log(1 + erf(2x + θ − h/√2))) + o_d(1).

Proof. Following the proof of Proposition 3.1 in [Gamarnik and Li, 2018], we define τ ∈ (−2x, 2x) by:

2z_1 = d/2 − (x + τ)√(d/2), 2z_2 = d/2 − (x − τ)√(d/2).

We now apply Lemma 29 to the two terms. First, consider the partition V_1. We set λ_1(d) = z/(1/2 + α), λ_2(d) = 2z_1/(1/2 + α), and N = (1/2 + α)n. We have:

b = lim_{d→∞} (λ_2(d) − λ_1(d))/√λ_1(d) = −(2x + τ)/√(1 + 2α),
e = lim_{d→∞} d/λ_1(d) = lim_{d→∞} d(1/2 + α)/(d/4) = 2 + 4α.

Therefore, using Lemma 29, we have:

lim_{n→∞} (1/n) log K((1/2 + α)n, zn, 2z_1 n, h, r_n^(1)) = (1/2 + α) [ −r_n^(1) log 2 − sup_{θ∈R} (−θ² − r_n^(1) log(1 + erf(θ + (1 + 2α)^{−1/2}(2x + τ) − (2 + 4α)^{1/2} h/2))) ] + o_d(1).

Similarly, we have:

lim_{n→∞} (1/n) log K((1/2 − α)n, zn, 2z_2 n, h, r_n^(2)) = (1/2 − α) [ −r_n^(2) log 2 − sup_{θ∈R} (−θ² − r_n^(2) log(1 + erf(θ + (1 − 2α)^{−1/2}(2x − τ) − (2 − 4α)^{1/2} h/2))) ] + o_d(1).

We note that if r ≤ r', then the event {E_i ≥ F_i + h√d, 1 ≤ i ≤ r'n} implies the event {E_i ≥ F_i + h√d, 1 ≤ i ≤ rn}. Therefore K(n, μ_1, μ_2, h, r') ≤ K(n, μ_1, μ_2, h, r), i.e. K(n, μ_1, μ_2, h, r) is non-increasing in r.

Now r^(1)(1/2 + α) + r^(2)(1/2 − α) = r and max(r^(1), r^(2)) ≤ 1. Using Lemma 24, we may further assume that α = O_d(d^{−1/2}). Therefore, we obtain:

(r^(1) + r^(2))/2 + α(r^(1) − r^(2)) = r ⟹ (r^(1) + r^(2)) = 2r + O_d(d^{−1/2}).

We further have:

min(r^(1), r^(2)) = (r^(1) + r^(2)) − max(r^(1), r^(2)). (A.28)

Combining, we obtain:

2r − 1 ≤ min(r^(1), r^(2)) + O_d(d^{−1/2}). (A.29)

Define L(a, r, h) to be the following function:

L(a, r, h) := −sup_{θ∈R} ( −θ² − r log(1 + erf(θ − a/√2 − h/√2)) ).
(A.30)

Using Equations A.11 and A.29, the monotonicity of K(n, μ_1, μ_2, h, r) w.r.t. r, along with α = O_d(d^{−1/2}) (Lemma 24), we obtain:

lim_{n→∞} (1/n) log P[E_opt(σ, h, r) | E(σ, z, z_1, z_2, α)]
= lim_{n→∞} (1/n) log K((1/2 + α)n, zn, 2z_1 n, h, r_n^(1)) K((1/2 − α)n, zn, 2z_2 n, h, r_n^(2))
≤ −r log 2 + (1/2) L(−2√2 x − √2 τ, 2r − 1, h) + (1/2) L(−2√2 x + √2 τ, 2r − 1, h) + o_d(1). (A.31)

To bound the above terms, we utilize the convexity properties of L(a, r, h). From [Gamarnik and Li, 2018], we have that the Prékopa-Leindler inequality implies the convexity of the function

g(a, r, h) = −r log(1 + erf(θ − a/√2 − h/√2))

w.r.t. a, for any r > 0 and h ∈ R. Now, let a_1, a_2 ∈ R, r_1, r_2 ∈ R_+, h ∈ R, and λ ∈ [0, 1]. Taking the point-wise supremum, this also implies the concavity of

L(a, r, h) = −sup_{θ∈R} ( −θ² − r log(1 + erf(θ − a/√2 − h/√2)) ) (A.32)

w.r.t. a, for r ∈ R_+ and h ∈ R. The concavity in a implies that the R.H.S. in Equation A.31 is maximized when τ = 0. Therefore, using Equation A.31 at τ = 0, we obtain the bound stated in Lemma 30. The above bound is tight when r = 1.

A.6 The first moment entropy density and proof of Proposition 1

Let H_d(x) denote the standard entropy function H_d(x) = −x log x − (1 − x) log(1 − x). We note that the number of subsets of vertices of size r_n n can be expressed as exp(n H_d(r_n) + o(n)). In particular, for r = 1 and τ = 0, all the utilized bounds are tight and H_d(r) = 0. We obtain:

Lemma 31. The sequential limit lim_{d→∞} lim_{n→∞} (1/n) log E[X(z, h, 1)] exists and is given by w(x, h), where:

w(x, h) = φ(x, h, 1) = −log 2 − 2x² − sup_{θ∈R} (−θ² − log(1 + erf(2x + θ − h/√2))). (A.38)

Furthermore, the same limit holds while restricting to bisections, i.e.:

lim_{d→∞} lim_{n→∞} (1/n) log E[X_0(z, h, 1)] = w(x, h). (A.39)

Now, consider the maximization of φ(x, h, r) w.r.t. x. We obtain the following max-min problem:

sup_{x∈R} φ(x, h, r) = sup_{x∈R} inf_{θ∈R} [ H_d(r) + (1 − r) log 2 − 2x² + θ² + (2r − 1) log(1 + erf(2x + θ − h/√2)) ]. (A.40)
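The counting step uses the standard fact that the number of vertex subsets of size rn grows as exp(n·H(r)), with H the binary entropy. A quick numerical check of this fact (illustrative only, via log-gamma):

```python
import math

def log_binom(n, k):
    # log of the binomial coefficient C(n, k), computed via log-gamma
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def binary_entropy(r):
    # H(r) = -r log r - (1 - r) log(1 - r), natural logarithm
    return -r * math.log(r) - (1 - r) * math.log(1 - r)

n, r = 1_000_000, 0.3
rate = log_binom(n, int(r * n)) / n
print(rate, binary_entropy(r))  # the two agree up to O(log(n)/n)
```

The O(log(n)/n) error term is exactly the o(n)/n correction absorbed above.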
We note that the function f(θ, x, h, r) = −θ² − r log(1 + erf(2x + θ − h/√2)) equals log(M(θ, θ)), where M denotes the moment-generating function of the variables (1/n) Σ_{i=1}^{n} Ũ_i and (1/n) Σ_{i=1}^{n} Ṽ_i defined in Equation A.25. Following Lemma 3.6 in Gamarnik [2021], we have that the Prékopa-Leindler inequality implies the convexity of log(1 + erf(2x + θ − h/√2)) w.r.t. x. Since the point-wise supremum of a set of convex functions is convex, we obtain:

Lemma 32. For any h ∈ R and 0 ≤ r ≤ 1, φ(x, h, r) is a convex function of x. In particular, for r = 1, we obtain that w(x, h) is convex in h.

The above lemma implies the uniqueness of the saddle point x*(h, r), θ*(h, r) satisfying x*(h, r) = argmax_x φ(x, h, r) and θ*(h, r) = argmax_θ f(θ, x*(h, r), h, r). We have that x*(h, r), θ*(h, r) satisfy the following system of equations:

2x* + r (1/π) e^{−(2x* + θ*(x*) − h/√2)²} / (1 + erf(2x* + θ*(x*) − h/√2)) = 0.

We thus obtain θ*(x*, r) = x*, with x* satisfying:

2x* + r (1/π) e^{−(3x* − h/√2)²} / (1 + erf(3x* − h/√2)) = 0. (A.43)

Since the above equation is continuous in x*, r, the uniqueness of x* implies that it is a continuous function of r in some neighbourhood of r = 1. This in turn implies that sup_x φ(x, h, r) = φ(x*(h, r), h, r) is a continuous function of r. From Lemma 31, we have that at r = 1, sup_x φ(x, h, r) is given by w(h) = sup_x w(x, h). We plot w(h) in Figure 4. Let h* denote the root of w(h). Through the monotonicity of w(h), we have that for any h > h*, w(h) < 0 and sup_x φ(x, h, 1) < 0. Finally, using continuity w.r.t. r, we obtain that there exists r(h) < 1 such that for any r > r(h), sup_x φ(x, h, r) < 0. Therefore, Equation A.37 completes the proof of Proposition 1.

B Proof of Proposition 2 (based on Gamarnik and Li [2018])

B.1 Setup

Analogous to the proof in Section 3.3, to obtain the second moment, we consider the partitions having fixed cardinalities as well as the overlap between the configurations.
Let σ, σ' ∈ {−1, 1}^n be two bisections on n vertices. As in the previous section, we consider a graph G = (V, E) sampled using the configuration model CM(n, m = (d/2)n). We associate the graph with a weighted graph G = (V, W) having anti-ferromagnetic interactions, i.e. w_ij = −1 for (i, j) ∈ E. We define the following subsets of partitions:

V_1 = {j : σ_j = 1, σ'_j = 1}, V_2 = {j : σ_j = −1, σ'_j = −1}, V_3 = {j : σ_j = 1, σ'_j = −1}, V_4 = {j : σ_j = −1, σ'_j = 1}. (B.1)

The partitioning of the vertices is illustrated in Figure 11. Due to the zero magnetization (bisection) constraint, we have:

|V_1| + |V_3| = |V_2| + |V_4| = |V_1| + |V_4| = |V_2| + |V_3| = n/2. (B.2)

Therefore, we may assume without loss of generality that |V_1| = |V_2| = βn and |V_3| = |V_4| = (1/2 − β)n for some β ∈ [0, 1/2]. The parameter β determines the overlap between the two configurations σ, σ'.

Let v_{i1}, v_{i2}, v_{i3}, v_{i4} denote the total weight of edges having one end in vertex i and the other in the sets V_1, V_2, V_3, V_4 respectively. We note that the stability or unfriendliness of the vertex i is given by:

s_σ(i, W) = (v_{i2} + v_{i4} − v_{i1} − v_{i3})/√d,
s_σ'(i, W) = (v_{i2} + v_{i3} − v_{i1} − v_{i4})/√d.

Therefore, the condition for the simultaneous h-stability of i in both partitions is equivalent to the following set of conditions:

v_{i2} − v_{i1} ≥ |v_{i3} − v_{i4}| + h√d, ∀i. (B.4)

Analogous conditions can be defined for vertices in the sets V_2, V_3, V_4. Let nz_{j,k} denote the number of edges connecting vertices in V_j to V_k, for j, k ∈ {1, 2, 3, 4}. Similar to the first moment computation, we decouple the optimality constraints for the different sets V_1, V_2, V_3, V_4 by conditioning on the z_{j,k} and adopting the configuration model.
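The decomposition into V_1, ..., V_4 and the bisection identities in Equation B.2 are easy to check programmatically; the sketch below (with illustrative names, not tied to any code from the paper) computes the four sets for a pair of ±1 configurations:

```python
def overlap_partition(sigma, sigma2):
    """Split vertex indices into V1..V4 by the sign pattern (sigma_j, sigma2_j)."""
    buckets = {(1, 1): [], (-1, -1): [], (1, -1): [], (-1, 1): []}
    for j, (s, t) in enumerate(zip(sigma, sigma2)):
        buckets[(s, t)].append(j)
    return (buckets[(1, 1)], buckets[(-1, -1)],
            buckets[(1, -1)], buckets[(-1, 1)])

sigma  = [1, 1, 1, 1, -1, -1, -1, -1]   # a bisection of n = 8 vertices
sigma2 = [1, 1, -1, -1, 1, 1, -1, -1]   # another bisection
V1, V2, V3, V4 = overlap_partition(sigma, sigma2)
n = len(sigma)
# Equation B.2: each pairing of the sets covers exactly half the vertices
assert len(V1) + len(V3) == len(V2) + len(V4) == n // 2
assert len(V1) + len(V4) == len(V2) + len(V3) == n // 2
beta = len(V1) / n  # here |V1| = |V2| = beta * n with beta = 0.25
```

For a pair of bisections the asserts always pass, which is exactly the content of Equation B.2.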
Under the configuration model, following [Gamarnik and Li, 2018], the total number of pairs of h-stable partitions can be expressed as:

E[X_0²(zn)] = Σ_{β, nz_{j,k}} I_1 I_2(h), (B.5)

where, as in the first moment, I_1 represents the expected number of pairs of partitions having the cardinalities of the edge sets given by nz_{j,k} and having overlap β. This term is therefore independent of h and follows directly from [Gamarnik and Li, 2018]. I_2 is the probability of h-stability being satisfied, conditioned on the cardinalities and the overlap. To compute I_2, we factorize the probability of h-stability being satisfied across the four partitions and define:

K(N, μ_1, μ_2, μ_3, μ_4, h) = P[E_i^(2) − E_i^(1) ≥ |E_i^(3) − E_i^(4)| + h√d, 1 ≤ i ≤ N],

where E_1^(j), ..., E_N^(j) are generated by independently assigning μ_j half-edges to the N vertices. Next, we utilize the following extension of Lemma 4.2 from [Gamarnik and Li, 2018], analogous to Lemma 25 for the first moment:

Lemma 33. For each j = 1, ..., 4, let (B_i^(j))_{i∈[n]} be a family of i.i.d. Poisson variables with the same mean λ_j = μ_j/n > 0. Then we have

K(n, μ_1, μ_2, μ_3, μ_4, h) = Π_{j=1}^{4} θ_{μ_j}(√μ_j) P[ Σ_{i=1}^{n} B_i^(j) = μ_j, 1 ≤ j ≤ 4 | B_i, 1 ≤ i ≤ n ] (P[B_1])^n, (B.6)

where B_i denotes the event B_i^(2) − B_i^(1) ≥ |B_i^(3) − B_i^(4)| + h√d.

B.2 Proof of Proposition 2 through a reduction to Gamarnik and Li [2018]

We now list a series of substitutions to results in Gamarnik and Li [2018], allowing us to extend their second moment computation to h ≠ 0:

1. Lemma 27 for k = 4 and r = 1 in our work allows us to generalize Lemma 4.5 in Gamarnik and Li [2018] to h ≠ 0. This corresponds to replacing b_3 by (λ_1 − λ_4)/√(λ_1 + λ_4) + h√(2c)/√(λ_1 + λ_4).

2.
The above modification to Lemma 4.5, along with Lemma 33, results in the following expression, following the derivation in Section 4.3 of Gamarnik [2021]:

lim_n (1/n) log K(βn, nz_{1,2}, nz_{1,4}, nz_{1,3}, 2nz_{1,1}, h) = β inf_{θ_1,θ_2} log P( θ_1, θ_2, 2(η_{1,2} − η_{1,3})/√(2β(β_{1,2} + β_{1,3})), (β_{1,2} + β_{1,3})/(β_{1,4} + 2β_{1,1}), 2(η_{1,4} − 2η_{1,1})/√(2β(β_{1,4} + 2β_{1,1})) − hβ/√(2(β_{1,4} + 2β_{1,1})) ) + o_d(1),

where P and η_{i,j}, 1 ≤ i, j ≤ 4, are defined as in equations (102) and (84) in [Gamarnik and Li, 2018] respectively.

3. Lemma 4.6 is valid for any b_3 ∈ R. Therefore, the arguments in Lemma 4.8 yield the following result:

W(x, β, h) = sup_{t∈[hβ/√2, x−h(1/2−β)/√2]} inf_{θ_1,θ_2} F(t, θ_1, θ_2, h),

where

F(t, θ_1, θ_2, h) = −t²/(2β²) − (x − t)²/(2(1/2 − β)²) + 2β log P( θ_1, (1/2 − β)/β, t/β^{3/2} − h/√(2β) ) + 2(1/2 − β) log P( θ_2, β/(1/2 − β), (x − t)/(1/2 − β)^{3/2} − h/√(1 − 2β) ). (B.7)

Here the range of t is modified from [0, x] to [hβ/√2, x − h(1/2 − β)/√2] due to the modified constraints:

η_{1,4} − 2η_{1,1} ≥ |η_{1,3} − η_{1,2}| + hβ/√2,
η_{2,3} − 2η_{2,2} ≥ |η_{1,2} − η_{2,4}| + h(1/2 − β)/√2. (B.8)

Figure 1: Illustration of the phase transition in the maximum number of h-stable vertices across all configurations.

Figure 2: Illustration of the phase transitions in the maximum number of h-stable vertices amongst configurations satisfying certain energy constraints.

Figure 3: Illustration of the maximum number of h-stable vertices around E_cor(h) and E_min(h) for h < h_cor.

Figure 4: The first moment entropy as a function of the imposed threshold h, obtained from Equation 3.9.

Figure 5: The second-moment entropy density at the threshold, i.e. h = h*, constrained on the cut size x*(h*). The curves are obtained by numerically solving the system of equations defined by Equation 3.12.

Figure 6: First moment entropy density at h = 0 as a function of E. The curves are obtained by numerically solving Equation 4.3.
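Several of the figure captions refer to quantities obtained as roots of scalar equations (e.g., the roots of Equation 4.3) or by numerically solving small systems. One-dimensional problems of this kind can be handled with a plain bisection root-finder; the sketch below is a generic illustration, not the authors' code:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    flo = f(lo)
    assert flo * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid          # the sign change lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)
    return (lo + hi) / 2

# Example: the positive root of x^2 - 2 is sqrt(2)
root = bisect_root(lambda x: x * x - 2.0, 0.0, 2.0)
```

Each halving of the bracket gives one extra bit of precision, so ~40 iterations suffice for the tolerance above on an order-one interval.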
Figure 7: Illustration of the transition in the second moment entropy density at h = 0 and, from top to bottom, E = E_cor(0) + 0.01, E_cor(0), E_cor(0) − 0.01.

Figure 8: The three values of energies E_max, E_cor, E_min as a function of the threshold h. E_max, E_min are obtained as the roots of Equation 4.3. E_cor is obtained numerically as the smallest E such that W(−E/√2, h), defined through Proposition 2, is maximized at an ω = 0.

Figure 9: Illustration of the transition in the second moment entropy density at h = h_cor and, from top to bottom, E = E_cor(h) + 0.01, E_cor(h), E_cor(h) − 0.01 respectively.

Corollary 4. Let W be a weight matrix as in Lemma 4. Then for any h ∈ [−0.1, h*) and E = E_max(h), D*_{≥E}(W, h) = o_d(1)n + o(n) with high probability as n → ∞.

We define D*(W, h) = min_{σ∈M_0} D(W, h, σ). We further denote by N(W, h, σ) the number of h-stable vertices in σ, and by N*(W, h) = max_{σ∈M_0} N(W, h, σ) the maximum number of h-stable vertices in any partition. These functions are related through the following lemma:

Lemma 10. Suppose h̃ ∈ R satisfies D*(W, h̃) = o_d(1)n + o(n) with high probability as n → ∞. Then for any h < h̃, we have N*(W, h) = n(1 − o_d(1)) − o(n) with high probability as n → ∞.

Using the Efron-Stein inequality as in Equation 5.20, we have |H*_d(A, (·)_+) − E[H*_d(A, (·)_+)]| = o(n) and |H*_d(P, (·)_+) − E[H*_d(P, (·)_+)]| = o(n) with high probability as n → ∞. Therefore, we obtain, with high probability as n → ∞,

H*_d(A, (·)_+) − H*_d(P, (·)_+) = o_d(1)n + o(n). (6.9)

7 Proof of Theorems 5 and 6: Universality of the Maximal/Minimal Energy of Local Optima

Using similar techniques as in the previous section, we now prove the universality of the maximal energy thresholds at fixed values of h. As in the proofs of Lemma 9 and Lemma 10, we have the following relation between N*_{≥E}(W, h), i.e. the maximum number of h-stable vertices amongst configurations having energy at least E, and the maximal energy deficit D_{≥E} defined in Definition 4:

Lemma 18. Suppose h̃, Ẽ ∈ R satisfy D*_{≥Ẽ}(W, h̃) = o_d(1)n + o(n) with high probability as n → ∞. Then for any E < Ẽ and h < h̃, we have N*_{≥E}(W, h) = n(1 − o_d(1)) − o(n) with high probability as n → ∞.

Figure 10: The second moment entropy density at x = x*(h) for h = 0.05.

Let r_n^(1)(1/2 + α)n + r_n^(2)(1/2 − α)n = r_n n. Let O_1 denote the event of the h-stability of the first r_n^(1)(1/2 + α)n vertices in V_1. Similarly, let O_2 denote the event of the h-stability of the first r_n^(2)(1/2 − α)n vertices in V_2. Therefore, we have:

P[E_opt(σ, h, V_r) | E(σ, z, z_1, z_2, α)] = P[O_1] P[O_2]. (A.11)

lim sup_{n→∞} (1/n) log P[E_opt(σ, h, [r]) | E(σ, z, z_1, z_2, α)] ≤ −r log 2 + L(−2√2 x, 2r − 1, h) + o_d(1) = −r log 2 − sup_{θ∈R} (−θ² − (2r − 1) log(1 + erf(θ + 2x − h/√2))) + o_d(1).

We have that H_d(r) is a continuous function satisfying lim_{r→1} H_d(r) = lim_{r→0} H_d(r) = 0. Using Equation A.10, we obtain:

lim sup_{n→∞} (1/n) log P[E_opt(σ, h, r) | E(σ, z, z_1, z_2, α)] ≤ H_d(r) − r log 2 − sup_{θ∈R} (−θ² − (2r − 1) log(1 + erf(θ + 2x − h/√2))) + o_d(1). (A.34)

Finally, combining with the counting term and summing over the 2^n configurations (Equation A.3) yields:

(1/n) log E[X(z, h, r)] ≤ H_d(r) + (1 − r) log 2 − sup_{θ∈R} (−θ² − (2r − 1) log(1 + erf(θ + 2x − h/√2))) + o_d(1). (A.35)

Denote the above bound by φ(x, h, r), i.e.:

φ(x, h, r) = H_d(r) − r log 2 − 2x² − sup_{θ∈R} (−θ² − (2r − 1) log(1 + erf(2x + θ − h/√2))). (A.36)

We therefore have:

w_r(x, h) ≤ φ(x, h, r). (A.37)

Figure 11: Illustration of the partitioning of the set of vertices. Each edge in the figure, including loops, denotes the set of edges between vertices belonging to the two sets.

Consider a vertex i in the set V_1, i.e. the set of vertices having positive spins in both configurations.
6 Proof of Theorem 3: Sparse to Dense Reduction

Now, consider the case of weight matrices W with i.i.d. entries indexed by n satisfying the assumptions in Theorem 3, i.e. E[|w_ij − μ|²] = 1 and E[|w_ij − μ|³] = O_n(1). This includes weight matrices with i.i.d. Gaussian entries.

For the Sherrington-Kirkpatrick (SK) model, it is feasible to extend our results to encompass all configurations. This extension can be done leveraging the first moment results in Addario-Berry et al. [2019], who derived an expression for the first moment entropy density for the SK model when considering all partitions, matching the form of Equation A.37 derived by us for sparse graphs.

Acknowledgments

David Gamarnik acknowledges funding from NSF grant DMS-2015517. We acknowledge Freya Behrens for discussions and pointers to existing literature.

References

Louigi Addario-Berry, Luc Devroye, Gábor Lugosi, and Roberto I. Oliveira. Local optima of the Sherrington-Kirkpatrick Hamiltonian. Journal of Mathematical Physics, 60(4):043301, April 2019. doi: 10.1063/1.5020662.

Ahmed El Alaoui, Andrea Montanari, and Mark Sellke. Local algorithms for Maximum Cut and Minimum Bisection on locally treelike regular graphs of large degree. arXiv:2111.06813 [math-ph], November 2021.

Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei. Local max-cut in smoothed polynomial time. arXiv:1610.04807 [cs, math], April 2017.

Benjamin Aubin, Will Perkins, and Lenka Zdeborová. Storage capacity in symmetric binary perceptrons. Journal of Physics A: Mathematical and Theoretical, 52(29):294003, 2019.

Cristina Bazgan, Zsolt Tuza, and Daniel Vanderpooten. Satisfactory graph partition, variants, and generalizations. European Journal of Operational Research, 206(2):271-280, October 2010. doi: 10.1016/j.ejor.2009.10.019.

Freya Behrens, Gabriel Arpino, Yaroslav Kivva, and Lenka Zdeborová. (Dis)assortative partitions on random regular graphs. arXiv preprint arXiv:2202.10379, 2022.

A. J. Bray and M. A. Moore. Metastable states, internal field distributions and magnetic excitations in spin glasses. Journal of Physics C: Solid State Physics, 14(19):2629, 1981.

Sourav Chatterjee. A generalization of the Lindeberg principle. The Annals of Probability, 34(6):2061-2076, 2006.

Don Coppersmith, David Gamarnik, MohammadTaghi Hajiaghayi, and Gregory B. Sorkin. Random MAX SAT, random MAX CUT, and their phase transitions. Random Structures & Algorithms, 24(4):502-545, 2004.

Cirano De Dominicis, Marc Gabay, Thomas Garel, and Henri Orland. White and weighted averages over solutions of Thouless Anderson Palmer equations for the Sherrington Kirkpatrick spin glass. Journal de Physique, 41(9):923-930, 1980.

Amir Dembo, Andrea Montanari, and Subhabrata Sen. Extremal cuts of sparse random graphs. The Annals of Probability, 45(2):1190-1217, 2017.

Rick Durrett. Probability: Theory and Examples, volume 49. Cambridge University Press, 2019.

Asaf Ferber, Matthew Kwan, Bhargav Narayanan, Ashwin Sah, and Mehtaab Sawhney. Friendly bisections of random graphs. arXiv preprint arXiv:2105.13337, 2021.

David Gamarnik. The overlap gap property: A topological barrier to optimizing over random structures. Proceedings of the National Academy of Sciences, 118(41), 2021.

David Gamarnik and Quan Li. On the max-cut of sparse random graphs. Random Structures & Algorithms, 52(2):219-262, 2018. doi: 10.1002/rsa.20738.

Elizabeth Gardner. Maximum storage capacity in neural networks. EPL (Europhysics Letters), 4(4):481, 1987.

Guilherme C. M. Gomes and Ignasi Sau. Finding cuts of bounded degree: Complexity, FPT and exact algorithms, and kernelization. Algorithmica, 83(6):1677-1706, June 2021. doi: 10.1007/s00453-021-00798-8.

J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554-2558, April 1982. doi: 10.1073/pnas.79.8.2554.

Svante Janson, Andrzej Rucinski, and Tomasz Luczak. Random Graphs. John Wiley & Sons, 2011.

Petter Kristiansen, Sandra Hedetniemi, and Stephen Hedetniemi. Alliances in graphs. JCMCC: The Journal of Combinatorial Mathematics and Combinatorial Computing, 48, January 2004.

Stephen Morris. Contagion. The Review of Economic Studies, 67(1):57-78, January 2000. doi: 10.1111/1467-937X.00121.

This implies the uniqueness of optimal t*, θ* for the above problem and the corresponding non-linear system of equations.
Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data

Hamidreza Shahidi, Ming Li, and Jimmy Lin
David R. Cheriton School of Computer Science, University of Waterloo

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, July 5-10, 2020, page 3864. © 2020 Association for Computational Linguistics.
DOI: 10.18653/v1/2020.acl-main.355; arXiv:1909.10158; PDF: https://www.aclweb.org/anthology/2020.acl-main.355.pdf

Abstract

A number of researchers have recently questioned the necessity of increasingly complex neural network (NN) architectures. In particular, several recent papers have shown that simpler, properly tuned models are at least competitive across several NLP tasks. In this work, we show that this is also the case for text generation from structured and unstructured data. We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively. Table-to-text generation aims to generate a description based on a given table, and NQG is the task of generating a question from a given passage where the generated question can be answered by a certain sub-span of the passage using NN models. Experimental results demonstrate that a basic attention-based seq2seq model trained with the exponential moving average technique achieves the state of the art in both tasks. Code is available at https://github.com/h-shahidi/2birds-gen.

Introduction

Recent NLP literature can be characterized as increasingly complex neural network architectures that eke out progressively smaller gains over previous models.
Following a previous line of research (Melis et al., 2018; Mohammed et al., 2018; Adhikari et al., 2019), we investigate the necessity of such complicated neural architectures. In this work, our focus is on text generation from structured and unstructured data by considering description generation from a table and question generation from a passage and a target answer. More specifically, the goal of the neural table-to-text generation task is to generate biographies based on Wikipedia infoboxes (structured data). An infobox is a factual table with a number of fields (e.g., name, nationality, and occupation) describing a person. For this task, we use the WIKIBIO dataset (Lebret et al., 2016) as the benchmark dataset. Figure 1 shows an example of a biographic infobox as well as the target output textual description.

Automatic question generation aims to generate a syntactically correct, semantically meaningful and relevant question from a natural language text and a target answer within it (unstructured data). This is a crucial yet challenging task in NLP that has received growing attention due to its application in improving question answering systems, providing material for educational purposes (Heilman and Smith, 2010), and helping conversational systems to start and continue a conversation (Mostafazadeh et al., 2016). We adopt the widely used SQuAD dataset (Rajpurkar et al., 2016) for this task. Table 1 presents a sample (passage, answer, question) triple from this dataset:

Passage: Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules.
Answer: as a coolant in generators
Question: How is hydrogen used at power stations?

Prior work has made remarkable progress on both of these tasks. However, the proposed models utilize complex neural architectures to capture necessary information from the input(s).
In this paper, we question the need for such sophisticated NN models for text generation from inputs comprising structured and unstructured data. Specifically, we adopt a bi-directional, attention-based seq2seq model (Bahdanau et al., 2015) equipped with a copy mechanism (Gu et al., 2016) for both tasks. We demonstrate that this model, together with the exponential moving average (EMA) technique, achieves the state of the art in both neural table-to-text generation and NQG. Interestingly, our model is able to achieve this result even without using any linguistic features.

Our contributions are two-fold: First, we propose a unified NN model for text generation from structured and unstructured data and show that training this model with the EMA technique leads to the state of the art in neural table-to-text generation as well as NQG. Second, because our model is, in essence, the primary building block of previous models, our results show that some previous papers propose needless complexity, and that gains from these previous complex neural architectures are quite modest. In other words, the state of the art is achieved by careful tuning of simple and well-engineered models, not necessarily by adding more complexity to the model, echoing the sentiments of Lipton and Steinhardt (2018).

Related Work

In this section, we first discuss previous work for neural table-to-text generation and then NQG.

Neural Table-to-Text Generation

Recently, there have been a number of end-to-end trainable NN models for table-to-text generation. Lebret et al. (2016) propose an n-gram statistical language model that incorporates field and position embeddings to represent the structure of a table. However, their model is not effective enough to capture long-range contextual dependencies while generating a description for the table. To address this issue, later work suggests a structure-aware seq2seq model with local and global addressing on the table. While local addressing is realized by content encoding of the model's encoder and word-level attention, global addressing is accomplished by field encoding using a field-gating LSTM and field-level attention. The field-gating mechanism incorporates field information when updating the cell memory of the LSTM units.
While local addressing is realized by content encoding of the model's encoder and word-level attention, global addressing is accomplished by field encoding using a fieldgating LSTM and field-level attention. The fieldgating mechanism incorporates field information when updating the cell memory of the LSTM units. Liu et al. (2019b) utilize a two-level hierarchical encoder with coarse-to-fine attention to model the field-value structure of a table. They also propose three joint tasks (sequence labeling, text autoencoding, and multi-label classification) as auxiliary supervision to capture accurate semantic representations of the tables. In this paper, similar to Lebret et al. (2016), we use both content and field information to represent a table by concatenating the field and position embeddings with the word embedding. Unlike , we don't separate local and global addressing by using specific modules for each, but rather adopt the EMA technique and let the bidirectional model accomplish this implicitly, exploiting the natural advantages of the model. Neural Question Generation Previous NQG models can be classified into rulebased and neural-network-based approaches. Du et al. (2017) propose a seq2seq model that is able to achieve better results than previous rule-based systems without taking the target answer into consideration. concatenate answer position indicators with the word embeddings to make the model aware of the target answer. They also use lexical features (e.g., POS and NER tags) to enrich their model's encoder. In addition, Song et al. (2018) suggest using a multi-perspective context matching algorithm to further leverage information from explicit interactions between the passage and the target answer. More recently, Kim et al. (2019) use answerseparated seq2seq, which replaces the target answer in the passage with a unique token to avoid using the answer words in the generated question. 
They also make use of a module called keyword-net to extract critical information from the target answer. Similarly, Liu et al. (2019a) propose using a clue word predictor by adopting graph convolution networks to highlight the imperative aspects of the input passage. Our model is architecturally more similar to Zhou et al. (2017), but with the following distinctions: (1) we do not use additional lexical features, (2) we utilize the EMA technique during training and use the averaged weights for evaluation, (3) we do not make use of the introduced maxout hidden layer, and (4) we adopt LSTM units instead of GRU units. These distinctions, along with some hyperparameter differences, notably the optimizer and learning rate, have a considerable impact on the experimental results (see Section 5).

Model: Seq2Seq with Attention and a Copy Mechanism

In this section, we introduce a simple but effective attention-based seq2seq model for both neural table-to-text generation and NQG. Figure 2 provides an overview of our model.

Encoder

Our encoder is a bi-directional LSTM (BiLSTM) whose input x_t at time step t is the concatenation of the current word embedding e_t with some additional task-specific features. For neural table-to-text generation, the additional features are the field name f_t and position information p_t, following Lebret et al. (2016). The position information itself is the concatenation of p_t^+, which is the position of the current word in its field when counting from the left, and p_t^-, when counting from the right. Considering the word University in Figure 1 as an example, it is the first word from the left and the third word from the right in the Institutions field. Hence, the structural information of this word would be {Institutions, 1, 3}. Thus, the input to the encoder at time step t for this task is x_t = [e_t; f_t; p_t^+; p_t^-], where [.; .] denotes concatenation along the feature dimension.
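The field and position feature construction described above is simple enough to sketch directly. In the following minimal Python illustration, the three-word field value is hypothetical; only the word "University", the field name "Institutions", and the resulting triple {Institutions, 1, 3} come from the example in the text:

```python
def field_position_features(field_name, field_words):
    """For each word in a field, return (word, field, p_plus, p_minus):
    p_plus is the 1-based position counting from the left,
    p_minus is the 1-based position counting from the right."""
    n = len(field_words)
    return [(w, field_name, i + 1, n - i) for i, w in enumerate(field_words)]

# Hypothetical three-word value for the "Institutions" field.
feats = field_position_features("Institutions", ["University", "of", "Waterloo"])
# "University" is 1st from the left and 3rd from the right: {Institutions, 1, 3}
```

Each resulting triple would then be looked up in the field and position embedding tables and concatenated with the word embedding to form x_t.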
For NQG, similar to Zhou et al. (2017), we use a single bit b_t, indicating whether the t-th word in the passage belongs to the target answer, as an additional feature. Hence, the input at time step t is x_t = [e_t; b_t]. Remarkably, unlike previous work (Song et al., 2018; Kim et al., 2019), we do not use a separate encoder for the target answer, so as to have a unified model for both tasks.

Attention-Based Decoder

Our decoder is an attention-based LSTM model (Bahdanau et al., 2015). Due to the considerable overlap between input and output words, we use a copy mechanism (Gu et al., 2016) that integrates the attention distribution over the input words with the vocabulary distribution.

Exponential Moving Average

The exponential moving average (EMA) technique, also referred to as temporal averaging, was initially introduced for use in optimization algorithms to obtain better generalization performance and to reduce noise from stochastic approximation in recent parameter estimates by averaging model parameters (Polyak and Juditsky, 1992; Moulines and Bach, 2011; Kingma and Ba, 2015). In applying the technique, we maintain two sets of parameters: (1) training parameters θ that are trained as usual, and (2) evaluation parameters θ̄ that are an exponentially weighted moving average of the training parameters. The moving average is calculated using the following expression:

    θ̄ ← β × θ̄ + (1 − β) × θ    (1)

where β is the decay rate. Previous work (Szegedy et al., 2016; Merity et al., 2018; Adhikari et al., 2019; Liu et al., 2019a) has used this technique for different tasks to produce more stable and accurate results. In Section 5, we show that using this simple technique considerably improves the performance of our model in both of the tasks.

Experimental Setup

In this section, we introduce the datasets first, then explain additional implementation details, and finally describe the evaluation metrics.

Datasets

We use the WIKIBIO dataset (Lebret et al., 2016) for neural table-to-text generation.
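The EMA update above is framework-agnostic and can be sketched in a few lines of plain Python; the parameter names and toy values below are illustrative, not taken from the paper:

```python
class EMA:
    """Maintain evaluation parameters as an exponential moving average of
    the training parameters: shadow <- beta * shadow + (1 - beta) * theta."""
    def __init__(self, params, beta=0.9999):
        self.beta = beta
        self.shadow = dict(params)  # initialized from the training parameters

    def update(self, params):
        for name, value in params.items():
            self.shadow[name] = (self.beta * self.shadow[name]
                                 + (1 - self.beta) * value)

# Toy example with a single scalar "parameter".
theta = {"w": 0.0}
ema = EMA(theta, beta=0.9)
for new_value in (1.0, 1.0, 1.0):
    theta["w"] = new_value  # stands in for a gradient step
    ema.update(theta)
# ema.shadow["w"] lags behind theta["w"], smoothing recent estimates
```

At evaluation time the averaged weights (shadow) would be loaded in place of the raw training weights.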
This dataset contains 728,321 articles from English Wikipedia and uses the first sentence of each article as the ground-truth description of the corresponding infobox. The dataset has been divided into training (80%), validation (10%), and test (10%) sets. For NQG, we use the SQuAD dataset v1.1 (Rajpurkar et al., 2016) in our experiments, containing 536 Wikipedia articles with over 100K question-answer pairs. The test set of the original dataset is not publicly available. Thus, Du et al. (2017) and Zhou et al. (2017) re-divide the available data into training, validation, and test sets, which we call split-1 and split-2, respectively. In this paper, we conduct experiments and evaluate our model on both of the data splits.

Implementation Details

For the sake of reproducibility, we discuss implementation details for achieving the results shown in Tables 2 and 3. We train the model using cross-entropy loss and retain the model that works best on the validation set during training for both tasks. We replace unknown tokens with the word from the input having the highest attention score. In addition, a decay rate of 0.9999 is used for the exponential moving average in both of the tasks. For the neural table-to-text generation task, we train the model for up to 10 epochs with three different seeds and a batch size of 32. We use a single-layer BiLSTM for the encoder and a single-layer LSTM for the decoder and set the dimension of the LSTM hidden states to 500. Optimization is performed using the Adam optimizer with a learning rate of 0.0005 and gradient clipping when its norm exceeds 5. The word, field, and position embeddings are trainable and have dimensions of 400, 50, and 5, respectively. The maximum position number is set to 30; any higher position number is therefore counted as 30. The most frequent 20,000 words and 1,480 fields in the training set are selected as the word vocabulary and field vocabulary, respectively, for both the encoder and the decoder.
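The vocabulary truncation just described (keep the K most frequent tokens, map everything else to an unknown token) can be sketched as follows; the special-token name and the toy corpus are illustrative, not taken from the paper:

```python
from collections import Counter

def build_vocab(token_lists, max_size, specials=("<unk>",)):
    """Keep the max_size most frequent tokens; all others map to <unk>."""
    counts = Counter(tok for toks in token_lists for tok in toks)
    itos = list(specials) + [tok for tok, _ in counts.most_common(max_size)]
    stoi = {tok: i for i, tok in enumerate(itos)}
    unk = stoi["<unk>"]
    return lambda tok: stoi.get(tok, unk), itos

# Toy corpus; in the paper max_size would be 20,000 for words
# and 1,480 for fields.
encode, itos = build_vocab([["born", "in", "canada"],
                            ["born", "in", "france"]], max_size=3)
```

The same truncation idea applies to the position feature, where positions above 30 are clipped to 30.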
Ultimately, we conduct greedy search to decode a description for a given input table. For the NQG task, we use a two-layer BiLSTM for the encoder and a single-layer LSTM for the decoder. We set the dimension of the LSTM hidden states to 350 and 512 for split-1 and split-2, respectively. Optimization is performed using the AdaGrad optimizer with a learning rate of 0.3 and gradient clipping when its norm exceeds 5. The word embeddings are initialized with pre-trained 300-dimensional GloVe embeddings (Pennington et al., 2014), which are frozen during training. We train the model for up to 20 epochs with five different seeds and a batch size of 50. We further employ dropout with a probability of 0.1 and 0.3 for data split-1 and split-2, respectively. Moreover, we use the vocabulary set released by Song et al. (2018) for both the encoder and the decoder. During decoding, we perform beam search with a beam size of 20 and a length penalty weight of 1.75.

Evaluation

Following previous work, we use BLEU-4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-4, and ROUGE-L (Lin, 2004) to evaluate the performance of our model. BLEU and METEOR were originally designed to evaluate machine translation systems, and ROUGE was designed to evaluate text summarization systems.

Results and Discussion

In this section, we present our experimental results for both neural table-to-text generation and NQG. We report the mean and standard deviation of each metric across multiple seeds to ensure robustness against potentially spurious conclusions (Crane, 2018). In Tables 2 and 3, we compare previous work with our results for NQG and neural table-to-text generation, respectively. All results are copied from the original papers, except for Liu et al. (2018) in Table 3, where Repl. refers to scores from experiments that we conducted using the source code released by the authors, and Orig. refers to scores taken from the original paper.
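For intuition about what the BLEU metric above measures, its core quantity, clipped n-gram precision, is easy to compute by hand. The toy single-reference sketch below is illustrative only; full BLEU-4 additionally combines n = 1..4 geometrically and applies a brevity penalty, both omitted here:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate against one reference."""
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    total = sum(cand.values())
    if total == 0:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference.
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / total

p1 = ngram_precision("the cat sat".split(), "the cat sat down".split(), 1)
# Clipping stops a candidate from earning credit by repeating one word:
p_rep = ngram_precision("the the the".split(), "the cat".split(), 1)
```

In practice, scores like those in Tables 2 and 3 are computed with standard reference implementations rather than a hand-rolled version like this one.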
It is noteworthy that a similar version of our model has served as a baseline in previous papers (e.g., Kim et al., 2019; Liu et al., 2019a). However, the distinctions discussed in Section 2, especially the EMA technique, enable our model to achieve the state of the art in all cases but BLEU-4 on the SQuAD split-2, where our score is very competitive; furthermore, Liu et al. (2019a) only report results from a single trial. Our results indicate that a basic seq2seq model is able to effectively learn the underlying distribution of both datasets.

Conclusions and Future Work

In this paper, we question the necessity of complex neural architectures for text generation from structured data (neural table-to-text generation) and unstructured data (NQG). We then propose a simple yet effective seq2seq model trained with the EMA technique. Empirically, our model achieves the state of the art in both of the tasks. Our results highlight the importance of thoroughly exploring simple models before introducing complex neural architectures, so that we can properly attribute the source of performance gains. As a potential direction for future work, it would be interesting to investigate the use of the EMA technique on transformer models as well and conduct similar studies to examine needless architectural complexity in other NLP tasks.

Figure 1: An example infobox from the WIKIBIO dataset and the corresponding target output description.

Figure 2: An overview of our model.

Table 1: A sample (passage, answer, question) triple from the SQuAD dataset.

Table 2: Experimental results for NQG on the test sets.

Models                    BLEU-4         ROUGE-4
KN*                       2.21           0.38
Template KN**             19.80          10.70
Lebret et al. (2016)      34.70 ± 0.36   25.80 ± 0.36
Bao et al. (2018)         40.26          -
Sha et al. (2018)         43.91          37.15
Liu et al. (2018) Orig.   44.89 ± 0.33   41.21 ± 0.25
Liu et al. (2018) Repl.   44.45 ± 0.11   39.65 ± 0.10
Liu et al. (2019b)        45.14 ± 0.34   41.26 ± 0.37
Our Model                 46.07 ± 0.17   41.53 ± 0.30
+ EMA                     46.76 ± 0.03   43.54 ± 0.07

Table 3: Experimental results for neural table-to-text generation on the test set. *KN is a Kneser-Ney language model (Heafield et al., 2013). **Template KN is a KN language model over templates. Both models are proposed by Lebret et al. (2016) as baselines.

Acknowledgments

This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

References

Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking complex neural network architectures for document classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4046-4051.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.

Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. 2018. Table-to-Text: Describing table region with natural language. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5020-5027.

Matt Crane. 2018. Questionable answers in question answering research: Reproducibility and variability of published results. Transactions of the Association for Computational Linguistics, 6:241-252.

Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342-1352.

Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866-874.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640.

Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690-696.

Michael Heilman. 2011. Automatic Factual Question Generation from Text. Ph.D. thesis, Carnegie Mellon University.

Michael Heilman and Noah A. Smith. 2010. Good question! Statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609-617.

Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2019. Improving neural question generation using answer separation. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 6602-6609.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. arXiv:1412.6980.

Vishwajeet Kumar, Ganesh Ramakrishnan, and Yuan-Fang Li. 2018. A framework for automatic question generation from text using deep reinforcement learning. arXiv:1808.04961.

Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain.

Zachary C. Lipton and Jacob Steinhardt. 2018. Troubling trends in machine learning scholarship. arXiv:1807.03341v2.

Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. 2019a. Learning to generate questions by learning what not to generate. arXiv:1902.10418.

Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, and Zhifang Sui. 2019b. Hierarchical encoder with auxiliary supervision for neural table-to-text generation: Learning better representation for tables. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 6786-6793.

Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 4881-4888.

Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In Proceedings of the 6th International Conference on Learning Representations.

Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In Proceedings of the 6th International Conference on Learning Representations.

Salman Mohammed, Peng Shi, and Jimmy Lin. 2018. Strong baselines for simple question answering over knowledge graphs with and without neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 291-296.

Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1802-1813.

Eric Moulines and Francis R. Bach. 2011. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems, pages 451-459.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

Boris T. Polyak and Anatoli B. Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.

Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-planning neural text generation from structured data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5414-5421.

Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 569-574.

Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3930-3939.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826.

Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv:1706.02027.

Duyu Tang, Nan Duan, Zhao Yan, Zhirui Zhang, Yibo Sun, Shujie Liu, Yuanhua Lv, and Ming Zhou. 2018. Learning to collaborate for question answering and asking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1564-1574.

Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, and Yanjun Wu. 2018. Teaching machines to ask questions. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4546-4552.

Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901-3910.

Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662-671. Springer.

Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2018. Sequential copying networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 4987-4995.
Code: https://github.com/h-shahidi/
The Good, the Bad and the Ugly: Evaluating Convolutional Neural Networks for Prohibited Item Detection Using Real and Synthetically Composited X-ray Imagery

Neelanjan Bhowmik, Qian Wang, Yona Falinie A. Gaus, Marcin Szarek, Toby P. Breckon
School of Engineering, Cranfield University, UK
Department of Computer Science / Department of Engineering, Durham University, UK

Abstract: Detecting prohibited items in X-ray security imagery is pivotal in maintaining border and transport security against a wide range of threat profiles. Convolutional Neural Networks (CNN), with the support of a significant volume of data, have brought advances in such automated prohibited object detection and classification. However, collating such large volumes of X-ray security imagery remains a significant challenge. This work opens up the possibility of using synthetically composited imagery, avoiding the need to collate such large volumes of hand-annotated real-world imagery. Here we investigate the difference in detection performance achieved using real and synthetic X-ray training imagery for a CNN architecture detecting three exemplar prohibited items, {Firearm, Firearm Parts, Knives}, within cluttered and complex X-ray security baggage imagery. We achieve 0.88 mean average precision (mAP) with a Faster R-CNN and ResNet-101 CNN architecture for this 3-class object detection task using real X-ray imagery. While this performance is comparable with synthetically composited X-ray imagery (0.78 mAP), our extended evaluation demonstrates both the challenge and the promise of using synthetically composited images to diversify the X-ray security training imagery for automated detection algorithm training.
Introduction

To ensure transport and border security, X-ray security screening is commonplace within public transport and border security installations such as airports, railway and metro stations. However, the process of X-ray screening is complicated by tightly packed items within baggage, making it challenging and time-consuming to identify the presence of prohibited items within such cluttered and complex X-ray imagery (Figure 1). With the natural occurrence of such prohibited items being rare, previous studies cite time constraints as a major factor in the performance limitations of human operators for this screening task [3, 25]. Whilst challenging for a human, a reliable automatic prohibited item detection system may assist in improving the performance and throughput of such screening processes [27]. To date, contemporary X-ray security scanners already implement material discrimination via dual-energy multiple-view X-ray imagery to enable threat material detection [18]. This use of dual-energy X-ray gives rise to the false-colour mapped appearance of X-ray security imagery (e.g., metals, alloys or hard plastics are shown in blue while less dense objects are shown in green/orange; see Figure 1). Convolutional Neural Network (CNN) based methods have proven effective in detecting a wide range of object classes within this context [8, 10, 21, 26]. However, the performance of such object detection approaches is heavily reliant on the availability of a substantial volume of labelled X-ray imagery. Unfortunately, the availability of X-ray imagery datasets suitable for training CNN architectures is limited, and existing datasets are restricted in size and item coverage (e.g. GDXray [14], SIXray [17]).
Commonly, it is challenging to collect sufficient X-ray imagery containing examples of prohibited items with large variations in pose, scale and item construction. To overcome this challenge, contemporary data augmentation schemes such as image translation, rotation, flipping and re-scaling are applied to enlarge otherwise limited training datasets [10]. However, such methods suffer from the fact that the resulting augmented dataset still lacks diversity in terms of prohibited item variation and inter-occlusion emplacement within complex and cluttered X-ray security imagery. This motivates the use of synthetically composed imagery, which readily enables the introduction of more variability in pose, scale and prohibited item usage in an efficient and readily available way. In this work, we devise a Synthetically Composited (SC) data augmentation approach via the use of Threat Image Projection (TIP). TIP is an established process within operational aviation security for the monitoring of human operators, which uses a smaller collection of X-ray imagery comprising isolated prohibited objects (only) that are subsequently superimposed onto more readily available benign X-ray security imagery. Here this approach additionally facilitates the generation of synthetic, yet realistic, prohibited item X-ray security imagery for the purpose of CNN training. Our key contributions are: (a) the synthesis of high-quality prohibited item images from benign X-ray imagery using a documented TIP approach, and (b) an extended comparative evaluation of how real and synthetically generated X-ray imagery impacts the performance of prohibited object detection and classification using CNN architectures.

Related Work

Traditional computer vision methods relying on handcrafted features have been applied to prohibited item detection in X-ray security imagery, such as Bag of Visual Words (BoVW) [11, 13, 27] and sparse representations [15].
However, recent advances in CNN have drawn more attention to prohibited item detection due to significant performance gains within X-ray security imagery [1, 2, 16]. The works of [1, 16] compare handcrafted features with a BoVW based sparse representation against CNN features, showing that deep CNN features achieve superior performance with more than 95% accuracy for prohibited item detection. The study of [1] exhaustively compares various CNN architectures to evaluate the impact of network complexity on overall performance: fine-tuning the entire network architecture for this problem domain yields a 0.99 true positive rate, 0.01 false positive rate and 0.994 accuracy for generalised prohibited item detection [1]. Further work on prohibited item detection within X-ray security imagery is undertaken by Mery et al. [13], where region of interest detection is performed across multiple views of the object; candidate regions obtained from an earlier segmentation step are then matched based on their similarity, achieving a 94.3% true positive and 5.6% false positive rate across multiple-view X-ray security imagery. The work of [2] examines the relative performance of a traditional sliding window driven CNN detection model based on [1] against contemporary region-based and single forward-pass CNN variants such as Faster R-CNN [21], R-FCN [4] and YOLOv2 [20], achieving a maximal 0.88 and 0.97 mAP over 6-class object detection and 2-class firearm detection problems respectively. To investigate the generalised applicability of CNN within X-ray security imagery, large X-ray imagery datasets are required. Existing public domain datasets such as GDXray [14] contain three major categories of prohibited items, {Guns, Shurikens, Razor blades}. However, images in GDXray exhibit less clutter and overlap, making object detection less challenging than under typical operational conditions.
By contrast, the SIXray dataset [17] contains six classes, {Guns, Knives, Wrenches, Pliers, Scissors, Hammers}, from cluttered operational imagery. This provides more inter-occluding imagery examples but at the same time provides significantly fewer prohibited item samples than benign samples, akin to an operational (real-world) scenario where the presence of prohibited items is low within stream-of-commerce (largely benign) X-ray security imagery. To overcome limited dataset availability, data augmentation has been used to increase overall dataset diversity. Whilst simple image data augmentation strategies such as translation, flipping and scaling do increase the geometric diversity of the imagery, they do not increase the appearance or content diversity of the dataset itself [5]. The work of [30] alternatively attempts data augmentation based on a Generative Adversarial Network (GAN) approach, but generates synthetic prohibited items in isolation rather than within a full cluttered X-ray security image. By contrast, the work of [9] utilises an approach, similar in concept to TIP, whereby a prohibited item is superimposed into X-ray security imagery. In this work, we therefore explore the feasibility of TIP as a data augmentation strategy to support performance enhancement and evaluation of contemporary deep CNN architectures within the context of prohibited item detection in X-ray security imagery.

Proposed Approach

We investigate the use of a full TIP pipeline, based on prior work in the field [19, 23, 24], to generate a range of appearance and contents based dataset variation (Section 3.1). Subsequently, CNN object detection architectures are used to evaluate the TIP based data augmentation approach and compare performance with real X-ray security imagery (Section 3.2).

Synthetic X-ray Security Imagery via TIP

Our TIP pipeline consists of three components: threat image transformation, insertion position determination and image compositing, as illustrated in Figure 2.
We use threat (prohibited item) images containing clean, isolated object signatures which can be easily segmented from their plain background via simple thresholding. To diversify the resultant synthetic images, we apply threat image transformation by rotating the threat signature by a random angle θ. Although other threat image transformation strategies (e.g., noise, illumination, magnification, etc.) have been explored in [22], our work focuses on the pure combination of the segmented threat signature and a benign X-ray security image, isolating the effects of other data augmentation techniques. We denote the transformed threat image as I_s and its i-th row, j-th column pixel as I_s(i, j). A valid insertion position within the bag image is determined based on the bag region and the shape of the threat signature. Given a bag image I_t, we use morphological operations to extract the bag region. Specifically, the original bag image is first binarised by thresholding (Figure 3b) to extract the foreground (target) region for insertion. Due to noise, simple thresholding cannot ideally separate background and foreground, so we sequentially apply a series of appropriately parameterised morphological operations, including dilation (Figure 3c), hole filling (Figure 3d) and erosion (Figure 3e), to identify the largest connected image region as the target for insertion (see Figure 3f). A valid insertion of the threat signature must guarantee that the signature is completely located inside this target region; to this end, we loop over randomly generated insertion positions until a valid one is found. The selected valid insertion position is denoted by a binary mask matrix M, of the same size as the target baggage image, with elements of one indicating the insertion region. Finally, the threat signature I_s is superimposed onto the target bag image I_t at the selected valid position (denoted by M) to generate a synthetically composited image I_TIP.
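The insertion-position step above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation: the function name, the binarisation threshold and the dilation/erosion iteration counts are our own assumptions, using NumPy together with SciPy's morphology routines.

```python
import numpy as np
from scipy import ndimage

def valid_insertion_mask(bag_gray, threat_shape, thresh=240, rng=None):
    """Pick a random valid position for a threat signature inside a bag image.

    bag_gray: 2D uint8 greyscale bag image (background assumed near-white).
    threat_shape: (h, w) of the (transformed) threat signature.
    Returns a binary mask M, same size as bag_gray, with ones over the
    chosen insertion region.
    """
    rng = rng or np.random.default_rng()
    # Binarise: foreground (bag) pixels are darker than the white background.
    fg = bag_gray < thresh
    # Clean up noise: dilation, hole filling, then erosion (cf. Figure 3c-3e).
    fg = ndimage.binary_dilation(fg, iterations=3)
    fg = ndimage.binary_fill_holes(fg)
    fg = ndimage.binary_erosion(fg, iterations=3)
    # Keep only the largest connected region as the insertion target (Figure 3f).
    labels, n = ndimage.label(fg)
    if n == 0:
        raise ValueError("no bag region found")
    sizes = ndimage.sum(fg, labels, range(1, n + 1))
    target = labels == (1 + int(np.argmax(sizes)))
    # Loop: draw random positions until the signature fits entirely inside.
    h, w = threat_shape
    H, W = bag_gray.shape
    for _ in range(10000):
        i = rng.integers(0, H - h + 1)
        j = rng.integers(0, W - w + 1)
        if target[i:i + h, j:j + w].all():
            M = np.zeros_like(bag_gray, dtype=bool)
            M[i:i + h, j:j + w] = True
            return M
    raise RuntimeError("no valid insertion position found")
```

In practice the threshold and the structuring-element iteration counts would need tuning per scanner, and the bounded loop guards against bags too small to host the signature.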
To ensure the plausibility of the composited TIP image, we consider two factors in image blending. The parameter α controls the transparency of the source image I_s (α = 0.9). The other parameter is the threat threshold T, which ensures the consistency of the source image with the target image in terms of image contrast. The threat threshold T removes high-value pixels of the threat signature so that the inserted signature is not visually too bright compared against the target region onto which it is superimposed. To calculate the value of T, we first transform the target image I_t to a greyscale image G_t. The threat threshold T is empirically calculated as:

T = \min\left(\exp(\hat{g}^{5}) - 0.5,\; 0.95\right)    (1)

where \hat{g} is the normalised average intensity of the insertion region within G_t, calculated as:

\hat{g} = \frac{\sum_{i,j} G_t(i,j) \cdot M(i,j)}{\sum_{i,j} 255 \cdot M(i,j)} \in [0, 1]    (2)

The image compositing can then be formulated as:

I_{TIP}(i,j) = \begin{cases} (1-\alpha)\, I_t(i,j) + \alpha\, I_s(i',j'), & \text{if } M(i,j) = 1 \text{ and } I_s(i',j') < T \cdot 255 \\ I_t(i,j), & \text{otherwise} \end{cases}    (3)

where I(i,j) denotes the pixel value in the i-th row and j-th column of image I, and I_s(i',j') denotes the pixel in the source image corresponding to the pixel I(i,j). Since the value of T computed by Eq. (1) lies in the range 0.5-0.95, any source-image pixel with a value higher than T · 255 is ignored during the compositing process.

The proposed TIP approach is able to generate a large number of diverse synthetic X-ray baggage images containing prohibited items whose locations are known without any extra labelling cost for training a supervised detection model.

Detection Strategies

We use two representative CNN object detection models, Faster R-CNN [21] and RetinaNet [12], for the purposes of our evaluation. Faster R-CNN [21] is an object detection architecture which combines its predecessor Fast R-CNN [6] with a Region Proposal Network (RPN).
Unlike Fast R-CNN [6], which utilises external region proposals, this architecture has its own region proposal network, which consists of convolutional layers that generate object proposals and two fully connected layers that predict bounding box coordinates. The corresponding locations and bounding boxes are then fed into objectness classification and bounding box regression layers: the objectness classification layer classifies whether a given region proposal is an object or a background region, while the bounding box regression layer predicts object localisation at the end of the overall detection process. RetinaNet [12] is an object detector whose key idea is to address the extreme class imbalance between foreground and background classes. To improve performance, RetinaNet employs a novel loss function, Focal Loss, which modifies the cross-entropy loss to down-weight the loss on easy negative samples so that training focuses on the sparse set of hard samples. Unlike the two-stage Faster R-CNN [21], RetinaNet is a one-stage detector, making it potentially faster and simpler.

Experimental Setup

Our experimental setup comprises a real X-ray security imagery dataset and one constructed using the TIP-based synthetic compositing approach outlined in Section 3.1. These are evaluated within a common CNN training environment using the CNN architectures outlined in Section 3.2.

Dbf3_Real dataset: The Durham Dataset Full Three-class (Dbf3) images are generated using a Smiths Detection dual-energy X-ray scanner (Figure 1). It consists of 7,603 images in total, divided into three classes of prohibited item. In this experiment we use subsets of the dataset, comprising three types of metallic prohibited item, {Firearm, Firearm Parts, Knives}.
Of these three classes, we incorporate 3,192 images of firearms, 1,204 images of firearm parts, and 3,207 images of knives, within cluttered and complex X-ray security imagery.

Dbf3_SC dataset: The Synthetically Composited (SC) dataset is generated using the TIP approach of Section 3.1. We use 3,366 benign X-ray security images, generated by a Smiths Detection X-ray scanner, and 123 individual prohibited objects from the three classes {Firearm, Firearm Parts, Knives}. The prohibited items are composited into the benign images to create a synthetically composited X-ray security imagery dataset, with the same number of images as Dbf3_Real. Exemplar images from Dbf3_Real (Figure 4A) and Dbf3_SC (Figure 4B) show that the synthetic images are visually realistic and challenging to distinguish from the real ones.

Dbf3_Real+SC dataset: A subset of Dbf3_Real and a subset of Dbf3_SC images are combined to create this dataset, with real and synthetic images used in equal number (50% each), such that it is the same size as Dbf3_Real.

The CNN architectures (Section 3.2) are trained on a GTX 1080Ti GPU, optimised by Stochastic Gradient Descent (SGD) with a weight decay of 0.0001, a learning rate of 0.01 and termination at a maximum of 180k epochs. ResNet50 and ResNet101 are chosen as network backbones within the detection framework of [7]. We split each dataset into training (60%), validation (20%) and test (20%) sets so that each split has a similar class distribution. All CNN architectures are initialised with ImageNet [10] pre-trained weights for their respective model [29].

Evaluation

Our evaluation considers the comparative performance of CNN architectures detecting prohibited items when trained on real X-ray imagery against those trained on synthetic X-ray imagery. We use mean Average Precision (mAP) as our evaluation criterion, following [2].
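For illustration, a minimal all-point interpolated average precision computation of the kind commonly used for such evaluation. This is a generic sketch, not the authors' evaluation code; matching of detections to ground truth (e.g. at IoU ≥ 0.5) is assumed to have been done beforehand, and mAP is simply the mean of per-class APs.

```python
import numpy as np

def average_precision(scores, is_true_positive, n_gt):
    """All-point interpolated average precision for a single class.

    scores: confidence of each detection.
    is_true_positive: whether each detection matched a ground-truth box.
    n_gt: number of ground-truth instances of this class.
    """
    order = np.argsort(scores)[::-1]                    # rank by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Pad with sentinels and make precision monotonically non-increasing.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Sum the areas under the precision-recall steps.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# mAP over {Firearm, Firearm Parts, Knives} would be the mean of three such APs.
```

A perfect detector (every detection a true positive, all ground truth recovered) yields AP = 1.0, while a detector recovering only half the ground truth at full precision yields AP = 0.5.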
Prohibited Item Detection Results

In the first set of experiments (Table 1, upper), prohibited items in X-ray security imagery are detected using the CNN architectures set out in Section 3.2. We use the Dbf3 dataset consisting of three types of prohibited items, {Firearm, Firearm Parts, Knives}. To provide a performance benchmark, our CNN architectures are first trained and evaluated on real images of Dbf3 (Dbf3_Real ⇒ Dbf3_Real). The AP/mAP highlighted in Table 1 (upper) denotes the maximal performance achieved. Table 1 shows the statistical results of prohibited item detection for the Faster R-CNN [21] and RetinaNet [12] architectures using ResNet50 and ResNet101. In line with the overall complexity of the network, we observe maximal mAP performance from ResNet101 across all three prohibited item classes. In this benchmark, the best performance (mAP = 0.88) is achieved on Dbf3_Real by the Faster R-CNN with ResNet101 configuration, as presented in Table 1 (upper).

Table 1: Detection results of varying CNN architectures trained on: upper → Dbf3_Real, middle → Dbf3_SC and lower → Dbf3_Real+SC. All models are evaluated on the set of real X-ray security imagery.

In the second set of experiments (Table 1, middle), CNN architectures trained on synthetic X-ray imagery (Dbf3_SC) achieve 0.78 mAP when tested on the same set of real X-ray imagery (Dbf3_Real) as in Table 1 (upper). Even though this performance is lower than the former results (Table 1, upper), this experimental setting requires no manual image labelling (as TIP insertion positions are known) and yet achieves surprisingly good performance on a standard benchmark. The performance gap between CNN architectures trained on real and synthetically composited X-ray imagery is attributable to the domain shift problem, whereby the distributions of training and test data differ.
In the first experiment (Table 1, upper), the training and test data are drawn from the same distribution, since they are created by randomly dividing data captured under the same experimental conditions. By contrast, in the second experimental setup (Table 1, middle), the prohibited items used for the synthetic X-ray imagery (Dbf3_SC) data differ from those in the test X-ray imagery (Dbf3_Real) data. It is also noteworthy that the prohibited item images used for generating the synthetic (Dbf3_SC) data form a smaller set of prohibited item instances than in the real training images. As a result, CNN architectures trained on synthetic data have larger generalisation errors than those trained on real data. When tested on synthetic X-ray imagery (Dbf3_SC) data (Table 2), however, CNN architectures trained on real or synthetic data have comparable performance. These experimental results show that it is essential to have diverse prohibited item signatures in the training data to improve generalisation. This also largely explains why overall performance in Table 2 (evaluation on the synthetic dataset, Dbf3_SC) is significantly higher than overall performance in Table 1 (evaluation on the real dataset, Dbf3_Real).

In the third set of experiments (Table 1, lower), we evaluate the effectiveness of synthetic X-ray imagery by combining it with real images of Dbf3 to create the Dbf3_Real+SC dataset, as explained in Section 4. We evaluate on the test sets of images from the real Dbf3 (Table 1) and synthetically composited (Table 2) datasets. Surprisingly, the combination of real and synthetic data does not improve the results (e.g. 0.81 vs 0.88 on Dbf3_Real and 0.89 vs 0.91 on Dbf3_SC with Faster R-CNN and ResNet101). This can also be explained by the domain shift problem mentioned previously. Possibly this data combination could perform well if we applied domain adaptation techniques [28] explicitly.
In addition, we may also need to further evaluate the quality of the TIP solution that underpins our work.

Qualitative Examples

Exemplar prohibited item detection results from Faster R-CNN [21] with ResNet101 are depicted in Figure 5, using real (top row) and synthetic (bottom row) training imagery. These results illustrate that synthetically composited imagery generated using TIP techniques can be effective in training detection architectures for prohibited item detection in cluttered X-ray security imagery. We also visually inspect the detection results to investigate the performance difference when training the models on real and synthetic data. Comparing the results depicted in Figures 6A1 and 6B1, the model trained with synthetic data fails to detect the knives, since such types of knives have a very different appearance from those used to generate the synthetic imagery. On the other hand, Figures 6A2 and 6B2 show that the model trained on synthetic imagery has mistakenly detected a benign item as a knife. These results account for the low performance for knife detection observed in Table 1. As a result, we need either more diverse threat signatures for data synthesis or dedicated domain adaptation techniques to tackle the potential domain shift problem identified previously.

Figure 6: Exemplar prohibited item detection (by Faster R-CNN [21]) using Dbf3_Real (A1, A2) and Dbf3_SC (B1, B2) training datasets. The green dashed box in B1 marks a missed detection, while in B2 a benign item is wrongly detected as a knife.

Conclusion

This work explores the possibility of generating synthetically composited X-ray security imagery for training CNN architectures, bypassing the collection of a large amount of hand-annotated real-world X-ray baggage imagery.
We synthesise high-quality synthetically composited X-ray images using a TIP approach and present an extensive comparison of how real and synthetic X-ray security imagery affects the performance of CNN architectures for prohibited object detection in cluttered X-ray baggage images. Our experimental comparison demonstrates that Faster R-CNN achieves the highest performance, with mAP 0.88 when trained on real data (the good), followed by real+synthetic (the bad) and synthetic (the ugly), over a three-class, {Firearm, Firearm Parts, Knives}, prohibited item detection problem. This demonstrates both the benefits of using real X-ray training data and the challenge and promise of using synthetic X-ray imagery. In future work, it is worth further investigating how to improve the effectiveness of synthetically composited imagery for training CNN architectures. Based on other work [30], a potential direction is to generate more diverse prohibited item images using generative adversarial networks (GAN); the generated prohibited item images could then be used to generate synthetic baggage images via TIP or similar.

Figure 1: Exemplar X-ray security baggage images with prohibited objects (red box): (A) Firearm, (B) Firearm Parts and (C) Knife.

Figure 2: Threat image projection (TIP) pipeline for synthetically composited image generation.

Figure 3: Image segmentation using morphological operations for insertion position determination.

Figure 4: Visual comparison of real (A) and SC (B) X-ray security imagery of prohibited items.

Figure 5: Exemplar detection of prohibited items (red box) using Faster R-CNN [21] trained on (A) Dbf3_Real and (B) Dbf3_SC images.
Table 1:
Train ⇒ Evaluation         Model              Network    Firearm  Firearm Parts  Knives  mAP
Dbf3_Real ⇒ Dbf3_Real      Faster R-CNN [21]  ResNet50   0.87     0.84           0.76    0.82
                           Faster R-CNN [21]  ResNet101  0.91     0.88           0.85    0.88
                           RetinaNet [12]     ResNet50   0.88     0.86           0.73    0.82
                           RetinaNet [12]     ResNet101  0.89     0.86           0.73    0.83
Dbf3_SC ⇒ Dbf3_Real        Faster R-CNN [21]  ResNet50   0.82     0.77           0.55    0.71
                           Faster R-CNN [21]  ResNet101  0.86     0.80           0.66    0.78
                           RetinaNet [12]     ResNet50   0.84     0.77           0.53    0.71
                           RetinaNet [12]     ResNet101  0.84     0.76           0.54    0.72
Dbf3_Real+SC ⇒ Dbf3_Real   Faster R-CNN [21]  ResNet50   0.85     0.79           0.65    0.76
                           Faster R-CNN [21]  ResNet101  0.87     0.81           0.74    0.81
                           RetinaNet [12]     ResNet50   0.85     0.81           0.64    0.76
                           RetinaNet [12]     ResNet101  0.86     0.80           0.63    0.76

Table 2: Detection results of different CNN architectures trained on: upper → Dbf3_Real, middle → Dbf3_SC and lower → Dbf3_Real+SC. All models are evaluated on the SC (Dbf3_SC) test set.
Train ⇒ Evaluation         Model              Network    Firearm  Firearm Parts  Knives  mAP
Dbf3_Real ⇒ Dbf3_SC        Faster R-CNN [21]  ResNet50   0.88     0.87           0.84    0.87
                           Faster R-CNN [21]  ResNet101  0.92     0.92           0.89    0.91
                           RetinaNet [12]     ResNet50   0.89     0.87           0.83    0.86
                           RetinaNet [12]     ResNet101  0.90     0.88           0.85    0.88
Dbf3_SC ⇒ Dbf3_SC          Faster R-CNN [21]  ResNet50   0.90     0.88           0.83    0.87
                           Faster R-CNN [21]  ResNet101  0.93     0.92           0.86    0.91
                           RetinaNet [12]     ResNet50   0.91     0.89           0.84    0.88
                           RetinaNet [12]     ResNet101  0.91     0.89           0.83    0.86
Dbf3_Real+SC ⇒ Dbf3_SC     Faster R-CNN [21]  ResNet50   0.89     0.86           0.83    0.86
                           Faster R-CNN [21]  ResNet101  0.91     0.89           0.87    0.89
                           RetinaNet [12]     ResNet50   0.90     0.87           0.83    0.87
                           RetinaNet [12]     ResNet101  0.90     0.88           0.84    0.87

Acknowledgements: The authors would like to thank the UK Home Office for partially funding this work. Views contained within this paper are not necessarily those of the UK Home Office.

References

[1] S. Akçay, M. E. Kundegorski, M. Devereux, and T. P. Breckon. Transfer learning using convolutional neural networks for object classification within X-ray baggage security imagery. In IEEE International Conference on Image Processing, pages 1057-1061. IEEE, 2016.
[2] S. Akçay, M. E. Kundegorski, C. G. Willcocks, and T. P. Breckon. Using deep convolutional neural network architectures for object classification and detection within X-ray baggage security imagery. IEEE Transactions on Information Forensics and Security, 13(9):2203-2215, 2018.
[3] G. Blalock, V. Kadiyali, and D. H. Simon. The impact of post-9/11 airport security measures on the demand for air travel. The Journal of Law and Economics, 50(4):731-755, 2007.
[4] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 379-387. NIPS, 2016.
[5] M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan. Synthetic data augmentation using GAN for improved liver lesion classification. In International Symposium on Biomedical Imaging, pages 289-293. IEEE, 2018.
[6] R. Girshick. Fast R-CNN. In International Conference on Computer Vision, pages 1440-1448. IEEE, 2015.
[7] R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He. Detectron. https://github.com/facebookresearch/detectron, 2018.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition. IEEE, June 2016.
[9] D. K. Jain et al. An evaluation of deep learning based object detection strategies for threat object detection in baggage security imagery. Pattern Recognition Letters, 120:112-119, 2019.
[10] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[11] M. E. Kundegorski, S. Akçay, M. Devereux, A. Mouton, and T. P. Breckon. On using feature descriptors as visual words for object detection within X-ray baggage security screening. In International Conference on Imaging for Crime Detection and Prevention, pages 1-6. IEEE, 2016.
[12] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In International Conference on Computer Vision, pages 2980-2988. IEEE, 2017.
[13] D. Mery, V. Riffo, I. Zuccar, and C. Pieringer. Automated X-ray object recognition using an efficient search algorithm in multiple views. In Conference on Computer Vision and Pattern Recognition Workshops, pages 368-374. IEEE, 2013.
[14] D. Mery, V. Riffo, U. Zscherpel, G. Mondragón, I. Lillo, I. Zuccar, H. Lobel, and M. Carrasco. GDXray: The database of X-ray images for nondestructive testing. Journal of Nondestructive Evaluation, 34(4):42, 2015.
[15] D. Mery, E. Svec, and M. Arias. Object recognition in baggage inspection using adaptive sparse representations of X-ray images. In Image and Video Technology, pages 709-720. Springer, 2015.
[16] D. Mery, E. Svec, M. Arias, V. Riffo, J. M. Saavedra, and S. Banerjee. Modern computer vision techniques for X-ray testing in baggage inspection. Transactions on Systems, Man, and Cybernetics: Systems, 47(4):682-692, 2016.
[17] C. Miao, L. Xie, F. Wan, C. Su, H. Liu, J. Jiao, and Q. Ye. SIXray: A large-scale security inspection X-ray benchmark for prohibited item discovery in overlapping images. In Conference on Computer Vision and Pattern Recognition, pages 2119-2128. IEEE, 2019.
[18] A. Mouton and T. P. Breckon. A review of automated image understanding within 3D baggage computed tomography security screening. Journal of X-ray Science and Technology, 23(5):531-555, 2015.
[19] E. C. Neiderman and J. L. Fobes. Threat image projection system, 2005. US Patent 6,899,540.
[20] J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. In Conference on Computer Vision and Pattern Recognition, pages 7263-7271. IEEE, 2017.
[21] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137-1149, 2017.
[22] T. W. Rogers, N. Jaccard, E. D. Protonotarios, J. Ollier, E. J. Morton, and L. D. Griffin. Threat image projection (TIP) into X-ray images of cargo containers for training humans and machines. In International Carnahan Conference on Security Technology, pages 1-7. IEEE, 2016.
[23] A. Schwaninger, D. Hardmeier, and F. Hofer. Measuring visual abilities and visual knowledge of aviation security screeners. In International Carnahan Conference on Security Technology, pages 258-264. IEEE, 2004.
[24] A. Schwaninger, S. Michel, and A. Bolfing. A statistical approach for image difficulty estimation in X-ray screening using image measurements. In Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization, pages 123-130. ACM, 2007.
[25] A. Schwaninger, A. Bolfing, T. Halbherr, S. Helman, A. Belyavin, and L. Hay. The impact of image based factors and training on threat detection performance in X-ray screening. In Conference on Research in Air Transportation, pages 317-324, 2008.
[26] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4278-4284, 2017.
[27] D. Turcsany, A. Mouton, and T. P. Breckon. Improving feature-based object recognition for X-ray baggage security screening using primed visual words. In International Conference on Industrial Technology, pages 1140-1145. IEEE, 2013.
[28] Q. Wang, P. Bu, and T. P. Breckon. Unifying unsupervised domain adaptation and zero-shot visual recognition. In International Joint Conference on Neural Networks, 2019.
[29] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017.
[30] J. Yang, Z. Zhao, H. Zhang, and Y. Shi. Data augmentation for X-ray prohibited item images using generative adversarial networks. IEEE Access, 7:28894-28902, 2019.
Safe Perception-Based Control under Stochastic Sensor Uncertainty using Conformal Prediction

Shuo Yang, George J. Pappas, Rahul Mangharam, Lars Lindemann

Abstract: We consider perception-based control using state estimates that are obtained from high-dimensional sensor measurements via learning-enabled perception maps. However, these perception maps are not perfect and result in state estimation errors that can lead to unsafe system behavior. Stochastic sensor noise can make matters worse and result in estimation errors that follow unknown distributions. We propose a perception-based control framework that i) quantifies estimation uncertainty of perception maps, and ii) integrates these uncertainty representations into the control design. To do so, we use conformal prediction to compute valid state estimation regions, which are sets that contain the unknown state with high probability. We then devise a sampled-data controller for continuous-time systems based on the notion of measurement robust control barrier functions. Our controller uses ideas from self-triggered control and enables us to avoid using stochastic calculus. Our framework is agnostic to the choice of the perception map, independent of the noise distribution, and to the best of our knowledge the first to provide probabilistic safety guarantees in such a setting. We demonstrate the effectiveness of our proposed perception-based controller for a LiDAR-enabled F1/10th car.

DOI: 10.48550/arXiv.2304.00194 (arXiv:2304.00194)
Introduction

Perception-based control has received much attention lately [1][2][3][4]. System states are usually not directly observable and can only be estimated from complex and noisy sensors, e.g., cameras or LiDAR. Learning-enabled perception maps can be utilized to estimate the system's state from such high-dimensional measurements.
However, these estimates are usually imperfect and may lead to estimation errors, which are detrimental to system safety. This observation calls for perception-based control with safety guarantees, as it is crucial for many autonomous and robotic systems like self-driving cars. Recent work has been devoted to addressing these safety concerns while applying perception-based control using perception maps; see, e.g., [3, 5-8]. These works, however, either assume simple or no sensor noise models, consider specific perception maps, or lack end-to-end safety guarantees. In realistic settings, stochastic sensor noise may be unknown and follow skewed and complex distributions that do not resemble a Gaussian distribution, as is often assumed. Additionally, perception maps can be complex, e.g., deep neural networks, making it difficult to quantify estimation uncertainty.

In this paper, we study perception-based control under stochastic sensor noise that follows arbitrary and unknown distributions. To provide rigorous safety guarantees, we have to account for estimation uncertainty caused by i) imperfect learning-enabled perception maps, and ii) noisy sensor measurements. As shown in Figure 1, to perform safety-critical control, we first leverage conformal prediction [9], a statistical tool for uncertainty quantification, to obtain state estimation regions that are valid with high probability. We then integrate these uncertain state estimation regions into the control design, inspired by the notion of measurement robust control barrier functions from [5].

Figure 1: Overview of the system and robust safe controller. The stochastic sensor noise and imperfect perception module result in state estimation error. Conformal prediction is used to obtain the estimation error upper bound, which is then integrated into the sampled-data safe controller.
Specifically, we design a sampled-data controller using ideas from self-triggered control to ensure safety for continuous-time systems while avoiding the use of stochastic calculus. To summarize, we make the following contributions:

• We use conformal prediction to quantify state estimation uncertainty of complex learning-enabled perception maps under arbitrary sensor and noise models;
• We use these uncertainty quantifications to design a sampled-data controller for continuous-time systems. We provide probabilistic safety guarantees which, to our knowledge, makes this the first work to do so in such a setting;
• We demonstrate the effectiveness of our framework in LiDAR-enabled F1/10th vehicle simulations.

Related Work

Perception-based control: Control from high-dimensional sensor measurements such as cameras or LiDAR has lately gained attention. While empirical success has been reported, e.g., [1, 2, 10], there is a need for designing safe-by-construction perception-enabled systems. Resilience of perception-enabled systems to sensor attacks has been studied in [11, 12], while control algorithms that provably generalize to novel environments are proposed in [13, 14]. In another direction, the authors in [15] plan trajectories that actively reduce estimation uncertainty.

Control barrier functions under estimation uncertainty: Perception maps are first presented in combination with measurement-robust control barrier functions in [5, 8]. In these works, the perception error is quantified for the specific choice of the Nadaraya-Watson regressor. Our approach is agnostic to the perception map and, importantly, allows us to consider arbitrary stochastic sensor noise, which poses challenges for continuous-time control. Measurement robust control barrier functions are learned in different variations in [6, 16, 17]. Perception maps are further used to design sampled-data controllers [18-20] without explicit uncertainty quantification of the sensor and perception maps.
The works in [21, 22] consider state observers, e.g., extended Kalman filters, for barrier function-based control of stochastic systems. On the technical level, our approach is different as we avoid dealing with Itô calculus by using sampled-data control. Similarly, bounded state observers were considered in [23, 24]. However, state observer-based approaches are generally difficult to use in perception systems, as models of high-dimensional sensors are hard to obtain. The authors in [25] address this challenge by combining perception maps and state observers, but they assume a bound on the sensor noise and do not explicitly consider the effect of stochastic noise distributions. Uncertainty quantification of perception maps is vital. In a similar spirit to our paper, [7, 26] use (self-)supervised learning for uncertainty quantification of vision-based systems. While success is empirically demonstrated, no formal guarantees are provided as we pursue in this paper.

Conformal prediction for control: Conformal prediction is a statistical method that provides probabilistic guarantees on prediction errors of machine learning models. It has been applied in computer vision [27, 28], protein design [29], and system verification [30, 31]. Recently, conformal prediction has been used for safe planning in dynamic environments, e.g., [32, 33]. However, it is only used there for quantifying prediction uncertainty, not perception uncertainty, as we do in this work. To our knowledge, our work is the first to integrate uncertainty quantification from conformal prediction into perception-based control.

Preliminaries and Problem Formulation

We denote by R, N, and R^n the set of real numbers, natural numbers, and real vectors, respectively. Let β : R → R denote an extended class K∞ function, i.e., a strictly increasing function with β(0) = 0. For a vector v ∈ R^n, let ‖v‖ denote its Euclidean norm.
System Model

We consider nonlinear control-affine systems of the form

ẋ(t) = f(x(t)) + g(x(t))u(t) =: F(x(t), u(t))    (1)

where x(t) ∈ R^n and u(t) ∈ U are the state and the control input at time t, respectively, with U ⊆ R^m denoting the set of permissible control inputs. The functions f : R^n → R^n and g : R^n → R^(n×m) describe the internal and input dynamics, respectively, and are assumed to be locally Lipschitz continuous. We assume that the dynamics in (1) are bounded, i.e., that there exists an upper bound F̄ such that ‖F(x, u)‖ ≤ F̄ for every (x, u) ∈ R^n × U. For an initial condition x(0) ∈ R^n and a piecewise continuous control law u : R≥0 → R^m, we denote the unique solution to the system in (1) as x : I → R^n, where I ⊆ R≥0 is the maximum time interval on which the solution x is defined.

In this paper, we assume that we do not have knowledge of x(t) during testing time, but that we observe potentially high-dimensional measurements y(t) ∈ R^l via an unknown locally Lipschitz continuous sensor map p : R^n × R^d → R^l as

y(t) = p(x(t), δ(x(t), t)),    (2)

where δ(x(t), t) is a disturbance modeled as a state-dependent random variable that is drawn from an unknown distribution D_x over R^d, i.e., δ(x, t) ∼ D_x.¹ A special case covered by equation (2) are imperfect and noisy sensors that can be modeled as y(t) = x(t) + δ(t), e.g., as considered in [4, 34]. The function p(x, δ(x, t)) can also encode a simulated image plus noise emulating a real camera. In general, the function p can model high-dimensional sensors such as camera images or LiDAR point clouds. A common assumption in recent work, which we adopt implicitly in this paper, is that there exists a hypothetical inverse sensor map q : R^l → R^n that can recover the state x as q(p(x, 0)) = x when there is no disturbance [5, 35]. This inverse sensor map q is, however, rarely known and hard to model.
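As a concrete illustration of the setup in equations (1) and (2), the following minimal Python sketch integrates a control-affine system and draws a noisy measurement. The particular choices of f, g, p, the exponential noise, and all numeric values are hypothetical stand-ins, not the paper's F1/10th model:

```python
import numpy as np

def f(x):
    # internal dynamics f(x); zero here for a single-integrator example
    return np.zeros_like(x)

def g(x):
    # input dynamics g(x); identity for a single-integrator example
    return np.eye(len(x))

def p(x, delta):
    # toy sensor map y = p(x, delta): state corrupted by additive noise
    return x + delta

def step(x, u, dt=0.01):
    # forward-Euler discretization of x' = f(x) + g(x) u from (1)
    return x + dt * (f(x) + g(x) @ u)

rng = np.random.default_rng(0)
x = np.array([0.5, 0.0])            # true (unobserved) state
u = np.array([1.0, 0.0])            # control input
x = step(x, u)                      # evolve the true state
delta = rng.exponential(0.1, 2)     # skewed, non-Gaussian disturbance
y = p(x, delta)                     # measurement available to the controller
```

Since the exponential disturbance is nonnegative, this toy sensor is biased, which illustrates why a distribution-free bound on the estimation error is preferable to a Gaussian assumption.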
One can instead learn a perception map q̂ : R^l → R^n that approximately recovers the state x, such that ‖q(y) − q̂(y)‖ is small and bounded, which can then be used for control design [5, 25, 35]. Note that learning an approximation of p is much harder than learning the approximation q̂ of q when l ≫ n.

Remark 1. The assumption on the existence of an inverse map q is commonly made, as in [5, 25, 35], and realistic when the state x consists of positions and orientations that can, for instance, be recovered from a single camera image. If the state x additionally consists of other quantities such as velocities, one can instead assume that q partially recovers the state as q(p(x, 0)) = Cx for a selector matrix C, while using a contracting Kalman filter to estimate the remaining states when the system is detectable [25]. For the sake of simplicity, we leave this consideration for future work.

Based on this motivation, we assume that we have obtained such a perception map q̂ : R^l → R^n that estimates our state x(t) at time t from measurements y(t), denoted as x̂(t) := q̂(y(t)). Note that q̂ could be any state estimator, such as a convolutional neural network. In our case study, we use a multi-layer perceptron (MLP) as the estimator.

Safe Perception-Based Control Problem

We are interested in designing control inputs u from measurements y that guarantee safety with respect to a continuously differentiable constraint function h : R^n → R, i.e., so that h(x(t)) ≥ 0 for all t > 0 if initially h(x(0)) ≥ 0. Safety here can be framed as the controlled forward invariance of the system (1) with respect to the safe set C := {x ∈ R^n | h(x) ≥ 0}, which is the superlevel set of the function h. The difficulty in this paper is that we are not able to measure the state x(t) directly during runtime, and we have only sensor measurements y(t) from the unknown and noisy sensor map p available.

Problem 1. Consider the system in (1) with initial state x(0) ∈ R^n and sensor model in (2).
Let h : R^n → R be a continuously differentiable constraint function, T ⊂ R≥0 be a time interval, and α be a failure probability. Design a control input u from sensor measurements y such that Prob(x(t) ∈ C, ∀t ∈ T) ≥ 1 − α.

Uncertainty Quantification via Conformal Prediction

In our solution to Problem 1, we use conformal prediction, a statistical tool introduced in [9, 36] to obtain valid uncertainty regions for complex prediction models without making assumptions on the underlying distribution or the prediction model [37, 38]. Let Z, Z^(1), ..., Z^(k) be k + 1 independent and identically distributed real-valued random variables, known as the nonconformity scores. Our goal is to obtain an uncertainty region for Z defined via a function Z̄ : R^k → R so that Z is bounded by Z̄(Z^(1), ..., Z^(k)) with high probability. Formally, given a failure probability α ∈ (0, 1), we want to construct an uncertainty region Z̄ such that Prob(Z ≤ Z̄) ≥ 1 − α, where we omit the dependence of Z̄ on Z^(1), ..., Z^(k) for convenience. By a surprisingly simple quantile argument, see [39, Lemma 1], the uncertainty region Z̄ is obtained as the (1 − α)th quantile of the empirical distribution over the values of Z^(1), ..., Z^(k) and ∞. We recall this result next.

Lemma 1 (Lemma 1 in [39]). Let Z, Z^(1), ..., Z^(k) be k + 1 independent and identically distributed real-valued random variables. Without loss of generality, let Z^(1), ..., Z^(k) be sorted in non-decreasing order and define Z^(k+1) := ∞. For α ∈ (0, 1), it holds that Prob(Z ≤ Z̄) ≥ 1 − α, where

Z̄ := Z^(r) with r := ⌈(k + 1)(1 − α)⌉,

and where ⌈·⌉ is the ceiling function.

Some clarifying comments are in order. First, we remark that Prob(Z ≤ Z̄) is a marginal probability over the randomness in Z, Z^(1), ..., Z^(k) and not a conditional probability. Second, note that ⌈(k + 1)(1 − α)⌉ > k implies that Z̄ = ∞.

Safe Perception-Based Control with Conformal Prediction

Addressing Problem 1 is challenging for two reasons.
First, the perception map q̂ may not be exact, e.g., even in the disturbance-free case it may not hold that q̂(p(x, 0)) = x. Second, even if we have accurate state estimates in the disturbance-free case, i.e., when q̂(p(x, 0)) is close to x, this does not imply that we have the same estimation accuracy with disturbances, i.e., q̂(p(x, δ)) may not necessarily be close to x. Our setting is thus distinctively different from existing approaches and requires uncertainty quantification of the noisy error between x̂(t) and x(t).

Conformal Prediction for Perception Maps

Let us now denote the stochastic state estimation error as

e(x, t) := ‖x̂ − x‖ = ‖q̂(p(x, δ(x, t))) − x‖ = ‖q̂(y) − x‖.

For a fixed state x ∈ R^n, our first goal is to construct a prediction region Ē_x so that

Prob(e(x, t) ≤ Ē_x) ≥ 1 − α    (3)

holds uniformly over t ∈ R≥0. Note that the distribution D_x of δ is independent of time t, so we get uniformity automatically. While we do not know the sensor map p, we assume here that we have an oracle that gives us N ≥ (N + 1)(1 − α) state-measurement data pairs (x, y^(i)), called the calibration dataset, where i ∈ {1, ..., N} and y^(i) = p(x, δ^(i)) with δ^(i) ∼ D_x. This is a common assumption, see, e.g., [5, 25], and such an oracle can, for instance, be a simulator that we can query data from. By defining the nonconformity score Z^(i) := ‖q̂(y^(i)) − x‖, and assuming that the Z^(i) are sorted in non-decreasing order, we can now obtain the guarantee in equation (3) by applying Lemma 1. In other words, we obtain Ē_x := Z^(r) with r from Lemma 1 so that Prob(x̂ ∈ {ζ ∈ R^n | ‖x − ζ‖ ≤ Ē_x}) ≥ 1 − α holds. Note that this gives us information about the estimate x̂, but not about the state x, which was, in fact, fixed a priori. To revert this argument and obtain a prediction region for x from x̂, we have to ensure that equation (3) holds for a set of states instead of only a single state x, which will be presented next. To do so, we use a covering argument.
Consider now a compact subset of the workspace X ⊆ R^n that should include the safe set C. Let ε > 0 be a gridding parameter and construct an ε-net X̄ of X, i.e., construct a finite set X̄ so that for each x ∈ X there exists an x_j ∈ X̄ such that ‖x − x_j‖ ≤ ε. For this purpose, simple gridding strategies can be used as long as the set X has a convenient representation. Alternatively, randomized algorithms that sample from X can be used [40]. We can now apply a conformal prediction argument for each grid point x_j ∈ X̄ and show the following proposition.

Proposition 1. Consider the Lipschitz continuous sensor map p in (2) and a perception map q̂ with respective Lipschitz constants L_p and L_q̂.² Assume that we constructed an ε-net X̄ of X. For each x_j ∈ X̄, let (x_j, y_j^(i)) be N ≥ (N + 1)(1 − α) data pairs where y_j^(i) := p(x_j, δ^(i)) with δ^(i) ∼ D_{x_j}. Define Z_j^(i) := ‖q̂(y_j^(i)) − x_j‖, assume that the Z_j^(i) are sorted in non-decreasing order, and let Ē_{x_j} := Z_j^(r) with r from Lemma 1. Then, for any x ∈ X, it holds that

Prob(e(x, t) ≤ sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε) ≥ 1 − α.    (4)

Proof. See Appendix.

The above result says that the state estimation error e(x, t) can essentially be bounded, with probability 1 − α, by the worst-case conformal prediction region Ē_{x_j} over the grid X̄ plus a term proportional to the gridding parameter ε. Under the assumption that our system operates in the workspace X, inequality (4) hence implies that Prob(x ∈ {ζ ∈ R^n | ‖ζ − x̂‖ ≤ sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε}) ≥ 1 − α.

Remark 2. We note that the Lipschitz constants of the sensor and perception maps are used in the upper bound in (4) (as commonly done in the literature [5, 18, 25]), which may lead to a conservative bound. One practical way to mitigate this conservatism is to decrease the gridding parameter ε, i.e., to increase the sampling density in the workspace X.
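Lemma 1's quantile and the gridded bound of Proposition 1 can be sketched in a few lines of Python. The scores, grid size, Lipschitz constants, and ε below are synthetic placeholders rather than values from the paper:

```python
import numpy as np

def conformal_quantile(scores, alpha):
    # Lemma 1: return Z^(r) with r = ceil((k + 1)(1 - alpha)),
    # where Z^(k+1) := infinity if the index falls past the sample.
    k = len(scores)
    r = int(np.ceil((k + 1) * (1 - alpha)))
    if r > k:
        return np.inf
    return np.sort(scores)[r - 1]   # r is 1-indexed in the lemma

def estimation_error_bound(scores_per_gridpoint, alpha, L_p, L_q, eps):
    # Proposition 1: sup_j E_bar_{x_j} + (L_p * L_qhat + 1) * eps
    regions = [conformal_quantile(s, alpha) for s in scores_per_gridpoint]
    return max(regions) + (L_p * L_q + 1) * eps

rng = np.random.default_rng(1)
# nonconformity scores ||qhat(y_j^(i)) - x_j|| for 10 grid points
scores = [rng.exponential(0.05, 200) for _ in range(10)]
bound = estimation_error_bound(scores, alpha=0.25, L_p=1.0, L_q=1.0, eps=0.05)
```

The returned `bound` plays the role of sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε in (4), i.e., the quantity the triggering rule later consumes.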
Sampled-Data Controller using Conformal Estimation Regions

After bounding the state estimation error in Proposition 1, we now design an uncertainty-aware controller based on equation (4). A technical challenge in doing so is that the measurements are stochastic. By designing a sampled-data controller, we can avoid the difficulties of stochastic calculus. To do so, we first present a slightly modified version of the measurement robust control barrier function (MR-CBF) introduced in [5].

Definition 1. Let C ⊆ R^n be the zero-superlevel set of a continuously differentiable function h : R^n → R. The function h is a measurement robust control barrier function (MR-CBF) for the system in (1) with parameter function pair (a, b) : R^l → R²≥0 if there exists an extended class K∞ function β such that

sup_{u∈U} [L_f h(x̂) + L_g h(x̂)u − (a(y) + b(y)‖u‖)] ≥ −β(h(x̂))    (5)

for all (y, x̂) ∈ V(C), where V(C) := {(y, x̂) ∈ R^l × R^n | ∃(x, δ) ∈ C × D_x s.t. x̂ = q̂(p(x, δ))}, and L_f h and L_g h denote the Lie derivatives of h along f and g.

Compared to regular CBFs [41], an MR-CBF introduces a non-positive robustness term −(a(y) + b(y)‖u‖), which makes the constraint in (5) more strict. Now, given an MR-CBF h, the set of MR-CBF consistent control inputs is

K_CBF(y) := {u ∈ U | L_f h(x̂) + L_g h(x̂)u − (a(y) + b(y)‖u‖) + β(h(x̂)) ≥ 0}.    (6)

Note that we cannot simply follow [5, Theorem 2] to obtain a safe control law as u(t) ∈ K_CBF(y(t)), since y(t) and consequently u(t) are stochastic. We hence propose a sampled-data control law that keeps the trajectory x(t) within the set C with high probability. The sampled-data control law û is piecewise continuous and defined as

û(t) := u(t_i), ∀t ∈ [t_i, t_{i+1}),    (7)

where u(t_i) at triggering time t_i is computed by solving the quadratic optimization problem

u(t_i) = argmin_{u ∈ K_CBF(y)} ‖u − u_nom(t_i)‖²,    (8)

where u_nom(t_i) is any nominal control law that may not necessarily be safe.
Then, we select the triggering instances t_i as follows:

t_0 := 0,  t_{i+1} := (∆ − sup_j Ē_{x_j} − (L_p L_q̂ + 1)ε)/F̄ + t_i,    (9)

where ∆ is a user-defined parameter that will define the parameter pair (a, b) of the MR-CBF and that has to satisfy ∆ > sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε. Naturally, larger ∆ lead to less frequent control updates, but require more robustness and reduce the set of permissible control inputs in K_CBF(y). Based on the computation of triggering times in (9), the following lemma holds.

Lemma 2. Consider the sampled-data control law û(t) in (7) with the triggering rule (9). It holds that

Prob(‖x(t) − x̂(t_i)‖ ≤ ∆, ∀t ∈ [t_i, t_{i+1})) ≥ 1 − α.    (10)

Proof. See Appendix.

Intuitively, the above lemma says that ‖x(t) − x̂(t_i)‖ ≤ ∆ holds with high probability in between triggering times if the sampled-data control law û(t) in (7) with the triggering rule (9) is executed. We can then obtain the following probabilistic safety guarantee.

Theorem 1. Consider an MR-CBF h with parameter pair (a(y), b(y)) = ((L_{L_f h} + L_{β∘h})∆, L_{L_g h}∆), where L_{L_f h}, L_{β∘h}, and L_{L_g h} are the Lipschitz constants of the functions L_f h, β ∘ h, and L_g h, respectively. Then, for any nominal control law u_nom, the sampled-data law û(t) in (7) with the triggering rule in (9) renders the set C forward invariant with probability at least 1 − α. In other words, we have that

Prob(x(t) ∈ C, ∀t ∈ [t_i, t_{i+1})) ≥ 1 − α.    (11)

Proof. See Appendix.

The above theorem solves Problem 1 for the time interval T = [t_i, t_{i+1}). If we want to consider a larger time interval T = [0, T) under the sampled-data control law, we have the following guarantee.

Proposition 2. Under the same conditions as in Theorem 1, for a time interval T = [0, T), we have that

Prob(x(t) ∈ C, ∀t ∈ [0, T)) ≥ (1 − α)^m,    (12)

where m ∈ N>0 is such that t_{m−1} ≤ T < t_m.

Proof. See Appendix.

Note that if we want to achieve any probability guarantee p ∈ (0, 1), we can let (1 − α)^m = p and obtain α = 1 − p^{1/m}.
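The triggering rule (9) and the failure-probability allocation α = 1 − p^{1/m} from Proposition 2 can be sketched as follows; ∆, the error bound, F̄, and the horizon are illustrative numbers only:

```python
def next_trigger(t_i, Delta, err_bound, F_bar):
    # equation (9): t_{i+1} = (Delta - err_bound) / F_bar + t_i, where
    # err_bound = sup_j E_bar_{x_j} + (L_p * L_qhat + 1) * eps
    assert Delta > err_bound, "(9) requires Delta > the estimation error bound"
    return t_i + (Delta - err_bound) / F_bar

def per_interval_alpha(p, m):
    # Proposition 2: solve (1 - alpha)^m = p for the per-interval alpha
    return 1.0 - p ** (1.0 / m)

Delta, err_bound, F_bar, T = 0.35, 0.34, 1.0, 0.1
times = [0.0]
while times[-1] < T:                   # triggering times covering [0, T)
    times.append(next_trigger(times[-1], Delta, err_bound, F_bar))
m = len(times) - 1                     # number of sampling intervals
alpha = per_interval_alpha(0.9, m)     # per-interval alpha for 90% over [0, T)
```

Note how a tighter estimation error bound (or a larger ∆) lengthens the inter-trigger interval, so fewer intervals m are needed and a larger per-interval α suffices for the same horizon guarantee.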
Simulation Results

To demonstrate our proposed safe perception-based control law, we consider navigating an F1/10th autonomous vehicle in a structured environment [42], shown in Figure 2. The vehicle has state x = [p_x, p_y, θ], where [p_x, p_y] denotes its position and θ denotes its orientation. We have [ṗ_x, ṗ_y] = [u_x, u_y], where u_x and u_y are control inputs denoting velocities, and θ = arctan(u_y/u_x). The control input constraint is (u_x, u_y) ∈ [−1, 1] × [−1, 1]. Thus, the assumption that there exists an upper bound F̄ on the dynamics holds for this system.

Observation model: The vehicle is equipped with a 2D LiDAR scanner from which it obtains LiDAR measurements as its observations. Specifically, each measurement includes 64 LiDAR rays uniformly ranging from −3π/4 to 3π/4 relative to the vehicle's heading direction. To model measurement uncertainty, unknown noise following an exponential distribution is added to each ray: y_n^k = y^k + δ, δ ∼ Exp(λ), where y^k is the ground truth for ray k, y_n^k is the corrupted observed ray k, and λ is the parameter of the exponential distribution from which the noise δ is drawn. In our experiments, we let λ := 2/3.

Perception map: We trained a feedforward neural network to estimate the state of the vehicle. The input is the 64-dimensional LiDAR measurement and the output is the vehicle's state. The training dataset D_train contains 4 × 10^5 data points, and the calibration dataset D_cal for conformal prediction contains 1.25 × 10^4 data points. For illustration, under a fixed heading θ and longitudinal position p_y, the error e(x) of the learned perception map with respect to the sensor noise δ and horizontal position p_x is shown in Figure 3.

Barrier functions: To prevent collision with the walls when the vehicle is traversing the long hallway, the CBF is chosen as h(x) = min{h_1(x), h_2(x)}, where h_1(x) = p_x and h_2(x) = 1.5 − p_x. We then have the safe set C = {x ∈ R^3 | h(x) ≥ 0}.
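The observation-noise model and the hallway barrier described above can be sketched as follows. The clean ray values are synthetic, while λ = 2/3 and h mirror the setup in the text (note that NumPy's exponential sampler takes the scale 1/λ):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0 / 3.0                                    # rate of the exponential noise
y_true = rng.uniform(0.5, 5.0, 64)                 # hypothetical clean LiDAR rays
y_noisy = y_true + rng.exponential(1.0 / lam, 64)  # y_n^k = y^k + delta

def h(x):
    # hallway barrier h(x) = min(p_x, 1.5 - p_x); nonnegative iff 0 <= p_x <= 1.5
    px = x[0]
    return min(px, 1.5 - px)

safe = h(np.array([0.7, 2.0, 0.0])) >= 0           # state inside the hallway
```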
CBFs can be similarly defined when the vehicle is operating in the corner. To demonstrate the effectiveness of our method, we compare the following two cases in simulation:

1. Measurement robust CBF: as shown in Theorem 1, we choose the parameter pair (a(y), b(y)) = ((L_{L_f h} + L_{β∘h})∆, L_{L_g h}∆) to ensure robust safety.
2. Vanilla CBF: we choose the parameter pair (a(y), b(y)) = (0, 0), which essentially reduces to the vanilla non-robust CBF [41]. Here the perceived state comes from perceptual estimation rather than the real state, so this CBF cannot provide any safety guarantee.

Note that we obtain the necessary Lipschitz constants using a sampling-based estimation method in simulation.

Uncertainty and results: The vehicle is expected to track along the hallway and make a successful turn in the corner, as shown in Figure 2. The nominal controller is a PID controller. We set the coverage error to α = 0.25, so we desire P(‖x_t − x̂_t‖ ≤ ε) ≥ 1 − α = 75%. Based on the calibration step of conformal prediction and Proposition 1, we calculate ε = 0.34, and we choose ∆ = 0.35 > ε. The nonconformity score histogram is presented in Figure 4, in which the 75% quantile value is R_x^{0.75} = 0.32 < ε, so Proposition 1 holds in practice. As presented in Figure 5, the safety rate of the sampled-data measurement robust CBF is 93%, which is significantly higher than in the vanilla non-robust CBF case (16%).

Conclusion

In this paper, we consider the safe perception-based control problem under stochastic sensor noise. We use conformal prediction to quantify the state estimation uncertainty and then integrate this uncertainty into the design of a sampled-data safe controller, obtaining probabilistic safety guarantees for continuous-time systems. Note that, in this work, the perception map depends only on the current observation, which might limit its accuracy in some cases. We plan to incorporate historical observations into perception maps in the future.
Also, we are interested in providing a more sample-efficient scheme for constructing the calibration dataset.

Figure 5: Traces for the sampled-data measurement robust CBF and vanilla CBF (5 traces are presented). All traces are tested with horizon T = 30s. We run 100 traces in total, and the safety rates are 93% and 16%, respectively.

Appendix

Proof of Proposition 1: First, note that

‖p(x, δ) − p(x′, δ)‖ ≤ L_p ‖x − x′‖,
‖q̂(p(x, δ)) − q̂(p(x′, δ))‖ ≤ L_q̂ ‖p(x, δ) − p(x′, δ)‖

due to the Lipschitz continuity of p and q̂. Since, for any x ∈ X, there exists an x_i ∈ X̄ such that ‖x − x_i‖ ≤ ε, we know that

‖x̂ − x̂_i‖ = ‖q̂(p(x, δ)) − q̂(p(x_i, δ))‖ ≤ L_q̂ ‖p(x, δ) − p(x_i, δ)‖ ≤ L_q̂ L_p ‖x − x_i‖ ≤ L_q̂ L_p ε.

Thus, we can bound the state estimation error e(x, t) with probability at least 1 − α as

e(x, t) = ‖x̂ − x‖ = ‖x̂ − x̂_i + x̂_i − x_i + x_i − x‖
≤ ‖x̂ − x̂_i‖ + ‖x_i − x‖ + ‖x̂_i − x_i‖
≤ L_q̂ L_p ε + ε + Ē_{x_i}
≤ (L_q̂ L_p + 1)ε + sup_j Ē_{x_j}.

In particular, the last inequality holds since Prob(‖x̂_i − x_i‖ ≤ Ē_{x_i}) ≥ 1 − α for each x_i ∈ X̄. Finally, we have that Prob(e(x, t) ≤ sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε) ≥ 1 − α.

Proof of Lemma 2: First, recall the system dynamics F(x(t), u(t)) := f(x(t)) + g(x(t))u(t) = ẋ(t). By integrating this ODE, we have x(t) = x(t_i) + ∫_{t_i}^{t} F(x(s), u(s)) ds. Then, for any t ∈ [t_i, t_{i+1}), it holds with probability 1 − α that

‖x(t) − x̂(t_i)‖ = ‖∫_{t_i}^{t} F(x(s), u(s)) ds + x(t_i) − x̂(t_i)‖
≤ (t − t_i)F̄ + sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε ≤ ∆,    (13)

where we used that Prob(‖x(t_i) − x̂(t_i)‖ ≤ sup_j Ē_{x_j} + (L_p L_q̂ + 1)ε) ≥ 1 − α according to Proposition 1. Thus, Prob(‖x(t) − x̂(t_i)‖ ≤ ∆, ∀t ∈ [t_i, t_{i+1})) ≥ 1 − α.

Proof of Theorem 1: Let us first define c(z, u) := L_f h(z) + L_g h(z)u + β(h(z)), which we evaluate both at (x̂(t_i), u(t_i)) and at (x(t), u(t)).
For any $t \in [t_i, t_{i+1})$, we can now upper bound the absolute difference between $c(\hat{x}(t_i), u(t_i))$ and $c(x(t), u(t))$ as
$$
\begin{aligned}
|c(\hat{x}(t_i), u(t_i)) - c(x(t), u(t))| &= |(L_f h(\hat{x}(t_i)) + L_g h(\hat{x}(t_i))u(t_i) + \beta(h(\hat{x}(t_i)))) - (L_f h(x(t)) + L_g h(x(t))u(t) + \beta(h(x(t))))| \\
&= |(L_f h(\hat{x}(t_i)) - L_f h(x(t))) + (L_g h(\hat{x}(t_i))u(t_i) - L_g h(x(t))u(t)) + (\beta(h(\hat{x}(t_i))) - \beta(h(x(t))))| \\
&\le (L_{L_f h} + L_{L_g h}\|u(t_i)\| + L_{\beta \circ h}) \cdot \|x(t) - \hat{x}(t_i)\| \\
&\le (L_{L_f h} + L_{L_g h}\|u(t_i)\| + L_{\beta \circ h}) \cdot \Delta \quad \text{(w.p. } 1 - \alpha\text{)}
\end{aligned} \tag{15}
$$
Since $c(\hat{x}(t_i), u(t_i)) \ge (L_{L_f h} + L_{L_g h}\|u(t_i)\| + L_{\beta \circ h}) \cdot \Delta$, we obtain $c(x(t), u(t)) \ge c(\hat{x}(t_i), u(t_i)) - (L_{L_f h} + L_{L_g h}\|u(t_i)\| + L_{\beta \circ h}) \cdot \Delta \ge 0$ from the bound above, which implies $h(x(t)) \ge 0$ for all $t \in [t_i, t_{i+1})$. Since $\|x(t) - \hat{x}(t_i)\| \le \Delta$ holds with probability $1 - \alpha$, we finally obtain $\mathrm{Prob}(h(x(t)) \ge 0,\ \forall t \in [t_i, t_{i+1})) \ge 1 - \alpha$. This ends the proof.

Proof of Proposition 2: We first prove that
$$P\{x(t) \in \mathcal{C},\ \forall t \in [0, t_m)\} \ge (1 - \alpha) \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\},$$
which can be obtained by the following derivation:
$$
\begin{aligned}
P\{x(t) \in \mathcal{C},\ \forall t \in [0, t_m)\} &= P\{x(t) \in \mathcal{C},\ \forall t \in [0, t_1) \cup [t_1, t_2) \cup \cdots \cup [t_{m-1}, t_m)\} \\
&= P\{x(t) \in \mathcal{C},\ \forall t \in [t_{m-1}, t_m) \mid x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\} \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\} \\
&= P\{x(t) \in \mathcal{C},\ \forall t \in [t_{m-1}, t_m) \mid x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1}]\} \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\} \\
&= P\{x(t) \in \mathcal{C},\ \forall t \in [t_{m-1}, t_m) \mid x(t_{m-1}) \in \mathcal{C}\} \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\} \\
&\ge (1 - \alpha) \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\}.
\end{aligned}
$$
Note that the third equality holds due to the continuity of $h(x)$: if $\lim_{t \to t_{m-1}} h(x(t)) \ge 0$, then $h(x(t_{m-1})) \ge 0$, which implies that the events $\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\}$ and $\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1}]\}$ are essentially the same event. Finally, we can recursively decompose the probability over the time interval $[0, T)$:
$$
P\{x(t) \in \mathcal{C},\ \forall t \in [0, T)\} = P\{x(t) \in \mathcal{C},\ \forall t \in [0, t_m)\} \ge (1 - \alpha) \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-1})\} \ge (1 - \alpha)^2 \cdot P\{x(t) \in \mathcal{C},\ \forall t \in [t_0, t_{m-2})\} \ge \cdots \ge (1 - \alpha)^m \cdot P\{x(0) \in \mathcal{C}\} = (1 - \alpha)^m.
$$
This completes the proof.

Figure 2: The F1/10 vehicle is equipped with a 2D LiDAR sensor that outputs an array of 64 laser scans. The vehicle starts at a random position on the starting line.

Figure 3: The empirical model errors $e(x)$ w.r.t. $p_x$ and $\delta$, measured on a validation set; $p_y$ and $\theta$ are fixed.

Figure 4: Nonconformity-score ($R_x$) histogram during runtime. We select the coverage rate as 75%.

[System-overview figure: sensors (camera, LiDAR, radar) with stochastic noise feed a learned perception map that produces state estimates; a calibration dataset and conformal prediction yield the bound $\mathrm{Prob}(\|x - \hat{x}\| \le \cdot) \ge 1 - \alpha$ used by the sampled-data measurement-robust CBF safe controller, which filters the nominal controller's input.]

Footnotes: The first, third, and fourth authors are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA; L. Lindemann is with the Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA. Email: {yangs1, pappasg, rahulm}@seas.upenn.edu, [email protected]. To increase readability, we omit time indices when there is no risk of ambiguity, i.e., in this case we mean $\delta(x(t), t) \sim \mathcal{D}_{x(t)}$. We assume that the Lipschitz constant of the sensor map $p$ is uniform over the parameter $\delta$, i.e., that $\delta$ does not affect the value of $L_p$.

References

- Y. Lin, F. Gao, T. Qin, W. Gao, T. Liu, W. Wu, Z. Yang, and S. Shen, "Autonomous aerial navigation using monocular visual-inertial fusion," Journal of Field Robotics, vol. 35, no. 1, pp. 23-51, 2018.
- S. Tang, V. Wüest, and V. Kumar, "Aggressive flight with suspended payloads using vision-based control," IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1152-1159, 2018.
- H. Zhou and V. Tzoumas, "Safe perception-based control with minimal worst-case dynamic regret," arXiv preprint arXiv:2208.08929, 2022.
- Y. Kantaros, S. Kalluraya, Q. Jin, and G. J. Pappas, "Perception-based temporal logic planning in uncertain semantic maps," IEEE Transactions on Robotics, 2022.
- S. Dean, A. J. Taylor, R. K. Cosner, B. Recht, and A. D. Ames, "Guaranteeing safety of learned perception modules via measurement-robust control barrier functions," arXiv preprint arXiv:2010.16001, 2020.
- D. Sun, N. Musavi, G. Dullerud, S. Shakkottai, and S. Mitra, "Learning certifiably robust controllers using fragile perception," arXiv preprint arXiv:2209.11328, 2022.
- R. K. Cosner, I. D. J. Rodriguez, T. G. Molnar, W. Ubellacker, Y. Yue, A. D. Ames, and K. L. Bouman, "Self-supervised online learning for safety-critical control using stereo vision," in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 11487-11493.
- R. K. Cosner, A. W. Singletary, A. J. Taylor, T. G. Molnar, K. L. Bouman, and A. D. Ames, "Measurement-robust control barrier functions: Certainty in safety with uncertainty in state," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 6286-6291.
- V. Vovk, A. Gammerman, and G. Shafer, Algorithmic Learning in a Random World. Springer Science & Business Media, 2005.
- M. Abu-Khalaf, S. Karaman, and D. Rus, "Feedback from pixels: Output regulation via learning-based scene view synthesis," in Learning for Dynamics and Control. PMLR, 2021, pp. 828-841.
- A. Khazraei, H. Pfister, and M. Pajic, "Attacks on perception-based control systems: Modeling and fundamental limits," arXiv preprint arXiv:2206.07150, 2022.
- A. Khazraei, H. Pfister, and M. Pajic, "Resiliency of perception-based controllers against attacks," in Learning for Dynamics and Control Conference. PMLR, 2022, pp. 713-725.
- S. Veer and A. Majumdar, "Probably approximately correct vision-based planning using motion primitives," in Conference on Robot Learning. PMLR, 2021, pp. 1001-1014.
- A. Majumdar, A. Farid, and A. Sonar, "PAC-Bayes control: learning policies that provably generalize to novel environments," The International Journal of Robotics Research, vol. 40, no. 2-3, pp. 574-593, 2021.
- M. Ostertag, N. Atanasov, and T. Rosing, "Trajectory planning and optimization for minimizing uncertainty in persistent monitoring applications," Journal of Intelligent & Robotic Systems, vol. 106, no. 1, p. 2, 2022.
- L. Lindemann, A. Robey, L. Jiang, S. Tu, and N. Matni, "Learning robust output control barrier functions from safe expert demonstrations," arXiv preprint arXiv:2111.09971, 2021.
- C. Dawson, B. Lowenkamp, D. Goff, and C. Fan, "Learning safe, generalizable perception-based hybrid control with certificates," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1904-1911, 2022.
- L. Cothren, G. Bianchin, and E. Dall'Anese, "Online optimization of dynamical systems with deep learning perception," IEEE Open Journal of Control Systems, vol. 1, pp. 306-321, 2022.
- L. Cothren, G. Bianchin, S. Dean, and E. Dall'Anese, "Perception-based sampled-data optimization of dynamical systems," arXiv preprint arXiv:2211.10020, 2022.
- D. R. Agrawal and D. Panagou, "Safe and robust observer-controller synthesis using control barrier functions," IEEE Control Systems Letters, vol. 7, pp. 127-132, 2022.
- A. Clark, "Control barrier functions for complete and incomplete information stochastic systems," in 2019 American Control Conference (ACC). IEEE, 2019, pp. 2928-2935.
- A. Clark, "Control barrier functions for stochastic systems," Automatica, vol. 130, p. 109688, 2021.
- Y. Wang and X. Xu, "Observer-based control barrier functions for safety critical systems," in 2022 American Control Conference (ACC). IEEE, 2022, pp. 709-714.
- Y. Zhang, S. Walters, and X. Xu, "Control barrier function meets interval analysis: Safety-critical control with measurement and actuation uncertainties," in 2022 American Control Conference (ACC). IEEE, 2022, pp. 3814-3819.
- G. Chou, N. Ozay, and D. Berenson, "Safe output feedback motion planning from images via learned perception modules and contraction theory," in Algorithmic Foundations of Robotics XV: Proceedings of the Fifteenth Workshop on the Algorithmic Foundations of Robotics. Springer, 2022, pp. 349-367.
- R. Römer, A. Lederer, S. Tesfazgi, and S. Hirche, "Uncertainty-aware visual perception for safe motion planning," arXiv preprint arXiv:2209.06936, 2022.
- A. Angelopoulos, S. Bates, J. Malik, and M. I. Jordan, "Uncertainty sets for image classifiers using conformal prediction," arXiv preprint arXiv:2009.14193, 2020.
- A. N. Angelopoulos, A. P. Kohli, S. Bates, M. Jordan, J. Malik, T. Alshaabi, S. Upadhyayula, and Y. Romano, "Image-to-image regression with distribution-free uncertainty quantification and applications in imaging," in International Conference on Machine Learning. PMLR, 2022, pp. 717-730.
- C. Fannjiang, S. Bates, A. Angelopoulos, J. Listgarten, and M. I. Jordan, "Conformal prediction for the design problem," arXiv preprint arXiv:2202.03613, 2022.
- L. Bortolussi, F. Cairoli, N. Paoletti, and S. D. Stoller, "Conformal predictions for hybrid system state classification," in From Reactive Systems to Cyber-Physical Systems. Springer, 2019, pp. 225-241.
- F. Cairoli, L. Bortolussi, and N. Paoletti, "Neural predictive monitoring under partial observability," in Runtime Verification: 21st International Conference, RV 2021, Virtual Event, October 11-14, 2021, Proceedings 21. Springer, 2021, pp. 121-141.
- L. Lindemann, M. Cleaveland, G. Shim, and G. J. Pappas, "Safe planning in dynamic environments using conformal prediction," arXiv preprint arXiv:2210.10254, 2022.
- A. Dixit, L. Lindemann, S. Wei, M. Cleaveland, G. J. Pappas, and J. W. Burdick, "Adaptive conformal prediction for motion planning among dynamic agents," arXiv preprint arXiv:2212.00278, 2022.
- V. M. H. Bennetts, A. J. Lilienthal, A. A. Khaliq, V. P. Sese, and M. Trincavelli, "Towards real-world gas distribution mapping and leak localization using a mobile robot with 3d and remote gas sensing capabilities," in 2013 IEEE International Conference on Robotics and Automation. IEEE, 2013, pp. 2335-2340.
- S. Dean, N. Matni, B. Recht, and V. Ye, "Robust guarantees for perception-based control," in Learning for Dynamics and Control. PMLR, 2020, pp. 350-360.
- G. Shafer and V. Vovk, "A tutorial on conformal prediction," Journal of Machine Learning Research, vol. 9, no. 3, 2008.
- A. N. Angelopoulos and S. Bates, "A gentle introduction to conformal prediction and distribution-free uncertainty quantification," arXiv preprint arXiv:2107.07511, 2021.
- J. Lei, M. G'Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman, "Distribution-free predictive inference for regression," Journal of the American Statistical Association, vol. 113, no. 523, pp. 1094-1111, 2018.
- R. J. Tibshirani, R. Foygel Barber, E. Candes, and A. Ramdas, "Conformal prediction under covariate shift," Advances in Neural Information Processing Systems, vol. 32, 2019.
- R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018, vol. 47.
- A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, "Control barrier function based quadratic programs for safety critical systems," IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3861-3876, 2016.
- R. Ivanov, T. J. Carpenter, J. Weimer, R. Alur, G. J. Pappas, and I. Lee, "Case study: verifying the safety of an autonomous racing car with a neural network controller," in Proceedings of the 23rd International Conference on Hybrid Systems: Computation and Control, 2020, pp. 1-7.
Micromagnetics and spintronics: models and numerical methods

Claas Abert, Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, Faculty of Physics, University of Vienna, 1090 Vienna, Austria

Abstract: Computational micromagnetics has become an indispensable tool for the theoretical investigation of magnetic structures. Classical micromagnetics has been successfully applied to a wide range of applications including magnetic storage media, magnetic sensors, permanent magnets and more. The recent advent of spintronics devices has led to various extensions to the micromagnetic model in order to account for spin-transport effects. This article aims to give an overview over the analytical micromagnetic model as well as its numerical implementation. The main focus is put on the integration of spin-transport effects with classical micromagnetics.

DOI: 10.1140/epjb/e2019-90599-6
arXiv: 1810.12365
PDF: https://web.archive.org/web/20201103072744/https:/services.phaidra.univie.ac.at/api/object/o:1071503/diss/Content/download
Eur. Phys. J. B (2019) 92: 120. Received 11 October 2018 / Received in final form 26 January 2019 / Published online 10 June 2019.

Introduction

The micromagnetic model has proven to be a reliable tool for the theoretical description of magnetization processes on the micron scale. In contrast to purely quantum mechanical theories, such as density functional theory, micromagnetics does not account for distinct magnetic spins nor for nondeterministic effects due to collapse of the wave function. However, micromagnetics integrates quantum mechanical effects that are essential to ferromagnetism, like the exchange interaction, with a classical continuous field description of the magnetization in the sense of expectation values.
The main assumption of this model is that the organizing forces in the magnetic material are strong enough to keep the magnetization in parallel on a characteristic length scale $\lambda$ well above the lattice constant $a$

$$\mathbf{S}_i \approx \mathbf{S}_j \quad \text{for} \quad |\mathbf{x}_i - \mathbf{x}_j| < \lambda \gg a \tag{1}$$

where $\mathbf{S}_{i/j}$ and $\mathbf{x}_{i/j}$ are distinct spins and their positions, respectively. For a homogeneous density of spins, the discrete distribution of magnetic moments $\mathbf{S}_i$ is well approximated by a continuous vector density $\mathbf{M}(\mathbf{x})$ such that

$$\int_\Omega \mathbf{M}(\mathbf{x})\, d\mathbf{x} \approx \sum_i \mathbb{1}_\Omega(\mathbf{x}_i)\, \mathbf{S}_i \tag{2}$$

renders approximately true for arbitrary volumes $\Omega$ of size $\lambda \times \lambda \times \lambda$ and larger, with $\mathbb{1}_\Omega$ being the indicator function of $\Omega$. The continuous magnetization $\mathbf{M}(\mathbf{x})$ has a constant norm due to the homogeneous density of spins and can thus be written in terms of a unit vector field $\mathbf{m}(\mathbf{x})$

$$\mathbf{M}(\mathbf{x}) = M_s\, \mathbf{m}(\mathbf{x}) \quad \text{with} \quad |\mathbf{m}(\mathbf{x})| = 1 \tag{3}$$

where $M_s$ is the spontaneous magnetization. In the case of zero temperature, which is often considered for micromagnetic modeling, $M_s$ is the saturation magnetization, which is a material constant. While $\mathbf{m}$ and $\mathbf{M}$ have to be strictly distinguished, both are referred to as magnetization throughout this work for the sake of simplicity. Due to the combination of classical field theory with quantum mechanical effects, micromagnetics is often referred to as a semiclassical theory. (Author e-mail: [email protected])

Opposed to the macroscopic Maxwell equations, the micromagnetic model resolves the structure of magnetic domains and domain walls. This enables accurate hysteresis computations of macroscopic magnets, since hysteresis itself is the direct result of field-induced domain generation and annihilation. While static hysteresis computations are very important for the development of novel permanent magnets [1], another application area for micromagnetics is the description of magnetization dynamics on the micron scale.
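A toy numerical illustration of the coarse-graining in (2)-(3): averaging many nearly parallel unit spins in a cell reproduces a well-defined unit direction $\mathbf{m}$. The spin data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# discrete unit spins on a fine lattice inside one cell, nearly aligned with e_z
n = 1000
spins = np.tile([0.0, 0.0, 1.0], (n, 1)) + 0.02 * rng.standard_normal((n, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)

# continuum description: average cell moment and its unit direction m
m_cell = spins.mean(axis=0)
m = m_cell / np.linalg.norm(m_cell)

# |m| = 1 by construction; the *average* moment is only slightly shorter
# than 1 because the spins are not perfectly parallel
```

When the alignment assumption (1) is violated, the average moment shrinks noticeably below the spontaneous magnetization, which is exactly the regime where the zero-temperature micromagnetic model stops being a good description.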
The time- and space-resolved description of magnetic switching processes and domain-wall movements is essential for the development of novel storage and sensing technologies such as magnetoresistive random-access memory (MRAM) [2], sensors for read heads [3], and angle sensors [4]. Besides the manipulation of the magnetization with external fields, the interaction of spin-polarized electric currents with the magnetization plays an increasing role for novel devices. Several extensions to the classical micromagnetic theory have been proposed in order to account for these spintronics effects. A lot of articles and books have been published on analytical [5][6][7] as well as numerical [1,8,9] micromagnetics. This review article is supposed to serve two purposes. First, it is meant to give a comprehensive yet compact overview over the micromagnetic theory, both on an analytical as well as a numerical level. The article describes the most commonly used discretization strategies, namely the finite-difference method (FDM) and the finite-element method (FEM), and discusses advantages, disadvantages and pitfalls in their implementation. The second and main purpose of this article, however, is to give an overview over existing models for spin transport in the context of micromagnetics. This article reviews the applicability and the limits of these models and discusses their discretization.

Energetics of a ferromagnet

The total energy of a ferromagnetic system with respect to its magnetization is composed of a number of contributions depending on the properties of the respective material. While some of these contributions, like the demagnetization energy and the Zeeman energy, can be described by classical magnetostatics, other contributions like the exchange energy and the magnetocrystalline anisotropy energy have a quantum mechanical origin. This section aims to give an overview over typical energy contributions and their representation in the micromagnetic model.
Zeeman energy

The energy of a ferromagnetic body highly depends on the external field $\mathbf{H}_\text{zee}$. The corresponding energy contribution is often referred to as Zeeman energy. According to classical electromagnetics, the Zeeman energy of a magnetic body $\Omega_m$ is given by

$$E_\text{zee} = -\mu_0 \int_{\Omega_m} M_s\, \mathbf{m} \cdot \mathbf{H}_\text{zee}\, d\mathbf{x} \tag{4}$$

with $\mu_0$ being the vacuum permeability.

Exchange energy

The characteristic property of ferromagnetic materials is their remanent magnetization, i.e. even for a vanishing external field, a ferromagnetic system can have a nonvanishing macroscopic magnetization. In a system where the spins are coupled by their dipole-dipole interaction only, the net magnetization always vanishes for a vanishing external field, as known from classical electrodynamics [10]. However, in ferromagnetic materials, the spins are subject to the so-called exchange interaction. For two localized spins, this quantum mechanical effect energetically favors a parallel over an antiparallel spin alignment. The origin of this energy contribution can be attributed to the Coulomb energy of the respective two-particle system, typically consisting of two electrons. Depending on the spin alignment, the two-particle wave function is either symmetric or antisymmetric, leading to a higher expectation value of the distance, and thus a lower expectation value of the Coulomb energy, in the case of a parallel alignment. The classical description of the exchange interaction is given by the Heisenberg model. Details on its derivation can be found in any textbook on quantum mechanics, e.g. [11]. The Heisenberg formulation of the exchange energy of two unit spins $\mathbf{s}_i$ and $\mathbf{s}_j$ is defined as

$$E^\text{ex}_{ij} = -J\, \mathbf{s}_i \cdot \mathbf{s}_j \tag{5}$$

with $J$ being the so-called exchange integral.
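A minimal sketch of evaluating the Zeeman energy (4) on a regular grid with cubic cells; the field, saturation magnetization, and cell size are placeholder values, and the function name is ours.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def zeeman_energy(m, h_zee, ms, dx):
    """Discretized Zeeman energy E = -mu0 * Ms * sum_i (m_i . H_zee) * dV
    for a unit magnetization field m of shape (nx, ny, nz, 3) on a regular
    grid with cubic cells of edge length dx and a homogeneous field h_zee."""
    dV = dx ** 3
    return -MU0 * ms * np.einsum("...i,i->...", m, h_zee).sum() * dV

# homogeneously magnetized cube, field along the magnetization (illustrative values)
m = np.zeros((10, 10, 10, 3))
m[..., 2] = 1.0
E = zeeman_energy(m, h_zee=np.array([0.0, 0.0, 8e5]), ms=8e5, dx=5e-9)
# E is negative: parallel alignment with the external field is favored
```

For a spatially varying field, `h_zee` would become an array of the same shape as `m` and the `einsum` subscripts change to `"...i,...i->..."`.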
With respect to the continuous magnetization field $\mathbf{m}(\mathbf{x})$, the exchange energy associated with all couplings of a single spin site $\mathbf{x}$ is given by

$$E^\text{ex}_{\mathbf{x}} = -\sum_i \frac{J_i}{2}\, \mathbf{m}(\mathbf{x}) \cdot \mathbf{m}(\mathbf{x} + \Delta\mathbf{x}_i) \tag{6}$$
$$= -\sum_i \frac{J_i}{2} \left[ 1 - \frac{1}{2} \big(\nabla \mathbf{m}^T \cdot \Delta\mathbf{x}_i\big)^2 + \mathcal{O}(\Delta\mathbf{x}_i^3) \right] \tag{7}$$

where the index $i$ runs over all exchange-coupled spin sites at positions $\mathbf{x} + \Delta\mathbf{x}_i$, and $J_i$ denotes the exchange integral with the respective spin. Expression (7) is obtained by application of the unit-vector identity $(\mathbf{n}_1 - \mathbf{n}_2)^2 = 2 - 2\,\mathbf{n}_1 \cdot \mathbf{n}_2$ and performing a Taylor expansion of lowest order. The exchange integral $J_i$ highly depends on the distance of the spin sites. Hence, significant contributions to the exchange energy are only provided by nearby spins, usually next neighbors. The transition from the discrete Heisenberg model to a continuous expression for the total exchange energy is done by integrating (7) while considering a regular spin lattice, i.e. a regular spacing of the spin sites $\mathbf{x}$ as well as identical $J_i$ and $\Delta\mathbf{x}_i$ for each site. In the most general form, this procedure yields

$$E^\text{ex} = C + \int_{\Omega_m} \sum_{i,j,k} A_{jk}\, \frac{\partial m_i}{\partial x_j} \frac{\partial m_i}{\partial x_k}\, d\mathbf{x} \tag{8}$$

where the coefficients of the matrix $A_{jk}$ depend on the crystal structure and the resulting exchange couplings of the spins in the magnetic body. The term $C$ results from the integration of the constant part of (7) and is usually neglected, since it does not depend on $\mathbf{m}$ and thus only gives a constant offset to the energy without changing the physics of the system. The matrix $A$ can always be diagonalized by a proper choice of coordinate system [12], which yields

$$E^\text{ex} = \int_{\Omega_m} \sum_{i,j} A_j \left( \frac{\partial m_i}{\partial x'_j} \right)^2 d\mathbf{x}'. \tag{9}$$

For cubic and isotropic lattice structures, the exchange coupling constants $A_j$ simplify further to the scalar exchange constant $A$, which results in the typical micromagnetic expression for the exchange energy (10), where $(\nabla \mathbf{m})^2 = \sum_{i,j} (\partial m_i / \partial x_j)^2$ is to be understood as a Frobenius inner product.
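The continuum exchange energy $E^\text{ex} = \int A\,(\nabla\mathbf{m})^2\,d\mathbf{x}$ can be approximated on a finite-difference grid; a sketch under illustrative assumptions (grid size, and an exchange constant roughly that of permalloy). The helical test field has energy density $A k^2$ exactly, which the discretization should recover up to truncation error.

```python
import numpy as np

def exchange_energy(m, A, dx):
    """Finite-difference approximation of E = integral of A (grad m)^2 dV for
    a unit magnetization field m of shape (nx, ny, nz, 3) on a regular cubic
    grid; np.gradient uses central differences inside the domain and
    one-sided ones at the boundary."""
    grads = np.gradient(m, dx, axis=(0, 1, 2))   # d m / d x_j, one array per j
    integrand = sum((g ** 2).sum(axis=-1) for g in grads)  # Frobenius norm^2
    return A * integrand.sum() * dx ** 3

# helical test field m = (cos kx, sin kx, 0) has (grad m)^2 = k^2 exactly
n, dx, A = 64, 2e-9, 1.3e-11        # A roughly that of permalloy (J/m)
x = np.arange(n) * dx
k = 2 * np.pi / (n * dx)
m = np.zeros((n, 4, 4, 3))
m[..., 0] = np.cos(k * x)[:, None, None]
m[..., 1] = np.sin(k * x)[:, None, None]
E = exchange_energy(m, A, dx)
# analytic value: A * k^2 * V, recovered up to discretization error
```

The central-difference stencil underestimates the derivative of a harmonic mode by the factor $\sin(k\,\Delta x)/(k\,\Delta x)$, so the error shrinks quadratically with the cell size.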
Although derived for localized spins and isotropic lattice structures, this energy expression turns out to accurately describe a large number of materials, including band magnets and anisotropic materials [13]. This is explained by the fact that (10) exactly represents the lowest-order phenomenological energy expression that penalizes inhomogeneous magnetization configurations:

$$E^\text{ex} = \int_{\Omega_m} A \sum_{i,j} \left( \frac{\partial m_i}{\partial x_j} \right)^2 d\mathbf{x} = \int_{\Omega_m} A\,(\nabla \mathbf{m})^2\, d\mathbf{x}. \tag{10}$$

Demagnetization energy

The demagnetization energy accounts for the dipole-dipole interaction of a magnetic system. This energy contribution, which is also referred to as magnetostatic energy or stray-field energy, owes its name to the fact that magnetic systems energetically favor macroscopically demagnetized states if they are subject to dipole-dipole interaction only. For a continuous magnetization $\mathbf{M} = M_s \mathbf{m}$, the demagnetization energy can be derived from classical electromagnetics. Assuming a vanishing electric current $\mathbf{j}_e = 0$, Maxwell's macroscopic equations reduce to

$$\nabla \cdot \mathbf{B} = 0 \tag{11}$$
$$\nabla \times \mathbf{H}_\text{dem} = 0 \tag{12}$$

where the magnetic flux $\mathbf{B}$ can be written in terms of the magnetic field $\mathbf{H}_\text{dem}$ and the magnetization $\mathbf{M}$

$$\mathbf{B} = \mu_0 (\mathbf{H}_\text{dem} + \mathbf{M}). \tag{13}$$

According to (12), the magnetic field $\mathbf{H}_\text{dem}$ is conservative and thus has a scalar potential $u$, i.e. $\mathbf{H}_\text{dem} = -\nabla u$. With these definitions, (11)-(13) can be reduced to the single equation

$$\nabla \cdot (-\nabla u + \mathbf{M}) = 0 \tag{14}$$

which is solved in the whole space $\mathbb{R}^3$. Assuming a localized magnetization configuration, the boundary conditions are given in an asymptotical fashion by

$$u(\mathbf{x}) = \mathcal{O}(1/|\mathbf{x}|) \quad \text{for} \quad |\mathbf{x}| \to \infty \tag{15}$$

which is referred to as open boundary condition, since the potential is required to drop to zero at infinity. The defining equation (14) is often transformed into Poisson's equation

$$\Delta u = \nabla \cdot \mathbf{M}. \tag{16}$$

However, in contrast to the original equation (14), the divergence on the right-hand side of (16) may become singular at the boundary of the magnetic material in the case of a localized magnetization. In this case, (16) is well defined only in a distributional sense. The potential $u$ can be expressed in terms of an integral equation by considering the well-known fundamental solution to the Laplacian that naturally fulfills the required open boundary conditions [10]

$$u(\mathbf{x}) = -\frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\nabla' \cdot \mathbf{M}(\mathbf{x}')}{|\mathbf{x} - \mathbf{x}'|}\, d\mathbf{x}'. \tag{17}$$

For a localized magnetization with sharp boundaries, this solution, like (16), suffers from a singular divergence at the boundary of the magnetic body. However, this singularity is integrable, as demonstrated in the following. Consider a finite magnet with $|\mathbf{M}(\mathbf{x})| = M_s$ for $\mathbf{x} \in \Omega_m$, surrounded by a thin transition shell $\Omega_t$ where the magnetization continuously decays to zero, see Figure 1. In this case, the integration over $\mathbb{R}^3$ in (17) can be reduced to an integration over $\Omega_m \cup \Omega_t$, since the magnetization, and thus the integrand of (17), vanishes outside the magnet. Further, the integral is split into an integration over $\Omega_m$ and $\Omega_t$, and the integral over $\Omega_t$ is transformed with Green's theorem

$$\frac{1}{4\pi} \int_{\Omega_t} \frac{\nabla' \cdot \mathbf{M}(\mathbf{x}')}{|\mathbf{x} - \mathbf{x}'|}\, d\mathbf{x}' = \frac{1}{4\pi} \left[ \int_{\partial\Omega_t} \frac{\mathbf{M}(\mathbf{x}') \cdot \mathbf{n}}{|\mathbf{x} - \mathbf{x}'|}\, ds' - \int_{\Omega_t} \mathbf{M}(\mathbf{x}') \cdot \nabla' \frac{1}{|\mathbf{x} - \mathbf{x}'|}\, d\mathbf{x}' \right] \tag{18}$$

where $ds'$ denotes the areal measure to $\mathbf{x}'$ and $\mathbf{n}$ is an outward-pointing normal vector. In order to obtain the potential for an ideal magnet with a sharp transition from the magnetic region $\Omega_m$ to the air region, we consider the limit of a vanishing transition region $\Omega_t \to 0$. In this case, the right-hand side of (18) reduces to the boundary integral. Furthermore, the boundary integral vanishes on the outer boundary of $\Omega_t$ because of a vanishing magnetization $\mathbf{M}$. The inner boundary, however, coincides with the boundary of the magnetic region $\partial\Omega_m$ except for its orientation. A complete integral form for the magnetic scalar potential of an ideal localized magnet in region $\Omega_m$ accordingly reads

$$u(\mathbf{x}) = -\frac{1}{4\pi} \left[ \int_{\Omega_m} \frac{\nabla' \cdot \mathbf{M}(\mathbf{x}')}{|\mathbf{x} - \mathbf{x}'|}\, d\mathbf{x}' - \int_{\partial\Omega_m} \frac{\mathbf{M}(\mathbf{x}') \cdot \mathbf{n}}{|\mathbf{x} - \mathbf{x}'|}\, ds' \right]. \tag{19}$$

In analogy to the integral equation for the electric field, the terms $\rho = -\nabla \cdot \mathbf{M}$ and $\sigma = \mathbf{M} \cdot \mathbf{n}$ are often referred to as magnetic volume charges and magnetic surface charges, respectively.
In this case, (16) is well defined only in a distributional sense. The potential u can be expressed in terms of an integral equation by considering the well-known fundamental solution to the Laplacian that naturally fulfills the required open boundary conditions [10] upxq "´1 4π ż R 3 ∇ 1¨M px 1 q |x´x 1 | dx 1 .(17) For a localized magnetization with sharp boundaries, this solution, like (16), suffers from a singular divergence at the boundary of the magnetic body. However, this singularity is integrable as demonstrated in the following. Consider a finite magnet with |M pxq| " M s for x P Ω m surrounded by a thin transition shell Ω t where the magnetization continuously decays to zero, see Figure 1. In this case, the integration over R 3 in (17) can be reduced to an integration over Ω m Y Ω t since the magnetization, and thus the integrand of (17), vanishes outside the magnet. Further, the integral is split into integration over Ω m and Ω t , and the integral over Ω t is transformed with Green's theorem (18) where ds 1 denotes the areal measure to x 1 and n is an outward-pointing normal vector. In order to obtain the potential for an ideal magnet with a sharp transition of the magnetic region Ω m to the air region, we consider the limit of a vanishing transition region Ω t Ñ 0. In this case, the right-hand side of (18) reduces to the boundary integral. Furthermore, the boundary integral vanishes for the outer boundary of Ω t , because of a vanishing magnetization M . The inner boundary, however, coincides with the boundary of the magnetic region BΩ m except for its orientation. A complete integral form for the magnetic scalar potential of an ideal localized magnet in region Ω m accordingly reads In analogy to the integral equation for the electric field, the terms ρ "´∇¨M and σ " M¨n are often referred to as magnetic volume charges and magnetic surfaces charges, respectively. 
An alternative integral expression for the potential is obtained by applying Green's theorem to (19)

u(x) = (1/4π) ∫_{Ω_m} M(x')·∇'(1/|x−x'|) dx'.   (20)

Starting from this formulation, the demagnetization field H_dem can be expressed as a convolution

H_dem(x) = −∇u(x) = ∫_{Ω_m} Ñ(x−x') M(x') dx'   (21)

with the so-called demagnetization tensor Ñ given by

Ñ(x−x') = −(1/4π) ∇∇'(1/|x−x'|).   (22)

According to classical electrodynamics, the energy connected to the demagnetization field is given by

E_dem = −(μ₀/2) ∫_{Ω_m} M·H_dem dx   (23)

where the factor 1/2 accounts for the quadratic dependence of the energy on the magnetization M. The competition of the demagnetization energy and the exchange energy leads to the formation of magnetic domains. A graphic example of this effect is the magnetic vortex structure depicted in Figure 2. In order to minimize surface charges σ, which contribute to the demagnetization energy, the magnetization field aligns parallel with edges and surfaces, which explains the curl-like configuration. The exchange energy favors a parallel alignment of the magnetization, which leads to the creation of four distinct, almost homogeneously magnetized, triangular domains, each aligned with one of the square's edges. A perfect in-plane curl configuration, which completely avoids surface charges, is very unfavorable with respect to the exchange energy because it leads to a singularity in the center of the curl. In order to reduce the exchange energy, the magnetization rotates out of plane in a distinct area around the center of the vortex, called the vortex core.

Crystalline anisotropy energy

Another important contribution to the total free energy of a magnet is the anisotropy energy that favors the parallel alignment of the magnetization to certain axes referred to as easy axes.
The origin of this energy lies in the spin-orbit coupling either due to an anisotropic crystal structure or due to lattice deformation at material interfaces [13]. Depending on the symmetry of these anisotropies, the respective material will exhibit one or multiple easy axes. These axes are undirected and thus the energy does not depend on the sign of the magnetization

E_ani(m) = E_ani(−m).   (24)

For the simplest case of a single easy axis, the anisotropy energy is given by

E_aniu = −∫_{Ω_m} [K_u1 (m·e_u)² + K_u2 (m·e_u)⁴ + O(m⁶)] dx   (25)

where e_u is a unit vector parallel to the easy axis and K_u1 and K_u2 are the scalar anisotropy constants. This phenomenological expression is obtained by symmetry considerations. For a uniaxial anisotropy, the energy may depend only on the angle between the magnetization and the easy axis, i.e. on m·e_u. Furthermore, only even powers in m·e_u are considered in order to fulfill condition (24). Uniaxial anisotropy typically occurs in materials with a hexagonal or tetragonal crystal structure, e.g. cobalt. Materials with cubic lattice symmetry such as iron, which has a body-centered cubic structure, exhibit three easy axes e_i which are pairwise orthogonal

e_i·e_j = δ_ij.   (26)

As for the uniaxial anisotropy, the expression for the cubic anisotropy energy is developed as a series in the magnetization components along the easy axes up to sixth order

E_anic = ∫_{Ω_m} [K_c1 (m₁²m₂² + m₂²m₃² + m₃²m₁²) + K_c2 m₁²m₂²m₃²] dx   (27)

where m_i = e_i·m is the projection of the magnetization m on the anisotropy axis e_i. Only contributions compatible with the symmetry condition (24) are considered. Moreover, the resulting expression is required to be invariant under permutations of the magnetization components m_i in order to have cubic symmetry.
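The two phenomenological expressions (25) and (27) are easy to make concrete numerically. The sketch below uses illustrative constants and axes (not taken from the text) and checks the symmetry property (24) as well as the easy-axis preference for positive anisotropy constants.

```python
import numpy as np

# Anisotropy energy densities per (25) and (27); parameters illustrative.
def uniaxial_density(m, e_u, Ku1, Ku2=0.0):
    """-Ku1 (m·e_u)^2 - Ku2 (m·e_u)^4, even in m as required by (24)."""
    c = np.dot(m, e_u)
    return -Ku1 * c**2 - Ku2 * c**4

def cubic_density(m, Kc1, Kc2=0.0):
    """Kc1 (m1^2 m2^2 + m2^2 m3^2 + m3^2 m1^2) + Kc2 m1^2 m2^2 m3^2,
    with m_i the components along the orthonormal easy axes."""
    m1, m2, m3 = m
    return (Kc1 * (m1**2 * m2**2 + m2**2 * m3**2 + m3**2 * m1**2)
            + Kc2 * (m1 * m2 * m3)**2)

e_u = np.array([0.0, 0.0, 1.0])   # easy axis along z (illustrative)
Ku1 = 5e5                         # J/m^3, illustrative magnitude

# For Ku1 > 0 the easy-axis state is lower in energy than the hard plane,
# and the energy is even in m.
m_easy = np.array([0.0, 0.0, 1.0])
m_hard = np.array([1.0, 0.0, 0.0])
```

For Kc1 > 0 the cubic density likewise favors the ⟨100⟩ directions (density 0) over ⟨111⟩ (density Kc1/3).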
While the magnetization prefers to align parallel to the respective axes in the case of positive anisotropy constants K_u1, K_u2, K_c1, K_c2, the magnetization avoids a parallel configuration for negative anisotropy constants. In the case of uniaxial anisotropy, this leads to an easy-plane anisotropy. In the case of cubic anisotropy, this leads to four easy axes, as is the case for nickel, which has a face-centered cubic structure. Both equations (25) and (27) hold for magnetic anisotropies in bulk material. If magnetic anisotropy is caused at material interfaces, either due to lattice deformation or due to the electronic band structure, the energy depends on the magnetization configuration m at this interface only. The energy for such a surface anisotropy is obtained by expressions similar to (25) and (27). However, instead of integrating over the magnetic volume Ω_m, the integration in this case has to be carried out over the respective interface ∂Ω_m only. Although derived only phenomenologically, the expressions (25) and (27) have proven to describe anisotropy effects with high accuracy. For many applications it is even sufficient to consider the lowest-order contributions only and to set the higher-order constants K_u2 and K_c2 to zero.

Antisymmetric exchange energy

As discovered by Dzyaloshinskii [14] and Moriya [15], neighboring spins can be subject to an antisymmetric exchange interaction in addition to the regular exchange interaction discussed in Section 2.2. This effect, which is often referred to as the Dzyaloshinskii-Moriya interaction (DMI), is caused by the spin-orbit coupling in certain material systems. The general antisymmetric exchange energy of two spins s_i and s_j is given as

E^dmi_ij = d_ij·(s_i × s_j)   (28)

where the vector d_ij depends on the symmetry of the system. A typical system that gives rise to DMI is a magnetic layer with an interface to a heavy-metal layer.
In this case, the antisymmetric exchange between two neighboring magnetic spins near the interface is mediated by a single atom in the heavy-metal layer and the vector d_ij is given as

d_ij = d (Δx̂ × e_d)   (29)

where d is a scalar coupling constant, Δx̂ = Δx/|Δx| is a unit vector pointing from spin site i to spin site j, and e_d is the interface normal. The transition to continuum theory is done similarly to the exchange interaction. Namely, in a first step, the energy of the couplings for a single spin site is expressed in terms of the continuous magnetization field m and the magnetization at the neighboring site m(x+Δx) is expanded in powers of Δx to the lowest order

E^dmii_x = Σ_i (d_i/2) [Δx̂_i × e_d]·[m(x) × m(x+Δx_i)]   (30)
= Σ_i (d_i/2) [Δx̂_i·m][e_d·(m + ∇mᵀΔx_i)] − (d_i/2) [Δx̂_i·(m + ∇mᵀΔx_i)][e_d·m] + O(Δx_i²)   (31)
= Σ_i (d_i/2) [Δx̂_i·m][e_d·(∇mᵀΔx_i)] − (d_i/2) [Δx̂_i·(∇mᵀΔx_i)][e_d·m] + O(Δx_i²)   (32)

where the vector identity (a×b)·(c×d) = (a·c)(b·d) − (a·d)(b·c) was used and the summation is carried out over the coupled neighboring spins. Performing the integration and assuming isotropic coupling d_i as well as an isotropic lattice spacing Δx_i, similar to the exchange interaction in Section 2.2, yields the continuous expression

E_dmii = ∫_{Ω_m} D_i [ m·∇(e_d·m) − (∇·m)(e_d·m) ] dx   (33)

for the total antisymmetric exchange energy for interface DMI. The scalar coupling constant D_i depends on the coupling constants d_i as well as the relative positions Δx_i. Another class of materials exhibiting DMI are magnetic bulk materials lacking inversion symmetry [16,17].
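The interface expression (33) can be spot-checked with finite differences. For a one-dimensional Néel spiral m = (sin qx, 0, cos qx) with e_d = e_z one has m·∇(e_d·m) = −q sin² qx and (∇·m)(e_d·m) = q cos² qx, so the integrand is the constant −D_i q. A minimal dimensionless sketch (illustrative values):

```python
import numpy as np

Di, q = 1.0, 2 * np.pi
x = np.linspace(0.0, 1.0, 2001)
# Néel spiral: rotation in the plane spanned by the propagation direction x
# and the interface normal e_d = e_z
m = np.stack([np.sin(q * x), np.zeros_like(x), np.cos(q * x)], axis=1)

dmdx = np.gradient(m, x, axis=0)
# D_i [ m·∇(e_d·m) − (∇·m)(e_d·m) ]; only x-derivatives survive in 1D
energy_density = Di * (m[:, 0] * dmdx[:, 2] - dmdx[:, 0] * m[:, 2])
```

Away from the grid edges (where `np.gradient` is only first-order accurate) the density is flat at −D_i q, so a negative D_i favors this sense of rotation.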
For these materials, the coupling vector d_ij is given as

d_ij = −d Δx̂   (34)

which results in the following energy for the couplings of a single spin site x

E^dmib_x = −Σ_i (d_i/2) Δx̂_i·[m(x) × m(x+Δx_i)]   (35)
= −Σ_i (d_i/2) Δx̂_i·[m(x) × (∇mᵀΔx_i)].   (36)

Again, assuming isotropic coupling d_i and lattice spacing Δx_i results in the continuous formulation for the energy

E_dmib = ∫_{Ω_m} D_b m·(∇×m) dx   (37)

with the coupling constant D_b depending on the atomistic coupling constants d_i and the lattice spacing Δx_i. Besides the prominent interface and bulk DMI, further antisymmetric exchange couplings are defined by Lifshitz invariants [18,19]. The antisymmetric exchange counteracts the regular exchange energy, which favors homogeneous magnetization configurations and penalizes domain walls. It gives rise to a magnetization configuration called a skyrmion, see Figure 3. A skyrmion is characterized by a continuous rotation of the magnetization field along any lateral axis crossing its center. It carries a topological charge, meaning that the skyrmion configuration cannot be continuously transformed into the homogeneous ferromagnetic state.

Interlayer-exchange energy

The magnetic layers of a multilayer structure may be exchange coupled even when separated by a nonmagnetic spacer layer. This coupling, which was first proposed by Ruderman and Kittel [20], is mediated by the conducting electrons of the nonmagnetic layer. The coupling constant A shows oscillatory behavior with respect to the thickness of the spacer layer, i.e. depending on its thickness, the coupling of the magnetic layers may be either ferromagnetic or antiferromagnetic. This effect was described in a more generalized theory by Kasuya [21] and Yosida [22] and is referred to as the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction.
In the continuum approximation, the interaction is assumed to couple the interface between one magnetic layer and the spacer layer, Γ₁, and the interface between the other magnetic layer and the spacer layer, Γ₂. These interfaces are assumed to have equal size. Integration of the Heisenberg interaction (5) yields the continuous expression for the interlayer-exchange energy

E_iex = −∫_{Γ₁} A m(x)·m[P(x)] ds   (38)

with A being the exchange constant, whose sign and strength depend on the thickness of the spacer layer, and P : Γ₁ → Γ₂ being an isomorphism that maps any point on Γ₁ to its nearest point on Γ₂. The RKKY interaction is often exploited in order to build so-called synthetic antiferromagnets, see Figure 4. For this purpose, the thickness of the spacer layer is chosen such that A is negative, which results in an antiferromagnetic coupling of the magnetic layers. Synthetic antiferromagnets are important for applications due to their stability and lack of stray field.

Other energy contributions and effects

While this work will focus on the energy contributions introduced above, there are numerous additional energy contributions and other effects that may play important roles in certain systems [5]. For instance, the effect of magnetostriction couples the mechanical properties of a magnetic material to its magnetization configuration [23,24]. If magnetic systems are subject to charge currents, eddy currents [25,26] and the Oersted field [27] need to be considered. Another important area of research is finite-temperature micromagnetics. Various approaches have been proposed in order to account for temperature effects in micromagnetics, among them Langevin dynamics [28-30] and the Landau-Lifshitz-Bloch equation [31-33]. However, this comprehensive topic is beyond the scope of this article.
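Returning to the interlayer-exchange expression (38): the surface integral is straightforward to discretize when the two interfaces are flat and congruent, with P mapping each grid point straight across the spacer. The sketch below (all constants illustrative) confirms that a negative A favors the antiparallel state, as exploited in synthetic antiferromagnets.

```python
import numpy as np

A_iex = -5e-4                         # J/m^2, illustrative negative (AFM) coupling
nx = ny = 64
dA = (100e-9 / nx) * (100e-9 / ny)    # area element of a 100 nm square interface

def interlayer_energy(m1, m2, A, dA):
    """E = -A ∫_Γ1 m1(x)·m2(P(x)) ds, evaluated as a Riemann sum over the grid."""
    return -A * np.sum(m1 * m2, axis=-1).sum() * dA

# Uniformly magnetized layers, parallel vs. antiparallel
up = np.tile([0.0, 0.0, 1.0], (nx, ny, 1))
E_par = interlayer_energy(up, up, A_iex, dA)
E_anti = interlayer_energy(up, -up, A_iex, dA)
```

With A < 0 the antiparallel configuration has the lower energy; flipping the sign of A reverses the preference, mirroring the oscillatory ferro-/antiferromagnetic coupling described above.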
Static micromagnetics

Static micromagnetics is the theory of stable magnetization configurations and hence a valuable model for the investigation of material properties such as hysteresis. The prerequisite for a stable magnetization configuration is a minimum of the total free energy E of the system with respect to its magnetization m. In order to be a valid micromagnetic solution, m is further required to fulfill the micromagnetic unit-sphere constraint

min E(m) subject to |m(x)| = 1.   (39)

Since the solution variable m is a continuous vector field, variational calculus is applied in order to solve for an energetic minimum. A necessary condition for a minimum is a functional differential δE that vanishes for arbitrary test functions v ∈ V_m, with V_m being the function space of the magnetization m

δE(m, v) = d/dε E(m + εv)|_{ε=0} = lim_{ε→0} [E(m + εv) − E(m)]/ε = 0  ∀ v ∈ V_m.   (40)

An alternative formulation of this condition can be stated in terms of the functional derivative δE/δm, which is defined by

∫_{Ω_m} (δE/δm)·v dx = δE(m, v)  ∀ v ∈ V⁰_m   (41)

where the function space V⁰_m ⊂ V_m includes only those functions of V_m that vanish on the boundary, v(∂Ω_m) = 0. That said, depending on the considered energy E, the differential δE as defined in (40) in general differs from the left-hand side of (41) by a boundary integral, i.e.

δE(m, v) = ∫_{Ω_m} (δE/δm)·v dx + ∫_{∂Ω_m} B(m)·v ds  ∀ v ∈ V_m.   (42)

This means that knowledge of the functional derivative (41) is not sufficient in order to solve the minimization problem (39). In general, additional boundary conditions defined by B(m) have to be considered. All variational considerations so far do not account for the unit-sphere constraint |m| = 1.
This constraint can be incorporated by a Lagrange multiplier technique where the modified functional E_λ(m, λ, μ), given by

E_λ(m, λ, μ) = E(m) + ∫_{Ω_m} λ(x)(|m|² − 1) dx + ∫_{∂Ω_m} μ(x)(|m|² − 1) ds,   (43)

is minimized with respect to both the magnetization m and the Lagrange multiplier fields λ and μ implementing the constraint on the volume and surface, respectively. The solution to this minimization problem is again obtained by variational calculus, where the variations of the solution variables v_m, v_λ and v_μ can be treated separately

δE_λ({m, λ, μ}, v_m) = 0  ∀ v_m ∈ V_m   (44)
δE_λ({m, λ, μ}, v_λ) = 0  ∀ v_λ ∈ V_λ   (45)
δE_λ({m, λ, μ}, v_μ) = 0  ∀ v_μ ∈ V_μ   (46)

where V_λ and V_μ are appropriate function spaces for the variation of λ and μ, respectively. Expanding (44) with the definition (43) yields

δE_λ({m, λ, μ}, v_m) = δE(m, v_m) + d/dε[∫_{Ω_m} λ(|m + εv_m|² − 1) dx]|_{ε=0} + d/dε[∫_{∂Ω_m} μ(|m + εv_m|² − 1) ds]|_{ε=0}   (47)
= ∫_{Ω_m} (δE/δm)·v_m dx + ∫_{∂Ω_m} B·v_m ds + 2∫_{Ω_m} λ m·v_m dx + 2∫_{∂Ω_m} μ m·v_m ds.   (48)

Since (48) has to vanish for arbitrary v_m ∈ V_m, it also vanishes for test functions with vanishing boundary values v_m ∈ V⁰_m. Hence the functional derivative of the energy has to fulfill

δE/δm = −2λm   (49)

in Ω_m, which is required to hold for arbitrary λ ∈ V_λ. This condition is satisfied if and only if δE/δm is parallel to m and hence

m × δE/δm = 0   (50)

which is exactly Brown's condition [5]. Testing (48) with functions that are defined on the boundary only, v(Ω_m∖∂Ω_m) = 0, and considering the surface Lagrange multiplier μ in the same manner as above yields the additional boundary condition

m × B = 0.   (51)

Moreover, inserting (43) into (45) yields

δE_λ({m, λ, μ}, v_λ) = d/dε[∫_{Ω_m} (λ + εv_λ)(|m|² − 1) dx]|_{ε=0}   (52)
= ∫_{Ω_m} v_λ (|m|² − 1) dx   (53)
= 0   (54)

which is required to hold for arbitrary v_λ ∈ V_λ and thus represents the micromagnetic constraint |m|² = 1. Due to the interface Lagrange multiplier μ, this constraint is further specifically enforced on the boundary by (46).
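Brown's condition (50) is easy to illustrate for a spatially uniform magnetization (macrospin), where the functional derivative reduces to an ordinary gradient. In the sketch below (illustrative parameters, Zeeman and uniaxial anisotropy terms only), the aligned state gives a vanishing torque m × δE/δm, while a tilted state does not.

```python
import numpy as np

# Macrospin check of Brown's condition m × δE/δm = 0 at a minimum.
# δE/δm combines the Zeeman and uniaxial-anisotropy derivatives; all
# parameter values are illustrative.
mu0, Ms, Ku1 = 4e-7 * np.pi, 8e5, 5e5
H = np.array([0.0, 0.0, 1e5])        # applied field along z
e_u = np.array([0.0, 0.0, 1.0])      # easy axis along z

def dE_dm(m):
    return -mu0 * Ms * H - 2 * Ku1 * np.dot(m, e_u) * e_u

m_min = np.array([0.0, 0.0, 1.0])                 # aligned state: a minimum
m_tilt = np.array([np.sin(0.3), 0.0, np.cos(0.3)])

torque_min = np.cross(m_min, dE_dm(m_min))        # vanishes at the minimum
torque_tilt = np.cross(m_tilt, dE_dm(m_tilt))     # nonzero away from it
```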
In the following, the functional derivatives and boundary conditions for the energy contributions introduced in Section 2 will be discussed in detail.

Zeeman energy

The energy differential δE/δm for the Zeeman energy is obtained by variation of (4), which yields

δE_zee(m, v_m) = d/dε[−μ₀ ∫_{Ω_m} M_s (m + εv_m)·H_zee dx]|_{ε=0}   (55)
= −∫_{Ω_m} μ₀ M_s H_zee·v_m dx.   (56)

The variation does not give rise to any additional boundary integral. Hence, the derivative and boundary term B for the Zeeman energy are given by

δE_zee/δm = −μ₀ M_s H_zee   (57)
B = 0.   (58)

Exchange energy

The differential δE_ex for the exchange energy is derived from (10), resulting in

δE_ex(m, v_m) = d/dε[∫_{Ω_m} A [∇(m + εv_m)]² dx]|_{ε=0}   (59)
= 2∫_{Ω_m} A ∇m : ∇v_m dx   (60)
= −2∫_{Ω_m} [∇·(A∇m)]·v_m dx + 2∫_{∂Ω_m} A (∂m/∂n)·v_m ds.   (61)

Here, integration by parts is performed in order to eliminate spatial derivatives of the test functions v_m. The resulting volume integral is of the same form as the integral in (41), which enables the identification of the functional derivative δE_ex/δm. However, this necessary step also gives rise to a surface integral and thus to a boundary term B. The resulting derivative and boundary term for the exchange energy read

δE_ex/δm = −2∇·(A∇m)   (62)
B = 2A ∂m/∂n   (63)

where (62) can be simplified to δE_ex/δm = −2A∆m if A is assumed constant throughout the magnetic region Ω_m.

Demagnetization energy

The differential for the demagnetization energy is obtained similarly to the differential for the Zeeman energy. A decisive difference to the Zeeman energy, however, is the linear dependence of the demagnetization field H_dem(M) on the magnetization M.
The variation of the magnetization therefore leads to an additional factor of 2, which results in the differential

δE_dem(m, v_m) = d/dε[−(μ₀/2) ∫_{Ω_m} M_s (m + εv_m)·H_dem(m + εv_m) dx]|_{ε=0}   (64)
= −∫_{Ω_m} μ₀ M_s H_dem·v_m dx.   (65)

Consequently, the derivative and boundary term for the demagnetization energy are given by

δE_dem/δm = −μ₀ M_s H_dem   (66)
B = 0.   (67)

Anisotropy energy

For the uniaxial anisotropy (25) the derivative and boundary term are given by

δE_aniu/δm = −2K_u1 e_u (e_u·m) − 4K_u2 e_u (e_u·m)³   (68)
B = 0   (69)

and for the cubic anisotropy (27) the respective terms read

δE_anic/δm = 2K_c1 (m₁m₂² + m₁m₃², m₂m₃² + m₂m₁², m₃m₁² + m₃m₂²)ᵀ + 2K_c2 (m₁m₂²m₃², m₁²m₂m₃², m₁²m₂²m₃)ᵀ   (70)
B = 0.   (71)

For interface anisotropy contributions, the variational derivative δE/δm obviously vanishes and the influence of the energy contribution reduces to the boundary term derived from the surface integral

∫_{∂Ω_m} ε(x) ds   (72)

with ε(x) being the respective areal energy density.

Antisymmetric exchange energy

Similar to the exchange energy, the variation of the antisymmetric exchange energy (33) needs to be transformed by partial integration in order to eliminate spatial derivatives of the test functions v_m

δE_dmii(m, v_m) = d/dε[∫_{Ω_m} D_i [(m + εv_m)·∇(e_d·(m + εv_m)) − ∇·(m + εv_m)(e_d·(m + εv_m))] dx]|_{ε=0}   (73)
= ∫_{Ω_m} D_i [v_m·∇(e_d·m) + m·∇(e_d·v_m) − (∇·v_m)(e_d·m) − (∇·m)(e_d·v_m)] dx   (74)
= 2∫_{Ω_m} D_i [∇(e_d·m) − (∇·m)e_d]·v_m dx − ∫_{∂Ω_m} D_i [(e_d × n) × m]·v_m ds.   (75)

The resulting variational derivative and boundary term for the interface DMI energy read

δE_dmii/δm = 2D_i [∇(e_d·m) − (∇·m)e_d]   (76)
B = −D_i (e_d × n) × m.   (77)
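For a spatially uniform magnetization, the functional derivatives of the local energy terms reduce to ordinary gradients of the energy density, which can be checked against finite differences. The sketch below verifies the uniaxial result (68) this way (illustrative constants; K_u2 = 0).

```python
import numpy as np

# Finite-difference sanity check of the uniaxial anisotropy derivative (68):
# for uniform m, δE/δm equals the gradient of ε(m) = -Ku1 (m·e_u)^2.
Ku1 = 5e5
e_u = np.array([0.0, 0.0, 1.0])

def density(m):
    return -Ku1 * np.dot(m, e_u)**2

def dE_dm(m):
    return -2 * Ku1 * np.dot(m, e_u) * e_u   # eq. (68) with Ku2 = 0

m = np.array([0.3, -0.5, 0.81])   # need not be normalized for this check
h = 1e-6
num = np.array([(density(m + h * np.eye(3)[i]) - density(m - h * np.eye(3)[i])) / (2 * h)
                for i in range(3)])
```

Central differences are exact for this quadratic density up to rounding, so the agreement with (68) is essentially to machine precision.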
For the antisymmetric bulk exchange (37) the differential is given by

δE_dmib(m, v_m) = d/dε[∫_{Ω_m} D_b (m + εv_m)·∇×(m + εv_m) dx]|_{ε=0}   (78)
= ∫_{Ω_m} D_b [m·(∇×v_m) + v_m·(∇×m)] dx   (79)
= 2∫_{Ω_m} D_b (∇×m)·v_m dx − ∫_{∂Ω_m} D_b (n×m)·v_m ds,   (80)

which leads to the following variational derivative and boundary term

δE_dmib/δm = 2D_b ∇×m   (81)
B = −D_b n×m.   (82)

Energy minimization with multiple contributions

In order to minimize the total energy of a system subject to multiple energy contributions, both Brown's condition (50) and the boundary condition (51) have to be fulfilled for the composite energy functional. Namely, if a system is subject to the exchange energy (10) and the demagnetization energy (23), the respective conditions for an energy minimum read

m × δE/δm = m × (−2A∆m − μ₀M_s H_dem) = 0   (83)
m × B = m × (2A ∂m/∂n) = 0   (84)

with (84) being the "classical" micromagnetic boundary condition. Spatial derivatives of the magnetization m are always orthogonal to the magnetization due to the micromagnetic unit-sphere constraint. Hence, the boundary condition (84) is usually simplified to ∂m/∂n = 0. If the system is additionally subject to the antisymmetric exchange (33), both Brown's condition (83) and the boundary condition (84) are supplemented with the respective contributions. The resulting boundary condition reads

2A ∂m/∂n − D_i (e_d × n) × m = 0   (85)

where the cross product m × B was again neglected by orthogonality arguments. Depending on the considered energy contributions, this boundary condition is supplemented by additional terms. Hence, adding energy contributions does not add additional boundary conditions, but changes the single boundary condition instead.

Dynamic micromagnetics

In micromagnetics, magnetization dynamics is described by the Landau-Lifshitz (LL) equation that was originally proposed in [34]. This equation describes the spatially resolved motion of the magnetization in an effective field.
Due to problems with the dissipative term, an alternative formulation was derived by Gilbert [35,36]. Both formulations are completely equivalent under a proper parameter transformation. However, for the purpose of distinction, the latter is usually referred to as the Landau-Lifshitz-Gilbert (LLG) equation or, alternatively, as the Gilbert or implicit form of the LL equation. The LLG can be derived by means of classical Lagrangian mechanics by the choice of an appropriate action S. Due to the micromagnetic unit-sphere constraint |m| = 1, the magnetization field m can be described by means of spherical coordinates

m(x) = (sin[θ(x)] cos[φ(x)], sin[θ(x)] sin[φ(x)], cos[θ(x)])ᵀ   (86)

with the polar angle θ and the azimuthal angle φ. For the sake of readability we omit the spatial dependence of fields in the following. According to Hamilton's principle, the temporal evolution of any field is given as the path with stationary action δS = 0, with the action S defined as

S(φ, θ) = ∫_T ∫_{Ω_m} L dx dt   (87)

where L is the so-called Lagrangian which, in turn, is given by

L = T − V   (88)

with T being the kinetic energy density and V being the potential energy density. The potential energy density V is naturally given by the free energy E = ∫ V dx, whose contributions are introduced in Section 2. However, the choice of the kinetic energy T is not immediately clear. Due to the unit-sphere constraint, the motion of the magnetization is restricted to rotations. Hence, it seems reasonable to assume a kinetic energy similar to that of a rotating rigid body

T = ½ Ω·I·Ω   (89)

with I being an inertia tensor and Ω being the angular velocity vector. In the rigid-body picture, the magnetization at a certain point, m(x), is represented by a cylindrical stick with one end fixed at the coordinate origin, see Figure 5. We introduce the fixed-body frame with coordinate axes e'₁, e'₂, e'₃ as shown in Figure 5 and mark with a prime those vectors whose coordinates are expressed in terms of this new basis.
In the fixed-body frame, the magnetization is trivially given as

m = (0, 0, 1)'.   (90)

Due to the cylindrical symmetry of the rigid-body representation of the magnetization, it is clear that the inertia tensor is diagonal in the fixed-body frame and thus reads

I = diag(I₁, I₂, I₃)'.   (91)

The angular velocity vector in the fixed-body frame is given as

Ω = (∂_tφ sin(θ) sin(ψ) + ∂_tθ cos(ψ), ∂_tφ sin(θ) cos(ψ) − ∂_tθ sin(ψ), ∂_tφ cos(θ) + ∂_tψ)'   (92)

where the spherical coordinates θ and φ are complemented by the angle ψ that describes the rotation of the magnetization's stick representation around its symmetry axis. In order to derive the LLG from the general kinetic energy density (89), two assumptions are required. The first assumption is that of vanishing moments of inertia I₁ = I₂ = 0. This assumption is reasonable since the magnetization stick has no mass in a classical sense. Hence, the rotation of a magnetic moment in an external field is expected to stop instantaneously if the external field is switched off rapidly. With this assumption, the kinetic energy (89) reduces to

T = ½ I₃ Ω₃².   (93)

The second assumption is that the angular momentum of the rotation around the symmetry axis, L₃, is connected to the saturation magnetization by the relation

M_s = γ_e L₃ = γ_e I₃ Ω₃   (94)

where γ_e is the electron's gyromagnetic ratio. This relation is reasonable since the saturation magnetization takes the place of the magnetic moment in the continuous theory of micromagnetics and the magnetic moment is generated by the spin, i.e. the angular momentum connected to the symmetry axis.
Inserting (92) into (93) and further using the relation (94) yields the following expression for the variation of the integrated kinetic energy with respect to the azimuthal angle φ

δ[∫_T ∫_{Ω_m} T dx dt]({φ, θ}, v_φ) = d/dε ∫_T ∫_{Ω_m} T(φ + εv_φ, θ) dx dt|_{ε=0}   (95)
= ∫_T ∫_{Ω_m} (dT/dΩ₃) (d/dε) Ω₃(φ + εv_φ, θ)|_{ε=0} dx dt   (96)
= ∫_T ∫_{Ω_m} I₃ Ω₃ ∂_t v_φ cos(θ) dx dt   (97)
= ∫_T ∫_{Ω_m} (M_s/γ_e) ∂_tθ sin(θ) v_φ dx dt.   (98)

Applying the same procedure to the variation with respect to the polar angle θ yields

δ[∫_T ∫_{Ω_m} T dx dt]({φ, θ}, v_θ) = −∫_T ∫_{Ω_m} (M_s/γ_e) ∂_tφ sin(θ) v_θ dx dt.   (99)

The angular velocity (92) can be simplified by setting ψ = 0. This angle describes the rotation of the magnetization's stick representation around its symmetry axis; hence, this assumption does not introduce any loss of generality [37]

Ω = (∂_tθ, ∂_tφ sin(θ), ∂_tφ cos(θ) + ∂_tψ)'.   (100)

This simplification leads to the following relations for the time derivatives of the magnetization coordinates and their respective variations

∂_t m₁ = sin(θ) ∂_tφ,  v_{m1} = sin(θ) v_φ   (101)
∂_t m₂ = −∂_tθ,  v_{m2} = −v_θ.   (102)

Inserting into the kinetic-energy variations (98) and (99) yields

δ[∫_T ∫_{Ω_m} T dx dt](m, v_{m1}) = ∫_T ∫_{Ω_m} (M_s/γ_e) ∂_t m₂ v_{m1} dx dt   (103)
δ[∫_T ∫_{Ω_m} T dx dt](m, v_{m2}) = −∫_T ∫_{Ω_m} (M_s/γ_e) ∂_t m₁ v_{m2} dx dt   (104)

which can be summarized in the vector-valued variation

δ[∫_T ∫_{Ω_m} T dx dt](m, v_m) = ∫_T ∫_{Ω_m} (M_s/γ_e) (m × ∂_t m)·v_m dx dt   (105)

in the fixed-body frame, where the magnetization is given as m = (0, 0, 1)', see (90). In order to compute the variation of the action δS, the variation of the kinetic energy (105) has to be complemented by the variation of the potential energy, which is given as

δ[∫_T ∫_{Ω_m} V dx dt](m, v_m) = δ[∫_T E dt](m, v_m)   (106)
= ∫_T δE(m, v_m(t)) dt   (107)
= ∫_T [∫_{Ω_m} (δE/δm)·v_m dx + ∫_{∂Ω_m} B·v_m ds] dt   (108)

where the boundary term B is the same as introduced in Section 3 and hence depends on the particular choice of energy contributions.
Putting together the variation of the kinetic energy (105) and the potential energy (108) results in the variation of the action

δS(m, v_m) = ∫_T [∫_{Ω_m} ((M_s/γ_e)(m × ∂_t m)·v_m + (δE/δm)·v_m) dx + ∫_{∂Ω_m} B·v_m ds] dt = 0.   (109)

Considering variations v_m ∈ V⁰_m and taking the cross product of the resulting Euler-Lagrange equation with m yields

∂_t m − (∂_t m·m) m = ∂_t m = (γ_e/M_s) m × δE/δm   (111)

where ∂_t m·m vanishes due to the micromagnetic unit-sphere constraint, which requires any derivative of the magnetization to be perpendicular to m. The equation of motion (111) describes the magnetization dynamics without energy losses. However, realistic systems are expected to lose magnetic energy by conversion to, e.g., phonons or eddy currents [5]. In the framework of Lagrangian mechanics, dissipative processes are described by a Rayleigh function D. The time evolution of the magnetization m subject to the dissipative function D is then given as

δS(m, v_m) = −δ[∫_T ∫_{Ω_m} D dx dt](m, v_m).   (112)

The Rayleigh function D is usually chosen to be proportional to the square of the time derivative of the variable of motion. Choosing D = αM_s/(2γ_e) (∂_t m)² in the case of magnetization dynamics yields

δS(m, v_m) = −δ[∫_T ∫_{Ω_m} αM_s/(2γ_e) (∂_t m)² dx dt](m, v_m)   (113)
= −∫_T ∫_{Ω_m} α(M_s/γ_e) ∂_t m·v_m dx dt   (114)

where α ≥ 0 is a dimensionless damping parameter. Inserting the variation of the action (109) and once more considering variations v_m ∈ V⁰_m that vanish on the boundary ∂Ω_m results in the equation of motion

∂_t m = (γ_e/M_s) m × δE/δm + α m × ∂_t m.   (115)

By introducing the effective field defined as

H_eff = −(1/(μ₀M_s)) δE/δm,   (116)

this equation can be turned into the well-known Gilbert form of the LLG

∂_t m = −γ m × H_eff + α m × ∂_t m   (117)

with γ = μ₀γ_e ≈ 2.2128×10⁵ m/(A·s) being the reduced gyromagnetic ratio. This equation of motion is completed by the boundary condition m × B = 0, which is obtained by varying v_m on the boundary in (109) and by considering the same cross product with m that is applied to obtain (111). Note that this boundary condition resembles the static micromagnetic boundary condition (51).
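The Gilbert form (117) defines ∂_t m only implicitly, since it appears on both sides. For a single spin it can nevertheless be evaluated directly by solving a 3×3 linear system, (I − α[m]_×) ∂_t m = −γ m × H_eff, where [m]_× is the cross-product matrix. A sketch with illustrative parameters:

```python
import numpy as np

gamma, alpha = 2.2128e5, 0.1
H = np.array([2e4, 0.0, 8e4])       # effective field, illustrative
m = np.array([0.6, 0.0, 0.8])       # unit magnetization

# Cross-product matrix: mx @ v == np.cross(m, v)
mx = np.array([[0.0, -m[2], m[1]],
               [m[2], 0.0, -m[0]],
               [-m[1], m[0], 0.0]])
dm = np.linalg.solve(np.eye(3) - alpha * mx, -gamma * np.cross(m, H))

# For comparison: the equivalent explicit (Landau-Lifshitz) form
dm_explicit = (-gamma / (1 + alpha**2) * np.cross(m, H)
               - alpha * gamma / (1 + alpha**2) * np.cross(m, np.cross(m, H)))
```

The linear solve reproduces the explicit form exactly, and ∂_t m is perpendicular to m, consistent with the unit-sphere constraint.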
From (109) it is clear that this boundary condition has to hold at all times. The semi-implicit formulation introduced by Gilbert can be transformed into an explicit form by inserting the complete right-hand side of (117) into ∂_t m on the right-hand side of (117). Applying basic vector algebra and considering m·∂_t m = 0 and m·m = 1 yields

∂_t m = −γ/(1 + α²) m × H_eff − αγ/(1 + α²) m × (m × H_eff)   (118)

which, apart from the definition of the parameters γ and α, equals the original equation introduced by Landau and Lifshitz. While the presented derivation of the LLG is not rigorous, it should be noted that the required assumptions, namely vanishing moments of inertia I₁ = I₂ = 0 and the connection of the remaining moment of inertia with the saturation magnetization, M_s = γ_e I₃Ω₃, are physically reasonable. The strict application of variational calculus not only yields the LLG but also its boundary conditions, depending on the contributions to the effective field H_eff. A more detailed investigation of the LLG as derived from a Lagrangian is presented in [38], where it is also shown that the kinetic contribution to the Lagrangian (93) is equivalent to the assumption

T = (M_s/γ_e) ∂_tφ cos(θ)   (119)

introduced in the original work by Gilbert [35] and earlier by Döring [6]. A full quantum mechanical description of a spin subject to exchange interaction, anisotropy and a Zeeman field is given in [39], where the LLG equation is also obtained in a limit case. Figure 6 illustrates the damped precessional motion of the magnetization m in an effective field H_eff as described by the LLG. As noted in Section 1, the main assumption of micromagnetics is the constant modulus of the magnetization field m.
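The conservation of this modulus under the LLG, together with the energy decay for α > 0, can be observed directly by integrating the explicit form (118) for a single spin in a constant field. The sketch below uses classical RK4 stepping and illustrative parameters; it is not a production integrator.

```python
import numpy as np

gamma, alpha = 2.2128e5, 0.1
H = np.array([0.0, 0.0, 1e5])   # constant effective field along z

def llg_rhs(m):
    """Explicit Landau-Lifshitz form (118) for a single spin."""
    pre = -gamma / (1 + alpha**2)
    return pre * np.cross(m, H) + alpha * pre * np.cross(m, np.cross(m, H))

m = np.array([1.0, 0.0, 0.0])   # start perpendicular to the field
dt = 1e-12
norms, energies = [], []
for _ in range(5000):
    # Classical fourth-order Runge-Kutta step
    k1 = llg_rhs(m)
    k2 = llg_rhs(m + 0.5 * dt * k1)
    k3 = llg_rhs(m + 0.5 * dt * k2)
    k4 = llg_rhs(m + dt * k3)
    m = m + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    norms.append(np.linalg.norm(m))
    energies.append(-np.dot(m, H))  # Zeeman energy up to constant prefactors
```

Over the run, |m| stays at 1 to integrator accuracy, the tracked energy is monotonically non-increasing, and m relaxes toward the field direction.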
Properties of the Landau-Lifshitz-Gilbert equation

This constant modulus is conserved by the LLG, as can be seen by considering the time derivative of the squared magnetization

∂_t |m|² = ∂_t (m·m) = 2 ∂_t m·m.   (120)

Inserting the right-hand side of (118) yields ∂_t|m|² = 0 and hence also ∂_t|m| = 0. In classical micromagnetics, the energy connected to the magnetization, as defined by the sum of the energy contributions introduced in Section 2, may only change due to external fields varying in time or due to the energy dissipation modeled by the damping term. In the case of an effective field that does not explicitly depend on the time t, the time derivative of the energy is given by

∂_t E = ∫_{Ω_m} (δE/δm)·∂_t m dx   (121)
= −μ₀ ∫_{Ω_m} M_s H_eff·∂_t m dx.   (122)

Inserting (118) for ∂_t m yields

∂_t E = μ₀ ∫_{Ω_m} M_s H_eff·[γ/(1 + α²) m × H_eff + αγ/(1 + α²) m × (m × H_eff)] dx   (123)
= −μ₀ ∫_{Ω_m} M_s αγ/(1 + α²) |m × H_eff|² dx.   (124)

The value of the integral is zero in the case of an energy minimum, see Brown's condition (50), and positive otherwise. Hence, for a positive damping constant α > 0, the energy of a magnetic system is a non-increasing function in time. In this case the LLG is said to have Lyapunov structure [40,41]. In the special case of no damping, α = 0, the right-hand side of (124) vanishes

∂_t E = 0   (125)

and the LLG has Hamiltonian structure, i.e. it preserves the energy.

Spintronics in micromagnetics

The term spintronics summarizes all effects caused by the interaction of electrons with solid-state devices due to their spin rather than their charge. For magnetic systems, this particularly covers the origin of spin-polarized currents and their impact on the magnetization configuration. The term spintronics was coined in the 1980s when the giant magnetoresistance (GMR) was discovered by Fert and coworkers [42] and Grünberg and coworkers [43].
Exploiting the spin of electrons, in addition to their charge, adds extra degrees of freedom and allows for the development of novel devices, especially in the areas of storage and sensing technology. In the semiclassical picture of micromagnetics, the spin polarization of an electric current is described by a three-dimensional vector field p. If a polarized electric current passes a magnetic region, it exerts a torque on the magnetization. This so-called spin torque, similar to the torque generated by a magnetic field, can be split into a fieldlike contribution T_field and a dampinglike contribution T_damp, see Figure 7.

[Fig. 7. Spin-torque contributions for negligible damping α ≪ 1 with respect to the reference polarization p. The fieldlike torque T_field leads to a precessional motion of the magnetization m around p. The dampinglike torque T_damp leads to a direct relaxation of m toward p.]

The fieldlike torque has the same form as the torque generated by a regular effective-field contribution, i.e. it leads to a damped precessional motion as described in Section 4. Since the Gilbert damping α is usually small, α ≪ 1, the magnetization dynamics caused by the fieldlike torque is dominated by the precessional part. In contrast, the dynamics caused by the dampinglike torque is dominated by the direct rotation of the magnetization toward the polarization, accompanied by a small precessional contribution. Depending on the origin of the polarized current, the torque is referred to as either spin-transfer torque or spin-orbit torque.

Spin-transfer torque in multilayers

A typical device that exploits spin-transfer torque consists of two magnetic layers separated by a nonmagnetic spacer layer and sandwiched between two nonmagnetic leads, see Figure 8. If passed by an electric current, the conducting electrons are subject to scattering processes depending on the spin configuration of the conducting electrons.
Even if the applied current has a net spin polarization of zero, these spin-dependent scattering processes lead to a non-vanishing spin-polarization distribution across the multilayer. In particular, the interfaces between magnetic and nonmagnetic regions act as scattering sites due to the rapid transition of the magnetization. A simplified illustration of the scattering processes for different magnetization configurations of the multilayer, i.e. antiparallel and parallel, is given in Figure 8. In the case of an antiparallel configuration, a scattering process takes place at the first interface that is passed by the conducting electrons, see Figure 8a. Due to this scattering, the FM1 layer acts as a spin filter, and the majority of the conducting electrons that reach the FM2 layer carry the polarization of FM1. The resulting spin-transfer torque will cause the switching of the FM2 magnetization if it exceeds a critical strength. In addition, a scattering process at the first FM2 interface reflects electrons with the polarization of FM1, leading to a stabilization of the FM1 magnetization configuration. For a parallel configuration, the same scattering as in the antiparallel case appears at the first interface of FM1, see Figure 8b. This leads to a spin polarization parallel to the magnetization configuration of FM1 in the spacer layer between FM1 and FM2. However, if the spacer layer has sufficient thickness, the spin polarization reduces due to spin-flip events. At the first interface of FM2, the electrons with polarization antiparallel to the magnetization of FM2 are scattered back to FM1. The scattered electrons exert a torque on the magnetization of FM1 and can switch it if the torque exceeds a critical strength. The magnetization of FM2, on the other hand, is stabilized by the electrons polarized by the FM1 layer. Possible applications of this torque mechanism, first investigated by Slonczewski [44], Berger [45] and Waintal et al.
[46], are the spin-transfer-torque magnetoresistive random access memory (STT-MRAM) [2,47] and spin-torque oscillators (STO) [48,49]. A comprehensive theoretical overview of spin-transfer torque is given in a work by Ralph and Stiles [50]. A very popular model for the description of the magnetization dynamics in spin-transfer-torque devices is the model proposed by Slonczewski [51]. This model uses the macrospin approach, where the magnetic region subject to spin torque, also referred to as free layer, is described by a single spin $m$. The current is assumed to be polarized by another magnetic layer, referred to as polarizing layer, whose magnetization is described by the vector $p$. The motion of the free-layer magnetization $m$ is described by the extended LLG

$$\partial_t m = -\gamma\, m \times H_\text{eff} + \alpha\, m \times \partial_t m + T \quad (126)$$

where the torque $T$ consists of a dampinglike and a fieldlike contribution, $T = T_\text{damp} + T_\text{field}$. According to the model of Slonczewski these contributions are given by

$$T_\text{damp} = \eta_\text{damp}(\vartheta)\, \frac{j_e \hbar \gamma}{2 e \mu_0 M_s}\, m \times (m \times p) \quad (127)$$
$$T_\text{field} = \eta_\text{field}(\vartheta)\, \frac{j_e \hbar \gamma}{2 e \mu_0 M_s}\, m \times p \quad (128)$$

where the dimensionless functions $\eta_\text{damp}$ and $\eta_\text{field}$ describe the angular dependence of the torque strength, with $\vartheta$ being the angle between $m$ and $p$. By comparing the torque contributions (127) and (128) with the effective-field term in the LLG (126), the torque can be expressed by means of an effective-field contribution $H_T$ given by

$$H_T = -\frac{j_e \hbar}{2 e \mu_0 M_s} \left[ \eta_\text{damp}(\vartheta)\, m \times p + \eta_\text{field}(\vartheta)\, p \right]. \quad (129)$$

Inserting into the LLG (126) and transforming the LLG into the explicit form (118) yields

$$\partial_t m = -\frac{\gamma}{1+\alpha^2}\, m \times \left[ H_\text{eff} + \frac{j_e \hbar}{2 e \mu_0 M_s} \big( \alpha \eta_\text{damp} - \eta_\text{field} \big)\, p \right] - \frac{\alpha\gamma}{1+\alpha^2}\, m \times \left( m \times \left[ H_\text{eff} - \frac{j_e \hbar}{2 e \mu_0 M_s} \Big( \frac{1}{\alpha} \eta_\text{damp} + \eta_\text{field} \Big)\, p \right] \right) \quad (130)$$

where the vector identity $m \times [m \times (m \times p)] = -m \times p$ was used. From this formulation it is clear that both the dampinglike torque $T_\text{damp}$ and the fieldlike torque $T_\text{field}$ contribute to the precessional motion as well as to the dampinglike motion. However, this intermixing highly depends on the Gilbert damping $\alpha$.
In the limit case of vanishing $\alpha$, the LLG simplifies to

$$\partial_t m = -\gamma\, m \times \left[ H_\text{eff} - \frac{j_e \hbar}{2 e \mu_0 M_s}\, \eta_\text{field}\, p \right] + \gamma\, m \times \left( m \times \left[ \frac{j_e \hbar}{2 e \mu_0 M_s}\, \eta_\text{damp}\, p \right] \right) \quad (131)$$

where the fieldlike torque contributes exclusively to the precessional motion and the dampinglike torque contributes exclusively to the dampinglike motion. This limit demonstrates the unique feature of the dampinglike torque: it facilitates a dampinglike motion independently of the Gilbert damping $\alpha$. In the original work of Slonczewski, the expression

$$\eta(\vartheta) = \frac{P \Gamma}{(\Gamma + 1) + (\Gamma - 1)\cos(\vartheta)} \quad (132)$$

is derived as the angular dependence of the torque for symmetric systems, i.e. systems with two identical magnetic layers. This expression is valid for both the dampinglike and the fieldlike torque, but in general requires a different set of model parameters $P$ and $\Gamma$ for each. The dimensionless parameters $P$ and $\Gamma$ depend on the geometry and materials of the complete system and describe the polarization strength and the angular asymmetry of the STT, respectively. A more general expression for the angular dependence is introduced in [52] as

$$\eta(\vartheta) = \frac{q_+}{A + B \cos(\vartheta)} + \frac{q_-}{A - B \cos(\vartheta)}. \quad (133)$$

This expression accounts for asymmetric devices with two different magnetic layers. Again, the free parameters $q_+$, $q_-$, $A$ and $B$ are dimensionless and depend on the geometry and material composition of the complete stack. The model of Slonczewski is often used to describe STT devices with one hard-magnetic layer acting as spin polarizer and one soft-magnetic layer that is subject to the spin torque induced by the hard-magnetic layer. For these devices the magnetization dynamics in the hard-magnetic layer, referred to as reference layer or pinned layer, are negligible and the LLG is solved for the soft-magnetic layer, referred to as free layer, only [53]. However, the model can also be used to describe the bidirectional coupling of the magnetization configurations in both magnetic layers [54].
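The two angular laws (132) and (133) can be compared directly. The following sketch (illustrative parameter values, not from the source) evaluates both and checks that the symmetric Slonczewski form is recovered from the general form by choosing $q_+ = P\Gamma$, $q_- = 0$, $A = \Gamma + 1$ and $B = \Gamma - 1$:

```python
import math

def eta_slonczewski(theta, P, Gamma):
    """Symmetric angular dependence, eq. (132)."""
    return P * Gamma / ((Gamma + 1) + (Gamma - 1) * math.cos(theta))

def eta_general(theta, q_plus, q_minus, A, B):
    """Asymmetric angular dependence, eq. (133)."""
    return q_plus / (A + B * math.cos(theta)) + q_minus / (A - B * math.cos(theta))

P, Gamma = 0.4, 1.5                       # illustrative model parameters
for i in range(9):
    theta = i * math.pi / 8
    a = eta_slonczewski(theta, P, Gamma)
    b = eta_general(theta, P * Gamma, 0.0, Gamma + 1, Gamma - 1)
    assert abs(a - b) < 1e-12
print("symmetric form (132) is a special case of the general form (133)")
```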
The macrospin approach as proposed in the original work of Slonczewski is accurate for small systems below the single-domain limit. Systems of this size are dominated by the exchange interaction and hence act much like a single spin. With growing size, other energy contributions such as the demagnetization energy gain influence, leading to the generation of magnetic domains, which renders the macrospin approximation invalid. For thin-film structures with large lateral dimensions but small thicknesses below the exchange length, the generalization of the macrospin model is straightforward. In this case, the magnetization $m$ and the polarization $p$ in (127) and (128) are functions of the lateral position $x$ in the multilayer stack and the tilting angle $\vartheta$ is computed accordingly, see Figure 9. The torque contributions (127) and (128) are usually applied as volume terms, with the polarization $p$ assumed to be independent of the perpendicular position in the free layer. However, the spin-transfer torque is considered to be a surface effect rather than a volume effect. For free-layer thicknesses below the exchange length, the treatment as a volume effect does not affect the torque in a qualitative fashion, since the magnetization can be assumed constant across the free layer in this case. Yet the strength of the torque needs to be scaled with the reciprocal free-layer thickness $1/d$ in order to account for the surface nature of the effect. While the generalization from a macrospin model to a spatially resolved model allows for the description of various multilayer devices, the model of Slonczewski has a number of shortcomings. The presented generalization neglects lateral diffusion of the spins, which might introduce inaccuracies for strongly inhomogeneous magnetization configurations. Moreover, the treatment as a volume term is only justified for free-layer thicknesses below the exchange length.
Another disadvantage of this model is the set of free parameters introduced in (132) and (133), which depend on the geometry and material parameters of the complete system in a nontrivial fashion. A comprehensive overview of Slonczewski-like models is given in a work by Berkov and Miltat [55].

Spin-transfer torque in continuous media

Spin-transfer torque cannot only be exploited in magnetic multilayer structures, but also in continuous single-phase magnets. In these systems, magnetic domains take over the role of the distinct layers in multilayer stacks, i.e. they act as spin polarizers. While the spin torque in multilayers acts on the surfaces of the magnetic layers toward the spacer layer, the spin torque in single-phase magnets acts in regions of high magnetization gradients, i.e. domain walls. A simple picture for the origin of spin torque in single-phase magnets is given in Figure 10. While the conducting electrons pass the magnetic region, they pick up the polarization from the local magnetization and carry this polarization in the direction of the electron flow, where they exert a torque. This mechanism can be used to move domain walls and complete domain structures with electric currents. In contrast to field-induced domain-wall motion, the spin torque always moves magnetic domain walls in the direction of the electron motion, regardless of the nature of the wall, e.g. head-to-head or tail-to-tail. This property is exploited, for example, by the magnetic racetrack memory proposed by Parkin et al. [56]. An established model for the description of spin torque in continuous magnets is the model proposed by Zhang and Li [57].
In this model, the torque contribution $T$ to the LLG is given as

$$T = -b\, m \times \left[ m \times (j_e \cdot \nabla) m \right] - b \xi\, m \times (j_e \cdot \nabla) m \quad (134)$$

where $\xi$ describes the degree of nonadiabaticity according to [57] and $b$ is given as

$$b = \frac{\beta \mu_B}{e M_s (1 + \xi^2)} \quad (135)$$

with $\beta$ being the dimensionless polarization rate of the conducting electrons, $\mu_B$ the Bohr magneton and $e$ the elementary charge. The Zhang-Li model delivers reasonable results for the description of current-driven domain-wall motion. However, its description of the spin torque is purely local, since the torque depends only on first derivatives of the magnetization. This means that the diffusion of the spin polarization is completely neglected. Consequently, the model is not suited for the description of spin torque in multilayers, since this requires the transport of spin across a nonmagnetic spacer layer. Moreover, the lack of diffusion also introduces inaccuracies in systems with highly inhomogeneous magnetization configurations.

Spin-diffusion

Both the model of Slonczewski introduced in Section 5.1 and the model of Zhang and Li introduced in Section 5.2 are applicable only to specific material systems and magnetization configurations. Also, both models neglect the diffusion of the spin polarization to some extent. A more general approach to spin torque considers the torque generated by a vector field $s$ referred to as spin accumulation,

$$T = -\frac{J}{\hbar M_s}\, m \times s \quad (136)$$

where $J$ denotes the coupling strength of the spin accumulation $s$ and the magnetization $m$. This torque definition leads to the extended LLG

$$\partial_t m = -\gamma\, m \times \left( H_\text{eff} + \frac{J}{\hbar \gamma M_s}\, s \right) + \alpha\, m \times \partial_t m. \quad (137)$$

The spin accumulation $s(x)$ describes the deviation of the conducting electrons' polarization from the equilibrium configuration at vanishing charge current $j_e = 0$. That said, by definition $s$ is zero if no current is applied to the system. Several variations of the spin-diffusion model have been proposed for the computation of the spin accumulation $s$.
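Stepping back to the Zhang-Li model, the torque (134) can be sketched on a discretized one-dimensional magnetization profile. The following is an illustrative implementation (names and parameter values are not from the source; the gradient is taken with central finite differences) that checks the torque is everywhere perpendicular to $m$:

```python
import numpy as np

def zhang_li_torque(m, j_e, dx, beta=0.5, xi=0.05, Ms=8e5):
    """Zhang-Li torque, eqs. (134)-(135), for a 1D chain m[i]
    with the current j_e flowing along the chain axis."""
    MU_B = 9.274e-24   # Bohr magneton (J/T)
    E = 1.602e-19      # elementary charge (C)
    b = beta * MU_B / (E * Ms * (1.0 + xi**2))
    # (j_e . grad)m via central differences (one-sided at the ends)
    grad_m = np.gradient(m, dx, axis=0) * j_e
    mxg = np.cross(m, grad_m)
    return -b * np.cross(m, mxg) - b * xi * mxg

# head-to-head-like profile: in-plane rotation from +x to -x over 64 cells
n, dx = 64, 2e-9
phi = np.linspace(0.0, np.pi, n)
m = np.stack([np.cos(phi), np.sin(phi), np.zeros(n)], axis=1)

T = zhang_li_torque(m, j_e=1e12, dx=dx)
dots = np.einsum("ij,ij->i", T, m)
print(np.max(np.abs(dots)))   # torque is perpendicular to m (zero up to roundoff)
```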
According to [58], the dynamics of $s$ are given by

$$\partial_t s = -\nabla\cdot\tilde{j}_s - \frac{s}{\tau_\text{sf}} - \frac{J}{\hbar}\, s \times m \quad (138)$$

where $\tau_\text{sf}$ denotes the spin-flip relaxation time, which is a material parameter. While the LLG (137) is defined in the magnetic region $\Omega_m$ only, the spin accumulation $s$ is generated also in nonmagnetic regions such as the leads or the spacer layer of an STT device and hence has to be solved in the complete sample region $\Omega$. Consequently, (138) is potentially defined in composite media and thus all material parameters, such as $J$ and $\tau_\text{sf}$, may vary spatially. The matrix-valued $\tilde{j}_s$ in (138) denotes the spin current defined by

$$\tilde{j}_s = 2 C_0 \beta \frac{\mu_B}{e}\, m \otimes \nabla u - 2 D_0 \nabla s \quad (139)$$

with $u$ being the electric potential, $\mu_B$ the Bohr magneton and $e$ the elementary charge. The variables $C_0$, $D_0$ and $\beta$ are material parameters. $C_0$ is connected to the electric conductivity $\sigma$ by $\sigma = 2 C_0$ and $D_0$ denotes the material's diffusion constant. $\beta$ is a dimensionless constant that denotes the rate of polarized conducting electrons in magnetic materials. If the distribution of the electric potential $u$ is known, the coupled system (137) and (138) can be solved in order to simultaneously resolve the dynamics of the magnetization $m(t)$ and the spin accumulation $s(t)$. However, the distribution of the electric potential $u$ might not be known upfront. Specifically, the charge current $j_e$ in the diffusion model is defined as

$$j_e = -2 C_0 \nabla u + 2 D_0 \beta' \frac{e}{\mu_B} (\nabla s)^T m \quad (140)$$

which suggests a strong coupling of the electric potential $u$ with $m$ and $s$. If the charge-current distribution $j_e$ is known, the spin-diffusion dynamics (138) can be solved by inserting (140) into (139) via the gradient of the electric potential $\nabla u$, which results in the spin-current definition

$$\tilde{j}_s = -\beta \frac{\mu_B}{e}\, m \otimes j_e - 2 D_0 \left( \nabla s - \beta\beta'\, m \otimes \left[ (\nabla s)^T m \right] \right). \quad (141)$$

Inserting into (138) directly yields the spin-accumulation dynamics for a given magnetization configuration $m$.
The resulting magnetization dynamics due to the spin torque can be resolved by simultaneous solution of (138) and the LLG (137). However, in most realistic systems, the spin accumulation relaxes two orders of magnitude faster than the magnetization configuration [57]. If the quantity of interest is the magnetization dynamics rather than the spin-accumulation dynamics, this difference in time scales can be exploited in order to simplify the model. Namely, the spin accumulation can be assumed to relax instantaneously when the magnetization changes. In this case, the spin accumulation no longer depends explicitly on the time $t$, but only on the magnetization $m$. The defining equation for the spin accumulation $s(m)$ is derived by setting $\partial_t s = 0$ in (138), which results in

$$\nabla\cdot\tilde{j}_s + \frac{s}{\tau_\text{sf}} + \frac{J}{\hbar}\, s \times m = 0 \quad \text{in } \Omega. \quad (142)$$

Inserting the definition of the spin current (139) yields a linear partial differential equation of second order in $s$. Typical solutions for the spin accumulation $s$ in STT devices as well as domain walls are depicted in Figure 11. By application of this simplified model, the treatment of the spin accumulation $s$ in the context of dynamical micromagnetics becomes similar to the treatment of effective-field contributions. Instead of performing a coupled time integration of both the magnetization $m$ and the spin accumulation $s$ as required by (138), the spin accumulation is defined by the magnetization $m$ only. The previous methods require the knowledge of the charge-current distribution $j_e$ for the computation of the spin accumulation $s$. In a regularly shaped sample with a homogeneous conductivity $\sigma$, the charge-current density may be assumed constant in a first-order approximation. However, in a magnetic system subject to spin-polarized currents, Ohm's law gives only one contribution to the total charge current, which may also depend on the magnetization configuration.
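In the local (zero-diffusion) limit, the equilibrium condition (142) reduces to a 3×3 linear system per cell, $s/\tau_\text{sf} - (J/\hbar)\, m \times s = f$ with the source term $f = (\beta\mu_B/e)(j_e\cdot\nabla)m$. The following sketch (illustrative parameter values, not from the source) solves this system for a single cell and checks that the resulting torque $-(J/(\hbar M_s))\, m \times s$ is perpendicular to $m$:

```python
import numpy as np

HBAR = 1.054e-34   # reduced Planck constant (J s)
MU_B = 9.274e-24   # Bohr magneton (J/T)
E = 1.602e-19      # elementary charge (C)

def cross_matrix(m):
    """Matrix M such that M @ s = m x s."""
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

def local_spin_accumulation(m, f, tau_sf, J):
    """Solve s/tau_sf - (J/hbar) m x s = f, the zero-diffusion limit of (142)."""
    A = np.eye(3) / tau_sf - (J / HBAR) * cross_matrix(m)
    return np.linalg.solve(A, f)

m = np.array([0.0, 0.0, 1.0])
grad_term = np.array([1e7, 0.0, 0.0])            # (j_e . grad)m, illustrative
beta, tau_sf, J, Ms = 0.8, 1e-12, 4.0e-20, 8e5   # illustrative parameters
f = beta * MU_B / E * grad_term

s = local_spin_accumulation(m, f, tau_sf, J)
T = -(J / (HBAR * Ms)) * np.cross(m, s)
print(abs(np.dot(T, m)))   # torque is perpendicular to m
```

The matrix is always invertible (its eigenvalues are $1/\tau_\text{sf}$ and $1/\tau_\text{sf} \pm iJ/\hbar$), so the local equilibrium accumulation is uniquely defined by $m$ and the source term.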
In order to accurately account for magnetization-dependent resistance effects, neither the electric potential $u$ nor the charge current $j_e$ should be prescribed in the sample. Instead, these entities should be solution variables like the spin accumulation $s$. In order to set up such a self-consistent model, the source equation (142) has to be complemented by a source equation for the charge current, which is naturally given by the continuity equation

$$\nabla \cdot j_e = 0 \quad \text{in } \Omega. \quad (143)$$

Inserting the current definitions (139) and (140) in the source equations (142) and (143) yields a system of linear partial differential equations that can be solved for the solution pair $(s, u)$ with the magnetization $m$ being an input to the system. In this self-consistent model, instead of prescribing the charge current $j_e$ or the potential $u$ in the complete sample, boundary conditions are used to define the potential or current inflow on specific interfaces.

Boundary conditions for the spin-diffusion model

Since the partial differential equations for the solution of the spin-diffusion model are of second order in both solution variables $u$ and $s$, boundary conditions are required in order to find a unique solution. The boundary conditions of the electric potential $u$ directly correspond to the voltage or charge current applied to the system through electric contacts. A typical multilayer system with contact regions $\Gamma_1$ and $\Gamma_2$ is depicted in Figure 12. By choosing either Dirichlet or Neumann conditions for $u$ on a part of the boundary, a constant electric potential or charge-current inflow can be prescribed on this part, respectively. For example, in order to simulate a constant potential $u_0$ at contact $\Gamma_1$, a Dirichlet condition is applied directly to the potential $u$,

$$u = u_0 \quad \text{on } \Gamma_1. \quad (144)$$

Applying a constant current inflow $j_0$ on contact $\Gamma_2$ is achieved by applying the Neumann condition

$$j_e \cdot n = -2 \left[ C_0 \nabla u - D_0 \beta' \frac{e}{\mu_B} (\nabla s)^T m \right] \cdot n = j_0 \quad \text{on } \Gamma_2 \quad (145)$$

where $n$ denotes the outward-pointing normal to $\Gamma_2$. In order to complete the set of boundary conditions for $u$, all parts of the sample's boundary which are not used as contacts are treated with the homogeneous Neumann condition

$$j_e \cdot n = 0 \quad \text{on } \partial\Omega \setminus (\Gamma_1 \cup \Gamma_2). \quad (146)$$

The spin accumulation $s$ is treated with homogeneous Neumann conditions on the complete boundary,

$$\nabla s \cdot n = 0 \quad \text{on } \partial\Omega, \quad (147)$$

which is equivalent to a no-flux condition on the spin current, $\tilde{j}_s \cdot n = 0$, for systems as depicted in Figure 12, where the contacts belong to the boundary of the nonmagnetic region $\Omega_n$. This equivalence is obtained by multiplying (141) with the boundary normal $n$ and inserting the Neumann condition, which yields the boundary flux

$$\tilde{j}_s \cdot n = -\beta \frac{\mu_B}{e}\, m\, (j_e \cdot n) - 2 D_0 \left[ \nabla s \cdot n - \beta\beta'\, m\, \big((\nabla s \cdot n) \cdot m\big) \right] \quad (148)$$
$$= -\beta \frac{\mu_B}{e}\, m\, (j_e \cdot n). \quad (149)$$

This spin-current flux is nonzero only at boundaries with both nonvanishing charge-current flux $j_e \cdot n$ and nonvanishing magnetization $m$. A vanishing charge-carrier flux usually implies a vanishing spin-current flux, since the spin is transported by the charge carriers. This makes the no-flux condition on the spin current a reasonable choice for parts of the boundary that do not serve as electric contacts. For electric contacts, the no-flux condition is reasonable only if the thickness of the respective nonmagnetic region exceeds the spin-flip relaxation length. In this case, the polarization of the current and thus the spin flux can be assumed zero at the contact, see e.g. Figure 11a. If this is not the case, the contact should be treated with a Robin condition instead,

$$\nabla s \cdot n + \frac{1}{\sqrt{2 D_0 \tau_\text{sf}}}\, s = 0, \quad (150)$$

where the exponential decay of the spin accumulation in the nonmagnetic lead region is taken into account.
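The Robin condition (150) encodes the exponential decay $s \propto e^{-x/\lambda_\text{sf}}$ with $\lambda_\text{sf} = \sqrt{2 D_0 \tau_\text{sf}}$ in the lead. A quick numerical sketch (illustrative values, not from the source) verifies that such a decaying profile satisfies (150) at an outflow boundary:

```python
import math

D0, tau_sf = 1e-3, 1e-12          # illustrative diffusion constant and spin-flip time
lam = math.sqrt(2 * D0 * tau_sf)  # spin-flip relaxation length, eq. (153)

s = lambda x: 5.0 * math.exp(-x / lam)  # decaying spin accumulation in the lead

# outward normal at the far boundary x = L points in +x direction,
# so grad(s) . n = ds/dx; approximate by a central difference
L, h = 5 * lam, 1e-12
ds_dn = (s(L + h) - s(L - h)) / (2 * h)

residual = ds_dn + s(L) / lam     # left-hand side of the Robin condition (150)
print(abs(residual) / abs(ds_dn) < 1e-6)
```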
While the boundary conditions on the electric potential $u$ are relevant only for the self-consistent treatment of $u$ and $s$, the boundary conditions for the spin accumulation $s$ also hold for the simplified spin-diffusion models with prescribed electric potential or charge current, respectively.

Spin-orbit torque

Several extensions to the spin-diffusion model introduced in Section 5.3 have been proposed in order to account for further spintronics effects. An important class of effects describes the conversion of charge currents into spin currents and vice versa due to spin-orbit coupling. The origin of this conversion is the polarization-dependent deflection of the conducting electrons, either due to material impurities [59,60] or due to intrinsic asymmetries of the material [61,62]. Depending on the direction of the current conversion, this spin-orbit effect is referred to either as spin-Hall effect or as inverse spin-Hall effect. The spin-Hall effect describes the conversion of charge currents into spin currents, see Figure 13a, while the conversion of spin currents into charge currents is referred to as inverse spin-Hall effect, see Figure 13b. Incorporating these effects into the spin-diffusion model is done by extending the original current definitions (140) and (139) according to Dyakonov [63]. The extended current definitions $j'_e$ and $\tilde{j}'_s$ are defined in terms of the original current definitions and read

$$j'_{e,i} = j_{e,i} + \epsilon_{ijk}\, \theta_\text{SH}\, \frac{e}{\mu_B}\, \tilde{j}_{s,jk} \quad (151)$$
$$\tilde{j}'_{s,ij} = \tilde{j}_{s,ij} - \epsilon_{ijk}\, \theta_\text{SH}\, \frac{\mu_B}{e}\, j_{e,k} \quad (152)$$

where index notation was used. Here, $\epsilon_{ijk}$ is the Levi-Civita tensor and $\theta_\text{SH}$ is the dimensionless spin-Hall angle. Inserting $j'_e$ and $\tilde{j}'_s$ into the source equations (143) and (142) yields a self-consistent spin-diffusion model including spin-Hall effects.
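The index contractions in (151) and (152) can be written compactly with the Levi-Civita tensor. A sketch (illustrative field values, not from the source) using `numpy.einsum`:

```python
import numpy as np

MU_B = 9.274e-24   # Bohr magneton (J/T)
E = 1.602e-19      # elementary charge (C)

# Levi-Civita tensor eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def spin_hall_currents(j_e, j_s, theta_sh):
    """Extended currents including (inverse) spin-Hall effects, eqs. (151)-(152)."""
    j_e_ext = j_e + theta_sh * (E / MU_B) * np.einsum("ijk,jk->i", eps, j_s)
    j_s_ext = j_s - theta_sh * (MU_B / E) * np.einsum("ijk,k->ij", eps, j_e)
    return j_e_ext, j_s_ext

j_e = np.array([1e12, 0.0, 0.0])       # charge current along x
j_s = np.zeros((3, 3))                 # no spin current initially
j_e_ext, j_s_ext = spin_hall_currents(j_e, j_s, theta_sh=0.1)

# a pure charge current generates transverse spin currents (spin-Hall effect)
print(j_s_ext[1, 2], j_s_ext[2, 1])    # equal magnitude, opposite sign
```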
Typically, the spin-Hall effects are exploited in multilayer structures with heavy-metal layers, which are subject to spin-orbit coupling, and neighboring magnetic layers, where the spin-polarized currents interact with the magnetization configuration. Similar to the considerations in the original spin-diffusion model, equations (151) and (152) together with the respective source equations are solved in the complete structure including magnetic and nonmagnetic regions, using spatially varying material parameters in order to account for the different material properties.

Material parameters in the spin-diffusion model

The spin-diffusion model introduces a number of material parameters in addition to the set of parameters required by classical micromagnetics. Depending on the exact formulation of the spin-diffusion model, different sets of parameters are used. The spin-flip relaxation time $\tau_\text{sf}$ and the exchange coupling $J$ of the spin accumulation and the magnetization are often specified in terms of the characteristic length scales $\lambda_\text{sf}$ and $\lambda_J$ defined by

$$\lambda_\text{sf} = \sqrt{2 D_0 \tau_\text{sf}} \quad (153)$$
$$\lambda_J = \sqrt{2 D_0 \hbar / J}. \quad (154)$$

Alternatively, the exchange strength $J$ may be quantified by the characteristic time $\tau_J = \hbar / J$. While classical micromagnetics is often used to describe single-phase materials, the spin-diffusion model is usually used to solve for the spin accumulation in composite systems. In order to account for such systems, which expose different material properties in different regions, all material parameters in the governing equations are scalar fields rather than constants. It should be noted that none of the material parameters, both in classical micromagnetics as well as in the spin-diffusion model, are subject to spatial differentiation. Hence, the material-parameter fields may comprise jumps across material interfaces without compromising the mathematical formulation of the model.
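The characteristic scales (153)-(154) are easily computed from the bulk parameters. A small sketch (permalloy-like illustrative values, not from the source):

```python
import math

HBAR = 1.054e-34  # reduced Planck constant (J s)

def characteristic_scales(D0, tau_sf, J):
    """Characteristic scales of the spin-diffusion model:
    lambda_sf = sqrt(2 D0 tau_sf), eq. (153),
    lambda_J = sqrt(2 D0 hbar / J), eq. (154),
    tau_J = hbar / J."""
    lam_sf = math.sqrt(2 * D0 * tau_sf)
    lam_J = math.sqrt(2 * D0 * HBAR / J)
    tau_J = HBAR / J
    return lam_sf, lam_J, tau_J

lam_sf, lam_J, tau_J = characteristic_scales(D0=1e-3, tau_sf=1e-12, J=4e-20)
print(f"lambda_sf = {lam_sf * 1e9:.1f} nm, lambda_J = {lam_J * 1e9:.1f} nm")
```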
Spin dephasing

In addition to the spin-flip relaxation and the exchange coupling in the original spin-diffusion model (138), an additional term for the description of spin dephasing was proposed in several works [64-66]:

$$\partial_t s = -\nabla\cdot\tilde{j}_s - \frac{s}{\tau_\text{sf}} - \frac{s \times m}{\tau_J} - \frac{m \times (s \times m)}{\tau_\phi} \quad (155)$$

where $\tau_\phi$ is the spin-dephasing time. This extended spin-diffusion model can be solved similarly to the original model, either in nonequilibrium or in equilibrium, and either for a prescribed charge current or self-consistently.

Valet-Fert model

An alternative to the spin-diffusion model introduced in Section 5.3 was introduced by Valet and Fert in [67]. The originally one-dimensional model for collinear magnetization configurations was generalized to three dimensions and noncollinear configurations in [68]. Similar to the Zhang-Levy-Fert model introduced in Section 5.3, the Valet-Fert model defines charge and spin currents $J_e$ and $\tilde{J}_s$ as well as an electric potential $\phi$ and a spin potential $\boldsymbol{\phi}_s$, which corresponds to the spin accumulation $s$. However, in contrast to the Zhang-Levy-Fert model, the spin current and the spin potential are assumed to be collinear to the magnetization in the ferromagnetic regions $\Omega_m$:

$$\boldsymbol{\phi}_s = \phi_s\, m \quad (156)$$
$$\tilde{J}_s = m \otimes J_s. \quad (157)$$

With these simplifying assumptions, the Valet-Fert model defines the currents in the magnetic region $\Omega_m$ as

$$J_e = -\frac{\nabla\phi}{\rho^*(1-\beta^{*2})} - \frac{\beta^*\, \nabla\phi_s}{2\rho^*(1-\beta^{*2})} \quad (158)$$
$$J_s = -\frac{\beta^*\, \nabla\phi}{\rho^*(1-\beta^{*2})} - \frac{\nabla\phi_s}{2\rho^*(1-\beta^{*2})} \quad (159)$$

with $\rho^*$ being connected to the electric conductivity and $\beta^*$ being a dimensionless polarization parameter. In the nonmagnetic region $\Omega_n$, the magnetization $m$ as well as the polarization parameter $\beta^*$ vanish and the currents are defined as

$$J_e = -\frac{\nabla\phi}{\rho^*} \quad (160)$$
$$\tilde{J}_s = -\frac{\nabla\boldsymbol{\phi}_s}{2\rho^*} \quad (161)$$

with (160) being Ohm's law. The source equations for the currents are given as

$$\nabla\cdot J_e = 0 \quad (162)$$
$$2\rho^*\, \nabla\cdot\tilde{J}_s = -\frac{\boldsymbol{\phi}_s}{\lambda^2} \quad (163)$$

where $\lambda$ is a spatially varying material parameter that denotes the characteristic spin-flip relaxation length.
In contrast to the Zhang-Levy-Fert model, the Valet-Fert model does not require the electric potential $\phi$ and the spin potential $\boldsymbol{\phi}_s$ to be continuous across interfaces. Instead, these potentials are subject to a set of well-defined jump conditions. The charge current $J_e$ is continuous everywhere, which includes interfaces. The spin current $\tilde{J}_s$ is continuous within magnetic/nonmagnetic layers. Furthermore, its longitudinal component is continuous across magnetic/nonmagnetic interfaces,

$$n \cdot J_s^- = \left[ m \cdot \tilde{J}_s^+ \right] \cdot n \quad (164)$$

where the '−' superscript corresponds to values in the magnetic layer and the '+' superscript corresponds to values in the nonmagnetic layer, see Figure 14. Both the charge potential $\phi$ and the spin potential $\boldsymbol{\phi}_s$ may have jumps at magnetic/nonmagnetic interfaces, defined by

$$n \cdot J_e = -\frac{\phi^+ - \phi^-}{r_b^*(1-\gamma_\text{sf}^2)} - \gamma_\text{sf}\, \frac{m \cdot (\boldsymbol{\phi}_s^+ - \boldsymbol{\phi}_s^-)}{2 r_b^* (1-\gamma_\text{sf}^2)} \quad (165)$$

$$n \cdot \tilde{J}_s = -\left[ \gamma_\text{sf}\, \frac{\phi^+ - \phi^-}{r_b^*(1-\gamma_\text{sf}^2)} + \frac{m \cdot (\boldsymbol{\phi}_s^+ - \boldsymbol{\phi}_s^-)}{2 r_b^* (1-\gamma_\text{sf}^2)} \right] m - g^{\uparrow\downarrow} \left[ \boldsymbol{\phi}_s^+ - \boldsymbol{\phi}_s^- - \big( m \cdot \left[ \boldsymbol{\phi}_s^+ - \boldsymbol{\phi}_s^- \right] \big)\, m \right] \quad (166)$$

where the interface properties $r_b^*$, $\gamma_\text{sf}$ and $g^{\uparrow\downarrow}$ denote the resistivity, the spin-flip probability and the spin-mixing conductance of the interface, respectively. While the interface resistivity $r_b^*$ and the spin-flip probability $\gamma_\text{sf}$ have bulk counterparts in the Zhang-Levy-Fert model, namely $C_0^{-1}$ and $\tau_\text{sf}^{-1}$, the spin-mixing conductance $g^{\uparrow\downarrow}$ is unique to the Valet-Fert model. It describes the interface resistivity for electrons polarized perpendicular to the magnetization in the ferromagnetic layer and contributes significantly to the overall resistivity of the interface. Moreover, the spin-mixing conductance $g^{\uparrow\downarrow}$ plays an important role for the description of spin torque in the context of the Valet-Fert model. Since the spin potential $\boldsymbol{\phi}_s$ is collinear in the magnetic regions by definition of the model, see (156), it cannot exert a torque on the magnetization $m$. Thus, torque can only be generated at the interface between magnetic and nonmagnetic regions, where the spin potential $\boldsymbol{\phi}_s$ can have components perpendicular to the magnetization.
In general, the spin-mixing conductance is assumed to be complex-valued, with the real and imaginary parts describing the strength of the dampinglike and fieldlike torque, respectively. That said, the Valet-Fert model does not predict the ratio of these torque contributions, in contrast to the spin-diffusion model introduced in Section 5.3. As for the simplified model by Slonczewski, this ratio is an input parameter to the method.

Connecting the spintronics models

Various micromagnetic models for the description of spin torque and other spintronics effects are described in the preceding section. While the spin-diffusion model introduced in Section 5.3 covers a multitude of spintronics effects, specialized models like the model by Slonczewski, see Section 5.1, were developed for very specific purposes.

Slonczewski model

The Slonczewski model describes the spin torque in multilayers in terms of a macrospin approach. The characteristic properties of this model are the angular dependencies $\eta_\text{damp}(\vartheta)$ and $\eta_\text{field}(\vartheta)$ of the torques defined in (127) and (128). While the original angular dependence proposed by Slonczewski (132) is predicted to be valid for structures with two similar magnetic layers, the more general form (133) is expected to work also for asymmetric structures. In order to compare the model of Slonczewski with the spin-diffusion model, the torque for a homogeneously magnetized free layer with varying tilting angle $\vartheta$ is computed with the spin-diffusion model. The angular dependence of this torque is extracted and compared to the Slonczewski model in Figure 15. The asymmetric system used for this comparison has two magnetic layers with thicknesses 3 nm and 5 nm and typical material parameters as used in [69]. The symmetric system has similar material parameters, but uses the same layer thickness of 3 nm for both magnetic layers. The symmetric structure is well fitted by the original expression (132) for the angular dependence, see Figure 15a.
For the asymmetric structure, i.e. a structure with different free-layer and pinned-layer thicknesses, the more general expression (133) is required in order to obtain an accurate fit, see Figure 15b. This demonstrates the agreement of the two models within the application scope of the Slonczewski model and, consequently, the superiority of the spin-diffusion model, which is able to accurately describe further spin-transport effects and devices.

Zhang-Li model

Like the model of Slonczewski, the model of Zhang and Li introduced in Section 5.2 can be perceived as a special case of the spin-diffusion model. Setting $D_0 = 0$ in (141) and inserting into (142) yields

$$-\nabla\cdot\left( \beta \frac{\mu_B}{e}\, m \otimes j_e \right) + \frac{s}{\tau_\text{sf}} + \frac{J}{\hbar}\, s \times m = 0 \quad (167)$$

which, with $\nabla\cdot j_e = 0$, reduces to

$$-\frac{\beta\mu_B}{e}\, (j_e\cdot\nabla) m + \frac{s}{\tau_\text{sf}} - \frac{J}{\hbar}\, m \times s = 0. \quad (168)$$

Taking the cross product with $m$ once and twice, respectively, and eliminating the $m \times (m \times s)$ terms results in the torque

$$T = -\frac{J}{\hbar M_s}\, m \times s \quad (169)$$
$$= -\frac{\beta\mu_B}{e M_s}\, \frac{1}{1+\left(\frac{\hbar}{J\tau_\text{sf}}\right)^2} \left( m \times \left[ m \times (j_e\cdot\nabla) m \right] + \frac{\hbar}{J\tau_\text{sf}}\, m \times (j_e\cdot\nabla) m \right) \quad (170)$$

which has the same form as the model of Zhang and Li (134), with the degree of nonadiabaticity being defined as

$$\xi = \frac{\hbar}{J\tau_\text{sf}} = \frac{\lambda_J^2}{\lambda_\text{sf}^2}. \quad (171)$$

This means that the model of Zhang and Li exactly reproduces the spin-diffusion model for vanishing diffusion $D_0$. Neglecting the spin diffusion restricts the model of Zhang and Li to the description of local torque phenomena, i.e. torque due to local magnetization gradients such as domain walls.

Valet-Fert model

In contrast to the Slonczewski and Zhang-Li models, the Valet-Fert model is closely linked to the spin-diffusion model introduced in Section 5.3. Within the nonmagnetic layers the models are completely similar, and in the magnetic regions the equations have common terms. A major difference between the models is the role of interfaces, which have distinct properties such as the resistivity $r_b^*$, the spin-flip probability $\gamma_\text{sf}$ and the spin-mixing conductance $g^{\uparrow\downarrow}$ in the Valet-Fert model. In contrast, the Zhang-Levy-Fert model introduced in Section 5.3 relies solely on bulk material properties.
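The identity (171), relating the nonadiabaticity to the characteristic lengths (153) and (154), can be checked numerically (illustrative parameter values, not from the source):

```python
import math

HBAR = 1.054e-34  # reduced Planck constant (J s)

D0, tau_sf, J = 1e-3, 1e-12, 4e-20   # illustrative material parameters

xi_direct = HBAR / (J * tau_sf)       # eq. (171), first form
lam_sf = math.sqrt(2 * D0 * tau_sf)   # eq. (153)
lam_J = math.sqrt(2 * D0 * HBAR / J)  # eq. (154)
xi_lengths = lam_J**2 / lam_sf**2     # eq. (171), second form

print(math.isclose(xi_direct, xi_lengths, rel_tol=1e-12))
```

The diffusion constant $D_0$ cancels in the ratio, which is why the nonadiabaticity depends only on $J$ and $\tau_\text{sf}$.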
For the bulk properties, there is a straightforward mapping of the solution variables and material parameters. Using (156) and (157) and assuming a constant magnetization in the ferromagnetic layers, $\nabla m = 0$, yields

$$J_e = -\frac{\nabla\phi}{\rho^*(1-\beta^{*2})} - \frac{\beta^*\, (\nabla\boldsymbol{\phi}_s)^T m}{2\rho^*(1-\beta^{*2})} \quad (172)$$
$$\tilde{J}_s = -\frac{\beta^*\, m \otimes \nabla\phi}{\rho^*(1-\beta^{*2})} - \frac{\nabla\boldsymbol{\phi}_s}{2\rho^*(1-\beta^{*2})} \quad (173)$$

for the Valet-Fert model. With spatially resolved material parameters $\rho^*$ and $\beta^*$, these current definitions can be used for the complete sample region $\Omega$. Assuming the following relations, (172) and (173) can be identified with the definitions (140) and (139) of the Zhang-Levy-Fert model:

$$J_e = j_e \quad (174)$$
$$\tilde{J}_s = -\frac{e}{\mu_B}\, \tilde{j}_s \quad (175)$$
$$\phi = u \quad (176)$$
$$\boldsymbol{\phi}_s = -\frac{2 D_0 e}{C_0 \mu_B}\, s \quad (177)$$
$$C_0 = \frac{1}{2\rho^*(1-\beta^{*2})} \quad (178)$$
$$\beta^* = \beta = \beta' \quad (179)$$

where it should be noted that instead of the two polarization parameters $\beta$ and $\beta'$ of the Zhang-Levy-Fert model, the Valet-Fert model introduces only a single parameter $\beta^*$. Comparison of the source equations for the spin current in the different models, (142) and (163), yields the additional parameter mappings

$$J = 0 \quad (180)$$
$$\tau_\text{sf} = \frac{\lambda^2}{2 D_0 (1-\beta^{*2})}. \quad (181)$$

With these parameter mappings, the Zhang-Levy-Fert model can be used to solve the Valet-Fert model in both the magnetic regions $\Omega_m$ and the nonmagnetic regions $\Omega_n$. While both models perfectly agree in the bulk, the essential differences between the models are the continuity conditions of the potentials. While the Zhang-Levy-Fert model assumes continuous potentials $u$ and $s$, the Valet-Fert model allows jumps which are defined by interface properties, namely the interface resistivity $r_b^*$, the spin-flip probability $\gamma_\text{sf}$ and the spin-mixing conductance $g^{\uparrow\downarrow}$. However, these jumps can be mimicked in the Zhang-Levy-Fert model by introducing thin layers with effective material properties at the positions of the respective interfaces.
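The bulk parameter mappings (178)-(181) can be collected in a small helper. The following sketch uses illustrative values (not from the source) for the Valet-Fert parameters:

```python
def valet_fert_to_zlf(rho_star, beta_star, lam, D0):
    """Bulk parameter mapping from the Valet-Fert model to the
    Zhang-Levy-Fert model, eqs. (178)-(181); illustrative sketch."""
    C0 = 1.0 / (2.0 * rho_star * (1.0 - beta_star**2))    # eq. (178)
    beta = beta_prime = beta_star                          # eq. (179)
    J = 0.0                                                # eq. (180)
    tau_sf = lam**2 / (2.0 * D0 * (1.0 - beta_star**2))    # eq. (181)
    return {"C0": C0, "beta": beta, "beta_prime": beta_prime,
            "J": J, "tau_sf": tau_sf}

# illustrative values: rho* in Ohm m, lambda in m, D0 in m^2/s
params = valet_fert_to_zlf(rho_star=2e-7, beta_star=0.5, lam=5e-9, D0=1e-3)
print(params["C0"])   # corresponds to half the conductivity, sigma = 2 C0
```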
Approximation of the charge current in the Zhang-Levy-Fert model (140) within the effective-interface layer $\Omega_e$ by means of finite differences and multiplication with the boundary normal $\boldsymbol{n}$ yields

\[ \boldsymbol{n}\cdot\boldsymbol{j}_e = -2C_0\,\frac{u^+ - u^-}{d} + \frac{2 D_0 \beta' e}{\mu_B}\,\frac{\boldsymbol{m}\cdot(\boldsymbol{s}^+ - \boldsymbol{s}^-)}{d} \quad (182) \]

with $d$ being the thickness of the effective-interface layer. Considering the potential mappings (176), (177) and the current mapping (174), this translates to the following jump condition across the effective-interface layer

\[ \boldsymbol{n}\cdot\boldsymbol{J}_e = -2C_0\,\frac{\varphi^+ - \varphi^-}{d} - \beta' C_0\,\frac{\boldsymbol{m}\cdot(\boldsymbol{\varphi}_s^+ - \boldsymbol{\varphi}_s^-)}{d}. \quad (183) \]

Comparison with the jump condition (165) of the Valet-Fert model results in the parameter mappings

\[ C_0 = \frac{d}{2 r_b^* (1-\gamma_{\mathrm{sf}}^2)} \quad (184) \]
\[ \beta' = \gamma_{\mathrm{sf}} \quad (185) \]

for the effective-interface layer. Applying the same procedure to the spin current of the Zhang-Levy-Fert model (139) yields

\[ \boldsymbol{n}\cdot\boldsymbol{j}_s = \frac{2C_0\beta\mu_B}{e}\,\boldsymbol{m}\,\frac{u^+ - u^-}{d} - 2D_0\,\frac{\boldsymbol{s}^+ - \boldsymbol{s}^-}{d} \quad (186) \]

which translates to

\[ \boldsymbol{n}\cdot\boldsymbol{J}_s = -2C_0\beta\,\boldsymbol{m}\,\frac{\varphi^+ - \varphi^-}{d} - C_0\,\frac{\boldsymbol{\varphi}_s^+ - \boldsymbol{\varphi}_s^-}{d} \quad (187) \]
\[ \quad = -\left[2C_0\beta\,\frac{\varphi^+ - \varphi^-}{d} + C_0\,\frac{\boldsymbol{m}\cdot(\boldsymbol{\varphi}_s^+ - \boldsymbol{\varphi}_s^-)}{d}\right]\boldsymbol{m} - \frac{C_0}{d}\Big[\boldsymbol{\varphi}_s^+ - \boldsymbol{\varphi}_s^- - \boldsymbol{m}\cdot(\boldsymbol{\varphi}_s^+ - \boldsymbol{\varphi}_s^-)\,\boldsymbol{m}\Big] \quad (188) \]

and leads to the additional mappings

\[ \beta = \gamma_{\mathrm{sf}} \quad (189) \]
\[ g^{\uparrow\downarrow} = \frac{C_0}{d}. \quad (190) \]

According to these relations, the spin-mixing conductance $g^{\uparrow\downarrow}$ depends implicitly on $r_b^*$ and $\gamma_{\mathrm{sf}}$, which contradicts the Valet-Fert model where $g^{\uparrow\downarrow}$ is an independent interface property. However, while the charge current $\boldsymbol{j}_e$ is approximately constant throughout the effective-interface layer $\Omega_e$ due to the continuity equation (143), the spin current $\boldsymbol{j}_s$ has sources in $\Omega_e$ according to (142), which renders the finite-difference approximation (187) inaccurate. While an appropriate choice of $\tau_{\mathrm{sf}}$ and $J$ in the effective-interface layer can approximately reproduce the behavior of the Valet-Fert model, the dependency of these material parameters on the spin-mixing conductance $g^{\uparrow\downarrow}$ is nontrivial and is best resolved by a fitting procedure.
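The effective-interface-layer mappings (184), (185) and the implied spin-mixing conductance (190) can likewise be written as a small helper. The function name and tuple return are ours; the formulas are taken directly from the equations above:

```python
def interface_layer_parameters(r_b_star, gamma_sf, d):
    """Effective-interface-layer parameters (184)-(185) and the implied
    spin-mixing conductance (190) for a layer of thickness d."""
    C0 = d / (2.0 * r_b_star * (1.0 - gamma_sf**2))   # (184)
    beta_prime = gamma_sf                              # (185)
    g_mix = C0 / d                                     # (190)
    return C0, beta_prime, g_mix
```

Note how `g_mix` follows from `r_b_star` and `gamma_sf` alone, which is exactly the implicit dependence that contradicts the independent $g^{\uparrow\downarrow}$ of the Valet-Fert model.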
For the special case of a collinear magnetization configuration in the magnetic layers, the parameter mapping between the Zhang-Levy-Fert model and the Valet-Fert model is exact. In this case, the spin accumulation in both models is also collinear to the magnetization. As a consequence, the spin-mixing-conductance term vanishes, which reflects the fact that a collinear magnetization configuration results in a vanishing spin torque. That said, applying the bulk mappings (174)-(181), and adding effective-interface layers with thickness $d$ and material parameters (184)-(185) in order to simulate the interface properties of the Valet-Fert model in the Zhang-Levy-Fert model, results in the same spin accumulation and electric potential for both models in the limit of small $d$.

The Valet-Fert model is very popular in the experimental community, where the interface properties $r_b^*$, $\lambda$ and $g^{\uparrow\downarrow}$ are discussed and determined for various material systems. However, the Zhang-Levy-Fert model provides a more general approach for the description of spin transport and spin torque. In particular, the bidirectional description of spin torque, which accounts for both the torque exerted by the current polarization on the magnetization and vice versa, leads to a better representation of the physical processes. Interface properties as defined by the Valet-Fert model can be modeled with additional thin layers.

Beyond the spin-diffusion model

While the spin-diffusion model introduced in Section 5.3 has been shown to incorporate various models for the description of spin torque and other spintronics effects, its area of application is restricted to diffusive transport. Some effects that are not included in the equations presented in Section 5.3, such as in-plane GMR, spin pumping and the anomalous Hall effect, might be added by means of additional terms to the diffusion model, since they are in principle compatible with the diffusive-transport assumption [70].
An important class of spintronics devices, though, makes use of magnetic tunnel junctions in order to exploit tunneling magnetoresistance (TMR) and spin torque. The spin transport in tunnel junctions, however, is not diffusive. Various ab initio models have been developed in order to accurately describe magnetic tunnel junctions [71][72][73]. However, ab initio models are computationally challenging and do not integrate well with the semiclassical micromagnetic model. The development of a suitable model that integrates well with the spin-diffusion model and micromagnetics is still subject to ongoing research.

Discretization

The micromagnetic model as introduced in the preceding sections defines a set of nonlinear partial differential equations in space and time, which can only be solved analytically for simple edge cases. In general, the solution of both static and dynamic micromagnetics calls for numerical methods. However, the development of efficient numerical methods is challenging due to the following properties of the micromagnetic equations.

1. The demagnetization field, which describes the dipole-dipole interaction in the magnetic material, is a long-range interaction. A naive implementation of such an interaction has a computational complexity of $O(n^2)$, with $n$ being the number of simulation cells. Various methods have been proposed to reduce this complexity to $O(n \log n)$ [74] or even $O(n)$ [75].

2. The exchange interaction adds a local coupling with high stiffness due to its second order in space. While the competition of the long-range demagnetization field with the local exchange field is crucial for the generation of magnetic domains, it also poses high demands on the numerical time-integration methods.

3. The nonlinear nature of most of the energy contributions leads to a complex energy landscape, which makes it difficult to efficiently seek energy minima.
In the context of quasistatic hysteresis computation, the nearest local energy minimum to a given magnetization configuration has to be found. A complex energy landscape increases the risk of missing a local minimum and calls for a thoughtful choice of minimization algorithm.

A large variety of tailored numerical methods for the solution of the micromagnetic equations has been proposed. Typically, these methods introduce distinct discretizations for space and time. Among the spatial discretizations, the most popular methods applied in micromagnetics are the FDM and the FEM. For both methods the magnetic region is subdivided into simulation cells, resulting in a mesh of cells. However, the requirements for the mesh differ significantly for both methods. While the FDM usually requires a regular cuboid mesh, the FEM typically works on irregular tetrahedral meshes, see Figure 16.

While FDM or FEM are used for spatial discretization in order to compute the effective field $\boldsymbol{H}_{\mathrm{eff}}$ or the respective energy contributions $E$, another class of algorithms is required in order to either minimize the total energy with respect to the magnetization configuration $\boldsymbol{m}$ or to compute the time evolution of $\boldsymbol{m}$ according to the LLG. Independent of the discretization method, the cell size has to be chosen sufficiently small in order to accurately resolve the structure of domain walls. The characteristic length for the domain-wall width is the so-called exchange length, which is defined by

\[ l = \sqrt{\frac{A}{K_{\mathrm{eff}}}} \quad (191) \]

where $K_{\mathrm{eff}}$ is the effective anisotropy constant that includes contributions from the crystalline anisotropy as well as the shape anisotropy introduced by the demagnetization field [13]. Note that for both energy minimization and magnetization dynamics, the effective field needs to be computed in the magnetic domain only. In the following sections, the spatial discretization with FDM and FEM is discussed in detail.
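For typical material parameters, (191) yields cell sizes in the nanometer range. A minimal helper (the function name and the example parameter values are ours, chosen only for illustration):

```python
import math

def exchange_length(A, K_eff):
    """Characteristic domain-wall length l = sqrt(A / K_eff), cf. (191).
    A in J/m, K_eff in J/m^3; the result is in meters."""
    return math.sqrt(A / K_eff)
```

For instance, with an exchange constant of $10^{-11}\,$J/m and an effective anisotropy of $10^5\,$J/m³ the exchange length is 10 nm, so simulation cells should be chosen a few nanometers in size to resolve domain walls.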
In further sections, numerical methods for the efficient integration of the LLG, energy minimization and energy-barrier calculations will be discussed.

The finite-difference method

The FDM is a very popular numerical tool for the solution of micromagnetic equations. While the restriction to regular meshes renders this method inapt for certain problems involving complex geometries, it is this restriction that allows the application of very fast algorithms.

Demagnetization field

Among the effective-field contributions introduced in Section 2, the demagnetization field holds the special role of being the only long-range interaction. Since long-range interactions are computationally costly, the value of a spatial discretization strategy is significantly influenced by its demagnetization-field algorithm. The FDM solves partial differential equations by approximating the differential operators with finite differences. In the case of the demagnetization-field problem (16), which has the form of Poisson's equation, this would require the approximation of the Laplacian. However, the application of this classical finite-difference procedure is complicated by the open boundary condition (15), which prevents the restriction of the computational domain to the magnetic region $\Omega_m$. Hence, in finite-difference micromagnetics, instead of discretizing Poisson's equation, the demagnetization field is usually computed by direct integration of (21). Consider a cellwise constant normalized magnetization with the cells $\Omega_i$ being an arbitrary partitioning of the magnetic domain $\Omega_m$

\[ \boldsymbol{m}(\boldsymbol{x}) = \boldsymbol{m}_i \quad \forall\, \boldsymbol{x} \in \Omega_i \quad (192) \]
\[ \Omega_m = \bigcup_i \Omega_i \quad \text{with} \quad \Omega_i \cap \Omega_j = \emptyset \ \text{if} \ i \neq j. \quad (193) \]

Inserting the discretization (192) into the integral formulation of the demagnetization field (21) yields

\[ \boldsymbol{H}_{\mathrm{dem}}(\boldsymbol{x}) = \int_{\Omega} \tilde{N}(\boldsymbol{x}-\boldsymbol{x}')\,\boldsymbol{M}(\boldsymbol{x}')\,\mathrm{d}\boldsymbol{x}' \quad (194) \]
\[ \quad = M_s \sum_j \left[\int_{\Omega_j} \tilde{N}(\boldsymbol{x}-\boldsymbol{x}')\,\mathrm{d}\boldsymbol{x}'\right] \boldsymbol{m}_j. \quad (195) \]

In order to compute the demagnetization field $\boldsymbol{H}_{\mathrm{dem}}$ with the same discretization as the magnetization $\boldsymbol{m}$, the field is averaged over each cell $\Omega_i$, which results in

\[ \boldsymbol{H}^{\mathrm{dem}}_i = M_s \sum_j \left[\frac{1}{V_i}\int_{\Omega_i}\int_{\Omega_j} \tilde{N}(\boldsymbol{x}-\boldsymbol{x}')\,\mathrm{d}\boldsymbol{x}\,\mathrm{d}\boldsymbol{x}'\right] \boldsymbol{m}_j \quad (196) \]
\[ \quad = \sum_j A_{ij}\,\boldsymbol{m}_j \quad (197) \]

where $V_i$ is the volume of simulation cell $i$. Here, $A$ denotes the linear demagnetization-field operator, which is represented by a dense $3n \times 3n$ matrix, with $n$ being the number of simulation cells. While this method could be used for the numerical demagnetization-field computation, it scales with $O(n^2)$ in both storage requirements and computational complexity, which is unfeasible for large problems.

A better scaling can be accomplished by exploiting the convolutional structure of the integral equation (21). In order to preserve this structure on the discrete level, a regular spatial discretization is required, i.e. all simulation cells $\Omega_i$ must be of the same shape $\Omega_{\mathrm{ref}}$, see Figure 17. Consequently, the offset from a simulation cell $\Omega_i$ to another simulation cell $\Omega_j$ is given by a multiple of the cell spacing $\Delta x$ in every spatial dimension

\[ \mathbb{1}_{\Omega_i}(\boldsymbol{x}) = \mathbb{1}_{\Omega_{\mathrm{ref}}}(\boldsymbol{x}-\boldsymbol{x}_i) \quad (198) \]
\[ \boldsymbol{x}_i - \boldsymbol{x}_j = \sum_k (i_k - j_k)\,\Delta x_k. \quad (199) \]

Using (198) and (199), the integration in (197) can be carried out over the reference cell $\Omega_{\mathrm{ref}}$

\[ \boldsymbol{H}^{\mathrm{dem}}_i = M_s \sum_j \left[\frac{1}{V_i}\iint_{\Omega_{\mathrm{ref}}} \tilde{N}\!\left(\sum_k (i_k - j_k)\Delta x_k + \boldsymbol{x} - \boldsymbol{x}'\right) \mathrm{d}\boldsymbol{x}\,\mathrm{d}\boldsymbol{x}'\right] \boldsymbol{m}_j \quad (200) \]

which has the form of a discrete convolution, since it only depends on the difference of the multiindices $i$ and $j$

\[ \boldsymbol{H}^{\mathrm{dem}}_i = M_s \sum_j \tilde{\boldsymbol{N}}_{i-j}\,\boldsymbol{m}_j \quad (201) \]
\[ \tilde{\boldsymbol{N}}_{i-j} = \frac{1}{V_i}\iint_{\Omega_{\mathrm{ref}}} \tilde{N}\!\left(\sum_k (i_k - j_k)\Delta x_k + \boldsymbol{x} - \boldsymbol{x}'\right) \mathrm{d}\boldsymbol{x}\,\mathrm{d}\boldsymbol{x}'. \quad (202) \]

The discrete demagnetization tensor $\tilde{\boldsymbol{N}}_{i-j}$ has entries for every possible cell distance, which amounts to $\prod_k (2n_k - 1) \approx 8n$ entries, where $n_k$ denotes the number of cells in spatial dimension $k$.
Compared to the direct demagnetization-field operator $A$, the demagnetization tensor reduces the storage requirements from $O(n^2)$ to $O(n)$. The computational complexity, however, still amounts to $O(n^2)$ when implementing (201) literally. In order to reduce the computational complexity, the discrete convolution (201) is computed in Fourier space, where it reduces to a cell-wise multiplication according to the convolution theorem

\[ \mathcal{F}(\tilde{\boldsymbol{N}} * \boldsymbol{m}) = \mathcal{F}(\tilde{\boldsymbol{N}})\cdot\mathcal{F}(\boldsymbol{m}) \quad (203) \]

where the Fourier transform is applied componentwise. Since the cell-wise multiplication has a low complexity of $O(n)$, the overall complexity of the demagnetization-field computation is governed by the complexity of the Fourier-transform computation. In the case of the fast Fourier transform, this amounts to $O(n \log n)$.

Note that this fast-convolution algorithm requires the discrete demagnetization tensor $\tilde{\boldsymbol{N}}_{i-j}$ to be of the same size as the discrete magnetization $\boldsymbol{m}_i$ in order to perform the cell-wise multiplication in Fourier space. However, while the magnetization is discretized with $\prod_i n_i$ cells, the discrete demagnetization tensor has a size of $\prod_i (2n_i - 1)$. Hence, the discrete magnetization has to be expanded in order to match the size of the demagnetization tensor. Due to the cyclic nature of the discrete Fourier transform

\[ (f * g)_i = \sum_{j=0}^{n-1} f_{(i-j+n)\,\%\,n}\; g_j \quad (204) \]

all entries of the demagnetization tensor $\tilde{\boldsymbol{N}}_{i-j}$ are considered for every field evaluation $\boldsymbol{H}^{\mathrm{dem}}_i$. For instance, the negative-distance entry $\tilde{\boldsymbol{N}}_{-1,-1}$ is considered for the computation of the field $\boldsymbol{H}^{\mathrm{dem}}$ at position $(0,0)$, although the $(0,0)$ cell has no neighbors at negative distances. In order to neglect these unphysical distances, the only reasonable choice for the expansion of the magnetization is by adding zero entries, which is often referred to as zero-padding, see [76]. The complete convolution algorithm for the demagnetization-field computation is visualized in Figure 18, where $\boldsymbol{m}$ and $\tilde{\boldsymbol{N}}$ are reduced to two spatial dimensions for the sake of simplicity.
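The zero-padding and wrap-around bookkeeping is the error-prone part of this algorithm. A minimal NumPy sketch of a scalar, one-dimensional analogue of (201) illustrates it; a generic kernel stands in for the actual demagnetization tensor, and the function name is ours:

```python
import numpy as np

def demag_convolution_1d(N_lin, m):
    """Evaluate H[i] = sum_j N[i-j] * m[j], cf. (201), via a zero-padded FFT.
    N_lin has length 2n-1 and stores the kernel for distances
    -(n-1), ..., n-1; only the first n output entries are physical."""
    n = m.size
    # wrap-around order: distances 0..n-1 first, then -(n-1)..-1
    K = np.roll(N_lin, -(n - 1))
    # zero-padding of m to the kernel size 2n-1
    m_pad = np.concatenate([m, np.zeros(n - 1)])
    H = np.fft.ifft(np.fft.fft(K) * np.fft.fft(m_pad)).real
    return H[:n]          # discard the algorithmic byproducts
```

The direct sum costs $O(n^2)$, the FFT route $O(n \log n)$; both give identical results up to floating-point accuracy.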
The result of the convolution algorithm is of the size $\prod_i (2n_i - 1)$, like the demagnetization tensor. Physically meaningful values of the computed field $\boldsymbol{H}^{\mathrm{dem}}$, however, are found only in the first $\prod_i n_i$ entries. The remaining entries are algorithmic byproducts and can be neglected.

Note that the only requirement for the application of the fast convolution is a mesh regularity as described by (198) and (199). However, the evaluation of the demagnetization tensor (202) might be unfeasible for complicated reference cells such as illustrated in Figure 17a. For three-dimensional cuboid cells, an analytical formula for the demagnetization tensor $\tilde{\boldsymbol{N}}_{i-j}$ was derived by Newell et al. [77]. According to this work, the diagonal element $N_{1,1}$ of the tensor is given by

\[ N_{1,1}(\boldsymbol{x}, \Delta\boldsymbol{x}) = \frac{1}{4\pi\,\Delta x_1 \Delta x_2 \Delta x_3} \sum_{i,j \in \{0,1\}^3} (-1)^{\sum_k i_k + j_k}\, f\big[x_1 + (i_1 - j_1)\Delta x_1,\; x_2 + (i_2 - j_2)\Delta x_2,\; x_3 + (i_3 - j_3)\Delta x_3\big] \quad (205) \]

where the function $f$ is defined as

\[ f(x_1, x_2, x_3) = \frac{|x_2|}{2}(x_3^2 - x_1^2)\,\sinh^{-1}\!\left(\frac{|x_2|}{\sqrt{x_1^2 + x_3^2}}\right) + \frac{|x_3|}{2}(x_2^2 - x_1^2)\,\sinh^{-1}\!\left(\frac{|x_3|}{\sqrt{x_1^2 + x_2^2}}\right) - |x_1 x_2 x_3|\,\tan^{-1}\!\left(\frac{|x_2 x_3|}{x_1\sqrt{x_1^2 + x_2^2 + x_3^2}}\right) + \frac{1}{6}\,(2x_1^2 - x_2^2 - x_3^2)\sqrt{x_1^2 + x_2^2 + x_3^2}. \quad (206) \]

The elements $N_{2,2}$ and $N_{3,3}$ are obtained by circular permutation of the coordinates

\[ N_{2,2}(\boldsymbol{x}, \Delta\boldsymbol{x}) = N_{1,1}\big[(x_2, x_3, x_1), (\Delta x_2, \Delta x_3, \Delta x_1)\big] \quad (207) \]
\[ N_{3,3}(\boldsymbol{x}, \Delta\boldsymbol{x}) = N_{1,1}\big[(x_3, x_1, x_2), (\Delta x_3, \Delta x_1, \Delta x_2)\big]. \quad (208) \]

The off-diagonal element $N_{1,2}$ is given by

\[ N_{1,2}(\boldsymbol{x}, \Delta\boldsymbol{x}) = \frac{1}{4\pi\,\Delta x_1 \Delta x_2 \Delta x_3} \sum_{i,j \in \{0,1\}^3} (-1)^{\sum_k i_k + j_k}\, g\big[x_1 + (i_1 - j_1)\Delta x_1,\; x_2 + (i_2 - j_2)\Delta x_2,\; x_3 + (i_3 - j_3)\Delta x_3\big] \quad (209) \]

where the function $g$ is defined as

\[ g(x_1, x_2, x_3) = x_1 x_2 x_3\,\sinh^{-1}\!\left(\frac{x_3}{\sqrt{x_1^2 + x_2^2}}\right) + \frac{x_2}{6}(3x_3^2 - x_2^2)\,\sinh^{-1}\!\left(\frac{x_1}{\sqrt{x_2^2 + x_3^2}}\right) + \frac{x_1}{6}(3x_3^2 - x_1^2)\,\sinh^{-1}\!\left(\frac{x_2}{\sqrt{x_1^2 + x_3^2}}\right) - \frac{x_3^3}{6}\tan^{-1}\!\left(\frac{x_1 x_2}{x_3\sqrt{x_1^2 + x_2^2 + x_3^2}}\right) - \frac{x_3 x_2^2}{2}\tan^{-1}\!\left(\frac{x_1 x_3}{x_2\sqrt{x_1^2 + x_2^2 + x_3^2}}\right) - \frac{x_3 x_1^2}{2}\tan^{-1}\!\left(\frac{x_2 x_3}{x_1\sqrt{x_1^2 + x_2^2 + x_3^2}}\right) - \frac{x_1 x_2 \sqrt{x_1^2 + x_2^2 + x_3^2}}{3}. \quad (210) \]
Again, the other off-diagonal elements are obtained by permutation of coordinates

\[ N_{1,3}(\boldsymbol{x}, \Delta\boldsymbol{x}) = N_{1,2}\big[(x_1, x_3, x_2), (\Delta x_1, \Delta x_3, \Delta x_2)\big] \quad (211) \]
\[ N_{2,3}(\boldsymbol{x}, \Delta\boldsymbol{x}) = N_{1,2}\big[(x_2, x_3, x_1), (\Delta x_2, \Delta x_3, \Delta x_1)\big]. \quad (212) \]

Like the continuous tensor $\tilde{N}(\boldsymbol{x}-\boldsymbol{x}')$, the discrete tensor $\tilde{\boldsymbol{N}}_{i-j}$ is symmetric

\[ N_{ij} = N_{ji}. \quad (213) \]

Hence, the above definitions of $N_{1,2}$, $N_{1,3}$ and $N_{2,3}$ can be used to obtain the remaining off-diagonal elements. While these analytical expressions are exact, their numerical evaluation can lead to inaccuracies due to floating-point errors. These errors especially occur for large cell distances and degenerated cells and can be avoided by numerical integration, as shown by Lebecki et al. [78] and Krüger et al. [79]. The fast-convolution algorithm can be further optimized by considering the specific properties of the demagnetization-field problem, i.e. Fourier transforms of zero values and evaluation of unneeded data can be omitted [80], and symmetries in the demagnetization tensor can be exploited [8] to further speed up computations.

Instead of computing the demagnetization field directly from a convolution of the magnetization $\boldsymbol{m}$ with the demagnetization tensor $\tilde{\boldsymbol{N}}$, the field can also be computed as the negative gradient of the scalar potential $u$ [81,82]. The scalar potential $u$ itself is the result of a convolution, see (20), and can be computed by the fast-convolution algorithm. The gradient can be approximated with finite differences. Compared to the direct computation, the advantage of this approach is the reduction of Fourier-transform operations. However, the additional gradient computation compromises the overall accuracy of the computation.

Local field contributions

Except for the demagnetization field, all energy contributions introduced in Section 2 have either local or short-range character in the sense that they depend on derivatives of the magnetization.
Local contributions to the effective field, such as the anisotropy fields (68) and (70), are simply evaluated cellwise. Differential operators in short-range contributions, such as the exchange field (62), are approximated with finite differences. The second-order finite-difference approximations for the first and second spatial derivative of the discretized magnetization $\boldsymbol{m}_i$ read

\[ \left(\frac{\partial\boldsymbol{m}}{\partial x_i}\right)_j = \frac{\boldsymbol{m}_{j+\boldsymbol{e}_i} - \boldsymbol{m}_{j-\boldsymbol{e}_i}}{2\Delta x_i} \quad (214) \]
\[ \left(\frac{\partial^2\boldsymbol{m}}{\partial x_i^2}\right)_j = \frac{\boldsymbol{m}_{j+\boldsymbol{e}_i} - 2\boldsymbol{m}_j + \boldsymbol{m}_{j-\boldsymbol{e}_i}}{\Delta x_i^2} \quad (215) \]

which results in the following discretization of the exchange field

\[ \boldsymbol{H}^{\mathrm{ex}}_i = \frac{2A}{\mu_0 M_s}\,\Delta\boldsymbol{m}_i \approx \frac{2A}{\mu_0 M_s} \sum_k \frac{\boldsymbol{m}_{i+\boldsymbol{e}_k} - 2\boldsymbol{m}_i + \boldsymbol{m}_{i-\boldsymbol{e}_k}}{\Delta x_k^2}. \quad (216) \]

While higher-order finite differences might lead to better convergence properties for some applications, they also increase the computational effort by involving more neighboring cells in the computation. Since the second-order approximation is usually considered sufficiently accurate, most finite-difference codes stick with this approximation.

The boundary conditions derived in Section 3 are usually implemented by introducing virtual cells surrounding the magnetic region $\Omega_m$, see Figure 19, and computing the magnetization values $\boldsymbol{m}_i$ in these cells accordingly. In the case of the exchange interaction, this boundary condition is given by $\partial\boldsymbol{m}/\partial\boldsymbol{n} = 0$, as derived in Section 3.2. For the boundary at $x_1 = 0$, this leads to the equation

\[ \left(\frac{\partial\boldsymbol{m}}{\partial x_1}\right)_{(0,j,k)} = \frac{\boldsymbol{m}_{(1,j,k)} - \boldsymbol{m}_{(-1,j,k)}}{2\Delta x_1} = 0 \quad (217) \]

which needs to hold for all $0 \le j \le n_2 - 1$ and $0 \le k \le n_3 - 1$ and determines the values of the virtual cells as $\boldsymbol{m}_{(-1,j,k)} = \boldsymbol{m}_{(1,j,k)}$. After computing the values of the virtual cells, the derivatives required for the distinct field computations can be evaluated according to (214) and (215) without special treatment of the boundary cells. Note that the boundary conditions, and thus the computation of the virtual cells, differ depending on the choice of field contributions, see Section 3.6.
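A minimal NumPy sketch of the exchange-field stencil (216) together with the virtual-cell boundary handling (217), restricted to one dimension for brevity (the restriction and the function name are ours):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in SI units

def exchange_field_1d(m, dx, A, Ms):
    """Exchange field (216) on a 1D grid of unit vectors m, shape (n, 3).
    The Neumann condition (217) is realized by the virtual cells
    m[-1] = m[1] and m[n] = m[n-2], i.e. mirror padding."""
    mp = np.pad(m, ((1, 1), (0, 0)), mode='reflect')
    lap = (mp[2:] - 2.0 * mp[1:-1] + mp[:-2]) / dx**2
    return 2.0 * A / (MU0 * Ms) * lap
```

NumPy's `'reflect'` padding mirrors about the boundary cell without repeating it, which is exactly the virtual-cell rule derived from the central difference (217); a uniform magnetization therefore yields a vanishing exchange field everywhere, including the boundary cells.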
Spintronics

The implementation of the spin-torque effects of Slonczewski as well as Zhang and Li, as introduced in Sections 5.1 and 5.2, is straightforward, as the torque contributions depend on the local magnetization and its derivatives only. In order to solve the spin-diffusion model, a second-order partial differential equation in space has to be solved in order to compute the spin accumulation $\boldsymbol{s}$ and the electric potential $u$. The discretization of equations (142) and (143) with finite differences is straightforward. However, care has to be taken in order to properly account for discontinuous material parameters when dealing with multilayer structures. For instance, the spin current $\boldsymbol{j}_s$, as defined by (139), may be discontinuous across material interfaces due to discontinuities of the material constants $C_0$ and $D_0$. Hence, the discretization of the divergence in (142) must be carefully chosen to account for these jumps in a distributional sense. A possible finite-difference discretization of the time-dependent spin-diffusion model with prescribed electric current, (138) and (141), is presented by García-Cervera et al. [83].

Existing software packages

Various software packages implementing the FDM with FFT-accelerated demagnetization-field computation have been developed. Probably the most popular open-source finite-difference micromagnetic software is OOMMF [84]. OOMMF is a multiplatform code running on central processing units (CPUs). Other finite-difference CPU codes include the open-source software Fidimag [85] and the commercial package MicroMagus [86]. A very simple CPU implementation of the finite-difference algorithms with the Python library NumPy is presented in [87]. The recent advent of general-purpose graphics-processing units (GPGPUs) has allowed for a significant acceleration of scientific software. A popular open-source package for finite-difference micromagnetics on GPGPUs is MuMax3 [88,89].
Other GPGPU codes include magnum.fd [90] and the recently developed GPGPU extension to OOMMF [82].

The finite-element method

The FEM is a powerful numerical tool for the solution of partial differential equations. In contrast to the FDM, where the differential operators are discretized directly, in finite elements the original problem is transformed into a variational problem before discretization. Consider Poisson's equation

\[ -\Delta u = f \quad \text{in } \Omega. \quad (218) \]

Multiplying both sides with a test function $v$ from a suitable function space $V$ and integrating turns the original problem into a variational problem. The solution $u \in V$ is required to satisfy

\[ -\int_{\Omega} \Delta u\, v\,\mathrm{d}\boldsymbol{x} = \int_{\Omega} f v\,\mathrm{d}\boldsymbol{x} \quad \forall\, v \in V. \quad (219) \]

Applying integration by parts yields

\[ \int_{\Omega} \nabla u\cdot\nabla v\,\mathrm{d}\boldsymbol{x} = \int_{\Omega} f v\,\mathrm{d}\boldsymbol{x} + \int_{\partial\Omega} u_N\, v\,\mathrm{d}s \quad \forall\, v \in V \quad (220) \]

where $\nabla u\cdot\boldsymbol{n}$ in the boundary integral was replaced by $u_N$ in order to implement Neumann boundary conditions $\nabla u\cdot\boldsymbol{n} = u_N$ for $\boldsymbol{x} \in \partial\Omega$. This formulation is referred to as the weak form of the original problem, as it weakens the differentiability requirements on the solution $u$. Dirichlet boundary conditions on a boundary part $\Gamma_D$ are implemented by prescribing the solution values on $\Gamma_D$ and additionally restricting the test functions to $V_D$ given by

\[ V_D = \{v \in V : v(\boldsymbol{x}) = 0 \ \forall\, \boldsymbol{x} \in \Gamma_D\} \quad (222) \]

so that the solution $u$ is not tested on the Dirichlet boundary $\Gamma_D$. By an appropriate choice of the function space $V$, the variational problem (220) can be shown to have a unique solution [91].

Discretization of the continuous problem (220) is achieved by the choice of a discrete function space $V_h \subset V$. While the FEM is very general concerning the choice of discrete function spaces [91], the most common choice is that of piecewise affine, globally continuous functions. A suitable function basis is constructed using a tetrahedral mesh as depicted in Figure 16b. Each mesh node $\boldsymbol{x}_i$ is associated with a basis function $\varphi_i$ with node values

\[ \varphi_i(\boldsymbol{x}_j) = \delta_{ij} \quad (223) \]

which is affine within each cell, see Figure 20.
In order to discretize the weak formulation (220), both the solution function $u$ and the test function $v$ are expressed in terms of the basis functions

\[ u_h = \sum_i u_i \varphi_i \quad (224) \]
\[ v_h = \sum_i v_i \varphi_i. \quad (225) \]

Inserting into (220) and neglecting the boundary term for the sake of simplicity yields

\[ \sum_i u_i \int_{\Omega} \nabla\varphi_i\cdot\nabla\varphi_j\,\mathrm{d}\boldsymbol{x} = \int_{\Omega} f \varphi_j\,\mathrm{d}\boldsymbol{x} \quad \forall\, j \in [1, n]. \quad (226) \]

Instead of testing with all possible test functions $v_h$, the test functions are reduced to the individual basis functions $\varphi_j$. Since both the left-hand side and the right-hand side of (220) are linear in the test function $v$, the equality (226) then also holds for any test function $v_h$. The discretized solution $u_i$ is given by a linear system of equations

\[ \sum_i A_{ij} u_i = b_j \quad (227) \]

with

\[ A_{ij} = \int_{\Omega} \nabla\varphi_i\cdot\nabla\varphi_j\,\mathrm{d}\boldsymbol{x} \quad (228) \]
\[ b_j = \int_{\Omega} f \varphi_j\,\mathrm{d}\boldsymbol{x} \quad (229) \]

where the matrix $A_{ij}$, which is referred to as the stiffness matrix, depends only on the geometry of the mesh. Furthermore, $A_{ij}$ is sparsely populated due to the choice of basis functions, which results in nonzero contributions only for neighboring nodes. The sparsity is an important aspect from a computational point of view, since it reduces the storage requirements from $O(n^2)$ to $O(n)$. Furthermore, this property can be exploited by the use of iterative methods for the solution of linear systems. These methods avoid the direct inversion of the matrix and require only the computation of matrix-vector multiplications, which leads to a superior scaling compared to direct methods [92]. The procedure of turning a partial differential equation into a variational problem by multiplication with test functions, which is referred to as the Galerkin method, can be applied to a large variety of problems.

Demagnetization field

As shown in Section 2.3, the demagnetization-field potential is given by Poisson's equation (16). However, in contrast to the example given in the preceding section, the demagnetization-field problem is subject to open boundary conditions (15).
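The assembly (227)-(229) can be illustrated for the one-dimensional model problem (218) with P1 elements on a uniform mesh. This toy version (our naming) uses a dense matrix for clarity, in place of the sparse storage and iterative solvers discussed above:

```python
import numpy as np

def solve_poisson_p1(n_el, f=1.0):
    """Assemble the P1 stiffness matrix (228) and load vector (229) for
    -u'' = f on (0, 1) with u(0) = u(1) = 0 on a uniform mesh of n_el
    elements, and solve the linear system (227) for the interior nodes."""
    h = 1.0 / n_el
    n = n_el - 1                                     # number of interior nodes
    # int phi_i' phi_j' dx gives the tridiagonal (2, -1) stencil scaled by 1/h
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = f * h * np.ones(n)                           # exact for constant f
    return np.linalg.solve(A, b)
```

For constant $f$, the P1 solution on a uniform mesh is exact at the nodes, so with $f = 1$ the computed midpoint value matches the analytical solution $u(x) = x(1-x)/2$.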
Hence, Poisson's equation has to be solved in the complete space $\mathbb{R}^3$, which is not trivially possible with the FEM, since it applies only to finite regions. A simple approach to approximate the required boundary conditions with finite elements is the so-called truncation approach. In order to compute the demagnetization field $\boldsymbol{H}_{\mathrm{dem}}$ of a finite magnetization region $\Omega_m$, the outer region $\mathbb{R}^3\setminus\Omega_m$ is approximated by a finite external region $\Omega_e$. Since the magnetic scalar potential is known to decay to zero at infinity, homogeneous Dirichlet conditions are imposed on the outer boundary $\partial\Omega$ of the truncated domain. Testing with functions from the space $V_0$ with

\[ V_0 = \{v \in V : v(\boldsymbol{x}) = 0 \ \forall\, \boldsymbol{x} \in \partial\Omega\} \quad (231) \]

yields

\[ \int_{\Omega} \nabla u\cdot\nabla v\,\mathrm{d}\boldsymbol{x} = \int_{\Omega} M_s \boldsymbol{m}\cdot\nabla v\,\mathrm{d}\boldsymbol{x} \quad \forall\, v \in V_0 \quad (232) \]

which solves the homogeneous Dirichlet problem in the domain $\Omega = \Omega_m \cup \Omega_e$. Note that the right-hand side of (232) is integrated by parts in order to loosen the continuity requirements on the magnetization $\boldsymbol{m}$. This is essential in order to accurately account for the discontinuity of the magnetization at the boundary of the magnetization region $\partial\Omega_m$. If the magnetization $\boldsymbol{m}$ is discretized with the usual piecewise affine, globally continuous functions, this discontinuity can be described by restricting the integration domain of the right-hand side of (232) to the magnetic domain $\Omega_m$.

The accuracy of the truncation solution is significantly influenced by the size of the external region $\Omega_e$. However, a larger external region usually increases the number of mesh nodes and thus the computational effort. Choosing $\Omega_e$ five times the size of $\Omega_m$ has been found to be a reasonable trade-off between accuracy and computational complexity [93]. In order to further reduce the computational costs, the mesh in the external region can be gradually coarsened towards the outside, see Figure 21. This leads to a reduction of degrees of freedom and has no significant impact on the accuracy of the solution, since the scalar potential is known to decay smoothly towards the outside. A related approach to the truncation method is the so-called shell-transformation method.
Instead of describing the exterior space $\mathbb{R}^3\setminus\Omega_m$ with a relatively large region $\Omega_e$, this method uses a rather small shell region $\Omega_s$ and employs a transformation $T$ that maps the shell region onto the complete exterior space

\[ T : \Omega_s \to \mathbb{R}^3\setminus\Omega. \quad (233) \]

Specifically, the transformation $T$ maps the inner boundary of the shell region onto itself and the outer shell boundary onto infinity

\[ T(\boldsymbol{x}) = \boldsymbol{x} \quad \text{for } \boldsymbol{x} \in \partial\Omega \quad (234) \]
\[ T(\boldsymbol{x}) \to \infty \quad \text{for } \boldsymbol{x} \in \partial(\Omega \cup \Omega_s). \quad (235) \]

For a spherical shell, the transformation is illustrated in Figure 22. By substitution of variables, integrals over the exterior space $\mathbb{R}^3\setminus\Omega$ can be expressed by integrals over the shell region $\Omega_s$. For the left-hand side of the weak formulation (232), this substitution yields

\[ \int_{\mathbb{R}^3\setminus\Omega} \nabla u\cdot\nabla v\,\mathrm{d}\boldsymbol{x} = \int_{\Omega_s} (\nabla u)^T \boldsymbol{g}\,\nabla v\,\mathrm{d}\boldsymbol{x} \quad (236) \]

where $\boldsymbol{g}$ is the so-called metric tensor defined by

\[ \boldsymbol{g} = (\boldsymbol{J}^{-1})^T\,|\det\boldsymbol{J}|\;\boldsymbol{J}^{-1} \quad (237) \]

with $\boldsymbol{J} = \nabla T$ being the Jacobian of the transformation $T$. This leads to the weak formulation

\[ \int_{\Omega} \nabla u\cdot\nabla v\,\mathrm{d}\boldsymbol{x} + \int_{\Omega_s} (\nabla u)^T \boldsymbol{g}\,\nabla v\,\mathrm{d}\boldsymbol{x} = \int_{\Omega_m} M_s \boldsymbol{m}\cdot\nabla v\,\mathrm{d}\boldsymbol{x} \quad (238) \]

for the solution of the scalar potential $u$. Various shell geometries and transformations have been proposed to solve (238) [94][95][96][97]. Like for the truncation method, application of the shell-transformation method results in sparse linear systems. The advantage of the transformation method over the truncation approach is a finite representation of the infinite exterior region. However, the metric tensor $\boldsymbol{g}$ is by definition singular at the outer shell boundary, which makes the computation of the discrete entries of the system matrix difficult. Furthermore, this singularity leads to a bad matrix condition, which has a negative effect on the numerical solution of the linear system.

In order to further reduce the degrees of freedom, it is desirable to consider only the magnetic region $\Omega_m$ for the spatial discretization. This can be achieved by a hybrid finite-element-boundary-element method proposed by Fredkin and Koehler [98].
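The metric tensor (237) is straightforward to evaluate for a given Jacobian; a small sketch (the helper name is ours) also makes its symmetry evident, which is what keeps the assembled system matrix symmetric:

```python
import numpy as np

def metric_tensor(J):
    """Metric tensor g = (J^-1)^T |det J| J^-1 of a shell
    transformation with Jacobian J, cf. (237)."""
    Jinv = np.linalg.inv(J)
    return Jinv.T * abs(np.linalg.det(J)) @ Jinv
```

For a diagonal Jacobian $\boldsymbol{J} = \mathrm{diag}(a, b, c)$ this reduces to $\boldsymbol{g} = \mathrm{diag}(bc/a,\, ac/b,\, ab/c)$; as the transformation stretches towards infinity at the outer shell boundary, $\boldsymbol{J}^{-1}$ degenerates and $\boldsymbol{g}$ becomes singular there, as noted above.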
From (14) and the divergence theorem, the following jump conditions for the scalar potential at the boundary of the magnetic region $\partial\Omega_m$ apply

\[ u^- - u^+ = 0 \quad (239) \]
\[ (\nabla u^- - \nabla u^+)\cdot\boldsymbol{n} = M_s \boldsymbol{m}\cdot\boldsymbol{n} \quad (240) \]

where $u^-$ denotes the value in the magnetic region and $u^+$ denotes the corresponding value outside. Consider the following splitting of the potential $u$

\[ u = u_1 + u_2 \quad (241) \]

where $u_1$ is solved by

\[ \Delta u_1 = \nabla\cdot(M_s \boldsymbol{m}) \quad (242) \]
\[ \frac{\partial u_1}{\partial\boldsymbol{n}} = M_s\, \boldsymbol{n}\cdot\boldsymbol{m} \quad \text{on } \partial\Omega_m \quad (243) \]

in the magnetic region $\Omega_m$ and set to zero in the exterior region $\mathbb{R}^3\setminus\Omega_m$. While $u_1$ satisfies the jump condition (240), it violates the continuity condition (239). Thus, $u_2$ has to be chosen to restore (239) while preserving (240) and being a solution to the Laplace equation, i.e.

\[ u_2^+ - u_2^- = u_1^- \quad (244) \]
\[ \frac{\partial u_2^+}{\partial\boldsymbol{n}} - \frac{\partial u_2^-}{\partial\boldsymbol{n}} = 0 \quad (245) \]
\[ \Delta u_2 = 0 \quad \text{in } \mathbb{R}^3. \quad (246) \]

These requirements are fulfilled by the double-layer potential defined as

\[ u_2(\boldsymbol{x}) = \frac{1}{4\pi}\int_{\partial\Omega_m} u_1(\boldsymbol{x}')\,\frac{\partial}{\partial\boldsymbol{n}'}\,\frac{1}{|\boldsymbol{x}-\boldsymbol{x}'|}\,\mathrm{d}s'. \quad (247) \]

While this integral expression can be used to evaluate $u_2$ in $\Omega_m$, this procedure has a high computational complexity of $O(n^2)$. Hence, (247) is only evaluated at the boundary $\partial\Omega_m$ using the boundary-element method. The result of this computation is then used as a Dirichlet boundary condition for a Laplace problem that is solved with finite elements. This means that a computation of the scalar potential $u$ requires the finite-element solution of Poisson's equation with Neumann boundary conditions, followed by a boundary-element evaluation of the double-layer potential and the finite-element solution of a Laplace problem with Dirichlet conditions. Compared to the pure finite-element method presented in this section, the advantage of this hybrid method is that only the magnetic region $\Omega_m$ has to be discretized. This is even true if $\Omega_m$ consists of various separated domains. The disadvantage, however, is the loss of sparsity. The boundary-element operator for the double-layer computation is a dense $n_B \times n_B$ matrix, with $n_B$ being the number of boundary nodes.
A common procedure to reduce the storage requirements of this matrix is the application of hierarchical matrices [99,100], which reduces the storage requirements as well as the computational complexity of the potential evaluation from $O(n_B^2)$ to $O(n_B \log n_B)$. An alternative hybrid method using the single-layer potential was proposed by García-Cervera and Roma [101].

The methods presented in this section are devoted to the calculation of the magnetic scalar potential $u$ rather than the demagnetization field $\boldsymbol{H}_{\mathrm{dem}}$, which is defined as the negative gradient of the potential, $\boldsymbol{H}_{\mathrm{dem}} = -\nabla u$. Since the discrete solution of $u$ is a piecewise affine function, the gradient of $u$ can be easily computed. However, the gradient of a piecewise affine function is a piecewise constant, discontinuous function, see Figure 23. Often it is desirable to compute the demagnetization field with the same discretization as the magnetization. This is required for nodewise operations such as the nodewise evaluation of the right-hand side of the LLG equation (117). The approximation of the demagnetization field as a piecewise affine, globally continuous function can be achieved by solving the weak form

\[ \int_{\Omega_m} \boldsymbol{H}_{\mathrm{dem}}\cdot\boldsymbol{v}\,\mathrm{d}\boldsymbol{x} = -\int_{\Omega_m} \nabla u\cdot\boldsymbol{v}\,\mathrm{d}\boldsymbol{x} \quad \forall\, \boldsymbol{v} \in V \quad (248) \]

with $\boldsymbol{H}_{\mathrm{dem}} \in V$. This procedure is referred to as projection. Discretization of (248) with the usual basis functions leads to a linear system. The system matrix resulting from this weak form is called the mass matrix $M$

\[ M_{ij} = \int_{\Omega_m} \boldsymbol{\varphi}_i\cdot\boldsymbol{\varphi}_j\,\mathrm{d}\boldsymbol{x}. \quad (250) \]

Like the stiffness matrix arising from the discretization of the Laplacian, the mass matrix is sparse, since only basis functions of neighboring mesh nodes give nonzero contributions. In order to avoid solving another linear system for the computation of the demagnetization field, the mass matrix can be approximated by a diagonal matrix $\tilde{M}$ given by

\[ \tilde{M}_{ij} = \delta_{ij} \int_{\Omega_m} \boldsymbol{\varphi}_i\cdot\mathbb{1}\,\mathrm{d}\boldsymbol{x} \quad (251) \]

with $\mathbb{1} = (1, 1, 1)$.
This approximation is referred to as mass lumping since all off-diagonal mass entries are lumped in the diagonal Mi i " ÿ j M ij .(252) This approximation preserves the integral of the solution variable H dem which justifies its application ż Ωm H dem h dx " ÿ ij H i ż Ωm φ i¨φj dx " ÿ ij δ ij H i ż Ωm φ i¨1 dx.(253) Since the lumped mass matrix is diagonal, its inverse is trivially given by the reciprocal diagonal entries. Hence, instead of solving a linear system, the projection can be expressed as a single sparse matrix-vector multiplication H dem i " ÿ j A ij u j (254) A ij "´" ż Ωm φ i¨1 dx ´1 ż Ωm ∇φ j¨φi dx. (255) The illustrative description of mass lumping is that of an weighted average with the weight being the cell sizes as described in [1]. The result of a gradient projection is depicted in Figure 23 along with the original function. Local field contributions Local field contributions are usually computed with a similar procedure as applied for the gradient computation in Section 6.2.1 For the exchange field defined by (62) this procedure yields ż Ωm H ex¨v dx " ż Ωm 2 µ 0 M s ∇¨pA∇mq¨v dx (256) "´ż Ωm A∇m : ∇ˆ2 µ 0 M s v˙dx ż BΩm 2A µ 0 M s Bm Bn¨v ds (257) where integration by parts was applied in order to avoid second spatial derivatives. Due to the boundary condition (63), the boundary integral on the right-hand side vanishes which results in ż Ωm H ex¨v dx "´ż Ωm 2A∇m : ∇ˆ1 µ 0 M s v˙dx.(258) While this weak form poses no further difficulties for single-phase magnets with a constant saturation magnetization M s , the gradient on the test function diverges at material interfaces with discontinuous M s . In order to avoid this problem, the original problem (256) can be multiplied with´µ 0 M s before integrating by parts resulting iń ż Ωm µ 0 M s H ex¨v dx " 2 ż Ωm A∇m : ∇v dx " δE ex pm, vq(259) with δE ex being the exchange differential defined in (60). This method can be used to compute any local field contribution introduced in Section 2. 
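The mass-lumped gradient projection described above reduces, on a uniform grid, to a cell-size-weighted average of the cell gradients. The following is a minimal 1D numpy sketch of this idea (a hypothetical 1D analogue for illustration, not the tetrahedral setting of the text):

```python
import numpy as np

# Project the piecewise constant gradient of a P1 (piecewise affine)
# function onto the mesh nodes using a lumped mass matrix: each node
# receives the cell-size-weighted average of its adjacent cell gradients.
x = np.linspace(0.0, 1.0, 11)          # mesh nodes
u = x**2                                # nodal values of a P1 function
h = np.diff(x)                          # cell sizes
grad_cell = np.diff(u) / h              # piecewise constant gradient

g = np.zeros_like(x)
g[0] = grad_cell[0]                     # boundary nodes: one adjacent cell
g[-1] = grad_cell[-1]
g[1:-1] = (h[:-1] * grad_cell[:-1] + h[1:] * grad_cell[1:]) / (h[:-1] + h[1:])

# On a uniform grid the interior values reduce to the central difference
# (u[i+1] - u[i-1]) / (2h), here approximating d(x^2)/dx = 2x.
print(g[5])   # ≈ 1.0, the exact derivative of x^2 at x = 0.5
```

For a quadratic function the lumped projection is exact at the interior nodes, which illustrates why the "weighted average" interpretation of mass lumping is a reasonable approximation.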
In order to avoid the solution of a linear system for the retrieval of field contributions, the mass-lumping procedure introduced in Section 6.2.1 can be applied. For energy contributions quadratic in the magnetization m, such as the exchange field, the field computation can be reduced to a single sparse matrix-vector multiplication given by H i " ÿ j A ij m j (260) A ij "´" ż Ωm µ 0 M s φ i¨1 dx ´1 δEpφ j , φ i q.(261) Note that this method accounts for the boundary conditions introduced by the exchange and antisymmetric exchange field in a generic fashion since it considers the differential of the energy δE rather than the functional derivative δE{δm. Since the boundary conditions are embedded in the differential, they do not have to be explicitly applied. While some effective-field contributions like the uniaxial-anisotropy field (68) can also be computed nodewise for single-phase magnets, the nodewise computation fails for discontinuous material parameters. Due to the integral formulation, the calculation according to (260) and (261) also works in these cases. Interface contributions Some energy contributions introduced in Section 2 are interface effects rather than bulk effects. This means that the energy connected to these effects is the result of an interface integral. These interface contributions usually enter the micromagnetic formalism through boundary conditions, see e.g. Section 3.4. However, in the framework of dynamical micromagnetics, it is often desirable to be able to express all energy contribution in terms of an effective field instead of a boundary condition in order to add up the different contributions for their use in the LLG. In order to compute a discretized effective field that accounts for interface energy contributions, we require the energy generated by the effective field to equal the interface energy contribution. Consider the interface exchange energy given by (38). 
Since this energy contribution is quadratic in m, the equality of energies reads (262) with p " AmrP pxqs. Discretization of the field H eff yields E "´1 2 ż Ωm µ 0 M s m¨H eff dx "´ż Γ m¨ppmq dsÿ i´1 2 H i ż Ωm µ 0 M s m¨φ i dx "´ż Γ m¨p ds (263) which has to hold for arbitrary magnetizations m. Hence, the magnetization can be replaced with basis functions and the equality ÿ i´1 2 H i ż Ωm µ 0 M s φ j¨φi dx "´ż Γ φ j¨p ds (264) has to hold for any φ j . Application of mass lumping yields the system H i " ÿ j A ij m j (265) A ij " 2 "ż Ωm µ 0 M s φ i¨1 dx ´1 ż Γ φ j¨p pφ j q ds (266) "´" ż Ωm µ 0 M s φ i¨1 dx ´1 δEpφ j , φ i q(267) which has the exact same form as any local bulk-field contribution as shown in Section 6.2.2. Spintronics As for the FDM, the simplified spin-torque models by Slonczewski as well as Zhang and Li can be incorporated in finite-element micromagnetics along the lines of the local field contributions shown in Section 6.2.2. In order to account for the Slonczewski spin-torque as an interface effect, the consideration of Section 6.2.3 can be applied. A more challenging task is the discretization of the spin-diffusion model introduced in Section 5.3. The spindiffusion model requires the computation of the spin accumulation s which is the solution to a partial differential equation. This equation is turned into a weak form using the Galerkin method. 
In order to solve for the equilibrium spin accumulation s for a given magnetization m and a prescribed charge current j_e as defined by (141) and (142), the following weak form applies

2∫_Ω D₀ ∇s : ∇v dx − 2∫_{Ω_m} D₀ ββ′ m⊗[(∇s)ᵀm] : ∇v dx + ∫_Ω (s/τ_sf)·v dx + ∫_{Ω_m} J (s×m)·v dx = ∫_{Ω_m} (βμ_B/e) m⊗j_e : ∇v dx − ∫_{∂Ω∩∂Ω_m} (βμ_B/e) (j_e·n)(m·v) ds.  (268)

Note that integration by parts was applied not only to eliminate second derivatives of the spin accumulation s, but also to eliminate any derivative acting on the magnetization m or on material parameters, since these variables may have discontinuities in the problem domain Ω. While material parameters may have arbitrary discontinuities at material interfaces, the magnetization is continuous within the magnetic domain Ω_m. However, since the magnetization vanishes in the nonmagnetic domain Ω∖Ω_m, it is by definition discontinuous across magnetic–nonmagnetic interfaces. For the discrete formulation, these properties are taken into account by appropriate choices of function spaces and integration domains. Namely, all material parameters are discretized with piecewise constant functions. Since the magnetization is continuous within the magnetic region, it is discretized with the usual piecewise affine, globally continuous functions. In order to account for the discontinuity at magnetic–nonmagnetic interfaces, any integral involving the magnetization m is restricted to the magnetic domain Ω_m, which is equivalent to a rapid drop of the magnetization to zero outside the magnetic domain. The boundary integrals arising from the partial integration of Δs vanish due to the homogeneous Neumann boundary condition on s, see Section 5.4. Since the spin accumulation s is assumed to be continuous even across material interfaces, both s and the test functions are discretized with componentwise piecewise affine, globally continuous functions s, v ∈ V.
For the solution of the self-consistent spin-diffusion model given by the current definitions (140), (139) and their respective source equations (143), (143), the function space for both the solution and the test functions has to be extended. The solution of the self-consistent model comprises both the spin accumulation s and the electric potential u. Both the components of s and the scalar field u are discretized with piecewise affine, globally continuous functions ts, uu P VˆV . Applying the Galerkin method and performing integration by parts in order to avoid diverging derivatives yield two coupled weak formulations. The weak formulation of (140) and (143) reads ż Ω 2C 0 ∇u¨∇v dx´ż Ωm 2β 1 D 0 e µ B p∇sq T m¨∇v dx "´ż ΓN pj 0 e¨n qv ds @ v P V(269) and the weak formulation of (139) and (142) is given by ż Ωm 2βC 0 µ B e m b ∇u : ∇v dx ż BΩmXΓD 2βC 0 µ B e p∇u¨nqpm¨vq dś ż Ω 2D 0 ∇s : ∇v dx´ż Ω s¨v τ sf dx´ż Ωm J psˆmq¨v dx "´ż BΩmXΓN β µ B e pj 0 e¨n qpm¨vq ds. @ v P V . (270) Here, Γ N P BΩ denotes all external interfaces that act as contacts with prescribed charge-current inflow j 0 e¨n and Γ D P BΩ denotes contacts with prescribed electric potential u 0 . While the current inflow is implemented as natural Neumann boundary condition, the prescribed potential is implemented as Dirichlet boundary condition, see Section 5.4. In addition to the boundary conditions on the potential u, homogeneous Neumann conditions are applied to the spin accumulation s. Discretization of (269) and (270) yields a single sparse system of the size 4nˆ4n with n being the number of mesh nodes. Incorporating spin-orbit interactions given by (151) and (152) or spin dephasing given by (155) is straightforward and can be done along the lines of the weak forms (268)-(270). 
Existing software packages The implementation of finite-element solvers is a challenging task, since it involves the nontrivial generation of tetrahedral meshes, the numerical computation of integrals for the system-matrix assembly and the solution of large linear systems. Various software packages and libraries have been developed in order to solve one or more of these tasks. Popular open-source packages for the mesh generation are Gmsh [102] and NetGen [103]. With ONELAB [104] and NGSolve [105], these mesh generators also act as full stack finite-element libraries. Other open-source libraries for the formulation and solution of finite-element problems are Escript [106] and MFEM [107]. A very comprehensive and fast, yet easy to use finite-element library is FEniCS [108] which, by default, uses PETSc [109] as linear-algebra backend. Libraries for the boundary-element method as required by the hybrid demagnetization-field method introduced in Section 6.2.1 include BEM++ [110] and H2Lib [111]. Both libraries are open source and provide routines for the assembly of boundary-element matrices and their compression with hierarchical matrices. Besides these multipurpose libraries, a number of specialized micromagnetic finite-element packages are available. The open-source library FinMag [112] and the closed-source library magnum.fe [113] are micromagnetic simulators based on FEniCS. While FinMag concentrates on classical micromagnetics, magnum.fe also implements the spin-diffusion model [114][115][116]. Another closed-source finite-element code that solves the micromagnetic equations coupled to the spin-diffusion model is FEELLGOOD [117,118]. Other finite-element codes include the opensource packages Magpar [119] and NMag [120] as well as the closed-source package FEMME [121]. The closedsource packages Tetramag [122] and Fastmag [123] make use of graphics processing units (GPUs) to speed up computations. 
Other spatial discretization methods While the FDM and the finite-element are by far the most common discretization methods used in the micromagnetic community, other methods have been proposed to solve parts of the micromagnetic model. Especially the computation of the demagnetization field is an ongoing matter of research. A method that aims to combine the speed of the FFT based convolution with the flexibility of irregular meshes is the non-uniform FFT [124,125]. Both the FFT accelerated convolution and the finite-element demagnetization-field computation exhibit at least a computational complexity of Opn log nq. A well known method for the interaction of particle clouds which scales with n is the fast multipole method which has been shown to be also applicable to the continuous demagnetization-field problem [126,127]. Another class of methods that scales below Opn log nq employs low-rank tensor approximations [128,129]. Time integration Numerical integration of the LLG equation (117) poses several challenges on the applied method. One of these challenges is the high stiffness that is introduced by the exchange interaction [130]. Another challenge is the micromagnetic unit-sphere constraint |m| " 1 that is required to be preserved by a time-integration scheme. Several methods, that address these difficulties, have been proposed in order to efficiently integrate the LLG. Most numerical time-integration methods used in micromagnetics completely separate the spatial discretization from the time discretization. 
For these methods, both the magnetization m and its time derivative ∂_t m are represented by vectors, and the differential equation in space and time is transformed into n coupled ordinary differential equations

∂m_i/∂t = f_i(t, m).  (271)

In order to evaluate the time derivative f_i, the effective-field contributions are computed according to the methods introduced in the preceding sections, and the right-hand side of the LLG (117) or (118) is evaluated cellwise/nodewise. Due to its first order in time, the LLG is an initial-value problem. We denote the initial value of the magnetization at time t₀ as

m(t₀) = m₀.  (272)

A discrete time-integration scheme approximates the magnetization dynamics as a series of magnetization snapshots at times t_i that we denote by

m_i ≈ m(t_i)  with  t_i = t₀ + iΔt.  (273)

In the following, the most common time integrators in micromagnetics will be discussed in detail. A more comprehensive overview of methods can be found in [41].

Explicit Runge-Kutta methods

A reliable and efficient class of numerical integrators for initial-value problems are the explicit Runge-Kutta methods. In the most general form of a Runge-Kutta method, the magnetization configuration m_i is obtained from a known magnetization configuration m_{i−1} by

m_i = m_{i−1} + Δt Σ_j b_j k_j  (274)

k_j = ∂_t m[t_{i−1} + c_j Δt, m_{i−1} + Δt Σ_{k<j} a_{jk} k_k]  (275)

where a set of coefficients b_j, c_j and a_{jk} defines a particular Runge-Kutta method. The auxiliary results k_j are computed one after another. Due to the restriction k < j in the summation on the right-hand side, each k_j depends only on previously computed k_k and on the previous magnetization m_{i−1}, which makes this method explicit. Explicit methods are usually computationally cheap, since they are linear in the solution variable m_i. However, explicit methods lack stability, which can be a disadvantage especially in the case of stiff problems.
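The staged structure of an explicit Runge-Kutta method can be sketched for a single macrospin in a constant effective field. The following is an illustrative example with assumed parameter values, using the classical fourth-order Butcher tableau and the explicit form of the LLG:

```python
import numpy as np

# One explicit Runge-Kutta step, cf. eqs. (274)-(275), for a single
# macrospin obeying dm/dt = -g/(1+a^2) * (m x H + a m x (m x H)).
# GAMMA, ALPHA, H and the time step are illustrative values only.
GAMMA, ALPHA = 2.211e5, 0.1            # reduced gyromagnetic ratio, damping
H = np.array([0.0, 0.0, 8e5])          # constant effective field (A/m)

def dmdt(t, m):
    mxH = np.cross(m, H)
    return -GAMMA / (1 + ALPHA**2) * (mxH + ALPHA * np.cross(m, mxH))

# Butcher tableau of the classical fourth-order Runge-Kutta method.
A = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0.0, 0.5, 0.5, 1.0]

def rk_step(t, m, dt):
    k = []
    for j in range(4):                  # each stage uses only earlier k's
        mj = m + dt * sum(a * kk for a, kk in zip(A[j], k))
        k.append(dmdt(t + c[j] * dt, mj))
    return m + dt * sum(bj * kj for bj, kj in zip(b, k))

m = np.array([1.0, 0.0, 0.0])
for i in range(1000):
    m = rk_step(i * 1e-13, m, 1e-13)
    m /= np.linalg.norm(m)             # renormalization after each step
print(m)                                # precesses about and relaxes toward +z
```

The renormalization line anticipates the standard practice discussed for the Euler method; without it the norm of m drifts slowly even for the fourth-order scheme.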
The simplest Runge-Kutta method is the explicit Euler method, which is obtained by setting b₁ = 1 and c₁ = 0:

m_i = m_{i−1} + Δt ∂_t m(t_{i−1}, m_{i−1}).  (276)

This first-order method is not a good choice in terms of accuracy and efficiency. However, it is well suited to investigate the preservation of the unit-sphere constraint. Inserting the LLG (117) into (276) results in

m_i = m_{i−1} + Δt m_{i−1} × [−γ H_eff(m_{i−1}) + α ∂_t m(m_{i−1})].  (277)

Multiplying with m_i + m_{i−1} and reinserting (276) yields

|m_i|² = |m_{i−1}|² + |Δt m_{i−1} × [−γ H_eff(m_{i−1}) + α ∂_t m(m_{i−1})]|²  (278)

and thus |m_i| ≥ |m_{i−1}|, where the norm is preserved, |m_i| = |m_{i−1}|, only for a vanishing right-hand side of the LLG, ∂_t m = 0. In order to enforce norm preservation, the magnetization is usually renormalized after each Runge-Kutta step, i.e. the magnetization m_i is replaced by m_i′ given by

m_i′ = m_i / |m_i|.  (279)

This renormalization is also performed for higher-order methods. While these methods reduce the violation of the unit-sphere constraint, they do not guarantee its preservation. Higher-order schemes are obtained by the choice of appropriate parameters b_j, c_j and a_{jk}. The classical Runge-Kutta method requires the computation of four auxiliary results k_j and is of fourth order. Since the evaluation of each k_j comes at the price of an effective-field evaluation, the computation of a single integration step is more expensive for higher-order methods than for lower-order methods. However, this disadvantage is usually more than compensated by the admissible time-step size, which may be significantly larger for higher-order methods without compromising accuracy. While the classical fourth-order Runge-Kutta method is very efficient, the choice of the time step is difficult and may even change in the course of integration. This problem can be solved by applying Runge-Kutta methods with adaptive step-size control.
These methods derive different-order approximations from a shared pool of auxiliary results k_j and estimate the integration error by comparing these approximations. Prominent candidates which have proven valuable for micromagnetics are the Runge-Kutta-Fehlberg method [131] and the Dormand-Prince method [132], which both use fourth/fifth-order approximations for the step-size control. The usage of explicit methods is usually not advised for stiff problems because of their lack of numerical stability [133]. In micromagnetics, a strong exchange coupling can lead to a very high stiffness of the differential equation. However, since the admissible time-step size is coupled to the size of the smallest mesh cell, the use of a regular grid can mitigate this problem [130]. Hence, Runge-Kutta methods are the standard choice in finite-difference micromagnetics [8].

Implicit midpoint scheme

An implicit integration scheme that specifically accounts for the micromagnetic unit-sphere constraint is the implicit midpoint rule [40]. According to this method, the magnetization m_i is obtained from the previous magnetization snapshot m_{i−1} by

m_i = m_{i−1} + Δt ∂_t m[t_{i−1} + Δt/2, (m_i + m_{i−1})/2].  (280)

Inserting the time derivative of m according to the LLG (117) and setting ∂_t m = (m_i − m_{i−1})/Δt on the right-hand side yields

m_i = m_{i−1} + Δt [(m_i + m_{i−1})/2] × (−γ H_eff[t_{i−1} + Δt/2, (m_i + m_{i−1})/2] + α (m_i − m_{i−1})/Δt).  (281)

Multiplying both sides with m_i + m_{i−1} immediately yields |m_i| = |m_{i−1}|. Hence, the midpoint rule exactly preserves the magnetization norm for arbitrary time-step sizes. Moreover, it can be shown by a similar procedure that the midpoint scheme preserves the energy of a magnetic system with quadratic energy contributions in m for vanishing damping α [40]. However, the implicit nature of this method comes at the price of nonlinearity in the solution variable m_i.
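The norm-preserving property of the implicit midpoint rule can be checked numerically for a single macrospin. In this minimal sketch (illustrative parameters, and a plain fixed-point iteration rather than a Newton solver) the implicit update is solved at each step and the norm stays at unity without any renormalization:

```python
import numpy as np

# Implicit midpoint step, cf. eq. (281), for a single macrospin in a
# constant effective field; the nonlinear update is solved here by
# fixed-point iteration, which converges for small enough time steps.
GAMMA, ALPHA, DT = 2.211e5, 0.1, 1e-12   # assumed values
H = np.array([0.0, 0.0, 8e5])            # constant effective field (A/m)

def midpoint_step(m_old):
    m_new = m_old.copy()                 # initial guess
    for _ in range(50):
        mid = 0.5 * (m_new + m_old)
        m_next = m_old + DT * np.cross(
            mid, -GAMMA * H + ALPHA * (m_new - m_old) / DT)
        converged = np.linalg.norm(m_next - m_new) < 1e-14
        m_new = m_next
        if converged:
            break
    return m_new

m = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    m = midpoint_step(m)

# The update m_i - m_{i-1} is orthogonal to m_i + m_{i-1}, so the norm
# is preserved up to the iteration tolerance.
print(np.linalg.norm(m))                 # ≈ 1.0, no renormalization needed
```

The fixed-point map is a contraction here because Δt·γ|H| is small; for larger steps or stronger fields a Newton iteration is required, as discussed in the text.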
Since the nonlinear system (281) cannot be solved analytically, it requires the application of an iterative procedure such as Newton's method for the computation of m i . Each Newton iteration requires the evaluation of the effective field, which results in a high computational effort. Tangent-plane integration Another class of integrators especially suited for the application in the framework of finite elements was introduced by Alouges [134]. This method relies on an alternative formulation of the LLG which is obtained by crossmultiplying the LLG (117) with m αB t m`mˆB t m " γH eff´γ pm¨H eff qm. This formulation is equivalent to the original LLG since all terms in (117) are perpendicular to m. In order to solve for the time derivative B t m, (282) is reformulated in a weak form. However, instead of seeking for the solution to w " B t m in the complete solution space V : R 3 Ñ R 3 , the solution space is restricted to the tangent space of the magnetization V T " tv : v¨m " 0u. This also allows for the restriction of the test space to the same space V T which simplifies the weak formulation of (282) to ż Ωm pαw`mˆwq¨v dx " ż Ωm γH eff pmq¨v dx @ v P V T .(283) Instead of the original LLG, the right-hand side of this form is of the same order in the magnetization m as the effective field H eff . This feature can be exploited in order to construct an implicit integration scheme for field terms linear in m without losing the linearity of the weak form in w. The time derivative w at t i´1 is obtained by setting m " m i´1`θ ∆tw (284) with 0 ď θ ď 1 where θ " 0 leads to an explicit scheme and θ " 1 leads to an implicit scheme. Employing the considerations concerning discontinuous material parameters in Section 6.2.2 yields the weak form ż Ωm µ 0 M s pαw`m i´1ˆw q¨v dx"γδEpm i´1`θ ∆tw, vq. 
(285) Considering only the exchange field, the weak formulation reads ż Ωm µ 0 M s pαw`m i´1ˆw q¨v dx " 2γ ż Ωm A∇pm i´1`θ ∆twq : ∇v dx.(286) Each field contribution can be treated with an individual θ. While the exchange field adds a high measure of stiffness to the problem which calls for an implicit integration scheme, other field terms may well be treated explicitly [135]. This usually applies to the demagnetization field. Despite its linearity in m the demagnetization field cannot be treated implicitly in the same manner as the exchange field, since it is the solution to another partial differential equation, see Section 6.2.1. After computing the time derivative w " B t m, the actual time step is performed by computing m i " m i´1`∆ tw |m i´1`∆ tw|(287) where the right-hand side is evaluated nodewise. Various techniques have been proposed in order to implement the tangent-plane function space V T within a finite-element formulation. A possible solution is the application of Lagrange multipliers [113,135] that enforce the orthogonality condition on w. An alternative approach uses a local mapping of two-dimensional vector fields R 3 Ñ R 2 onto the tangent plane V T [136]. A combination of this integration scheme with a time marching scheme for the spin accumulation yields a method for the coupled solution of micromagnetics with the dynamic spin-diffusion equation (138) [114,137]. Variants of this method include higher-order methods [117,124]. However, by increasing the order of the method, the linearity in the solution variable w is lost, which leads to a higher computational effort. Backward differentiation formula All previously introduced methods are so-called singlestep methods, that require the knowledge of a single magnetization snapshot m i´1 in order to compute the subsequent magnetization m i . In contrast, the backward differentiation formula (BDF) is a multi-step method, i.e. 
the magnetization m_i is computed from a series of preceding snapshots m_{i−j} with 0 ≤ j ≤ s, where s is the order of the method. In its most general form, the BDF method is given by

Σ_{j=0}^{s} a_j m_{i−j} + Δt β ∂_t m(t_i, m_i) = 0  (288)

where the coefficients a_j and β can be chosen such that the method is of order s. Normalizing the parameters a_j and β such that a₀ = −1 yields

G(m_i) = m_i − Δt β ∂_t m(t_i, m_i) − Σ_{j=1}^{s} a_j m_{i−j} = 0  (289)

which is a nonlinear equation in m_i that can be solved with Newton's method. Starting from an initial configuration m_{i,0}, which is usually approximated by extrapolation of the previous snapshots m_{i−j}, the Newton iteration reads

(δG/δm_i)(m_{i,k}) (m_{i,k+1} − m_{i,k}) = −G(m_{i,k}),  (290)

where the variational derivative δG/δm_i is given by

δG/δm_i = 1 − Δt β δw/δm  with  w = ∂_t m.  (291)

In order to avoid the numerically expensive inversion of δG/δm, the discretized version of the linear equation (290) is usually solved iteratively for Δm_{i,k} = m_{i,k+1} − m_{i,k}. However, in order to reduce the number of required iterations, it is advisable to apply a preconditioning procedure to (290). Instead of solving (290) directly, the preconditioned system

(δG/δm_i)(m_{i,k}) P⁻¹ P Δm_{i,k} = −G(m_{i,k})  (292)

is solved for P Δm_{i,k}. The preconditioner P is chosen to approximate the original problem while being easily invertible. A good choice for this purpose is a simplification of (291) in which the linearization of the LLG, δw/δm, only includes local and linear effective-field contributions such as the exchange field. Following this procedure, the computation of a single time step is computationally expensive, since the preconditioning system has to be solved for every Newton iteration. However, this method has proven to be a superior choice for a large range of problems in the framework of finite-element micromagnetics [130].
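The advantage of BDF schemes for stiff, exchange-coupled systems can be illustrated with an off-the-shelf implementation. The following sketch uses SciPy's adaptive BDF integrator (not the custom preconditioned Newton scheme described above) on a small, exchange-coupled spin chain with made-up parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff toy problem: N exchange-coupled macrospins relaxing into a weak
# external field along +z. All parameter values are illustrative only.
N, GAMMA, ALPHA, J = 20, 2.211e5, 0.1, 1e6   # J: exchange field scale (A/m)
H_EXT = np.array([0.0, 0.0, 1e5])

def rhs(t, y):
    m = y.reshape(N, 3)
    # nearest-neighbor exchange field (periodic chain) plus external field
    h = J * (np.roll(m, 1, axis=0) + np.roll(m, -1, axis=0)) + H_EXT
    mxh = np.cross(m, h)
    dm = -GAMMA / (1 + ALPHA**2) * (mxh + ALPHA * np.cross(m, mxh))
    return dm.ravel()

rng = np.random.default_rng(0)
m0 = np.tile([0.0, 0.0, 1.0], (N, 1)) + 0.01 * rng.standard_normal((N, 3))
m0 /= np.linalg.norm(m0, axis=1, keepdims=True)

sol = solve_ivp(rhs, (0.0, 2e-10), m0.ravel(), method="BDF",
                rtol=1e-8, atol=1e-10)
m_end = sol.y[:, -1].reshape(N, 3)
print(sol.success, m_end[:, 2].mean())        # spins relax toward +z
```

SciPy's BDF solver handles the step-size and order selection automatically; production micromagnetic codes additionally exploit the structure of the LLG Jacobian for preconditioning, as described in the text.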
Existing software packages

The implementation of explicit integration schemes, such as the Runge-Kutta schemes introduced in Section 6.4.1, is straightforward and does not necessarily require the use of external libraries. However, implicit methods call for the solution of nonlinear systems. Linear-algebra libraries such as PETSc [109] provide useful functionality for the implementation of suitable Newton methods. The open-source library SUNDIALS [138] includes a very efficient implementation of an adaptive-time-step BDF integration scheme as introduced in Section 6.4.4.

Energy minimization and barrier computation

While dynamical micromagnetic simulations are useful to gain insight into the mechanisms of fast magnetization processes like magnetization switching, they are not feasible for investigations in the MHz regime and below. A typical example of quasi-static micromagnetics is the computation of hysteresis curves. Experimental measurements of hysteresis curves are performed with field sweeps that are orders of magnitude slower than the response time of magnetic systems, which is in the GHz regime. Hence, the hysteresis curve can be computed by energy minimization instead of dynamical simulation. This is done by increasing the external field stepwise and computing the new energy minimum for each step. In order to compute hysteresis properties, the minimization algorithm is required to converge into the nearest local energy minimum starting from a given magnetization configuration. A global minimization would yield a unique magnetization configuration for a given external field and would hence be unsuited for hysteresis computations. Numerical minimization is usually performed iteratively based on gradient evaluations. These gradient methods are particularly useful for hysteresis computation since they start from a given magnetization configuration and converge to local minima.
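The stepwise-minimization recipe for hysteresis can be demonstrated on the simplest possible system. This is a hypothetical reduced-units example (a Stoner-Wohlfarth macrospin with the field along the easy axis, minimized with a generic local optimizer rather than a micromagnetic solver); in these units the switching field is |h| = 1:

```python
import numpy as np
from scipy.optimize import minimize

# Reduced-units macrospin energy: uniaxial anisotropy plus Zeeman term,
# e(theta) = sin(theta)^2 - 2 h cos(theta), easy axis and field along z.
def energy(theta, h):
    return np.sin(theta) ** 2 - 2.0 * h * np.cos(theta)

def sweep(h_values, theta0):
    theta, mz = theta0, []
    for h in h_values:
        # start the local minimizer from the previous minimum (slightly
        # perturbed to escape unstable stationary points) -> hysteresis
        theta = minimize(energy, theta + 1e-3, args=(h,)).x[0]
        mz.append(np.cos(theta))
    return np.array(mz)

h_down = np.linspace(1.5, -1.5, 61)            # decreasing field branch
mz_down = sweep(h_down, theta0=0.0)
print(mz_down[np.isclose(h_down, -0.9)][0])    # still "up":  ≈ +1
print(mz_down[np.isclose(h_down, -1.1)][0])    # switched:    ≈ -1
```

Because each minimization is started from the previous minimizer, the solution stays in the metastable "up" state until that minimum disappears at the switching field, exactly the behavior a global minimizer would miss.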
The simplest gradient-based minimization algorithm is the steepest-descent method, with a single iteration given by

m_i = m_{i−1} − τ δE/δm  (293)

with τ being the step size. In order to account for the micromagnetic unit-sphere constraint |m| = 1, the descent direction is usually projected onto the tangent space of the magnetization

(δE/δm)_⊥m = δE/δm − (δE/δm · m) m = −m × (m × δE/δm).  (294)

Inserting into (293) and considering the definition of the effective field (116) yields

m_i = m_{i−1} − τ μ₀ M_s m_{i−1} × [m_{i−1} × H_eff(m_{i−1})]  (295)

which is equivalent to an integration step of the LLG damping term with an explicit Euler method. While this method still violates the unit-sphere constraint, as shown in Section 6.4.1, it represents a significant improvement over the original steepest-descent method (293). Instead of an explicit Euler method, any of the integration methods introduced in Section 6.4 may be used to advance the steepest descent. However, time-integration methods are optimized to accurately resolve the complete magnetization trajectory, while the only quantity of interest in a minimization problem is the final magnetization configuration. Various tailored approaches for micromagnetic energy minimization, including optimized steepest-descent methods and variants of the conjugate-gradient method, have been proposed in order to achieve high performance and reduce the risk of missing local minima [28,139-141]. Another class of minimization methods addresses the computation of energy barriers between two given local minima. Energy barriers are an important measure for the stability of magnetic devices. In magnetic storage devices, binary information is usually stored by putting the system into one of two possible energy minima, e.g. the up or down configuration of an anisotropic magnetic layer.
The energy barrier between those minima defines the characteristic lifetime of the stored information via the Néel-Arrhenius equation

τ_N = τ₀ exp(E / k_B T)  (296)

with E being the energy barrier, T the temperature and τ₀ the attempt time. A robust yet simple method for the numerical calculation of the energy barrier between two magnetic states is the string method, which yields the minimum energy path between the two states [142]. The string method is an iterative method that evolves an initial magnetization path m(φ) toward a minimum energy path defined by

(∇E[m(φ)])_⊥ = 0  (297)

where φ denotes the position on the path and the subscript ⊥ denotes magnetization variations that are orthogonal to the path m(φ). The magnetization path is discretized by a finite number of magnetization images, m(φ) → m_i with i ∈ {0, 1, …, n}. As illustrated in Figure 24, these images are evolved individually. In a single string-method step, each magnetization image is first relaxed toward an energy minimum by a certain amount. Using the projected steepest-descent method (295), the updated images are given by

m_{i,j} = m_{i,j−1} − ζ m_{i,j−1} × [m_{i,j−1} × H_eff(m_{i,j−1})]  (299)

with ζ being the step size of the descent method. Evolving each magnetization image all the way to the nearest local minimum would eventually relax every image into either the first minimum m₀ or the second minimum m_n. In this case all information about the transition path, and thus also about the barrier, would be lost. In order to prevent the images from separating along the transition path, each evolution step (299) is followed by a reparametrization of the path. The position φ_i of each magnetization image m_i on the transition path is determined by the pairwise distance norm

φ₀ = 0,  φ_i = φ_{i−1} + ‖m_i − m_{i−1}‖  with  i ∈ {1, 2, …, n}.  (300)

After determining the positions φ_i, the magnetization images m_i are interpolated onto the regular grid

φ′_i = i φ_n / n  (301)

by cubic-spline interpolation.
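The relax-then-reparametrize loop of the string method can be sketched for a toy system. The following is an assumed reduced-units setup (a single macrospin with uniaxial anisotropy energy e(m) = −m_z², minima at m = ±z, analytical barrier of height 1 at the equator), not an example from the text:

```python
import numpy as np
from scipy.interpolate import CubicSpline

n = 21                                   # number of images, 0..n-1
t = np.linspace(0.0, np.pi, n)
path = np.stack([np.sin(t), np.zeros(n), np.cos(t)], axis=1)  # initial guess

def h_eff(m):                            # reduced effective field, h = -de/dm
    return np.array([0.0, 0.0, 2.0 * m[2]])

for _ in range(200):
    # 1) relax the inner images by one projected descent step, cf. (299)
    for i in range(1, n - 1):
        m = path[i] - 0.05 * np.cross(path[i],
                                      np.cross(path[i], h_eff(path[i])))
        path[i] = m / np.linalg.norm(m)
    # 2) reparametrize: pairwise-distance positions, cf. (300), then spline
    #    the images back onto a uniform grid, cf. (301)
    s = np.concatenate([[0.0], np.cumsum(
        np.linalg.norm(np.diff(path, axis=0), axis=1))])
    path = CubicSpline(s, path, axis=0)(np.linspace(0.0, s[-1], n))
    path /= np.linalg.norm(path, axis=1, keepdims=True)

energies = -path[:, 2] ** 2
print(energies.max() - energies[0])      # barrier ≈ 1 (saddle at the equator)
```

The endpoints stay fixed in the two minima while the reparametrization keeps the interior images evenly spread, so the highest-energy image converges to the saddle point and directly yields the barrier E for the Néel-Arrhenius estimate (296).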
Various variants of this method have been proposed in order to optimize the accuracy and convergence properties. An effective improvement is the use of an energy-weighted norm for the reparametrization in order to increase the image density in the barrier region. A popular alternative to the string method is the nudged-elastic-band method, which introduces a spring force between the images in order to preserve a homogeneous discretization of the transition path [143].

Applications

This section is dedicated to applications of micromagnetic simulations. While the first application deals with a classical micromagnetic problem and discusses the performance differences of finite-difference and finite-element tools, the remaining examples focus on the simulation of spintronics effects.

Standard problem #4

A well-known dynamical micromagnetic problem is the standard problem #4 [144], which was developed by the µMAG group with the aim of serving as a benchmark problem for micromagnetic simulation tools. The problem considers a cuboid-shaped magnetic system with dimensions 500 nm × 125 nm × 3 nm and material parameters similar to permalloy: M_s = 8×10⁵ A/m, A = 1.3×10⁻¹¹ J/m, K_u1 = 0 and α = 0.02.

Fig. 25. Time evolution of the average magnetization components for the switching process of a permalloy thin film according to the first part of the µMAG standard problem #4, computed with a finite-difference solver (FDM) and a finite-element solver (FEM).

The system is prepared in a so-called s-state, e.g. by relaxing the initial magnetization m₀ = (cos[0.1], sin[0.1], 0) into an energy minimum. After relaxation, an external field with a magnitude of 25 mT is applied, directed 170° counterclockwise from the positive x-axis. This field results in the switching of the magnetization in the thin film. The dynamics of the switching process should be resolved by numerical integration of the LLG.
An alternative field, which is defined in the original problem specification, is not considered in this work. Figure 25 shows the time evolution of the averaged magnetization components computed with the finite-difference solver magnum.fd [90] and the finite-element solver magnum.fe [113], which exhibit good agreement. The finite-difference solver employs the FFT-accelerated demagnetization-field method presented in Section 6.1.1 and an adaptive Runge-Kutta-Fehlberg method of order 4/5 for the time integration, see Section 6.4.1. The finite-element solver employs the hybrid FEM/BEM method presented in Section 6.2.1 and an adaptive preconditioned BDF scheme for the time integration, see Section 6.4.4. For the FDM we choose a discretization of 200 × 50 × 1 = 10 000 cells, and for the FEM we use a regular tetrahedral grid based on a 100 × 24 × 2 cuboid grid, which leads to 7878 mesh nodes. Both grids are chosen with the exchange length of permalloy in mind in order to obtain accurate results. Due to the different algorithms for the demagnetization-field computation and the time integration, the finite-difference solver and the finite-element solver show significantly different computation times for this problem, see Figure 26. While the finite-element solver spends more than half of the computation time on the assembly and solution of the preconditioner of the time-integration scheme, the finite-difference solver is completely dominated by the demagnetization-field computation. Another interesting comparison of the two methods is illustrated in Figure 27, where the required number of LLG right-hand-side evaluations and the total simulation time are compared for different problem sizes. Starting from the above discretization, we perform mesh refinement on both the finite-difference grid and the finite-element mesh, which leads to an increased stiffness of the problem.
For the implicit time-integration scheme used in the finite-element code, the number of right-hand-side evaluations remains almost the same for a given integration accuracy. In contrast, the number of right-hand-side evaluations of the finite-difference solver increases faster than linearly with the number of simulation cells. However, despite its much larger number of right-hand-side evaluations, the FDM still beats the FEM with respect to the overall simulation time. One of the reasons for the superiority of the FDM in this example is the thin shape of the problem domain. While this reduces the demagnetization-field computation to a two-dimensional convolution in the case of the finite-difference solver, the demagnetization-field algorithm of the finite-element code is dominated by the boundary-element method, which has inferior scaling properties.

Standard problem #5

The first and only µMAG standard problem dealing with spintronics effects is the standard problem #5 [145], which is derived from a work of Najafi et al. [146]. It describes the precessional motion of a magnetic vortex core due to spin torque. A cuboid magnetic thin film with dimensions 100 nm × 100 nm × 10 nm and the material parameters of permalloy, see Section 7.1, is initialized in a magnetic vortex state. The application of a homogeneous charge current leads to the precessional motion of the vortex core, which eventually relaxes into a shifted equilibrium position. The problem definition suggests the application of the model of Zhang and Li, as introduced in Section 5.2, for the calculation of the magnetization dynamics. The current j_e is driven through the sample in x-direction. Setting the product of the coupling constant and the current density to b j_e = 72.17 m/s and choosing the degree of nonadiabaticity ξ = 0.05 and the damping α = 0.1 yields the damped precessional motion depicted in Figure 28.
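As a hedged illustration of the torque model (not the vortex solver itself), the Zhang-Li torque entering the LLG, τ = -b j_e/(1+ξ²) [m × (m × ∂m/∂x) + ξ m × ∂m/∂x] for a current along x, can be evaluated on a discretized magnetization with finite differences. The 1D wall profile and function names below are assumptions for demonstration, with b j_e playing the role of the 72.17 m/s used above.

```python
import numpy as np

def zhang_li_torque(m, bje, dx, xi=0.05):
    """Zhang-Li spin-transfer torque on a 1D magnetization track m (N, 3):
    tau = -bje/(1+xi^2) * [m x (m x dm/dx) + xi * m x dm/dx]
    for a current along x, with bje = b*j_e (units of m/s)."""
    dmdx = np.gradient(m, dx, axis=0)   # central finite differences
    adiabatic = np.cross(m, np.cross(m, dmdx))
    nonadiabatic = xi * np.cross(m, dmdx)
    return -bje / (1.0 + xi**2) * (adiabatic + nonadiabatic)

# illustrative head-to-head domain-wall profile with ~10 nm width
N, dx = 200, 1e-9
x = (np.arange(N) - N / 2) * dx
theta = 2.0 * np.arctan(np.exp(x / 10e-9))
m = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(N)])

tau = zhang_li_torque(m, bje=72.17, dx=dx)
```

Both torque contributions are cross products with m, so the resulting torque is everywhere perpendicular to the local magnetization, as required to preserve |m| = 1.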
As shown in Section 5.9.2, the spin-diffusion model is equivalent to the model of Zhang and Li in the case of a vanishing diffusion constant D_0. The equality of the two models can also be achieved with a finite D_0 by considering the limit of vanishing λ_sf and λ_J while preserving the ratio λ_J²/λ_sf² = ξ = 0.05. In this limit, the terms linear in D_0 become negligible and the resulting torque is the same as for vanishing D_0. In order to investigate the influence of diffusion effects on the magnetization dynamics, we consider a finite diffusion constant of D_0 = 10^-3 m²/s, which is a reasonable choice for magnetic materials [147]. Furthermore, we choose β' = 0.8 and β = 0.9, which yields a current density of j_e = 1.15×10^12 A/m² according to the required definition of b j_e. We perform dynamic simulations for different λ_sf while always choosing λ_J to fulfill (302). The resulting equilibrium magnetizations are summarized in Figure 29a. While the simulations with small diffusion lengths λ_sf and λ_J show perfect agreement with the results obtained from the model of Zhang and Li, there are significant deviations for larger diffusion lengths. These deviations are most significant in the x-shift of the vortex core, which is characterized by the y-component of the averaged magnetization. The same comparison for a different degree of nonadiabaticity, ξ = 0.5, yields significant deviations in the y-shift of the vortex core, see Figure 29b. These simulations demonstrate that the model of Zhang and Li is not accurate in the presence of spin diffusion. While the simplifications of the Zhang-Li model make it very attractive for computational micromagnetics, the results obtained from this model have to be evaluated with care. Simulation results for materials with large diffusion lengths might be useful for qualitative investigations. However, for an accurate description, the solution of the full diffusion model should be considered.
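The limit described above keeps the ratio λ_J²/λ_sf² fixed at ξ while both lengths shrink. A quick numerical check of this constraint (an illustrative helper, not solver code):

```python
import math

def lambda_j(lambda_sf, xi):
    """Choose lambda_J such that lambda_J^2 / lambda_sf^2 = xi."""
    return math.sqrt(xi) * lambda_sf

# shrink lambda_sf while keeping the ratio fixed at xi = 0.05
for lsf in (10e-9, 1e-9, 0.1e-9):
    lj = lambda_j(lsf, 0.05)
    assert abs(lj**2 / lsf**2 - 0.05) < 1e-12
```

For λ_sf = 10 nm and ξ = 0.05 this gives λ_J ≈ 2.24 nm, the value that also appears in the spin-torque oscillator example below.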
Spin-torque oscillator

Another spin-torque driven device with various potential applications is the spin-torque oscillator (STO) [48,49]. A simple STO consists of two magnetic layers separated by a nonmagnetic layer. One of the magnetic layers is designed to be very hard magnetic in order to act as a stable spin-polarizing layer. The other magnetic layer, referred to as the free layer, is soft magnetic and usually stabilized by an external field. By applying a current perpendicular to the layer system, the free layer is subject to spin torque, which drives its magnetization out of the external-field direction. By a suitable choice of current strength, the free-layer magnetization performs a precessional motion around the external field while the damping contribution of the external field is exactly compensated by the spin torque, see Figure 30.

Fig. 30. Schematic illustration of a field-stabilized spin-torque oscillator. The pinned layer acts as spin polarizer and the spin torque compensates the damping contribution from the external field.

Numerous variants of STOs have been proposed and simulated with the spin-torque model of Slonczewski [54,148-150]. The model of Slonczewski is in principle perfectly suited for the simulation of STOs. However, the input parameters to this model are the angular dependencies η_damp and η_field, which cannot be trivially derived from the geometry and material parameters of the system. The spin-diffusion model proves very useful for the investigation of geometry-dependent properties of such devices. We simulate a cylindrical system with a radius of R = 30 nm, a free-layer thickness of d_free = 3 nm and a spacer-layer thickness of d_spacer = 1.5 nm. The material parameters of the magnetic layers are chosen as α = 0.1, A = 2.8×10^-11 J/m, D_0 = 10^-3 m²/s, β = 0.8, β' = 0.9, λ_sf = 10 nm, λ_J = 2.24 nm.
For the free layer we further choose µ_0 M_s = 1 T and K_u1 = 0, and for the spin-polarizing layer we choose µ_0 M_s = 1.24 T and K_u1 = 10^6 J/m³ with a perpendicular anisotropy axis. The nonmagnetic spacer layer is simulated with material parameters D_0 = 5×10^-3 m²/s and λ_sf = 10 nm. We set the external field µ_0 H = 0.6 T in z-direction and a current density j_e = 4×10^11 A/m² in negative z-direction, as shown in Figure 30. This system is simulated with varying polarization-layer thicknesses with the spin-diffusion model with prescribed current density. The resulting oscillation frequencies and tilting angles are shown in Figure 31. The qualitative dependence of the frequency and tilting angle on the thickness of the polarizing layer does not come as a surprise, since a thicker polarizing layer will obviously lead to a higher spin polarization of the electrons in the free layer. However, the spin-diffusion model does not only account for geometry changes, but also for changes of material parameters in distinct layers. In this respect it outperforms the simple model of Slonczewski which, on the other hand, has a much lower computational complexity that makes it a fast alternative to the spin-diffusion model for many problem settings.

Spin-orbit torque MRAM

In a final numerical experiment, we demonstrate the capabilities of the self-consistent spin-diffusion model with spin-orbit interactions. We aim to simulate the write and read process of a perpendicular spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) [151,152]. Consider a circular magnetic multilayer with a radius of R = 10 nm consisting of 4 layers with thicknesses 1 nm, 0.5 nm, 1 nm and 4 nm from bottom to top, see Figure 32. Of these 4 layers, only the bottom layer (free layer) and the third layer from below (pinned layer) are magnetic, while all layers are conducting.
The circular stack is centered on top of a rectangular conducting underlayer with dimensions 50 nm × 50 nm × 10 nm and the complete structure is meshed with a prescribed mesh size of 2 nm. We define the two faces of the underlayer lying in the yz-plane as contact 1 and contact 2, respectively, and the top interface of the circular stack as contact 3. Material parameters in the conducting underlayer are chosen as D_0 = 10^-3 m²/s, C_0 = 6×10^6 A/Vm, τ_sf = 2×10^-15 s, θ = 0.3, which is typical for heavy metals such as Ta that give rise to the spin Hall effect. For the magnetic free layer we choose material parameters typical for perpendicular MRAM, namely M_s = 0.796×10^6 A/m, α = 0.02, A = 16×10^-12 J/m, K_u1 = 0.4×10^6 J/m³, D_0 = 10^-3 m²/s, C_0 = 10^6 A/Vm, β = 0.9, β' = 0.8, τ_sf = 5×10^-14 s, J = 2.1×10^-17 J and θ = 0, with the anisotropy axis pointing in z-direction. For the pinned layer, we choose the same parameters, but with a higher anisotropy K_u1 = 10^6 J/m³. The nonmagnetic spacer and cap layers are simulated with parameters similar to Ag: D_0 = 5×10^-3 m²/s, C_0 = 6×10^6 A/Vm, τ_sf = 1.225×10^-13 s and θ = 0.0.

For the simulation of the write process, the magnetic layers are initialized in positive z-direction. By applying an in-plane electric current in the underlayer, the spin Hall effect gives rise to a spin current in the z-direction. This leads to spin accumulation in the multilayer stack and thus to spin torque. The torque on the free layer is expected to be much larger than on the pinned layer due to its direct neighborhood with the underlayer. Moreover, the pinned layer has a much higher K_u1 than the free layer. Hence, the free layer is expected to switch easily, while the pinned layer is expected to preserve its magnetization configuration even at high currents.
In a first numerical experiment, we determine the current-dependent effective-field contributions, namely the spin accumulation s and the Oersted field H_c, for the initial magnetization configuration. We prescribe a constant electric potential at contact 1 (u = 0) and a constant current outflow at contact 2 (j_e = 10^12 A/m²). All remaining interfaces are treated with homogeneous Neumann conditions (∂j_e/∂n = 0). The resulting fields are plotted in Figure 33. Both the spin accumulation and the Oersted field exhibit a curl-like structure, but with opposite sign. In contrast to the Oersted field, the spin accumulation is much smaller in the stack compared to the underlayer. This is due to the interplay of the spin accumulation with the magnetization in the magnetic stack layers.

In the next experiment, we perform time integration in order to resolve the switching of the free layer. In order to enable the spin-orbit torque driven switching, we apply an additional external field H_zee = (-31830, 0, 0) A/m. Since an electric current in x-direction generates a spin current with polarization y in the z-direction, this additional field is required for the perpendicular switching of the magnetization [152]. Otherwise, the spin torque would draw the magnetization toward the xy-plane and the magnetization would return to its initial configuration once the current is turned off. In a first step, we perform time integration of the LLG (117) including the external field H_zee, the exchange field H_ex, the demagnetization field H_dem and the anisotropy field H_ani, starting from the initial configuration, in order to find an energetic minimum for the system without electric current. After relaxation, we apply a constant current pulse of 10^12 A/m² for the first 0.5 ns that linearly decays to 0 within another 0.5 ns. This pulse is applied in the same fashion as for the field computations, i.e. u = 0 at contact 1 and j_e set as Neumann boundary condition at contact 2.
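The pulse shape described above (constant for 0.5 ns, then a linear 0.5 ns ramp to zero) can be expressed as a simple function of time; the following sketch is illustrative and not the actual boundary-condition interface of the solver:

```python
def current_pulse(t, j_max=1e12, t_hold=0.5e-9, t_ramp=0.5e-9):
    """Charge-current density (A/m^2) applied at contact 2: constant
    j_max for t <= t_hold, then linear decay to zero over t_ramp."""
    if t < 0.0:
        return 0.0
    if t <= t_hold:
        return j_max
    if t <= t_hold + t_ramp:
        return j_max * (1.0 - (t - t_hold) / t_ramp)
    return 0.0
```

Evaluating this function in every time step of the LLG integration yields the Neumann boundary value for the electric-current problem at the given simulation time.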
In addition to the effective-field contributions considered for the relaxation process, the spin torque due to the spin accumulation as well as the Oersted field are included in the simulation. The resulting magnetization dynamics are shown in Figure 34. While the pinned-layer magnetization remains completely fixed in z-direction, the free-layer magnetization performs a fast switch during the current pulse and then relaxes into the -z-direction.

As a final experiment, we simulate the read process of the SOT-MRAM. In order to read the magnetization of the free layer, the magnetization-dependent resistance of the magnetic multilayer is exploited. When applying a current through the multilayer stack, the resistance of the structure changes either due to the giant magnetoresistance (GMR) in the case of a conducting spacer or due to the TMR in the case of an insulating spacer. In both cases, the multilayer is expected to have a low resistance in the case of parallel alignment of the free layer and the pinned layer and a high resistance in the case of antiparallel alignment. For the readout we apply constant potential boundary conditions u = 0 at both contact 1 and contact 2 and set a constant current outflow j_e = 10 A/m² at contact 3. The pinned-layer magnetization is homogeneously set in z-direction while the free-layer magnetization is homogeneously set to m_free = (0, sin θ, cos θ). The coupled system for the spin accumulation and the electric potential is solved for various angles and the potential difference between contact 3 and contact 1/2 is evaluated. Figure 35 shows the simulation results. As expected, the potential, and thus the resistance, of the stack is higher for antiparallel configurations. The difference in resistance between the parallel (R_p) and antiparallel (R_a) configurations is significant: (R_a - R_p)/R_a ≈ 23%.
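Between the parallel and antiparallel limits, a common phenomenological assumption (not derived from the spin-diffusion simulation above) is a cosine interpolation of the stack resistance over the tilting angle; with the approximately 23% resistance ratio quoted above:

```python
import math

def stack_resistance(theta, r_p, r_a):
    """Cosine interpolation between the parallel (theta = 0) and
    antiparallel (theta = pi) stack resistance -- a phenomenological model."""
    return r_p + (r_a - r_p) * (1.0 - math.cos(theta)) / 2.0

r_a = 1.0                  # arbitrary units
r_p = r_a * (1.0 - 0.23)   # reproduces (r_a - r_p) / r_a = 23%

# the parallel and antiparallel limits recover r_p and r_a
assert abs(stack_resistance(0.0, r_p, r_a) - r_p) < 1e-12
assert abs(stack_resistance(math.pi, r_p, r_a) - r_a) < 1e-12
```

The simulated angular dependence in Figure 35 can be compared against such an interpolation to quantify deviations from the simple cosine behavior.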
This proves that the presented model includes all crucial physical effects for the self-consistent simulation of both the write process and the read process of an SOT-MRAM device.

Conclusion

By bridging the gap between experiment and simple analytical models, computational micromagnetics has proven to be an invaluable tool for the development of magnetic devices. With the rise of spintronics, many extensions to the classical micromagnetic model have been proposed in order to account for the bidirectional coupling of spin-polarized currents and the magnetization configuration. Among the existing models, the spin-diffusion model by Zhang, Levy and Fert is one of the most complete approaches. However, besides other shortcomings, the spin-diffusion model is only valid for diffusive transport, which renders this model useless for the accurate description of tunnel barriers. Since the spin transport through tunnel barriers is dominated by quantum-mechanical effects, existing models mostly rely on ab initio techniques. The efficient integration of such methods with the micromagnetic model is a challenging yet important task for future research.

The micromagnetic spintronics models introduced in this work already cover a lot of applications. However, detailed knowledge of the applied model and its limitations is crucial for the successful application of micromagnetic simulations. While a complex model might be required in order to understand certain details of magnetization dynamics, for other applications a simpler model might provide sufficient detail and allow one to quickly perform a large number of simulations. Moreover, the choice of discretization method has a significant impact on the accuracy and computation time and should be made carefully depending on the problem at hand.

Fig. 1. Limiting procedure for the demagnetization-field calculation of a finite magnetic body.
The magnetization M is defined in the magnetic region Ω_m and continuously decreases in the transitional shell region Ω_t. The overall continuous and differentiable definition of M allows for the application of Green's theorem.

Fig. 2. Magnetic vortex configuration in a square-shaped thin film. The magnetization can be roughly divided into four triangular domains, each aligned with one of the film edges. In the vortex core the magnetization points out of plane.

Fig. 3. Magnetic skyrmion configuration in a circular thin film. This configuration is characterized by a continuous rotation of the magnetization m across the center.

Fig. 4. Synthetic antiferromagnet. Two magnetic layers with perpendicular crystalline anisotropy are antiferromagnetically coupled through a nonmagnetic layer by the RKKY interaction.

Fig. 5. Absolute and fixed-body coordinates for the magnetization m. The magnetization is described by the spherical coordinates θ and φ. The fixed-body frame is constructed such that the third axis e'_3 is parallel to the magnetization vector m.

Fig. 6. Visualization of the contributions to the magnetization dynamics as described by the LLG equation. (a) Precessional motion around the effective field H_eff. (b) Dissipative motion of the magnetization toward H_eff. (c) Combined precessional and dissipative motion as described by the LLG. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.

Fig. 8. Schematic illustration of the most important scattering processes in a multilayer with magnetic layers FM1, FM2 and nonmagnetic layers NM subject to a current perpendicular to the layers. (a) Antiparallel magnetization configuration of FM1 and FM2. Electrons with polarization opposite to that of FM1 are scattered before entering the layer. Hence, FM1 acts as a spin polarizer and FM2 is subject to spin torque.
FM1 is stabilized by electrons scattered by FM2. (b) Parallel magnetization configuration of FM1 and FM2. FM1 acts as spin polarizer, leading to a stabilization of FM2. Scattered electrons from FM2 exert spin torque on FM1. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 9. Generalization of the macrospin model of Slonczewski to laterally inhomogeneous magnetization configurations. The magnetization configuration of the polarizing layer FM1 p is projected onto the magnetic free layer FM2. The model of Slonczewski is applied locally by evaluating the angle ϑ(x) between the projected polarization p and the local magnetization m. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 10. Spin torque in magnetic domain walls as predicted by the model of Zhang and Li. Spin polarization is carried in the direction of electron motion and exerts a torque according to the local gradient of the magnetization. The magnetization is depicted with desaturated colors while the polarization of the conducting electrons, which are responsible for the spin torque, is depicted with saturated colors. The top row illustrates an electron motion from left to right. The bottom row illustrates an electron motion from right to left. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

e being the elementary charge. Instead of the coupling constant b, this model is often defined in terms of the spin-drift velocity u = b j_e, which has the dimension of a velocity.
Moreover, the letter β is often used as the degree of nonadiabaticity instead of ξ.

Fig. 11. Spin accumulation s for typical magnetization configurations. (a) Spin accumulation in a magnetic multilayer with magnetic layers FM1 and FM2. s_z is shown for a parallel and an antiparallel magnetization configuration in ±z-direction. (b) Spin accumulation due to a magnetic domain wall. The magnetization configuration m_z is plotted along with the resulting spin accumulation s_z. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 12. Illustration of a typical magnetic multilayer system. The magnetic layers FM1 and FM2 are separated by a nonmagnetic spacer layer and sandwiched by nonmagnetic leads. Ω denotes the volume of the complete system and Ω_m denotes the volume of the magnetic layers. The bottom and top surfaces of the complete system are denoted by Γ_1 and Γ_2. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 13. Illustration of current conversion due to the spin Hall and inverse spin Hall effect. (a) Spin Hall effect. A non-polarized current is subject to spin splitting due to spin-orbit coupling. The result is the conversion of a charge current j_e into a spin current j_s perpendicular to j_e. (b) Inverse spin Hall effect. A pure spin current j_s is considered to be constituted by two charge currents of opposite direction and spin polarization. The spin-orbit induced deflection of the polarized electrons leads to the creation of a charge current j_e perpendicular to j_s. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol.
1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 14. Magnetic-nonmagnetic interface in the Valet-Fert model. The interface normal is defined to point from the magnetic layer Ω_m to the nonmagnetic layer Ω_n. Function evaluations at the interface are marked either by the superscript − to denote evaluation in Ω_m or by the superscript + to denote evaluation in Ω_n.

Fig. 15. Spin-torque angular dependence η fitted to the simulation results of the spin-diffusion model. (a) Angular dependence for a symmetric multilayer with two similar magnetic layers. The original model by Slonczewski shows a good agreement with the spin-diffusion results. (b) Angular dependence for an asymmetric multilayer. The original model by Slonczewski is insufficient; the generalized model shows a good agreement. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:10.1007/978-3-319-42913-7_76-1, © 2019.)

Fig. 16. Spatial discretization of a sphere. (a) Regular cuboid grid with 8217 cells as required by finite-difference methods. (b) Tetrahedral mesh with 7149 vertices as required by FEMs. (Reprinted by permission from Springer Nature, Spintronics in Micromagnetics in Springer Handbook of Materials Modeling, Vol. 1 Methods: Theory and Modeling, https://doi.org/doi:

Fig. 17. Examples of regular grids in two dimensions as required for the convolutional computation of the demagnetization field. (a) Irregularly shaped but periodic mesh. (b) Cuboid mesh as usually used for finite-difference computations.

where 1_{Ω_ref} denotes the indicator function of Ω_ref, the multiindex i addresses the simulation cell and x_i denotes the offset of the simulation cell Ω_i from the reference cell Ω_ref, see

Fig. 18. Visualization of the discrete convolution of the magnetization field M with the demagnetization-tensor field Ñ.
The color blocks in the result matrix represent the multiplications of the respective input values.

Fig. 19. Visualization of the virtual cells, marked with a blue background color, introduced for the finite-difference implementation of boundary conditions.

Fig. 20. Piecewise affine, globally continuous basis functions for the finite-element method in two dimensions. (a) Single basis function. (b) Possible discrete function obtained by superposition of basis functions.

requirements on the solution u. While the original equation requires the solution to be twice differentiable, the weak form requires u to be only once differentiable almost everywhere. Dirichlet conditions are applied on Γ_D ⊂ ∂Ω by restricting the solution space to functions satisfying the Dirichlet condition

u(x) = u_D  ∀ x ∈ Γ_D ⊂ ∂Ω    (221)

Fig. 21. Finite-element mesh for the computation of the demagnetization field with the truncation approach. (a) The magnetic region is marked red and the external region, marked green, is chosen to be approximately five times larger than the magnetic region in each spatial dimension. (b) The magnetic region, as the region of interest, is discretized with a small mesh size. In the external region the mesh is coarsened toward the outer boundary in order to reduce the overall number of mesh nodes.

decay outside of the magnetic region, the open boundary conditions can be approximated by applying homogeneous Dirichlet or Neumann conditions to the outer boundary of the problem domain ∂Ω with Ω = Ω_m ∪ Ω_e. Starting from the problem definition (14), the magnetic scalar potential is given by the weak formulation

∫_Ω ∇·(∇u − M_s m) v dx = 0.    (230)

Performing integration by parts and restricting both the solution function u and the test function v to the function

Fig. 22. Visualization of the shell-transformation method for the demagnetization-field computation.
(a) Definition of regions: the shell domain Ω_s that is subject to the transformation surrounds the inner domain Ω, which is defined as the union of the magnetic domain Ω_m and the external domain Ω_e. (b) The transformation in a spherical shell is performed in a radial fashion.

Fig. 23. Gradient computation of a piecewise affine, globally continuous function u. The analytical calculation yields a piecewise constant gradient, marked blue. Projection onto the function space of piecewise affine, globally continuous functions yields the green curve.

Fig. 24. Illustration of the string method for the computation of minimum energy paths between two magnetization configurations m_0 and m_n in two dimensions. An initial transition path is discretized with a finite number of magnetization images and evolved toward the minimum.

Fig. 26. Relative time consumption for different parts of the solution process. Comparison of a finite-difference solver (FDM) with a finite-element solver (FEM).

Fig. 27. Performance comparison of a finite-difference solver (FDM) with a finite-element solver (FEM). The x-axes denote the number of cells in the case of FDM and the number of mesh nodes in the case of FEM. The upper plot shows the number of required right-hand-side evaluations for the computation of the first 2 ns of the standard problem #4. The lower plot shows the overall computation time for single-core computations on an Intel Core i7 system.

Fig. 28. Time evolution of the averaged magnetization components for the current-induced motion of a magnetic vortex according to the µMAG standard problem #5. The inset shows the new equilibrium vortex configuration.

Fig. 29. Averaged magnetization components for equilibrium vortex configurations according to the µMAG standard problem #5 computed with the spin-diffusion model for different diffusion lengths λ_sf. (a) Results for ξ = 0.05. (b) Results for ξ = 0.5.

Fig. 31.
Oscillation frequency and tilting angle of the free-layer magnetization for varying pinned-layer thicknesses.

Fig. 32. Geometry and domains of a model system consisting of a magnetic multilayer structure on top of a heavy-metal strip.

Fig. 33. Effective-field contributions computed for an electric current in x-direction and both free-layer and pinned-layer magnetization pointing in z-direction: (a) spin accumulation, (b) Oersted field.

Fig. 34. Time evolution of the averaged magnetization component ⟨m_z⟩ in the free layer and the pinned layer during switching due to spin-orbit torque generated by a current pulse in the heavy-metal layer.

Fig. 35. Potential difference between contact 3 and contact 1/2 for various tilting angles of the free-layer and the pinned-layer magnetization at constant current.

Acknowledgments Open access funding provided by University of Vienna. The author would like to thank Prof. Dieter Suess and Prof. Thomas Schrefl for endless discussions and valuable input concerning this article. The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development is gratefully acknowledged.

Author contribution statement Claas Abert developed the idea of this article, performed all simulations and wrote the manuscript.

Open Access This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

T. Schrefl, G. Hrkac, S. Bance, D. Suess, O. Ertl, J.
Fidler, Numerical methods in micromagnetics (finite element method), in Handbook of Magnetism and Advanced Magnetic Materials (John Wiley & Sons, NJ, 2007)

Y. Huai, Spin-transfer torque MRAM (STT-MRAM): challenges and prospects, AAPPS Bull. 18, 33 (2008)

S.S. Parkin, C. Kaiser, A. Panchula, P.M. Rice, B. Hughes, M. Samant, S.-H. Yang, Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers, Nat. Mater. 3, 862 (2004)

W. Granig, C. Kolle, D. Hammerschmidt, B. Schaffer, R. Borgschulze, C. Reidl, J. Zimmer, Integrated gigant magnetic resistance based angle sensor, in Proceedings of the IEEE Sensors (2006), pp. 542-545

W.F. Brown, Jr., Micromagnetics (Interscience Publisher, New York, 1963)

W. Döring, Über die Trägheit der Wände zwischen Weißschen Bezirken, Z. Naturforsch. A 3, 373 (1948)

H. Kronmüller, General micromagnetic theory, in Handbook of Magnetism and Advanced Magnetic Materials (John Wiley & Sons, NJ, 2007)

J.E. Miltat, M.J.
Donahue, Numerical micromagnetics: finite difference methods, in Handbook of Magnetism and Advanced Magnetic Materials (John Wiley & Sons, NJ, 2007)

J. Leliaert, M. Dvornik, J. Mulkers, J.D. Clercq, M.V. Milošević, B.V. Waeyenberge, Fast micromagnetic simulations on GPU -- recent advances made with mumax3, J. Phys. D 51, 123002 (2018)

J.D. Jackson, Classical Electrodynamics (John Wiley & Sons, NJ, 2012)

D.J. Griffiths, Introduction to Quantum Mechanics (Prentice Hall, New Jersey, 1994)

W. Döring, Mikromagnetismus, in Handbuch der Physik, edited by S. Flügge (Springer, Berlin, Heidelberg, 1966), Vol. 18/2, pp. 314-437

A. Hubert, R. Schäfer, Magnetic Domains (Springer, Berlin, 1998)

I. Dzyaloshinsky, A thermodynamic theory of weak ferromagnetism of antiferromagnetics, J. Phys. Chem. Solids 4, 241 (1958)

T. Moriya, Anisotropic superexchange interaction and weak ferromagnetism, Phys. Rev. 120, 91 (1960)
X Yu, M Mostovoy, Y Tokunaga, W Zhang, K Kimoto, Y Matsui, Y Kaneko, N Nagaosa, Y Tokura, Proc. Natl. Acad. Sci. 1098856X. Yu, M. Mostovoy, Y. Tokunaga, W. Zhang, K. Kimoto, Y. Matsui, Y. Kaneko, N. Nagaosa, Y. Tokura, Magnetic stripes and skyrmions with helicity reversals, Proc. Natl. Acad. Sci. 109, 8856 (2012) Chiral symmetry breaking in magnetic thin films and multilayers. A Bogdanov, U Rößler, Phys. Rev. Lett. 8737203A. Bogdanov, U. Rößler, Chiral symmetry breaking in magnetic thin films and multilayers, Phys. Rev. Lett. 87, 037203 (2001) Influence of the Dzyaloshinskii-Moriya interaction on the spin-wave spectra of thin films. D Cortés-Ortuño, P Landeros, J. Phys. Condens. Matter. 25156001D. Cortés-Ortuño, P. Landeros, Influence of the Dzyaloshinskii-Moriya interaction on the spin-wave spec- tra of thin films, J. Phys. Condens. Matter 25, 156001 (2013) Indirect exchange coupling of nuclear magnetic moments by conduction electrons. M A Ruderman, C Kittel, Phys. Rev. 9699M.A. Ruderman, C. Kittel, Indirect exchange coupling of nuclear magnetic moments by conduction electrons, Phys. Rev. 96, 99 (1954) A theory of metallic ferro-and antiferromagnetism on Zener's model. T Kasuya, Prog. Theor. Phys. 1645T. Kasuya, A theory of metallic ferro-and antiferromag- netism on Zener's model, Prog. Theor. Phys. 16, 45 (1956) Magnetic properties of Cu-Mn alloys. K Yosida, Phys. Rev. 106893K. Yosida, Magnetic properties of Cu-Mn alloys, Phys. Rev. 106, 893 (1957) How to include magnetostriction in micromagnetic models of titanomagnetite grains. K Fabian, F Heider, Geophys. Res. Lett. 232839K. Fabian, F. Heider, How to include magnetostric- tion in micromagnetic models of titanomagnetite grains, Geophys. Res. Lett. 23, 2839 (1996) Micromagnetic modeling of magnetostrictive materials under intrinsic stress. Y Shu, M Lin, K Wu, Mech. Mater. 36975Y. Shu, M. Lin, K. Wu, Micromagnetic modeling of magnetostrictive materials under intrinsic stress, Mech. Mater. 
36, 975 (2004) Micromagnetic dynamic computations including eddy currents. L Torres, L Lopez-Diaz, E Martinez, O , IEEE Trans. Magn. 392498L. Torres, L. Lopez-Diaz, E. Martinez, O. Alejos, Micro- magnetic dynamic computations including eddy currents, IEEE Trans. Magn. 39, 2498 (2003) Three-dimensional micromagnetic finite element simulations including eddy currents. G Hrkac, M Kirschner, F Dorfbauer, D Suess, O Ertl, J Fidler, T Schrefl, J. Appl. Phys. 97G. Hrkac, M. Kirschner, F. Dorfbauer, D. Suess, O. Ertl, J. Fidler, T. Schrefl, Three-dimensional micromagnetic finite element simulations including eddy currents, J. Appl. Phys. 97, 10E311 (2005) . R Hertel, A Kákay, J. Magn. Magn. Mater. 369189R. Hertel, A. Kákay, J. Magn. Magn. Mater. 369, 189 (2014) Scalable parallel micromagnetic solvers for magnetic nanostructures. W Scholz, J Fidler, T Schrefl, D Suess, H Forster, V Tsiantos, Comput. Mater. Sci. 28366W. Scholz, J. Fidler, T. Schrefl, D. Suess, H. Forster, V. Tsiantos, et al., Scalable parallel micromagnetic solvers for magnetic nanostructures, Comput. Mater. Sci. 28, 366 (2003) Fast switching of magnetic nanoparticles: Simulation of thermal noise effects using the Langevin dynamics. D V Berkov, IEEE Trans. Magn. 382489D.V. Berkov, Fast switching of magnetic nanoparticles: Simulation of thermal noise effects using the Langevin dynamics, IEEE Trans. Magn. 38, 2489 (2002) Langevin dynamic simulation of spin waves in a micromagnetic model. O Chubykalo, J Hannay, M Wongsam, R Chantrell, J Gonzalez, Phys. Rev. B. 65184428O. Chubykalo, J. Hannay, M. Wongsam, R. Chantrell, J. Gonzalez, Langevin dynamic simulation of spin waves in a micromagnetic model, Phys. Rev. B 65, 184428 (2002) Fokker-Planck and Landau-Lifshitz-Bloch equations for classical ferromagnets. D A Garanin, Phys. Rev. B. 553050D.A. Garanin, Fokker-Planck and Landau-Lifshitz-Bloch equations for classical ferromagnets, Phys. Rev. 
B 55, 3050 (1997) Micromagnetic modeling of laser-induced magnetization dynamics using the Landau-Lifshitz-Bloch equation. U Atxitia, O Chubykalo-Fesenko, N Kazantseva, D Hinzke, U Nowak, R W Chantrell, Appl. Phys. Lett. 91232507U. Atxitia, O. Chubykalo-Fesenko, N. Kazantseva, D. Hinzke, U. Nowak, R.W. Chantrell, Micromagnetic modeling of laser-induced magnetization dynamics using the Landau-Lifshitz-Bloch equation, Appl. Phys. Lett. 91, 232507 (2007) Stochastic form of the Landau-Lifshitz-Bloch equation. R F L Evans, D Hinzke, U Atxitia, U Nowak, R W Chantrell, O Chubykalo-Fesenko, Phys. Rev. B. 8514433R.F.L. Evans, D. Hinzke, U. Atxitia, U. Nowak, R.W. Chantrell, O. Chubykalo-Fesenko, Stochastic form of the Landau-Lifshitz-Bloch equation, Phys. Rev. B 85, 014433 (2012) On the theory of the dispersion of magnetic permeability in ferromagnetic bodies. L D Landau, E M Lifshitz, Phys. Z. Sowjetunion. 8153L.D. Landau, E.M. Lifshitz, On the theory of the disper- sion of magnetic permeability in ferromagnetic bodies, Phys. Z. Sowjetunion 8, 153 (1935) A Lagrangian formulation of the gyromagnetic equation of the magnetic field. T L Gilbert, Phys. Rev. 1001243T.L. Gilbert, A Lagrangian formulation of the gyromag- netic equation of the magnetic field, Phys. Rev. 100, 1243 (1955) A phenomenological theory of damping in ferromagnetic materials. T L Gilbert, IEEE Trans. Magn. 403443T.L. Gilbert, A phenomenological theory of damping in ferromagnetic materials, IEEE Trans. Magn. 40, 3443 (2004) L D Landau, E M Lifshitz, Mechanics, Course of Theoretical Physics. OxfordPergamon PressL.D. Landau, E.M. Lifshitz, Mechanics, in Course of Theoretical Physics (Pergamon Press, Oxford, 1969) Magnetization dynamics, gyromagnetic relation, and inertial effects. J.-E Wegrowe, M.-C Ciornei, Am. J. Phys. 80607J.-E. Wegrowe, M.-C. Ciornei, Magnetization dynamics, gyromagnetic relation, and inertial effects, Am. J. Phys. 
80, 607 (2012) Current-induced switching in transport through anisotropic magnetic molecules. N Bode, L Arrachea, G S Lozano, T S Nunner, F Von Oppen, Phys. Rev. B. 85115440N. Bode, L. Arrachea, G.S. Lozano, T.S. Nunner, F. von Oppen, Current-induced switching in transport through anisotropic magnetic molecules, Phys. Rev. B 85, 115440 (2012) Geometrical integration of Landau-Lifshitz-Gilbert equation based on the mid-point rule. M Aquino, C Serpico, G Miano, J. Comput. Phys. 209730M. d'Aquino, C. Serpico, G. Miano, Geometrical inte- gration of Landau-Lifshitz-Gilbert equation based on the mid-point rule, J. Comput. Phys. 209, 730 (2005) A survey on the numerics and computations for the Landau-Lifshitz equation of micromagnetism. I Cimrák, Arch. Comput. Methods Eng. 151I. Cimrák, A survey on the numerics and computations for the Landau-Lifshitz equation of micromagnetism, Arch. Comput. Methods Eng. 15, 1 (2007) Giant magnetoresistance of (001) Fe/(001) Cr magnetic superlattices. M N Baibich, J M Broto, A Fert, F N Van Dau, F Petroff, P Etienne, G Creuzet, A Friederich, J Chazelas, Phys. Rev. Lett. 612472M.N. Baibich, J.M. Broto, A. Fert, F.N. Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, J. Chazelas, Giant magnetoresistance of (001) Fe/(001) Cr magnetic superlattices, Phys. Rev. Lett. 61, 2472 (1988) Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange. G Binasch, P Grünberg, F Saurenbach, W Zinn, Phys. Rev. B. 39G. Binasch, P. Grünberg, F. Saurenbach, W. Zinn, Enhanced magnetoresistance in layered magnetic struc- tures with antiferromagnetic interlayer exchange, Phys. Rev. B 39, 4828 1989 Current-driven excitation of magnetic multilayers. J C Slonczewski, J. Magn. Magn. Mater. 1591J.C. Slonczewski, Current-driven excitation of magnetic multilayers, J. Magn. Magn. Mater. 159, L1 (1996) Emission of spin waves by a magnetic multilayer traversed by a current. L Berger, Phys. Rev. B. 549353L. 
Berger, Emission of spin waves by a magnetic mul- tilayer traversed by a current, Phys. Rev. B 54, 9353 (1996) Role of spin-dependent interface scattering in generating currentinduced torques in magnetic multilayers. X Waintal, E B Myers, P W Brouwer, D Ralph, Phys. Rev. B. 6212317X. Waintal, E.B. Myers, P.W. Brouwer, D. Ralph, Role of spin-dependent interface scattering in generating current- induced torques in magnetic multilayers, Phys. Rev. B 62, 12317 (2000) Spin torque switching of perpendicular Ta-CoFeB-MgO-based magnetic tunnel junctions. D Worledge, G Hu, D W Abraham, J Sun, P Trouilloud, J Nowak, S Brown, M Gaidis, E O&apos;sullivan, R Robertazzi, Appl. Phys. Lett. 9822501D. Worledge, G. Hu, D.W. Abraham, J. Sun, P. Trouilloud, J. Nowak, S. Brown, M. Gaidis, E. O'sullivan, R. Robertazzi, Spin torque switching of perpendicular Ta-CoFeB-MgO-based magnetic tunnel junctions, Appl. Phys. Lett. 98, 022501 (2011) . D Houssameddine, U Ebels, B Delaët, B Rodmacq, I Firastrau, F Ponthenier, M Brunet, C Thirion, J.-P , D. Houssameddine, U. Ebels, B. Delaët, B. Rodmacq, I. Firastrau, F. Ponthenier, M. Brunet, C. Thirion, J.-P. Spin-torque oscillator using a perpendicular polarizer and a planar free layer. L Michel, Prejbeanu-Buda, Nat. Mater. 6447Michel, L. Prejbeanu-Buda et al., Spin-torque oscillator using a perpendicular polarizer and a planar free layer, Nat. Mater. 6, 447 (2007) Spin-torque oscillators, Solid State Phys. J.-V Kim, 63217J.-V. Kim, Spin-torque oscillators, Solid State Phys. 63, 217 (2012) Spin transfer torques. D C Ralph, M D Stiles, J. Magn. Magn. Mater. 3201190D.C. Ralph, M.D. Stiles, Spin transfer torques, J. Magn. Magn. Mater. 320, 1190 (2008) Currents and torques in metallic magnetic multilayers. J Slonczewski, J. Magn. Magn. Mater. 247324J. Slonczewski, Currents and torques in metallic magnetic multilayers, J. Magn. Magn. Mater. 247, 324 (2002) Macrospin models of spin transfer dynamics. J Xiao, A Zangwill, M D Stiles, Phys. Rev. B. 
7214446J. Xiao, A. Zangwill, M.D. Stiles, Macrospin models of spin transfer dynamics, Phys. Rev. B 72, 014446 (2005) Micromagnetic simulation of spin transfer torque switching by nanosecond current pulses. D Apalkov, M Pakala, Y Huai, J. Appl. Phys. 99D. Apalkov, M. Pakala, Y. Huai, Micromagnetic simu- lation of spin transfer torque switching by nanosecond current pulses, J. Appl. Phys. 99, 08B907 (2006) Magnetization dynamics in a dual free-layer spin-torque nano-oscillator. G E Rowlands, I N Krivorotov, Phys. Rev. B. 8694425G.E. Rowlands, I.N. Krivorotov, Magnetization dynam- ics in a dual free-layer spin-torque nano-oscillator, Phys. Rev. B 86, 094425 (2012) Spin-torque driven magnetization dynamics: micromagnetic modeling. D V Berkov, J Miltat, J. Magn. Magn. Mater. 3201238D.V. Berkov, J. Miltat, Spin-torque driven magnetiza- tion dynamics: micromagnetic modeling, J. Magn. Magn. Mater. 320, 1238 (2008) . Eur. Phys. J. B. 92Eur. Phys. J. B (2019) 92: 120 Page 43 of 45 Magnetic domainwall racetrack memory. S S Parkin, M Hayashi, L Thomas, Science. 320190S.S. Parkin, M. Hayashi, L. Thomas, Magnetic domain- wall racetrack memory, Science 320, 190 (2008) Roles of nonequilibrium conduction electrons on the magnetization dynamics of ferromagnets. S Zhang, Z Li, Phys. Rev. Lett. 93127204S. Zhang, Z. Li, Roles of nonequilibrium conduction elec- trons on the magnetization dynamics of ferromagnets, Phys. Rev. Lett. 93, 127204 (2004) Mechanisms of spin-polarized current-driven magnetization switching. S Zhang, P Levy, A Fert, Phys. Rev. Lett. 88236601S. Zhang, P. Levy, A. Fert, Mechanisms of spin-polarized current-driven magnetization switching, Phys. Rev. Lett. 88, 236601 (2002) Current-induced spin orientation of electrons in semiconductors. M Dyakonov, V Perel, Phys. Lett. A. 35459M. Dyakonov, V. Perel, Current-induced spin orientation of electrons in semiconductors, Phys. Lett. A 35, 459 (1971) Spin Hall effect. J Hirsch, Phys. Rev. Lett. 831834J. 
Hirsch, Spin Hall effect, Phys. Rev. Lett. 83, 1834 (1999) Dissipationless quantum spin current at room temperature. S Murakami, N Nagaosa, S.-C Zhang, Science. 3011348S. Murakami, N. Nagaosa, S.-C. Zhang, Dissipationless quantum spin current at room temperature, Science 301, 1348 (2003) Universal intrinsic spin Hall effect. J Sinova, D Culcer, Q Niu, N Sinitsyn, T Jungwirth, A Macdonald, Phys. Rev. Lett. 92126603J. Sinova, D. Culcer, Q. Niu, N. Sinitsyn, T. Jungwirth, A. MacDonald, Universal intrinsic spin Hall effect, Phys. Rev. Lett. 92, 126603 (2004) Magnetoresistance due to edge spin accumulation. M Dyakonov, Phys. Rev. Lett. 99126601M. Dyakonov, Magnetoresistance due to edge spin accu- mulation, Phys. Rev. Lett. 99, 126601 (2007) Unified drift-diffusion theory for transverse spin currents in spin valves, domain walls, and other textured magnets. C Petitjean, D Luc, X , Phys. Rev. Lett. 109117204C. Petitjean, D. Luc, X. Waintal, Unified drift-diffusion theory for transverse spin currents in spin valves, domain walls, and other textured magnets, Phys. Rev. Lett. 109, 117204 (2012) Role of spin diffusion in current-induced domain wall motion for disordered ferromagnets. C A Akosa, W.-S Kim, A Bisig, M Kläui, K.-J Lee, A Manchon, Phys. Rev. B. 9194411C.A. Akosa, W.-S. Kim, A. Bisig, M. Kläui, K.-J. Lee, A. Manchon, Role of spin diffusion in current-induced domain wall motion for disordered ferromagnets, Phys. Rev. B 91, 094411 (2015) Current induced torques and interfacial spinorbit coupling: Semiclassical modeling. P M Haney, H.-W Lee, K.-J Lee, A Manchon, M D Stiles, Phys. Rev. B. 87174411P.M. Haney, H.-W. Lee, K.-J. Lee, A. Manchon, M.D. Stiles, Current induced torques and interfacial spin- orbit coupling: Semiclassical modeling, Phys. Rev. B 87, 174411 (2013) Theory of the perpendicular magnetoresistance in magnetic multilayers. T Valet, A Fert, Phys. Rev. B. 487099T. Valet, A. 
Fert, Theory of the perpendicular magne- toresistance in magnetic multilayers, Phys. Rev. B 48, 7099 1993 Giant spin Hall effect induced by skew scattering from bismuth impurities inside thin film CuBi alloys. Y Niimi, Y Kawanishi, D Wei, C Deranlot, H Yang, M Chshiev, T Valet, A Fert, Y Otani, Phys. Rev. Lett. 109156602Y. Niimi, Y. Kawanishi, D. Wei, C. Deranlot, H. Yang, M. Chshiev, T. Valet, A. Fert, Y. Otani, Giant spin Hall effect induced by skew scattering from bismuth impurities inside thin film CuBi alloys, Phys. Rev. Lett. 109, 156602 (2012) Efficient micromagnetic modelling of spin-transfer torque and spin-orbit torque. C Abert, F Bruckner, C Vogler, D Suess, AIP Adv. 856008C. Abert, F. Bruckner, C. Vogler, D. Suess, Effi- cient micromagnetic modelling of spin-transfer torque and spin-orbit torque, AIP Adv. 8, 056008 (2018) Spin pumping and magnetization dynamics in metallic multilayers. Y Tserkovnyak, A Brataas, G E Bauer, Phys. Rev. B. 66224403Y. Tserkovnyak, A. Brataas, G.E. Bauer, Spin pump- ing and magnetization dynamics in metallic multilayers, Phys. Rev. B 66, 224403 (2002) Theory of tunneling magnetoresistance of an epitaxial Fe/MgO/Fe (001) junction. J Mathon, A Umerski, Phys. Rev. B. 63220403J. Mathon, A. Umerski, Theory of tunneling magnetore- sistance of an epitaxial Fe/MgO/Fe (001) junction, Phys. Rev. B 63, 220403 (2001) Prediction of large bias-dependent magnetoresistance in all-oxide magnetic tunnel junctions with a ferroelectric barrier. N M Caffrey, T Archer, I Rungger, S Sanvito, Phys. Rev. B. 83125409N.M. Caffrey, T. Archer, I. Rungger, S. Sanvito, Pre- diction of large bias-dependent magnetoresistance in all-oxide magnetic tunnel junctions with a ferroelectric barrier, Phys. Rev. B 83, 125409 (2011) Spin-dependent tunneling conductance of Fe-MgO-Fe sandwiches. W Butler, X.-G Zhang, T Schulthess, J Maclaren, Phys. Rev. B. 6354416W. Butler, X.-G. Zhang, T. Schulthess, J. 
MacLaren, Spin-dependent tunneling conductance of Fe-MgO-Fe sandwiches, Phys. Rev. B 63, 054416 (2001) Solving micromagnetic problems. towards an optimal numerical method. D Berkov, K Ramstöck, A Hubert, Phys. Status Solidi a. 137207D. Berkov, K. Ramstöck, A. Hubert, Solving micromag- netic problems. towards an optimal numerical method, Phys. Status Solidi a 137, 207 (1993) Concise, efficient threedimensional fast multipole method for micromagnetics. C Seberino, H N Bertram, IEEE Trans. Magn. 371078C. Seberino, H.N. Bertram, Concise, efficient three- dimensional fast multipole method for micromagnetics, IEEE Trans. Magn. 37, 1078 (2001) W H Press, S A Teukolsky, W T Vetterling, B P Flannery, Numerical Recipes: The art of Scientific Computing. CambridgeCambridge University Press3rd edn.W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: The art of Scientific Com- puting, 3rd edn. (Cambridge University Press, Cam- bridge, 2007) A generalization of the demagnetizing tensor for nonuniform magnetization. A J Newell, W Williams, D J Dunlop, J. Geophys. Res. Solid Earth. 989551A.J. Newell, W. Williams, D.J. Dunlop, A generalization of the demagnetizing tensor for nonuniform magnetiza- tion, J. Geophys. Res. Solid Earth 98, 9551 (1993) Periodic boundary conditions for demagnetization interactions in micromagnetic simulations. K M Lebecki, M J Donahue, M W Gutowski, J. Phys. D Appl. Phys. 41175005K.M. Lebecki, M.J. Donahue, M.W. Gutowski, Periodic boundary conditions for demagnetization interactions in micromagnetic simulations, J. Phys. D Appl. Phys. 41, 175005 (2008) Fast and accurate calculation of the demagnetization tensor for systems with periodic boundary conditions. B Krüger, G Selke, A Drews, D Pfannkuche, IEEE Trans. Magn. 494749B. Krüger, G. Selke, A. Drews, D. Pfannkuche, Fast and accurate calculation of the demagnetization tensor for systems with periodic boundary conditions, IEEE Trans. Magn. 
49, 4749 (2013) Micromagnetic analysis of shielded write heads using symmetric multiprocessing systems. Y Kanai, K Koyama, M Ueki, T Tsukamoto, K Yoshida, S J Greaves, H Muraoka, IEEE Trans. Magn. 463337Y. Kanai, K. Koyama, M. Ueki, T. Tsukamoto, K. Yoshida, S.J. Greaves, H. Muraoka, Micromagnetic analysis of shielded write heads using symmetric multi- processing systems, IEEE Trans. Magn. 46, 3337 (2010) A fast finitedifference method for micromagnetics using the magnetic scalar potential. C Abert, G Selke, B Krüger, A Drews, IEEE Trans. Magn. 481105C. Abert, G. Selke, B. Krüger, A. Drews, A fast finite- difference method for micromagnetics using the mag- netic scalar potential, IEEE Trans. Magn. 48, 1105 (2012) Finite-difference micromagnetic solvers with the object-oriented micromagnetic framework on graphics processing units. S Fu, W Cui, M Hu, R Chang, M J Donahue, V Lomakin, IEEE Trans. Magn. 521S. Fu, W. Cui, M. Hu, R. Chang, M.J. Donahue, V. Lomakin, Finite-difference micromagnetic solvers with the object-oriented micromagnetic framework on graphics processing units, IEEE Trans. Magn. 52, 1 (2016) Spin-polarized currents in ferromagnetic multilayers. C J García-Cervera, X.-P Wang, J. Comput. Phys. 224699C.J. García-Cervera, X.-P. Wang, Spin-polarized currents in ferromagnetic multilayers, J. Comput. Phys. 224, 699 (2007) OOMMF user's guide, version 1.0, Tech. Rep. M J Donahue, M.J. Donahue, OOMMF user's guide, version 1.0, Tech. Rep., 1999 . D Cortés-Ortuño, W Wang, R Pepper, M.-A Bisotti, T Kluyver, M Vousden, H Fangohr, Fidimag v2.0.D. Cortés-Ortuño, W. Wang, R. Pepper, M.-A. Bisotti, T. Kluyver, M. Vousden, H. Fangohr, Fidimag v2.0. https://github.com/computationalmodelling/fidimag (accessed 2019/02/04) MicroMagus-package for micromagnetic simulations. D Berkov, N Gorn, 2019/02/04D. Berkov, N. Gorn, MicroMagus-package for micro- magnetic simulations (2007). 
http://www.micromagus.de (accessed 2019/02/04) A full-fledged micromagnetic code in fewer than 70 lines of NumPy. C Abert, F Bruckner, C Vogler, R Windl, R Thanhoffer, D Suess, J. Magn. Magn. Mater. 38713C. Abert, F. Bruckner, C. Vogler, R. Windl, R. Thanhoffer, D. Suess, A full-fledged micromagnetic code in fewer than 70 lines of NumPy, J. Magn. Magn. Mater. 387, 13 (2015) Fast micromagnetic simulations on GPU-recent advances made with MuMax3. J Leliaert, M Dvornik, J Mulkers, J De Clercq, M Milošević, B Van Waeyenberge, J. Phys. D: Appl. Phys. 51123002J. Leliaert, M. Dvornik, J. Mulkers, J. De Clercq, M. Milošević, B. Van Waeyenberge, Fast micromag- netic simulations on GPU-recent advances made with MuMax3, J. Phys. D: Appl. Phys. 51, 123002 (2018) The design and verification of MuMax3. A Vansteenkiste, J Leliaert, M Dvornik, M Helsen, F Garcia-Sanchez, B Van Waeyenberge, AIP Adv. 4107133A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, B. Van Waeyenberge, The design and verification of MuMax3, AIP Adv. 4, 107133 (2014) . G Selke, B Krüger, A Drews, C Abert, T Gerhardt, Magnum , G. Selke, B. Krüger, A. Drews, C. Abert, T. Gerhardt, magnum.fd. (2014). https://github.com/micromagnetics/ magnum.fd (accessed 2019/02/04) D Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics. CambridgeCambridge University PressD. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics (Cambridge University Press, Cambridge, 2007) Y Saad, Iterative Methods for Sparse Linear Systems. SIAM, PA82Y. Saad, in Iterative Methods for Sparse Linear Systems (SIAM, PA, 2003), Vol. 82 A review of finite element open boundary techniques for static and quasi-static electromagnetic field problems. Q Chen, A Konrad, IEEE Trans. Magn. 33663Q. Chen, A. Konrad, A review of finite element open boundary techniques for static and quasi-static electro- magnetic field problems, IEEE Trans. Magn. 
33, 663 (1997) An original solution for unbounded electromagnetic. J Imhoff, G Meunier, X Brunotte, J Sabonnadiere, J. Imhoff, G. Meunier, X. Brunotte, J. Sabonnadiere, An original solution for unbounded electromagnetic . Eur. Phys. J. B. 92120Eur. Phys. J. B (2019) 92: 120 2D-and 3D-problems throughout the finite element method. IEEE Trans. Magn. 2616592D-and 3D-problems throughout the finite element method, IEEE Trans. Magn. 26, 1659 (1990) Finite element modeling of unbounded problems using transformations: a rigorous, powerful and easy solution. X Brunotte, G Meunier, J.-F Imhoff, IEEE Trans. Magn. 281663X. Brunotte, G. Meunier, J.-F. Imhoff, Finite element modeling of unbounded problems using transformations: a rigorous, powerful and easy solution, IEEE Trans. Magn. 28, 1663 (1992) Finite element modelling with transformation techniques. F Henrotte, B Meys, H Hedia, P Dular, W Legros, IEEE Trans. Magn. 351434F. Henrotte, B. Meys, H. Hedia, P. Dular, W. Legros, Finite element modelling with transformation techniques, IEEE Trans. Magn. 35, 1434 (1999) Numerical methods for the stray-field calculation: A comparison of recently developed algorithms. C Abert, L Exl, G Selke, A Drews, T Schrefl, J. Magn. Magn. Mater. 326176C. Abert, L. Exl, G. Selke, A. Drews, T. Schrefl, Numeri- cal methods for the stray-field calculation: A comparison of recently developed algorithms, J. Magn. Magn. Mater. 326, 176 (2013) Hybrid method for computing demagnetizing fields. D Fredkin, T Koehler, IEEE Trans. Magn. 26415D. Fredkin, T. Koehler, Hybrid method for computing demagnetizing fields, IEEE Trans. Magn. 26, 415 (1990) W Hackbusch, Hierarchical Matrices: Algorithms and Analysis. BerlinSpringer49W. Hackbusch, in Hierarchical Matrices: Algorithms and Analysis (Springer, Berlin, 2015), Vol. 49 Applications of H-matrix techniques in micromagnetics. N Popović, D Praetorius, Computing. 74177N. Popović, D. 
Praetorius, Applications of H-matrix techniques in micromagnetics, Computing 74, 177 (2005) Adaptive mesh refinement for micromagnetics simulations. C J Garcia-Cervera, A M Roma, IEEE Trans. Magn. 421648C.J. Garcia-Cervera, A.M. Roma, "Adaptive mesh refine- ment for micromagnetics simulations, IEEE Trans. Magn. 42, 1648 (2006) Gmsh: A 3-D finite element mesh generator with built-in pre-and post-processing facilities. C Geuzaine, J.-F Remacle, Int. J. Numer. Methods Eng. 791309C. Geuzaine, J.-F. Remacle, Gmsh: A 3-D finite element mesh generator with built-in pre-and post-processing facilities, Int. J. Numer. Methods Eng. 79, 1309 (2009) NETGEN an advancing front 2D/3D-mesh generator based on abstract rules. J Schöberl, Comput. Vis. Sci. 141J. Schöberl, NETGEN an advancing front 2D/3D-mesh generator based on abstract rules, Comput. Vis. Sci. 1, 41 (1997) Onelab: open numerical engineering laboratory. C Geuzaine, F Henrotte, J.-F Remacle, E Marchandise, R Sabariego, in 11e Colloque National en Calcul des StructuresC. Geuzaine, F. Henrotte, J.-F. Remacle, E. Marchandise, R. Sabariego, Onelab: open numerical engineering labo- ratory, in 11e Colloque National en Calcul des Structures (2013) J Schöberl, C++ 11 Implementation of Finite Elements in NGSolve (Institute for Analysis and Scientific Computing. Vienna University of TechnologyJ. Schöberl, C++ 11 Implementation of Finite Ele- ments in NGSolve (Institute for Analysis and Scientific Computing, Vienna University of Technology, 2014) Escript: numerical modelling with python. L Gross, P Cochrane, M Davies, H Muhlhaus, J Smillie, Australian Partnership for Advanced Computing (APAC) Conferene, APAC (2005). 131L. Gross, P. Cochrane, M. Davies, H. Muhlhaus, J. Smillie, Escript: numerical modelling with python, in Australian Partnership for Advanced Computing (APAC) Conferene, APAC (2005), Vol. 1, p. 
31 R Anderson, A Barker, J Bramwell, J Camier, J Ceverny, J Dahm, Y Dudouit, V Dobrev, A Fisher, T Kolev, D Medina, M Stowell, V Tomov, 10.11578/dc.20171025.1248MFEM: a modular finite element library. R. Anderson, A. Barker, J. Bramwell, J. Camier, J. Ceverny, J. Dahm, Y. Dudouit, V. Dobrev, A. Fisher, T. Kolev, D. Medina, M. Stowell, V. Tomov, MFEM: a modular finite element library (2010), DOI: 10.11578/dc.20171025.1248 M S Alnaes, J Blechta, J Hake, A Johansson, B Kehlet, A Logg, C Richardson, J Ring, M E Rognes, G N Wells, The FEniCS project version 1.5. 39M.S. Alnaes, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M.E. Rognes, G.N. Wells, The FEniCS project version 1.5, Arch. Numer. Softw. 3, 9 (2015) Petsc users manual revision 3.8, Tech. Rep., Argonne National Lab. S Balay, S Abhyankar, M Adams, J Brown, P Brune, K Buschelman, L Dalcin, V Eijkhout, W Gropp, D Kaushik, Argonne, IL, United StatesS. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, V. Eijkhout, W. Gropp, D. Kaushik et al., Petsc users manual revision 3.8, Tech. Rep., Argonne National Lab.(ANL), Argonne, IL, United States, 2017 Solving boundary integral problems with bem++. W Śmigaj, T Betcke, S Arridge, J Phillips, M Schweiger, ACM Trans. Math. Softw. (TOMS). 416W.Śmigaj, T. Betcke, S. Arridge, J. Phillips, M. Schweiger, Solving boundary integral problems with bem++, ACM Trans. Math. Softw. (TOMS) 41, 6 (2015) . N Albrecht, C Börst, D Boysen, S Christophersen, S Börm, 2019/02/042N. Albrecht, C. Börst, D. Boysen, S. Christophersen, S. Börm, H2Lib (2016). http://www.h2lib.org (accessed 2019/02/04) . M.-A Bisotti, M Beg, W Wang, M Albert, D Chernyshenko, D Cortés-Ortuño, R A Pepper, M Vousden, R Carey, H Fuchs, A Johansen, G Balaban, L B T Kluyver, H Fangohr, Finmag , M.-A. Bisotti, M. Beg, W. Wang, M. Albert, D. Chernyshenko, D. Cortés-Ortuño, R.A. Pepper, M. Vousden, R. Carey, H. Fuchs, A. Johansen, G. Balaban, L.B.T. Kluyver, H. 
Fangohr, FinMag (2018). https://github.com/fangohr/finmag (accessed 2019/02/04) fe: a micromagnetic finite-element simulation code based on FEniCS. C Abert, L Exl, F Bruckner, A Drews, D Suess, Magnum , J. Magn. Magn. Mater. 34529C. Abert, L. Exl, F. Bruckner, A. Drews, D. Suess, magnum.fe: a micromagnetic finite-element simulation code based on FEniCS, J. Magn. Magn. Mater. 345, 29 (2013) A three-dimensional spindiffusion model for micromagnetics. C Abert, M Ruggeri, F Bruckner, C Vogler, G Hrkac, D Praetorius, D Suess, Sci. Rep. 514855C. Abert, M. Ruggeri, F. Bruckner, C. Vogler, G. Hrkac, D. Praetorius, D. Suess, A three-dimensional spin- diffusion model for micromagnetics, Sci. Rep. 5, 14855 (2015) Coupling of dynamical micromagnetism and a stationary spin drift-diffusion equation: a step towards a fully self-consistent spintronics framework. M Ruggeri, C Abert, G Hrkac, D Suess, D Praetorius, Phys. B Condens. Matter. 48688M. Ruggeri, C. Abert, G. Hrkac, D. Suess, D. Praetorius, Coupling of dynamical micromagnetism and a station- ary spin drift-diffusion equation: a step towards a fully self-consistent spintronics framework, Phys. B Condens. Matter 486, 88 (2016) A self-consistent spindiffusion model for micromagnetics. C Abert, M Ruggeri, F Bruckner, C Vogler, A Manchon, D Praetorius, D Suess, Sci. Rep. 616C. Abert, M. Ruggeri, F. Bruckner, C. Vogler, A. Manchon, D. Praetorius, D. Suess, A self-consistent spin- diffusion model for micromagnetics, Sci. Rep. 6, 16 (2016) A convergent finite element approximation for Landau-Lifschitz-Gilbert equation. F Alouges, E Kritsikis, J.-C Toussaint, Physica B. 4071345F. Alouges, E. Kritsikis, J.-C. Toussaint, A conver- gent finite element approximation for Landau-Lifschitz- Gilbert equation, Physica B 407, 1345 (2012) Geometry effects on magnetization dynamics in circular crosssection wires. M Sturma, J.-C Toussaint, D Gusakova, J. Appl. Phys. 117243901M. Sturma, J.-C. Toussaint, D. 
Improving Hearthstone AI by Combining MCTS and Supervised Learning Algorithms

Maciej Świechowski, Tomasz Tajmajer, Andrzej Janusz

Institute of Informatics, University of Warsaw, Banacha 2, Warsaw, Poland; Silver Bullet Labs, Liwiecka 25, Warsaw, Poland; eSensei, Mazowiecka, Warsaw, Poland

14 Aug 2018 · arXiv:1808.04794 · DOI: 10.1109/cig.2018.8490368

Index Terms - MCTS, Hearthstone, machine learning, neural networks, heuristic

Warning: this is not the final camera-ready version.

Abstract - We investigate the impact of supervised prediction models on the strength and efficiency of artificial agents that use the Monte-Carlo Tree Search (MCTS) algorithm to play the popular video game Hearthstone: Heroes of Warcraft. We overview our custom implementation of the MCTS that is well-suited for games with partially hidden information and random effects. We also describe experiments which we designed to quantify the performance of our Hearthstone agent's decision making. We show that even simple neural networks can be trained and successfully used for the evaluation of game states. Moreover, we demonstrate that by providing guidance to the game state search heuristic, it is possible to substantially improve the win rate and, at the same time, reduce the required computations.

I. INTRODUCTION

Hearthstone: Heroes of Warcraft is a free-to-play online video game developed and published by Blizzard Entertainment. Its simple rules and appealing design made this game successful among casual players.
According to Blizzard's data, in 2017 the player-base of the game was about 70 million, and it grows with each of the released expansions. The game is also popular within the eSport community, with cash-prize tournaments and many international events every year. Hearthstone is an example of a turn-based collectible card game. During the game, two players choose their heroes, each with a unique power, and compose decks of thirty cards. They spend mana points to cast spells, equip weapons and summon minions that attack the opponent, with the goal of reducing the opponent's health to zero or below. Due to the large number of distinct cards which implement various game mechanics, and special in-game effects which often have randomized outcomes, Hearthstone is an example of a game where actions may have non-deterministic results. Moreover, during a game neither player knows which cards the opponent holds in hand, nor the ordering of the yet-to-be-drawn cards in their deck. Finally, since a player may perform several actions in each turn of the game and the ordering of those actions is pivotal to the player's success, Hearthstone features great combinatorial complexity. All the above properties make Hearthstone a demanding challenge for AI-controlled bots that are designed to play this game. One objective of this article is to explain how our implementation of the Monte-Carlo Tree Search (MCTS) algorithm deals with those problems. We also aim to discuss the means by which MCTS can be facilitated by machine learning algorithms and to provide an experimental evaluation of its performance.

(This research was co-funded by the Smart Growth Operational Programme 2014-2020, financed by the European Regional Development Fund under the GameINN projects POIR.01.02.00-00-0150/16 and POIR.01.02.00-00-0184/17, operated by The National Centre for Research and Development (NCBiR).)

The paper is organized as follows. In the next section, we provide the context of the research and discuss related initiatives.
In Section III, the MCTS algorithm is discussed, with a focus on problems encountered in Hearthstone such as randomness, hidden information and combinatorial complexity. We also shed some light on the game simulator used for this research. The subsequent section is devoted to methods of combining MCTS with machine-learning-based heuristics. Finally, the last two sections contain a description of the empirical experiments which we conducted to evaluate our Hearthstone agents, and conclusions, respectively.

II. RELATED WORK

In recent years, Hearthstone has become a testbed for AI research. A community of passionate players and developers have started the HearthSim project (https://hearthsim.info/) and created several applications that allow simulating the game for the purpose of AI and machine learning experiments. A few spin-offs of that project, e.g. HearthPWN and MetaStats, provide tools for the players which facilitate gathering data from their games. These portals obtain and aggregate users' data, such as game results, deck compositions and card usage statistics, and provide this information to the community. Several groups of researchers from the field of machine learning and AI have already chosen Hearthstone for their studies. In [1], the authors used evolutionary algorithms to tackle the problem of building good decks. They used the results of simulated games performed by simple AI bots as fitness function values. Even though this study was described by the authors as preliminary, the developed method was able to construct reasonable decks from a basic set of cards. However, one drawback of this method is the fact that it strongly depends on the performance of the AI bots used for the evaluation of the decks. A few research groups have also considered the problem of constructing an artificial agent able to play Hearthstone. In particular, [2] used the Monte-Carlo Tree Search (MCTS) algorithm to choose an optimal action policy in the game.
Furthermore, [3] used deep neural networks to improve the performance of an MCTS-based Hearthstone bot called Silverfish. The combination of MCTS with prediction models makes those approaches similar to early versions of DeepMind's AlphaGo program [4]. It is worth noticing, however, that unlike Go, in Hearthstone players do not have full information about the game state and many actions have non-deterministic outcomes. These two properties make this game much more challenging for game state tree search algorithms such as MCTS [5]. There have also been attempts at constructing models for predicting cards that are likely to be played by an opponent during a game. For instance, in [6] the author used data from 45,000 Hearthstone games to extract sequences of played cards and represent each record as a bag of card bi-grams. By investigating co-occurrence probabilities, the method described in that study was able to correctly predict, in over 50% of cases, the opponent's cards which would most likely appear during the following turns of the game. Such high predictability can be explained by the fact that even though the number of possible Hearthstone decks is enormous, players tend to build their decks in accordance with certain archetypes, and their composition is often inspired by the decks used by other influential players. Hearthstone has also been a topic of international data mining competitions. The first one, AAIA'17 Data Mining Challenge: Helping AI to Play Hearthstone¹, was focused on developing a scoring model for predicting the win chances of a player, based on a detailed description of a single game state [7]. Although the data in this competition was generated using very simple bots which were choosing their moves at random, the best models created by participants were able to achieve AUC scores above 0.80. The winner used an ensemble of 1-dimensional convolutional neural networks to extract features from each combination of both players' cards on the board [8].
A year later, the second edition of this challenge was launched. The task in the AAIA'18 Data Mining Challenge was to predict win rates of Hearthstone decks, based on a history of match-ups between AI bots playing with similar decks. Various other card games have also been studied in the literature related to machine learning and AI. For instance, in [9] the authors consider heads-up no-limit poker as an example of a game with hidden information. They describe the DeepStack algorithm, which aims to handle the information asymmetry between players by combining recursive reasoning with learning from self-played games. Another example is the game Magic: The Gathering, studied, e.g., in [10]. Due to its notable similarity to Hearthstone, this game poses many similar challenges. In our work, however, we focus only on Hearthstone. The growing interest of the machine learning community in applications related to video games stems from the fact that solutions to many game-related problems could be easily transferred to real-life issues, such as planning [11], real-time decision making [12], [13] and, ultimately, general AI.

¹ Competition's web page: https://knowledgepit.fedcsis.org/contest/view.php?id=120

III. PLAYING HEARTHSTONE WITH MONTE-CARLO TREE SEARCH

A. Game Simulator

Access to a game simulator allows game-playing agents to perform dynamic reasoning about the game. The idea is to run separate simulations that do not affect the actual (main) state of the played game. This is why a simulator is often called a "forward model": it enables forward planning. Its performance, i.e., how many states it can visit per second, is crucial for all methods that are based on searching the state space of the game, such as MCTS, min-max or MTD(f). Therefore, we have written a simulator for Hearthstone with the aim of achieving the highest run-time performance.
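As an illustration of the forward-model operations that such search methods rely on (legal moves, state update, terminal test and scoring), here is a minimal C++ sketch; the toy game and all names are our own assumptions, not the actual simulator's API:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Illustrative sketch only: a minimal "forward model" interface of the kind
// described in the text. The toy game (take 1 or 2 tokens, taking the last
// token wins) merely stands in for Hearthstone.
struct ForwardModel {
    int tokens;  // toy game state
    int toMove;  // 0 or 1

    std::vector<int> legalMoves() const {
        std::vector<int> moves;
        for (int take = 1; take <= 2 && take <= tokens; ++take) moves.push_back(take);
        return moves;
    }
    void apply(int move) { tokens -= move; toMove = 1 - toMove; }
    bool isTerminal() const { return tokens == 0; }
    // Score from player 0's perspective: the player who just moved (i.e. the
    // one who is no longer to move) took the last token and wins.
    double score() const { return (toMove == 1) ? 1.0 : 0.0; }
};

// Monte-Carlo phase: play uniformly random moves until the game ends.
inline double randomPlayout(ForwardModel state, std::mt19937 &rng) {
    while (!state.isTerminal()) {
        auto moves = state.legalMoves();
        std::uniform_int_distribution<std::size_t> pick(0, moves.size() - 1);
        state.apply(moves[pick(rng)]);
    }
    return state.score();
}
```

A real forward model exposes the same four operations over Hearthstone states; `randomPlayout` corresponds to the Monte-Carlo phase of MCTS.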
The main features of our simulator are: (1) it is written entirely in C++ for high performance; (2) it performs 10K full games per second on average, and 30K when limited to basic cards only; (3) it makes extensive use of inheritance and polymorphism (e.g., Secret : Spell : Card); (4) effects such as hero powers are modeled as (non-collectible) cards; (5) the total number of implemented cards is 483; (6) the implemented cards allow for building staple decks from the standard meta-game. The simulator calculates legal moves in each state of the game, updates the state after a move is chosen, tests whether the game has reached a terminal state and calculates scores in a finished game. States and actions are comparable and hashable. We have divided complex game actions into atomic simple actions. For example, when the "SI:7 Agent" card is played, up to three simple actions are generated: (1) choose a card from your hand (SI:7 Agent); (2) choose a target on the battlefield where the minion is about to be placed; (3) choose a target for the battle-cry (deal 2 damage), provided that the required combo condition was met. Similarly, an attack move consists of two simple actions: choosing a character which will attack, and choosing a target to attack.

B. Monte Carlo Tree Search

Monte-Carlo Tree Search (MCTS) [14] has become the state-of-the-art algorithm for game tree search. It is the algorithm of choice in domains such as Go [15], Hex [16], Arimaa [17], General Game Playing (GGP) [18] and General Video Game Playing (GVGP) [19]. This technique is a natural candidate for universal domains such as GGP or GVGP because, given only a way (interface) to simulate games, the same implementation of MCTS will work for any game. It has also been increasingly successful in board games such as Settlers of Catan [20] or 7 Wonders [21].
In essence, MCTS is a combination of three ideas: storing statistics in the game tree, random sampling by means of simulations to gather statistics, and the Upper Confidence Bounds method to select nodes based on the statistics gathered so far. The Upper Confidence Bounds applied to Trees (UCT) method addresses the exploitation-exploration problem and is a generalization of the Upper Confidence Bounds (UCB-1) method. The UCT formula is as follows:

    a* = argmax_{a ∈ A(s)} { Q(s, a) + C * sqrt( ln N(s) / N(s, a) ) }    (1)

where A(s) is the set of actions available in state s; Q(s, a) denotes the average result of playing action a in state s in the simulations performed so far; N(s) is the number of times state s has been visited in previous simulations; and N(s, a) is the number of times action a has been sampled in this state in previous simulations. The constant C controls the balance between exploration and exploitation. It has to be tuned but, provided that the scores of games are confined to the [0, 1] interval, a sensible starting value is sqrt(2). The algorithm typically consists of four phases: selection, expansion, simulation and back-propagation. Algorithms (1) and (2) describe the usage of these phases. (1) Selection. Traverse the nodes that are already stored in the tree. At each level, the next node is chosen according to the selection policy (the UCT method, by default). (2) Expansion. A certain number of new nodes is added to the tree. In the classical MCTS variant, only one node is added in each iteration, which is a good trade-off between the algorithm's efficiency and memory usage. (3) Simulation. Starting from the last visited state in the tree, play (simulate) the game till the end. No nodes are added to the tree in this phase. Actions for each player are chosen randomly; however, there are extensions of the MCTS algorithm that introduce heuristics into the simulation. This phase is also called the "Monte-Carlo phase". (4) Back-propagation.
Starting from the last visited node in the tree, which is the one the simulation started from, all the way up to the root node, update the Q(s, a) values based on the result of the simulation.

1) Handling Imperfect Information: The majority of successful applications of the MCTS algorithm have been in the realm of perfect information games, i.e., games in which each player has complete information about the current state of the game. Games with hidden information have proven to be difficult for any combinatorial method such as game-tree search. Many variants of and extensions to MCTS have been proposed to deal with imperfect information; however, they can be clustered into two types of approaches: 1) Perfect Information Monte Carlo Tree Search (PIMC) - this method determines (guesses) all information that is hidden and, from that point, treats the game as a perfect information one. Variants of PIMC differ in how many distinct determinizations they perform and how the knowledge obtained from running the algorithm with different determinizations is combined. The two major problems related to PIMC [22] are strategy fusion and non-locality [23]. 2) Information Set Monte Carlo Tree Search (ISMCTS) [23] - this variant uses the concept of information sets, which are abstract groups of states that are indistinguishable from a particular player's perspective. In ISMCTS, a node in the game tree is associated with an information set rather than a single state. Therefore, the decisions of a player are made based upon what the player actually observes. ISMCTS is much less susceptible to the problems of strategy fusion and non-locality. However, ISMCTS is typically much harder to implement, as it requires simulating games under imperfect information or dealing with partially observable moves. We propose an algorithm which is a combination of ISMCTS and PIMC. From the first concept, we borrow the idea of information sets.
However, they are not used to simulate games under hidden information. Instead, they serve as keys in the so-called transposition table. Transposition tables are a way to model the game tree without duplicated nodes, which would otherwise occur if there is more than one way to reach the same state; the "tree" then effectively becomes a directed acyclic graph (DAG). Transposition tables are also often used to merge symmetric states in order to reuse calculations. In the transposition table we use, the values are nodes and there is a unique key-value mapping between information sets and nodes. Each node contains a hash map of edges, with the key being a player's move. Each edge contains the statistics of the particular move and a pointer to the next node, as observed in the current iteration of MCTS. The next-node pointer might vary between subsequent iterations if the same move can have multiple outcomes (non-determinism) and thus lead to various information sets. From the PIMC concept, we borrow the idea of determinizations. At the beginning of each MCTS iteration, a copy of the hidden-information state is determinized into a perfect-information state. This is not to be confused with an information set. The default solution to determinization is to sample the state randomly among the possible legal states. However, when generating games for machine learning experiments, we used the "cheater" approach, which determinizes to the correct state. Such an approach is often used in teaching sessions; in particular, in card games, human experts teach beginners how to play with open cards. In our case, the justification is that the "cheater" allows for generating stronger games more quickly. In our implementation, there are two interfaces for the concept of the game state. Game state for simulations (GS) - this is the only interface used to apply the logic of the game, such as determining legal moves, applying moves, checking whether the game has ended or getting the result of the game.
This interface is used in both the selection and simulation phases; however, in the selection phase, the other interface (information sets) is used as well. Information Set game state for statistics (IS) - this is an abstraction of a state with possibly hidden information. It represents all the information based on which a player will take actions. The idea is to use only a subset of the simulation game state in order to group states. Such a separate interface allows not only for ignoring hidden information but also for reducing the resolution of the state. For instance, states that are similar in terms of some arbitrary measure can be grouped together. The information sets in our approach are plain data storage objects. The only methods the IS interface contains are hash and equals, which enables efficient equality comparisons. After the GS has been determinized, the selection phase starts from the root node. In each node visited during that phase, the set of currently legal moves is computed and intersected with the set of all moves observed in the node so far. Each move is associated with an edge. Active edges are the ones that correspond to moves that are currently available. The active edges are scored according to the selection formula (cf. Equation 1) and the best-scored edge is chosen. Next, the GS interface is used to apply the selected edge's move and compute the resulting state. This state is then used to generate an information set. We call this process capturing the information set, and the GS requires an implementation of the capture() method that returns the IS from a given player's perspective. The perspective is decided based on which player is active in the current state. Once the IS is created, it is used to query the transposition table for the next node to traverse. If no such node exists, it is added to the transposition table with the key equal to the current IS, and the selection phase is terminated.
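The selection step just described (fetching nodes from a transposition table keyed by an information-set hash, and scoring active edges with UCT) can be sketched as follows; the types and names are our own simplification, not the paper's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <unordered_map>
#include <vector>

// Sketch: nodes live in a transposition table keyed by an information-set
// hash, and active edges are scored with the UCT formula
//   Q(s,a) + C * sqrt(ln N(s) / N(s,a)).
struct Edge { double totalReward = 0.0; int visits = 0; };
struct Node { std::unordered_map<int, Edge> edges; int visits = 0; };

using TranspositionTable = std::unordered_map<std::size_t, Node>;  // IS hash -> node

inline double uctScore(const Node &node, const Edge &edge, double C) {
    if (edge.visits == 0)
        return std::numeric_limits<double>::infinity();  // force exploration first
    double exploit = edge.totalReward / edge.visits;
    double explore =
        C * std::sqrt(std::log(static_cast<double>(node.visits)) / edge.visits);
    return exploit + explore;
}

// Pick the best-scored move among the currently legal ones ("active edges").
inline int selectMove(Node &node, const std::vector<int> &legalMoves, double C) {
    int best = legalMoves.front();
    double bestScore = -std::numeric_limits<double>::infinity();
    for (int move : legalMoves) {
        double s = uctScore(node, node.edges[move], C);  // creates edge if unseen
        if (s > bestScore) { bestScore = s; best = move; }
    }
    return best;
}
```

Intersecting the legal moves with the edge map happens implicitly here: only the legal moves are scored, and an edge is created on first sight of a move.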
The selection phase is repeated for the next node as long as the termination condition (a node visited for the first time) is not satisfied. Because nodes are matched with information sets, the statistics of actions performed within the same IS are clustered together. Moreover, this allows us to significantly reduce the combinatorial size of the game tree in comparison with using regular game states as nodes. When the selection phase ends, the last seen GS is passed to the simulation phase as the starting state. The result of a simulation is propagated to all edges chosen in the selection phase.

Algorithm 1: Pseudocode of the main MCTS loop. The simulation method starts from movingState, performs a quasi-random simulation and returns the result of the game. It can be replaced by another evaluation procedure, as discussed later in the paper.

... possible in the game of Bridge. Randomness is also prevalent in Hearthstone, with effects such as "discover a random spell" or "deal from X to Y damage". Each unique random outcome would most likely result in a different state and, therefore, would require its own node in the tree. The novelty of our MCTS implementation is the complete exclusion of nature moves. This makes modeling and simulating the game significantly easier using our library. Actions may include any non-determinism. This is possible because we do not store game states directly in the tree as results of actions. As shown in Algorithm (2), each time a move is played, we compute the resulting state dynamically, even if the move has already been sampled in previous iterations. The resulting state is used to create the information set, which is then used to fetch the next node to visit. In consequence, the statistics of moves are averaged according to the probability distribution of the various random effects.
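A toy numerical illustration of this averaging (our own example, not from the paper): a non-deterministic move whose random effect succeeds with probability p accumulates its outcomes in a single edge, so the edge's Q value converges to the expected result p without any explicit nature node:

```cpp
#include <cassert>
#include <random>

// Sketch with made-up numbers: the edge statistics of a non-deterministic
// move simply average over the outcome distribution.
struct MoveStats {
    double totalReward = 0.0;
    int visits = 0;
    double q() const { return visits ? totalReward / visits : 0.0; }
};

// Sample a move whose random effect wins with probability pWin many times,
// accumulating results into a single edge, as back-propagation would.
inline double estimatedQ(double pWin, int iterations, unsigned seed) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution effect(pWin);  // "deal X..Y damage"-style randomness
    MoveStats edge;
    for (int i = 0; i < iterations; ++i) {
        double result = effect(rng) ? 1.0 : 0.0;  // simulated game outcome
        edge.totalReward += result;               // back-propagation update
        edge.visits += 1;
    }
    return edge.q();  // approaches pWin as iterations grow
}
```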
If a move is good on average, its score will be high and it will be chosen more frequently in the selection phase of the MCTS algorithm.

3) Handling Combinatorial Explosion: We have already introduced the idea of separating the "virtual game states", modeled as information sets, from the regular game states used for simulations. This allows us to gather statistics in a much more coarse-grained representation of the state space. However, the combinatorial complexity of the game is still very high, due to the number of possible attacks, the fact that attacks can be performed in a chosen order, and the option to intertwine playing cards between the attacks. The authors of [24] have calculated that, in the pessimistic case, there are approximately 10^10 possible ways of performing the attacks. Quite often, however, many permutations of attacks will result in the same state in the end, and there is no need to examine all of them. To tackle this problem, we have developed the so-called "board solver" - a heuristic that generates a sequence of attack actions in a given state. In general, the heuristic first checks whether it can kill the opponent in one turn and does so if possible. If not, the heuristic checks whether the opponent is likely to win during their next turn and, if so, the attacks focus on killing the most threatening opponent minions. If none of these cases applies, the heuristic scores all possible single attacks based on the gain minus the loss of board potential. A single attack is a pair (attacker, defender). In Hearthstone, there are at most 8 attackers and 8 defenders, so, in the pessimistic case, 64 scores need to be calculated. The attacks are applied in a greedy fashion, i.e., the best-scored attack is applied first (if possible), then the second best, and the process continues until there are no more legal attacks.
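In a simplified form (our own types and scoring, not the actual board solver), the greedy application of the scored attacks might look like this:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch of a greedy attack-ordering heuristic in the spirit of
// the "board solver" described above.
struct Attack { int attacker; int defender; double score; };

// Apply attacks best-score-first, skipping those made illegal by earlier
// picks (an attacker may attack only once; a dead defender cannot be targeted).
inline std::vector<Attack> greedyAttackOrder(std::vector<Attack> attacks,
                                             std::vector<int> defenderHealth) {
    std::sort(attacks.begin(), attacks.end(),
              [](const Attack &a, const Attack &b) { return a.score > b.score; });
    std::vector<bool> attackerUsed(16, false);  // up to 16 attacker slots in this sketch
    std::vector<Attack> applied;
    for (const Attack &a : attacks) {
        if (attackerUsed[a.attacker] || defenderHealth[a.defender] <= 0)
            continue;                       // attack no longer legal
        attackerUsed[a.attacker] = true;
        defenderHealth[a.defender] -= 1;    // stand-in for real combat resolution
        applied.push_back(a);
    }
    return applied;
}
```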
An application of an attack may render some of the following attacks illegal, for example when they use an attacker that has already attacked or a defender that has already been killed. The heuristic for attacks is exposed as an artificial action in the game: "use solver". The MCTS is allowed to choose this action at any point during the turn, but only once per turn. Once the action is chosen, the attack moves are generated and applied, so no further attack moves will be available during that turn for the minions that are already on the board. 4) Interfacing heuristics with MCTS: The MCTS algorithm is quite powerful on its own, but it can still benefit from domain-specific optimizations. It has been shown that, in more complex games such as Go [4], with a huge branching factor and delayed rewards for taking actions, the vanilla method needs to be enhanced by some form of heuristics. This weakness has motivated us to combine the algorithm with heuristics represented by prediction models. Such prediction models can be trained to predict the outcome of the game by looking either at a potential next state (candidate state) of the game or at a potential action (candidate action). In the scope of this paper, we will use the terms "machine learning prediction models" and "heuristic evaluation" interchangeably. There are several ways to combine external heuristics with the MCTS algorithm. The authors of [25] give a nice review of four common methods: Tree Policy Bias, Simulation Policy Bias, Early Cutoff and Move Ordering. We use the first three of them: (1) Tree Policy Bias - here the heuristic evaluation function is included together with Q(s, a) in the UCT formula (see Eq. 1) or its equivalent. A typical implementation of this idea is called Progressive Bias [26], in which the standard UCT evaluation is linearly combined with the heuristic evaluation, with a weight that decreases as the number of simulations grows.
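Progressive Bias can be sketched as follows. The H(s,a)/(N(s,a)+1) fading term is the classic formulation from [26]; the paper's implementation may differ in detail, and the names here are illustrative.

```python
import math

# Sketch of Progressive Bias: the heuristic term is added to UCT with a
# weight that fades as an action accumulates visits, leaving plain UCT.

def progressive_bias(total_reward, n_action, n_parent, heuristic, c=1.4):
    if n_action == 0:
        return float("inf")  # unvisited actions are explored first
    uct = total_reward / n_action + c * math.sqrt(math.log(n_parent) / n_action)
    return uct + heuristic / (n_action + 1)  # heuristic influence decays
```

Early on, the heuristic dominates the choice; after many simulations its contribution becomes negligible and the statistical UCT term takes over.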
The more simulations are performed, the higher the statistical confidence, and therefore the higher the weight assigned to the standard UCT formula. (2) Simulation Policy Bias - here the heuristic values affect the probabilities of certain actions in the simulation phase, to make the simulated players stronger and, therefore, each simulation a better approximation of a potential future game. The two most common implementations are pseudo-roulette selection with probabilities computed using a Boltzmann distribution (where the heuristic evaluation is used) and the so-called epsilon-greedy approach [27]. In the latter, the action with the highest heuristic evaluation is chosen with probability 1 − ε, or a random one with probability ε. (3) Early Cutoff - terminate the simulation early (e.g., with some probability or at a fixed depth) and return the heuristic evaluation of the last reached state instead of the terminal one. In [25], this enhancement is reported to achieve the best results among the tested methods. The aforementioned AlphaGo program employs both Tree Policy Bias and Simulation Policy Bias. Motivated by its success, we decided to apply a similar approach to Hearthstone.

IV. AUGMENTING MCTS WITH MACHINE LEARNING

The state-of-the-art implementations of MCTS, such as AlphaZero, use deep neural networks to provide heuristic evaluations of states and actions. Two main approaches are used. The so-called value network is a deep neural network that provides a prediction of the game outcome given a state of the game. The predictions are usually provided as scores which can be interpreted as the probabilities of each player winning the game. Such predictions may be used by MCTS to foresee the outcome of a playout without simulating it until the terminal state, or even to entirely replace the simulation phase. A policy network is another type of neural network that, given the state of a game, provides values for each action available in that state.
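Enhancements (2) and (3) above can be sketched together: an epsilon-greedy simulation policy combined with an early cutoff that returns a heuristic value of the last reached state. All function names here are illustrative, not the paper's API.

```python
import random

# Sketch of Simulation Policy Bias (epsilon-greedy) plus Early Cutoff.

def epsilon_greedy(actions, value, eps, rng):
    """Pick the best-valued action with probability 1 - eps, else random."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=value)

def rollout(state, legal, play, evaluate, eps=0.1, cutoff=20, rng=None):
    rng = rng or random.Random(0)
    for _ in range(cutoff):          # Early Cutoff: cap the simulation depth
        actions = legal(state)
        if not actions:              # terminal state reached
            break
        move = epsilon_greedy(actions, lambda a: evaluate(play(state, a)), eps, rng)
        state = play(state, move)
    return evaluate(state)           # heuristic value of the last state
```

With eps = 0 the rollout is fully greedy with respect to the evaluation function; with larger eps it retains some of the exploration of a plain random playout.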
A policy network may thus provide information about which actions should be chosen in a state. As shown in [4], [28], the use of value and policy network heuristics significantly improves the performance of MCTS methods, enabling them to beat humans in very complex games. In our solution we focus on the value network heuristic for Hearthstone. We use an iterative approach to neural network training, which relies on a large number of Hearthstone games generated by self-playing bots.

A. Game-state vectorization with embeddings

Heuristic functions for evaluating game states require a vectorized representation of the state. It is common to use hand-crafted attributes to represent particular aspects of the state and then, using some weighted combination of those attributes, derive a value representing the utility of the state. While this approach works for games such as chess, it may be difficult to engineer such attributes for much more complex games such as Go or Hearthstone. As we use deep learning methods for obtaining heuristic functions, it is possible to represent Hearthstone states by large vectors composed of values of low-level features, such as: attributes of each minion on the board (HP, attack, taunt, charge, etc.), attributes of each player (HP, weapons, mana, hero type, etc.), attributes of cards in hand (type, mana cost, etc.) and general attributes (turn number, cards in deck, etc.). Moreover, as most cards in Hearthstone have custom descriptions that define special effects, it is necessary to extend the vectors with meaningful representations of particular cards. One way to represent the cards in a relatively low-dimensional vector space is to use a word2vec model [29] to learn embeddings from the cards' textual descriptions. This can be done either by aggregating vector representations of words from the texts or by training a paragraph vector model [30], where each paragraph corresponds to a single card.
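The word-vector aggregation option can be illustrated with a toy example. The two-dimensional placeholder vectors below stand in for learned word2vec embeddings (which in the paper are 10-dimensional); the mean of the word vectors serves as the card representation.

```python
# Toy sketch of representing a card by the mean of the word vectors of the
# words in its description. Vectors here are placeholders, not learned ones.

def card_vector(description, word_vecs, dim):
    hits = [word_vecs[w] for w in description.lower().split() if w in word_vecs]
    if not hits:
        return [0.0] * dim  # a card with no known words maps to the zero vector
    return [sum(col) / len(hits) for col in zip(*hits)]
```

Out-of-vocabulary tokens (numbers, rare words) are simply skipped, so every card still maps to a fixed-size vector.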
Since descriptions of Hearthstone cards are relatively short and use a limited vocabulary, the dimensionality of our embeddings can be much lower than in other common applications of the word2vec model. We experimentally checked that using more than 16 dimensions brings negligible improvements, and thus we used an embedding size of 10 in our further experiments. To learn the embeddings, we used the skip-gram model implemented in TensorFlow. Apart from the embedding size, standard parameter values were used, i.e., the context size was set to 10 and the batch size to 256. The model was trained for 300 epochs using a stochastic gradient descent optimizer, with a learning rate of 0.1, decreased by a factor of 10^−1 after every 100 epochs. In our final solution, we used a vectorizer that produced 750 elements, including all low-level features for both players, and utilized the embeddings to represent all cards and minions.

B. State evaluation with value network

Our state evaluation heuristic uses a fully connected neural network to provide the win probabilities of each player. The network consists of three dense layers with 256, 128 and 64 neurons, respectively, and uses the tanh activation function. The input is a vector of size 750 (as described in the previous section), while the output consists of two neurons with a softmax activation. The network thus solves a classification task: given a state, predict the winner. The training data for the network is generated by recording games played between bots. During a simulation, the state of the game is vectorized to a vector S at each step, and the final score of the game is stored as a two-element vector: score = [p1_score, p2_score]. Next, the vectorized states are sampled randomly with some probability p, and pairs [S, score] are added to the training dataset. Random sampling is required, as consecutive states are highly correlated. Finally, the network is trained to predict score given a state vector.
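The forward pass of such a value network can be sketched in plain Python. The weights below are random placeholders rather than trained parameters, and a real implementation would use TensorFlow; the sketch only shows the 750 → 256 → 128 → 64 → 2 shape with tanh hidden layers and a softmax head.

```python
import math
import random

# Sketch of the value network's forward pass with placeholder weights.

def dense(x, weights, biases, act):
    # one fully connected layer: weights is a list of per-neuron weight vectors
    return [act(sum(xi * wij for xi, wij in zip(x, w)) + b)
            for w, b in zip(weights, biases)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def make_layer(rng, n_in, n_out):
    # random placeholder weights; a trained network would load real ones
    return ([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def value_network(x, layers):
    for weights, biases in layers[:-1]:
        x = dense(x, weights, biases, math.tanh)            # hidden layers: tanh
    weights, biases = layers[-1]
    return softmax(dense(x, weights, biases, lambda v: v))  # 2-way softmax head

rng = random.Random(42)
sizes = [750, 256, 128, 64, 2]   # input 750, dense 256-128-64, two outputs
layers = [make_layer(rng, a, b) for a, b in zip(sizes, sizes[1:])]
state_vec = [rng.uniform(-1.0, 1.0) for _ in range(750)]
probs = value_network(state_vec, layers)  # [P(player 1 wins), P(player 2 wins)]
```

Because of the softmax head, the two outputs always form a probability distribution over the winner, which is what lets MCTS treat them as win probabilities.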
We used the ADAM optimizer with a learning rate of 0.001. Value networks can be trained to predict the scores of games played with different decks, as well as from the perspective of either of the two players. However, the predictions are more accurate if separate networks are trained for particular decks, and even for particular player positions (first or second player). In our preliminary tests we created a dataset with over 3.5M samples from games played by strong MCTS bots (cheater MCTS with 1 second per move) playing with 400 different decks. The network was trained to predict the outcomes of games played with any of the available decks and by either player. The accuracy of the value network trained on this dataset, evaluated on a separate validation set, reached 0.76. We have used the trained value network for early termination of random simulations. The termination was done after the last move of a player in a turn, but not earlier than after k = 20 steps. After termination, the statistics in the MCTS tree were updated with the probabilities of winning obtained from the value network.

C. Iterative learning - mastering Hearthstone

To further improve the performance of our solution, we have prepared an environment for continuous, iterated learning of our machine learning models. The main idea is that MCTS with a heuristic may be used to generate games of progressively better quality. Those games may then be used to create more accurate heuristics, which may in turn be used to generate games of even better quality. This process may be repeated many times to further optimize the heuristics. In our approach to iterative learning, we started with plain MCTS to generate over 20000 games. Next, those games were used to build an initial dataset consisting of randomly selected states and the corresponding scores. Models for value networks were trained and used to generate the next version of the bot.
Then, in each iteration, the bot played 3000 games, from which new state-score pairs were sampled and added to the training dataset. The training dataset was clipped to 1M samples, so that after a few iterations the oldest samples were removed and the most recent samples appended, as in a FIFO buffer. The state-score pairs were sampled with probability p = 0.5. In each iteration, the value networks were retrained from scratch using 80% of the training dataset. The remaining 20% was used for validation of the network. Using iterated learning, we were able to achieve an accuracy of 0.775 for the first player and 0.794 for the second player, when training for one type of deck only. In the next section we describe in detail the performance of particular bots.

V. EXPERIMENTS

We have conducted a series of experiments to measure the skill of various Hearthstone bots based on MCTS and different heuristics. Due to the high complexity of Hearthstone, mainly caused by the large number of possible decks and the impact of random effects on the game outcome, we have restricted our test cases to only two decks: ZooWarlock and CubeWarlock. Moreover, we have fixed the positions of both players, so that the ZooWarlock deck was always played by the first player and CubeWarlock by the second. In order to obtain the best possible version of the value network, we ran iterative training for 64 iterations. Next, we created a Hearthstone bot for each version of the value network obtained during the iterative learning. Finally, we used the 64 versions of the bot to play over 50k matches among themselves and assigned a Glicko-2 rating [31] to each bot. Based on the Glicko-2 rating, we selected the best bot, and thus the best value network, for the first and second player (obtained from the 21st and 33rd iteration, respectively).
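The FIFO training buffer used in the iterative-learning loop described above can be sketched as follows. The capacity is shrunk from the paper's 1M for illustration, and the class and method names are ours, not the paper's.

```python
import random
from collections import deque

# Sketch of the FIFO training buffer: sampled (state, score) pairs are
# appended, and the deque's maxlen clips the dataset so the oldest
# samples fall out once the capacity is reached.

class ReplayBuffer:
    def __init__(self, capacity, sample_prob=0.5, seed=0):
        self.data = deque(maxlen=capacity)   # FIFO clipping to `capacity`
        self.p = sample_prob
        self.rng = random.Random(seed)

    def add_game(self, states, score):
        # random sampling de-correlates consecutive states of one game
        for s in states:
            if self.rng.random() < self.p:
                self.data.append((s, score))

    def split(self, train_frac=0.8):
        items = list(self.data)
        cut = int(len(items) * train_frac)
        return items[:cut], items[cut:]      # 80% train / 20% validation

buf = ReplayBuffer(capacity=5, sample_prob=1.0)  # p=1 keeps the demo deterministic
buf.add_game([1, 2, 3], (1, 0))   # a won game: three "vectorized" states
buf.add_game([4, 5, 6], (0, 1))   # a lost game: the oldest sample is pushed out
train, val = buf.split()
```

With a real 1M-sample capacity and p = 0.5, the buffer behaves the same way: once full, each newly sampled pair evicts the oldest one.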
For our final evaluation, we compared plain MCTS (denoted by mcts) with two different heuristics: a) the previously selected best value networks from iterative learning, denoted by V; b) the board solver described in section III-B3, denoted by S. We measured the impact of the value network, the board solver, and both of them combined. Each configuration of the bot was used to play 500 games against the plain MCTS bot. Moreover, we also compared our solution with a randomly playing bot. To have a baseline for the performance, a 500-game match between two plain MCTS bots was played as well. The games were played with two time limits per move for MCTS: 0.5 and 1.0 second. The results are presented in Tables I and II. The strength of each bot is measured by the percentage of games won. The baseline win rates are 73.5% for the first player and 26.5% for the second in the case of the 0.5-second-per-move time limit. Increasing the time limit improves the strength of the second player, resulting in win rates of 70.6% for the first player and 29.4% for the second. The evaluation results show that each heuristic has a noticeable impact on the strength of the bot. As the first player already has a high win rate, adding heuristics improves it by up to 9 percentage points. However, in the case of the second player, adding heuristics may even double the win rate. It is important to note here that the type of deck used has a huge impact on the strength of the bot. The deck used by the first player has an aggressive, but fairly straightforward, style of play. The deck used by the second player has, on the other hand, a lot of complex strategies and needs to be played carefully; yet, used by a skillful player, it has a much greater winning potential than the first deck. This fact may help explain why the strength of the second player increases so dramatically when well-crafted heuristics are used.
Moreover, heuristics provide a larger advantage when playing with a lower time-per-move limit, as MCTS performs fewer iterations. A combination of the value network and the board solver provides the greatest boost to the bot's strength when only 0.5 seconds per move are available for the MCTS simulations. With 1 second per move available, the difference between using only the value network and the combination of the value network and board solver is minimal. Finally, we arranged matches between a few Hearthstone players and our bot. The results are presented in Table III. Games were played by two regular players (Hearthstone rank > 15, which is held by approx. 75% of players) and two players with a Legendary rank (the best rank, held by less than 0.5% of players).

VI. CONCLUSIONS

In this paper, a fully-fledged approach to constructing a Hearthstone-playing bot was presented. Some novel features of the approach include a modification of the MCTS algorithm that handles randomness without explicitly defined nature moves, a combination of the PIMC and ISMCTS methods to tackle imperfect information, and a heuristic solver for calculating attacks in Monte Carlo simulations. In addition, we designed and conducted machine learning experiments aimed at learning game-state evaluation functions. Finally, an iterative learning loop aimed at creating the "ultimate bot" was proposed. We can conclude that the resulting agent is likely to be among the strongest Hearthstone bots at the moment. Although Hearthstone has become a testbed for AI, no universal benchmarking methods have yet been proposed, so it is difficult to assess the strength other than by human observation, self-play between various versions of the agent, or a random player. However, in all cases, the proposed solution shows its upper hand. The bot is able to win, with impressive consistency, 100% of games against the random player.
It is also capable of winning games against Legend-rank players, which alone can be regarded as very promising. The human players reported that in many situations they felt the bot played really well. Finally, we have shown the progressive improvement of the bot's skills by sparring it against its previous versions. We designated two decks for this experiment, but the approach can easily be generalized to any number of decks, e.g., as an ensemble that chooses the right model (or even blends a few of them) for the deck on the fly. In order to benchmark our agent against other Hearthstone bots, we plan to submit it to the 2018 Hearthstone AI Competition held under the CIG (Computational Intelligence in Games) conference. Our submission to this competition will differ from the approach described in this paper in several details. It will work with the SabberStone (https://github.com/HearthSim/SabberStone) simulation engine, as this is the official engine to be used during the competition. This simulator is only able to simulate approximately 200 games per second on a modern high-end consumer PC, whereas our simulator performs 10000 games on average. Because of this, we chose to limit the depth of the Monte Carlo simulations to the end of a single turn. At the end of the turn, the state evaluation function powered by machine learning will be used. We hope that the solutions adopted for the CIG competition will help us design an even more cunning artificial Hearthstone agent and, as a consequence, move us one step further in the pursuit of the Grail of video games - smarter and more challenging AI.
Algorithm 2: Pseudocode of the inner MCTS loop. The findOrCreate method accepts an information set and returns the corresponding node from the transposition table.

 1: procedure ITERATE(state)
 2:   rootNode ← createRoot(state)
 3:   node ← rootNode                        ⊲ current node
 4:   while elapsedTime < allotedTime do
 5:     movingState ← determinize(state)
 6:     while mcts.selection ≠ finished do
 7:       if movingState.terminal = false then
 8:         node ← node.select(movingState)
 9:       end if
10:     end while
11:     propagate(simulation(movingState))
12:   end while
13: end procedure

 1: procedure NODE.SELECT(movingState)
 2:   moves ← movingState.getMoves()
 3:   currentEdges ← []
 4:   for each move in moves do
 5:     edge ← allEdges[move]
 6:     if edge not found then
 7:       edge ← new edge(move)
 8:       allEdges[move] ← edge
 9:     end if
10:     edge.N ← edge.N + 1                  ⊲ incr. observed count
11:     currentEdges.push(edge)
12:   end for
13:   chosenEdge ← selection(currentEdges)   ⊲ UCT
14:   chosenMove ← chosenEdge.getMove()
15:   chosenEdge.V ← chosenEdge.V + 1        ⊲ incr. visit count
16:   if chosenEdge.V == 1 then
17:     mcts.selection ← finished
18:   end if
19:   movingState.apply(chosenMove)
20:   is ← capture(movingState)              ⊲ create IS
21:   tt ← mcts.getTranspositionTable()
22:   chosenEdge.nextNode ← tt.findOrCreate(is)
23:   return chosenEdge.nextNode
24: end procedure

TABLE I: Evaluation results - 0.5 second per move
P1        P1 wins   P2 wins   P2        reported win %
mcts        735       265     mcts      73.5% (P1) / 26.5% (P2)
mctsVS      500         0     random    100.0% (P1)
mctsVS      391       108     mcts      78.4% (P1)
mctsV       410        90     mcts      82.0% (P1)
mctsS       395       105     mcts      79.0% (P1)
random        0       500     mctsVS    100.0% (P2)
mcts        219       280     mctsVS    56.1% (P2)
mcts        249       251     mctsV     50.2% (P2)
mcts        266       234     mctsS     46.8% (P2)

TABLE II: Evaluation results - 1 second per move
P1        P1 wins   P2 wins   P2        reported win %
mcts        705       294     mcts      70.6% (P1) / 29.4% (P2)
mctsVS      500         0     random    100.0% (P1)
mctsVS      364       135     mcts      72.9% (P1)
mctsV       380       120     mcts      76.0% (P1)
mctsS       358       143     mcts      71.6% (P1)
random        0       500     mctsVS    100.0% (P2)
mcts        224       276     mctsVS    55.2% (P2)
mcts        220       279     mctsV     55.9% (P2)
mcts        263       236     mctsS     47.3% (P2)

TABLE III: A summary of results obtained in games between AI agents and human opponents
P1           P1 wins   P2 wins   P2           reported win %
Regular         7         7     mctsVS-1s    50%
Legend         12         9     mctsVS-1s    43% (P2)
mctsVS-1s       9         6     Regular      60% (P1)
mctsVS-1s       3        15     Legend       17% (P1)
REFERENCES

[1] P. García-Sánchez, A. Tonda, G. Squillero, A. Mora, and J. J. Merelo, "Evolutionary deckbuilding in Hearthstone," in IEEE Conference on Computational Intelligence and Games (CIG), 2016, pp. 1-8.
[2] A. Santos, P. A. Santos, and F. S. Melo, "Monte Carlo tree search experiments in Hearthstone," in IEEE Conference on Computational Intelligence and Games (CIG), New York, NY, USA, 2017, pp. 272-279. doi: 10.1109/CIG.2017.8080446.
[3] S. Zhang and M. Buro, "Improving Hearthstone AI by learning high-level rollout policies and bucketing chance node events," in IEEE Conference on Computational Intelligence and Games (CIG), New York, NY, USA, 2017, pp. 309-316. doi: 10.1109/CIG.2017.8080452.
[4] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484-489, 2016.
[5] P. I. Cowling, E. J. Powley, and D. Whitehouse, "Information set Monte Carlo tree search," IEEE Transactions on Computational Intelligence and AI in Games, vol. 4, no. 2, pp. 120-143, June 2012.
[6] E. Bursztein, "I am a legend: Hacking Hearthstone using statistical learning methods," in IEEE Conference on Computational Intelligence and Games (CIG), Santorini, Greece, 2016, pp. 1-8. doi: 10.1109/CIG.2016.7860416.
[7] A. Janusz, T. Tajmajer, and M. Świechowski, "Helping AI to play Hearthstone: AAIA'17 data mining challenge," in Proceedings of the 2017 Federated Conference on Computer Science and Information Systems (FedCSIS), Prague, Czech Republic, 2017, pp. 121-125. doi: 10.15439/2017F573.
[8] Ł. Grad, "Helping AI to play Hearthstone using neural networks," in Proceedings of the 2017 Federated Conference on Computer Science and Information Systems (FedCSIS), Prague, Czech Republic, 2017, pp. 131-134. doi: 10.15439/2017F561.
[9] M. Moravčík, M. Schmid, N. Burch, V. Lisý, D. Morrill, N. Bard, T. Davis, K. Waugh, M. Johanson, and M. Bowling, "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker," Science, vol. 356, no. 6337, pp. 508-513, 2017.
[10] P. I. Cowling, C. D. Ward, and E. J. Powley, "Ensemble determinization in Monte Carlo tree search for the imperfect information card game Magic: The Gathering," IEEE Transactions on Computational Intelligence and AI in Games, vol. 4, no. 4, pp. 241-257, 2012.
[11] R. Munos et al., "From bandits to Monte-Carlo tree search: The optimistic principle applied to optimization and planning," Foundations and Trends in Machine Learning, vol. 7, no. 1, pp. 1-129, 2014.
[12] D. Lee, "Game theory and neural basis of social decision making," Nature Neuroscience, vol. 11, no. 4, p. 404, 2008.
[13] M. Buro and T. Furtak, "RTS games as test-bed for real-time AI research," in Proceedings of the 7th Joint Conference on Information Science (JCIS 2003), 2003, pp. 481-484.
[14] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton, "A survey of Monte Carlo tree search methods," IEEE Transactions on Computational Intelligence and AI in Games, vol. 4, no. 1, pp. 1-43, 2012.
[15] S. Gelly, L. Kocsis, M. Schoenauer, M. Sebag, D. Silver, C. Szepesvári, and O. Teytaud, "The grand challenge of computer Go: Monte Carlo tree search and extensions," Communications of the ACM, vol. 55, no. 3, pp. 106-113, Mar. 2012.
[16] B. Arneson, R. B. Hayward, and P. Henderson, "Monte Carlo tree search in Hex," IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 4, pp. 251-258, 2010.
[17] O. Syed and A. Syed, "Arimaa - a new game designed to be difficult for computers," Institute for Knowledge and Agent Technology, vol. 26, no. 2, 2003.
[18] M. R. Genesereth, N. Love, and B. Pell, "General Game Playing: Overview of the AAAI competition," AI Magazine, vol. 26, no. 2, pp. 62-72, 2005.
[19] J. Levine, C. B. Congdon, M. Ebner, G. Kendall, S. M. Lucas, R. Miikkulainen, T. Schaul, and T. Thompson, "General Video Game Playing," Dagstuhl Follow-Ups, vol. 6, 2013.
[20] I. Szita, G. Chaslot, and P. Spronck, "Monte-Carlo tree search in Settlers of Catan," in Advances in Computer Games. Springer, 2009, pp. 21-32.
[21] D. Robilliard, C. Fonlupt, and F. Teytaud, "Monte-Carlo tree search for the game of 7 Wonders," in Workshop on Computer Games. Springer, 2014, pp. 64-77.
[22] J. R. Long, N. R. Sturtevant, M. Buro, and T. Furtak, "Understanding the success of perfect information Monte Carlo sampling in game tree search," in AAAI, 2010.
[23] P. I. Cowling, E. J. Powley, and D. Whitehouse, "Information set Monte Carlo tree search," IEEE Transactions on Computational Intelligence and AI in Games, vol. 4, no. 2, pp. 120-143, 2012.
[24] M. H. Andersson and H. H. Hesselberg, "Programming a Hearthstone agent using Monte Carlo tree search," Master's thesis, NTNU, 2016.
[25] K. Waledzik and J. Mańdziuk, "An automatically generated evaluation function in General Game Playing," IEEE Transactions on Computational Intelligence and AI in Games, vol. 6, no. 3, pp. 258-270, 2014.
[26] G. M. J. B. Chaslot, M. H. M. Winands, H. J. van den Herik, J. W. H. M. Uiterwijk, and B. Bouzy, "Progressive strategies for Monte-Carlo tree search," New Mathematics and Natural Computation, vol. 4, no. 3, pp. 343-357, 2008.
[27] M. Świechowski and J. Mańdziuk, "Self-adaptation of playing strategies in General Game Playing," IEEE Transactions on Computational Intelligence and AI in Games, vol. 6, no. 4, pp. 367-381, Dec. 2014.
[28] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel et al., "Mastering chess and shogi by self-play with a general reinforcement learning algorithm," arXiv preprint arXiv:1712.01815, 2017.
[29] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS'13), 2013, pp. 3111-3119.
[30] Q. Le and T. Mikolov, "Distributed representations of sentences and documents," in Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014, pp. 1188-1196.
[31] M. E. Glickman, "The Glicko system," Boston University, 1995.
Elimination of imaginaries in C((Γ))

Mariana Vicaría

May 10, 2022

arXiv:2109.08140v2 [math.LO] · doi:10.1112/jlms.12751

Abstract. In this paper we study elimination of imaginaries in henselian valued fields of equicharacteristic zero whose residue field is algebraically closed. The results are sensitive to the complexity of the value group. We focus first on the case where the ordered abelian group has finite spines, and then prove a stronger result in the dp-minimal case. In [26] it was shown that an ordered abelian group with finite spines weakly eliminates imaginaries once one adds sorts for the quotient groups Γ/∆ for each definable convex subgroup ∆, and sorts for the quotient groups Γ/(∆ + lΓ) where ∆ is a definable convex subgroup and l ∈ N_{≥2}. We refer to these sorts as the quotient sorts. In [28] F. Jahnke, P. Simon and E. Walsberg characterized dp-minimal ordered abelian groups as those without singular primes, i.e. those such that [Γ : pΓ] < ∞ for every prime number p. We prove the following two theorems:

Theorem. Let K be a henselian valued field of equicharacteristic zero with residue field algebraically closed and value group of finite spines. Then K admits weak elimination of imaginaries once one adds codes for all the definable O-submodules of K^n for each n ∈ N, and the quotient sorts for the value group.

Theorem. Let K be a henselian valued field of equicharacteristic zero, residue field algebraically closed and dp-minimal value group. Then K eliminates imaginaries once one adds codes for all the definable O-submodules of K^n for each n ∈ N, the quotient sorts for the value group and constants to distinguish the elements of each of the finite groups Γ/ℓΓ, where ℓ ∈ N_{≥2}.

Introduction

The model theory of henselian valued fields has been a major topic of study during the last century. It was initiated by Robinson's model completeness results for algebraically closed valued fields in [17].
Remarkable work has been achieved by Haskell, Hrushovski and Macpherson to understand the model theory of algebraically closed valued fields. In a sequence of papers [2] and [3] they developed the notion of stable domination, which rather than being a new form of stability should be understood as a way to apply techniques of stability in the setting of valued fields. Further work of Ealy, Haskell and Maříková in [1], in the setting of real closed convexly valued fields, suggested that having a stable part of the structure is not fundamental to achieve domination results, and indicated that the right notion should be residue field domination, i.e. domination by the sorts internal to the residue field. Our main motivation for the present document arises from the natural question of how much further a notion of residue field domination could be extended to broader classes of valued fields, to gain a deeper model theoretic insight into henselian valued fields; the first step is finding a reasonable language in which the valued field eliminates imaginaries. The starting point of this project is the Ax-Kochen theorem, which states that the first order theory of a henselian valued field of equicharacteristic zero, or of unramified mixed characteristic with perfect residue field, is completely determined by the first order theories of its value group and its residue field. A natural principle follows from this theorem: model theoretic questions about the valued field itself can be understood by reducing them to its residue field, its value group and their interaction in the field. A fruitful application of this principle has been achieved in describing the class of definable sets. For example, in [4] Pas proved field quantifier elimination relative to the residue field and the value group once angular component maps are added, in the equicharacteristic case. Further studies of Basarab and F.V.
Kuhlmann established quantifier elimination relative to the RV sorts [see [5] and [7], respectively]. The question of whether a henselian valued field eliminates imaginaries in a given language is of course subject to the complexity of its value group and its residue field, as both are interpretable structures in the valued field itself. The case of algebraically closed valued fields was settled by Haskell, Hrushovski and Macpherson in their important work [3], where elimination of imaginaries for ACVF is achieved once the geometric sorts S_n (codes for the O-lattices of rank n) and T_n (codes for the residue classes of the elements of S_n) are added. This proof was later significantly simplified by Will Johnson in [12], using a criterion isolated by Hrushovski [see [16]]. Recent work has achieved elimination of imaginaries in some other examples of henselian valued fields, such as separably closed valued fields in [18], the p-adic case in [19] and enrichments of ACVF in [20]. However, the above results are all obtained for particular instances of henselian valued fields, while the more general approach of obtaining a relative statement for broader classes of henselian valued fields remains a very interesting open question. Following the Ax-Kochen style principle, it seems natural to first attempt to solve this question by looking at the problem in two orthogonal directions: either by making the residue field as docile as possible and studying which troubles the value group brings into the picture, or by making the value group tame and understanding the difficulties that the residue field contributes to the problem. Hils and Rideau [21] proved that, under the assumptions that the value group is definably complete and that the residue field eliminates the ∃^∞ quantifier, any definable set admits a code once the geometric sorts and the linear sorts are added to the language.
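For readers unfamiliar with them, the geometric sorts of [3] admit a compact description. The following is the standard presentation (recalled here by us; the notation O for the valuation ring and m for its maximal ideal is an assumption of this sketch, not fixed in the text above):

```latex
S_n \;=\; \mathrm{GL}_n(K)/\mathrm{GL}_n(\mathcal{O})
      \;\longleftrightarrow\; \{\text{free } \mathcal{O}\text{-lattices } \Lambda \subseteq K^n \text{ of rank } n\},
\qquad
T_n \;=\; \bigcup_{\Lambda \in S_n} \Lambda/\mathfrak{m}\Lambda .
```

A matrix A ∈ GL_n(K) codes the lattice generated by its rows, two matrices coding the same lattice exactly when they differ by a factor in GL_n(O); each quotient Λ/mΛ is an n-dimensional vector space over the residue field.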
Any definably complete ordered abelian group is either divisible or a Z-group (i.e. a model of Presburger arithmetic). This paper addresses the first approach in the setting of henselian valued fields of equicharacteristic zero: we suppose the residue field to be algebraically closed, and we obtain results which are sensitive to the complexity of the value group. We first analyze the case where the value group has finite spines. An ordered abelian group with finite spines weakly eliminates imaginaries once we add sorts for the quotient groups Γ/∆ for each definable convex subgroup ∆, and sorts for the quotient groups Γ/(∆ + lΓ) where ∆ is a definable convex subgroup and l ∈ N_{≥2}. We refer to these sorts as the quotient sorts. The first result that we obtain is:

Theorem 1.1. Let K be a henselian valued field of equicharacteristic zero, with residue field algebraically closed and value group with finite spines. Then K admits weak elimination of imaginaries once we add codes for all the definable O-submodules of K^n for each n ∈ N, and the quotient sorts for the value group.

Later, we prove a stronger result in the dp-minimal case, namely:

Theorem 1.2. Let K be a henselian valued field of equicharacteristic zero, with residue field algebraically closed and dp-minimal value group. Then K eliminates imaginaries once we add codes for all the definable O-submodules of K^n for each n ∈ N, the quotient sorts for the value group, and constants to distinguish the elements of the finite groups Γ/ℓΓ, where ℓ ∈ N_{≥2}.

This document is organized as follows:

• Section 2: We introduce the required background, including quantifier elimination statements, the state of the model theory of ordered abelian groups and some results about valued vector spaces.

• Section 3: We study definable O-modules of K^n.

• Section 4: We start by presenting Hrushovski's criterion to eliminate imaginaries. We introduce the stabilizer sorts, where the O-submodules of K^n can be coded.
• Section 5: We prove that each of the conditions of Hrushovski's criterion holds: the density of definable types in definable sets in 1-variable X ⊆ K, and the existence of canonical bases for definable types in the stabilizer sorts and Γ^eq. We conclude this section by proving weak elimination of imaginaries, down to the stabilizer sorts, for any henselian valued field of equicharacteristic zero with residue field algebraically closed and value group with finite spines.

• Section 6: We show a complete elimination of imaginaries statement when the value group is dp-minimal. We prove that any finite set of tuples in the stabilizer sorts can be coded.

Acknowledgements: The author would like to thank Pierre Simon and Thomas Scanlon for many insightful mathematical conversations, their time, their constant support, and the elegance of their mathematical approach. The author would particularly like to thank Scanlon for having introduced her to the model theory of valued fields with great mathematical generosity and curiosity. The author would like to express her gratitude for the financial support given by the NSF grant 1600441, which allowed her to attend the conference on Model Theory of Valued Fields in Paris. The author would also like to express her gratitude to the anonymous referee for a very careful reading and for pointing out a gap in the original draft; the provided list of comments helped significantly to improve the presentation of the paper, and we extend a warm message of gratitude to them.

Some results on the model theory of ordered abelian groups

In this subsection we summarize many interesting results about the model theory of ordered abelian groups. We start by recalling the following folklore fact.

Fact 2.4. Let (Γ, ≤, +, 0) be a non-trivial ordered abelian group. Then the topology induced by the order on Γ is discrete if and only if Γ has a minimum positive element. In this case we say that Γ is discrete; otherwise we say that it is dense.
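Three standard examples illustrate Fact 2.4 (our illustrations, not taken from the text):

```latex
\begin{itemize}
  \item $(\mathbb{Z}, \le, +, 0)$ is discrete: $1$ is the minimum positive element.
  \item $(\mathbb{Q}, \le, +, 0)$ is dense: for any $q > 0$ one has $0 < q/2 < q$.
  \item $\mathbb{Z} \times_{\mathrm{lex}} \mathbb{Z}$ is discrete: $(0,1)$ is the minimum
        positive element in the lexicographic order.
\end{itemize}
```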
The following notions were isolated in the sixties by Robinson and Zakon in [6] to understand some model complete extensions of the theory of ordered abelian groups.

Definition 2.5. Let Γ be an ordered abelian group and n ∈ N_{≥2}.
1. Let γ ∈ Γ. We say that γ is n-divisible if there is some β ∈ Γ such that γ = nβ.
2. We say that Γ is n-divisible if every element γ ∈ Γ is n-divisible.
3. Γ is said to be n-regular if any interval with at least n points contains an n-divisible element.

Definition 2.6. An ordered abelian group Γ is said to be regular if it is n-regular for all n ∈ N.

In their seminal paper [6], Robinson and Zakon completely characterized the possible completions of the theory of regular groups, obtained by extending the first order theory of ordered abelian groups with axioms asserting, for each n ∈ N, that if an interval contains at least n elements then it contains an n-divisible element; see [6, Theorem 4.7]. Robinson and Zakon proved as well that each of these completions is the theory of some archimedean group. In particular, any discrete regular group is elementarily equivalent to (Z, ≤, +, 0).

The following definitions were introduced by Schmitt in [31].

Definition 2.8. We fix an ordered abelian group Γ and n ∈ N_{≥2}. Let γ ∈ Γ. We define:
• A(γ) = the largest convex subgroup of Γ not containing γ.
• B(γ) = the smallest convex subgroup of Γ containing γ.
• C(γ) = B(γ)/A(γ).
• A_n(γ) = the smallest convex subgroup C of Γ such that B(γ)/C is n-regular.
• B_n(γ) = the largest convex subgroup C of Γ such that C/A_n(γ) is n-regular.

In [31, Chapter 2], Schmitt shows that the groups A_n(γ) and B_n(γ) are definable in the language of ordered abelian groups L_OAG = {+, −, ≤, 0} by a first order formula using only the parameter γ. We recall that the set of convex subgroups of an ordered abelian group is totally ordered by inclusion.

Definition 2.9.
Let Γ be an ordered abelian group and n ∈ N_{≥2}. We define the n-regular rank of Γ to be the order type of ({A_n(γ) : γ ∈ Γ ∖ {0}}, ⊆). The n-regular rank of an ordered abelian group Γ is a linear order, and when it is finite we can identify it with its cardinal. In [22], Farré emphasizes that we can characterize the n-regular rank without mentioning the subgroups A_n(γ). The following is [22, Remark 2.2].

Definition 2.10. Let Γ be an ordered abelian group and n ∈ N_{≥2}. Then:
1. Γ has n-regular rank equal to 0 if and only if Γ = {0},
2. Γ has n-regular rank equal to 1 if and only if Γ is n-regular and not trivial,
3. Γ has n-regular rank equal to m if there are convex subgroups ∆_0, . . . , ∆_m of Γ such that:
• {0} = ∆_0 < ∆_1 < ⋅⋅⋅ < ∆_m = Γ,
• for each 0 ≤ i < m, the quotient group ∆_{i+1}/∆_i is n-regular,
• the quotient group ∆_{i+1}/∆_i is not n-divisible for 0 < i < m.
In this case we define RJ_n(Γ) = {∆_0, . . . , ∆_{m−1}}. The elements of this set are called the n-regular jumps.

Definition 2.11. Let Γ be an ordered abelian group. We say that it is poly-regular if it is elementarily equivalent to a subgroup of the lexicographically ordered group (R^n, +, ≤_lex, 0).

In [11] Belegradek studied poly-regular groups and proved that an ordered abelian group is poly-regular if and only if it has finitely many proper definable convex subgroups, all of which are definable over the empty set. In [10, Theorem 2.9] Weispfenning obtained quantifier elimination for the class of poly-regular groups in the language of ordered abelian groups extended with predicates to distinguish the subgroups ∆ + ℓΓ, where ∆ is a convex subgroup and ℓ ∈ N_{≥2}.

Definition 2.12. Let Γ be an ordered abelian group. We say that it has bounded regular rank if it has finite n-regular rank for each n ∈ N_{≥2}. For notation, we will use RJ(Γ) = ⋃_{n∈N_{≥2}} RJ_n(Γ).
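As a concrete illustration of Definitions 2.5 and 2.10 (our own example, not taken from the text): Z is n-regular for every n, since any n consecutive integers contain a multiple of n, while Z ×_lex Z is not 2-regular: the interval of points with first coordinate 1 is infinite but contains no 2-divisible element. In fact its n-regular rank is 2 for every n, with RJ_n = {{0}, {0} × Z}. A brute-force check of the two regularity claims on finite windows:

```python
# Brute-force illustration (ours, not from the paper): Z is n-regular,
# while the lexicographic product Z x_lex Z is not 2-regular.

def z_is_n_regular(n, lo=-50, hi=50):
    """Check on a finite window that every interval of Z with n points
    contains an n-divisible element."""
    for a in range(lo, hi):
        interval = range(a, a + n)  # n consecutive integers
        if not any(p % n == 0 for p in interval):
            return False
    return True

def divisible_lex(pair, n):
    """(a, b) = n*(x, y) holds in Z x Z iff n divides both coordinates."""
    return pair[0] % n == 0 and pair[1] % n == 0

assert z_is_n_regular(2) and z_is_n_regular(7)

# The stripe {(1, k) : 0 <= k < 100} sits inside the lexicographic
# interval [(1, 0), (1, 99)], has 100 points, and contains no
# 2-divisible element, since the first coordinate is odd.
stripe = [(1, k) for k in range(100)]
assert not any(divisible_lex(p, 2) for p in stripe)
```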
The class of ordered abelian groups of bounded regular rank extends the class of poly-regular groups and the class of regular groups. The terminology of bounded regular rank becomes clear with the following proposition (item 3).

Proposition 2.13. Let Γ be an ordered abelian group. The following are all equivalent:
1. Γ has finite p-regular rank for each prime number p.
2. Γ has finite n-regular rank for each n ≥ 2.
3. There is some cardinal κ such that for any H ≡ Γ, |RJ(H)| ≤ κ.
4. For any H ≡ Γ, any definable convex subgroup of H has a definition without parameters.
5. There is some cardinal κ such that for any H ≡ Γ, H has at most κ definable convex subgroups.
Moreover, in this case RJ(Γ) is the collection of all proper definable convex subgroups of Γ, and all of them are definable without parameters. In particular, there are only countably many definable convex subgroups.

Proof. This is [22, Proposition 2.3].

The first results about the model completions of ordered abelian groups appear in [6] (1960), where the notion of n-regularity was isolated.

Definition 2.14.
1. Let Γ be an ordered abelian group and γ ∈ Γ; we say that γ is n-divisible if there is some β ∈ Γ such that γ = nβ.
2. Let n ∈ N_{≥2}. An ordered abelian group Γ is said to be n-regular if any interval with at least n points contains an n-divisible element.
3. Let Γ be an ordered abelian group; we say that it is regular if it is n-regular for all n ∈ N_{≥2}.

Quantifier elimination and the quotient sorts

In [23] Cluckers and Halupczok introduced a language L_qe to obtain quantifier elimination for ordered abelian groups relative to the auxiliary sorts S_n, T_n and T_n^+, whose precise description can be found in [23, Definition 1.5]. This language is similar in spirit to the one introduced by Schmitt in [31], but has lately been preferred by the community as it is more in line with the many-sorted language of Shelah's imaginary expansion M^eq. Schmitt does not distinguish between the sorts S_n, T_n and T_n^+.
Instead, for each n ∈ N he works with a single sort Sp_n(Γ), called the n-spine of Γ, whose description can be found in [24, Section 2]. In [23, Section 1.5] it is explained how the auxiliary sorts of Cluckers and Halupczok are related to the n-spines Sp_n(Γ) of Schmitt. In [22, Section 2] it is shown that an ordered abelian group Γ has bounded regular rank if and only if all the n-spines are finite, and Sp_n(Γ) = RJ_n(Γ). In this case, we define the regular rank of Γ as the cardinal |RJ(Γ)|, which is either finite or ℵ_0. Instead of saying that Γ is an ordered abelian group with finite spines, we prefer to use the classical terminology of bounded regular rank, as it emphasizes the relevance of the n-regular jumps and the role of the divisibilities in describing the definable convex subgroups.

We define the Presburger language L_Pres = {0, 1, +, −, <, (P_m)_{m∈N_{≥2}}}. Given an ordered abelian group Γ we naturally see it as an L_Pres-structure. The symbols {0, +, −, <} take their obvious interpretation. If Γ is discrete, the constant symbol 1 is interpreted as the least positive element of Γ, and as 0 otherwise. For each m ∈ N_{≥2} the symbol P_m is a unary predicate interpreted as mΓ.

Definition 2.15. [The language L_b] Let Γ be an ordered abelian group with bounded regular rank. We view Γ as a multi-sorted structure where:
1. We add a sort for the ordered abelian group Γ, and we equip it with a copy of the language L_Pres extended with predicates to distinguish each of the convex subgroups ∆ ∈ RJ(Γ). We refer to this sort as the main sort.
2. We add a sort for each of the ordered abelian groups Γ/∆, equipped with a copy of the language L^∆_Pres = {0_∆, 1_∆, +_∆, −_∆, <_∆, (P^∆_m)_{m∈N_{≥2}}}. We add as well a map ρ_∆ : Γ → Γ/∆, interpreted as the natural projection map.

Remark 2.16. To keep the notation as simple and clear as possible, for each ∆ ∈ RJ(Γ), n ∈ N_{≥2} and β ∈ Γ/∆ we will write β ∈ n(Γ/∆) instead of P^∆_n(β).
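To make Definition 2.15 concrete, consider (our illustration, not an example from the text) Γ = Z ×_lex Z with ∆ = {0} × Z, its only proper nontrivial definable convex subgroup; the resulting two-sorted L_b-structure is:

```latex
\underbrace{\big(\Gamma;\ 0,1,+,-,<,(P_m)_{m\ge 2},\ \Delta\big)}_{\text{main sort}}
\qquad
\underbrace{\big(\Gamma/\Delta \cong \mathbb{Z};\ L^{\Delta}_{\mathrm{Pres}}\big)}_{\text{quotient sort}}
\qquad
\rho_\Delta \colon \Gamma \to \Gamma/\Delta,\quad (a,b) \mapsto a .
```

Here 1 is interpreted as the minimum positive element (0, 1) of Γ, while 1_∆ is the minimum positive element of Γ/∆, namely 1 ∈ Z.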
The following statement is a direct consequence of [29, Proposition 3.14].

Theorem 2.17. Let Γ be an ordered abelian group with bounded regular rank. Then Γ admits quantifier elimination in the language L_b.

We will consider an extension of this language, denoted L_bq, where for each natural number n ≥ 2 and ∆ ∈ RJ(Γ) we add a sort for the quotient group Γ/(∆ + nΓ) and a map π^∆_n : Γ → Γ/(∆ + nΓ). We will refer to the sorts in the language L_bq as quotient sorts. The following is [26, Theorem 5.1].

Theorem 2.18. Let Γ be an ordered abelian group with bounded regular rank. Then Γ admits weak elimination of imaginaries in the language L_bq, i.e. once one adds all the quotient sorts.

Definable end-segments in ordered abelian groups with bounded regular rank

Definition 2.19.
1. A non-empty set S ⊂ Γ is said to be an end-segment if for any x ∈ S and y ∈ Γ with x < y we have y ∈ S.
2. Let n ∈ N, ∆ ∈ RJ(Γ), β ∈ Γ ∪ {−∞} and ◻ ∈ {≥, >}. The set S^∆_n(β) := {η ∈ Γ : nη + ∆ ◻ β + ∆} is an end-segment of Γ. We call end-segments of this form divisibility end-segments.
3. Let S ⊆ Γ be a definable end-segment and ∆ ∈ RJ(Γ). We consider the projection map ρ_∆ : Γ → Γ/∆, and we write S_∆ to denote ρ_∆(S). This is a definable end-segment of Γ/∆.
4. Let ∆ ∈ RJ(Γ) and let S ⊆ Γ be an end-segment. We say that S is ∆-decomposable if it is a union of ∆-cosets.
5. We denote by ∆_S the stabilizer of S, i.e. ∆_S := {η ∈ Γ : η + S = S}.

Definition 2.20. Let Γ be an ordered abelian group and let S, S′ ⊆ Γ be definable end-segments. We say that S is a translate of S′ if there is some β ∈ Γ such that S = β + S′. Given a family S of definable end-segments, we say that S is complete if every definable end-segment is a translate of some S′ ∈ S.

Fact 2.21. Let Γ be an ordered abelian group with bounded regular rank. Let β, γ ∈ Γ, ∆ ∈ RJ(Γ) and n ∈ N_{≥2}. If β − γ ∈ ∆ + nΓ then S^∆_n(γ) is a translate of S^∆_n(β).

The following is [26, Proposition 3.3].

Proposition 2.22.
Let Γ be an ordered abelian group of bounded regular rank. Any definable end-segment is a divisibility end-segment.

Remark 2.23. Let Γ be an ordered abelian group and ∆ a convex subgroup. Any complete set of representatives in Γ modulo kΓ, for k ∈ N, is also a complete set of representatives of Γ modulo ∆ + kΓ. Moreover, there is an ∅-definable surjective function f : Γ/kΓ → Γ/(∆ + kΓ).

Proof. For the first part of the statement take γ, β ∈ Γ; if γ − β ∈ kΓ then γ − β ∈ ∆ + kΓ. For the second part, consider the ∅-definable function f : Γ/kΓ → Γ/(∆ + kΓ), γ + kΓ ↦ γ + (∆ + kΓ). This function is surjective by the first part of the statement.

Corollary 2.24. Let Γ be an ordered abelian group with bounded regular rank. For each n ∈ N_{≥2} let C_n be a complete set of representatives of the cosets of nΓ in Γ. Define S^∆_n := {S^∆_n(β) : β ∈ C_n}. Then S = ⋃_{∆∈RJ(Γ), n∈N_{≥2}} S^∆_n is a complete family of definable end-segments.

Proposition 2.25. Let S ⊆ Γ be a definable end-segment. Then ∆_S ∈ RJ(Γ). Furthermore, ∆_S = ⋃_{∆∈C} ∆, where C = {∆ ∈ RJ(Γ) : S is ∆-decomposable}.

Definition 2.26. Let S ⊆ Γ be a definable end-segment. Let Σ^gen_S(x) := {x ∈ S} ∪ {x ∉ B : B ⊊ S and B is a definable end-segment}. We refer to this partial type as the generic type in S. This partial type is ⌜S⌝-definable.

The dp-minimal case

In 1984 the classification of the model theoretic complexity of ordered abelian groups was initiated by Gurevich and Schmitt, who proved that no ordered abelian group has the independence property. In recent years finer classifications have been achieved; in particular, dp-minimal ordered abelian groups have been characterized in [28].

Definition 2.27. Let Γ be an ordered abelian group and let p be a prime number. We say that p is a singular prime if [Γ : pΓ] = ∞.

The following result corresponds to [28, Proposition 5.1].

Proposition 2.28. Let Γ be an ordered abelian group. The following conditions are equivalent:
1. Γ does not have singular primes,
2. Γ is dp-minimal.

Definition 2.29. [The language L_dp] Let Γ be a dp-minimal ordered abelian group.
We consider the language extension L_dp of L_bq [see Definition 2.15] where for each n ∈ N_{≥2} we add a set of constants for the elements of the finite group Γ/nΓ. The following is [26, Corollary 5.2].

Corollary 2.30. Let Γ be a dp-minimal ordered abelian group. Then Γ admits elimination of imaginaries in the language L_dp.

The following will be a very useful fact.

Fact 2.31. Let Γ be a dp-minimal ordered abelian group and let S ⊆ Γ be a definable end-segment. Then any complete type q(x) extending Σ^gen_S(x) is ⌜S⌝-definable.

Proof. Let Σ^gen_S(x) be the generic type of S and q(x) any complete extension. Σ^gen_S(x) is ⌜S⌝-definable, and by Theorem 2.17 q(x) is completely determined by the quantifier free formulas. It is sufficient to verify that for each ∆ ∈ RJ(Γ), k ∈ Z and n ∈ N the set

Z = {β ∈ Γ : (ρ_∆(x) − ρ_∆(β) + k_∆ ∈ n(Γ/∆)) ∈ q(x)}

is ⌜S⌝-definable. First, we note that there is a canonical one-to-one correspondence g : (Γ/∆)/n(Γ/∆) → Γ/(∆ + nΓ). Let c = g(k_∆ + n(Γ/∆)) ∈ dcl^eq(∅). Take µ ∈ Γ/nΓ such that (π_n(x) = µ) ∈ q(x), where π_n : Γ → Γ/nΓ is the projection. Let f be the ∅-definable function given by Remark 2.23. Then β ∈ Z if and only if ⊧ π^∆_n(β) = f(µ) + c, and f(µ) + c ∈ dcl^eq(∅).

We conclude this subsection with the following remark, which simplifies the presentation of a complete family in the dp-minimal case.

Remark 2.32. Let Γ be a dp-minimal ordered abelian group. For each n ∈ N_{≥2} let Ω_n be a finite set of constants in Γ distinguishing representatives for each of the cosets of nΓ in Γ. Let S^∆_n := {S^∆_n(d) : d ∈ Ω_n}. The set S_dp = ⋃_{∆∈RJ(Γ), n∈N_{≥2}} S^∆_n is a complete family whose elements are all definable over ∅.

Henselian valued fields of equicharacteristic zero with residue field algebraically closed and value group with bounded regular rank

The main goal of this section is to describe the 1-definable subsets X ⊆ K, where K is a valued field with residue field algebraically closed and value group of bounded regular rank.
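The title object C((Γ)) is the standard example satisfying these hypotheses (a classical construction, recalled here by us rather than defined at this point of the text): the Hahn series field with complex coefficients and exponents in Γ,

```latex
C((\Gamma)) \;=\; \Big\{\, f = \sum_{\gamma \in \Gamma} a_\gamma t^{\gamma}
    \;:\; a_\gamma \in C,\ \operatorname{supp}(f) = \{\gamma : a_\gamma \neq 0\}
    \text{ is well-ordered} \,\Big\},
\qquad
v(f) \;=\; \min \operatorname{supp}(f).
```

This field is maximally complete, hence henselian; its residue field is C, algebraically closed of characteristic zero, and its value group is Γ, so it falls under the present analysis whenever Γ has bounded regular rank.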
The language L

Let (K, v) be a valued field of equicharacteristic zero, whose residue field is algebraically closed and whose value group is of bounded regular rank. We will view this valued field as an L-structure, where L is the language extending L_val in which the value group sort is equipped with the language L_b described in Definition 2.15. Let T be the complete L-first order theory of (K, v). (In particular, we are fixing a complete theory for the value group.)

Corollary 2.33. The first order theory T admits quantifier elimination in the language L.

Proof. This is a direct consequence of Theorem 2.2 and Theorem 2.17.

Description of definable sets in 1-variable

In this subsection we give a description of the definable subsets in 1-variable X ⊆ K, where K ⊧ T. We denote as O its valuation ring.

Definition 2.34. Let (K, O) be a henselian valued field of equicharacteristic zero and let Γ be its value group. Let ∆ be a convex subgroup of Γ. Then the map v_∆ : K → Γ/∆, x ↦ v(x) + ∆, is a henselian valuation on K, commonly called the coarsened valuation induced by ∆. Note that v_∆ = ρ_∆ ∘ v.

The following is a folklore fact: given an end-segment S ⊆ Γ, the set {x ∈ K : v(x) ∈ S} is an O-submodule of K.

Definition 2.36.
1. Let M and N be O-submodules of K; we say that M is a scaling of N if there is some b ∈ K such that M = bN.
2. A family F of definable O-submodules of K is said to be complete if any definable submodule M ⊆ K is a scaling of some O-submodule N ∈ F.
3. A 1-torsor of K is a set of the form a + bI where a, b ∈ K and I ∈ F.
4. A generalized swiss cheese is either a singleton element of the field {a} or a set of the form A ∖ (B_1 ∪ ⋅⋅⋅ ∪ B_n) where A is a 1-torsor and, for each i ≤ n, B_i ⊊ A and B_i is either a 1-torsor or a singleton element {b_i} of the field.
5.
A basic positive congruence formula in the valued field is a formula of the form z·v_∆(x − α) − ρ_∆(β) + k_∆ ∈ n(Γ/∆), where k, z ∈ Z, α ∈ K, β ∈ Γ, n ∈ N_{≥2} and k_∆ = k ⋅ 1_∆, where 1_∆ is the minimum positive element of Γ/∆ if it exists.
6. A basic negative congruence formula in the valued field is a formula of the form z·v_∆(x − α) − ρ_∆(β) + k_∆ ∉ n(Γ/∆), where k, z ∈ Z, α ∈ K, β ∈ Γ, n ∈ N_{≥2} and k_∆ = k ⋅ 1_∆, where 1_∆ is the minimum positive element of Γ/∆ if it exists.
7. A basic congruence formula in the valued field is either a basic positive congruence formula or a basic negative congruence formula in the valued field.
8. A finite congruence restriction in the valued field is a finite conjunction of basic congruence formulas in the valued field.
9. A nice set is a set of the form S ∩ C where S is a generalized swiss cheese and C is the set defined by a finite congruence restriction in the valued field.

To describe completely the definable subsets of K we will need the following lemmas, which permit us to reduce the valuation of a polynomial to the valuation of a linear factor of the form v(x − a). We recall a definition and some results from [13] that will be useful for this purpose.

Definition 2.39. Let (K, w) be a henselian valued field, α ∈ K and S a swiss cheese. Let p(x) ∈ K[x]; we define

m(p, α, S) := max{ i ≤ d : ∃x ∈ S ∀j ≤ d, w(a_i(x − α)^i) ≤ w(a_j(x − α)^j) },

where the a_i are the coefficients of the expansion of p around α, i.e. p(x) = Σ_{i≤d} a_i(x − α)^i. Thus m(p, α, S) is the highest order term of p centered at α which can have minimal valuation (among the other terms of p) in S.

The following is [13, Proposition 3.4].

Proposition 2.40. Let K be a valued field of characteristic zero. Let p(x) ∈ K[x] and let S be a swiss cheese in K. Then there are (disjoint) sub-swiss cheeses T_1, . . . , T_n ⊆ S and α_1, . . .
, α_n ∈ K such that S = ⋃_{1≤i≤n} T_i, where for all x ∈ T_i,

w(p(x)) = w(a_{i m_i}(x − α_i)^{m_i}),

where p(x) = Σ_{k=0}^{d} a_{ik}(x − α_i)^k and m_i = m(p, α_i, T_i). Furthermore, α_1, . . . , α_n can be taken algebraic over the subfield of K generated by the coefficients of p(x).

Though the preceding proposition is stated for a single polynomial, the same result holds for any finite set of polynomials Σ. To obtain the desired decomposition, simply apply the proposition to each p(x) ∈ Σ, then intersect the resulting partitions to get one that works for all p(x) ∈ Σ, using the fact that the intersection of two swiss cheeses is again a swiss cheese.

Fact 2.41. Let Q_1(x), Q_2(x) ∈ K[x] be two polynomials in a single variable. Let R = {x ∈ K : Q_2(x) = 0}. There is a finite union of swiss cheeses K = ⋃_{i≤k} T_i, elements ǫ_i ∈ K, elements γ_i ∈ Γ and integers z_i ∈ Z such that for any x ∈ T_i ∖ R:

w(Q_1(x)) − w(Q_2(x)) = γ_i + z_i·w(x − ǫ_i).

Proof. The statement is a straightforward computation after applying Proposition 2.40, and it is left to the reader.

Proposition 2.42. Let K ⊧ T, and for each ∆ ∈ RJ(Γ) let v_∆ : K → Γ/∆ be the coarsened valuation induced by ∆. Let Q_1(x), Q_2(x) ∈ K[x] and R = {x ∈ K : Q_1(x) = 0 or Q_2(x) = 0}. Let X ⊆ K ∖ R be the set defined by a formula of the form

γ ≤_∆ v_∆(Q_1(x)) − v_∆(Q_2(x))   or   v_∆(Q_1(x)/Q_2(x)) − γ ∈ n(Γ/∆),

where γ ∈ Γ/∆ and n ∈ N. Then X is a finite union of nice sets.

Proof. First we observe that a swiss cheese with respect to the coarsened valuation v_∆ is a generalized swiss cheese with respect to v. The statement follows by a straightforward computation after applying Fact 2.41, and it is left to the reader.

We conclude this section by characterizing the definable sets in 1-variable.

Theorem 2.43. Let K ⊧ T and let X ⊆ K be a definable set. Then X is a finite union of nice sets.

Proof.
By Corollary 2.33, X is a boolean combination of sets defined by formulas of the form γ ≤_∆ v_∆(Q_1(x)) − v_∆(Q_2(x)) or v_∆(Q_1(x)/Q_2(x)) − γ ∈ n(Γ/∆), where ∆ ∈ RJ(Γ), γ ∈ Γ/∆ and n ∈ N_{≥2}. By Proposition 2.42 each of these formulas defines a finite union of nice sets. Because the intersection of two generalized swiss cheeses is again a generalized swiss cheese, and the complement of a generalized swiss cheese is a finite union of generalized swiss cheeses, the statement follows.

O-modules and homomorphisms in maximal valued fields

In this section we recall some results about modules over maximally complete valued fields. We follow ideas of Kaplansky in [14] to characterize the O-submodules of finite dimensional K-vector spaces.

Definition 2.44.
1. Let K be a valued field and O its valuation ring. We say that K is maximal if whenever α_r ∈ K and (integral or fractional) ideals I_r are such that the congruences x − α_r ∈ I_r are pairwise consistent, then there exists in K a simultaneous solution of all the congruences.
2. Let K be a valued field and M ⊆ K^n an O-module. We say that M is maximal if whenever ideals I_r ⊆ O and elements s_r ∈ M are such that the congruences x − s_r ∈ I_r M are pairwise consistent in M, then there exists in M a simultaneous solution of all the congruences.
3. Let N ⊆ K^n be an O-submodule and x ∈ N. We say that x is α-divisible in N if there is some y ∈ N such that x = αy.

We start by recalling a very useful fact.

Fact 2.45. Let K be a henselian valued field of equicharacteristic zero. Then there is an elementary extension K ≺ K′ that is maximal.

Proof. Let K be a henselian valued field of equicharacteristic zero, let T be its L_val-complete first order theory and let C be the monster model of T.

Definition 2.48. Let K be a field and n ∈ N_{≥1}. We say that a set {a_1, . . . , a_n} is an upper triangular basis of the vector space K^n if it is a K-linearly independent set and the matrix [a_1, . . . , a_n] is upper triangular.

Theorem 2.49. Let K be a maximal valued field and let N ⊆ K^n be an O-submodule. Then N is maximal, and there are (integral or fractional) ideals I_1, . . . , I_n and an upper triangular basis {a_1, . . .
, a_n} of K^n such that N = {a_1x_1 + ⋅⋅⋅ + a_nx_n : x_i ∈ I_i}. In this case we say that [a_1, . . . , a_n] is a representation matrix for the module N.

Proof. We proceed by induction on n; the base case is given by Fact 2.47 and Lemma 2.46. For the inductive step, let π : K^{n+1} → K be the projection onto the last coordinate and let M = π(N). We consider the exact sequence of O-modules 0 → N ∩ (K^n × {0}) → N → M → 0. By induction, N ∩ (K^n × {0}) is maximal and of the required form, and there is an upper triangular basis {a_1, . . . , a_n} of K^n × {0} such that [a_1, . . . , a_n] is a representation matrix for N ∩ (K^n × {0}). If M = {0} we are all set, so we may take m ∈ M with m ≠ 0.

Claim 2.49.1. There is some element x ∈ N such that π(x) = m and, for any α ∈ O, if m is α-divisible in M then x is α-divisible in N.

Proof. Let J = {α ∈ O : m is α-divisible in M}. For each α ∈ J, let m_α ∈ M be such that m = αm_α and take n_α ∈ π^{−1}(m_α) ∩ N. Fix an element y ∈ N satisfying π(y) = m and let s_α = y − αn_α ∈ N ∩ (K^n × {0}). Consider S = {x − s_α ∈ αN ∩ (K^n × {0}) : α ∈ J}; this is a system of congruences in N ∩ (K^n × {0}). We will argue that it is pairwise consistent. Let α, β ∈ O; then either α/β ∈ O or β/α ∈ O (or both). Without loss of generality assume that α/β ∈ O. Then

s_α − s_β = (y − αn_α) − (y − βn_β) = βn_β − αn_α = β(n_β − (α/β)n_α), where n_β − (α/β)n_α ∈ N ∩ (K^n × {0}).

Thus s_α is a solution to the system {x − s_α ∈ αN ∩ (K^n × {0})} ∪ {x − s_β ∈ βN ∩ (K^n × {0})}. By maximality of N ∩ (K^n × {0}) we can find an element z ∈ N ∩ (K^n × {0}) that is a simultaneous solution to the whole system of congruences S. Let x = y − z ∈ N; then x satisfies the requirements. Indeed, for each α ∈ J we have z − s_α ∈ αN ∩ (K^n × {0}), so z = s_α + αw for some w ∈ N ∩ (K^n × {0}). Thus x = y − z = y − s_α − αw = y − (y − αn_α) − αw = α(n_α − w) ∈ αN, as desired.

Let s : M → N be the map sending an element αm to αx, where α ∈ K.
As N is a torsion free module, s is well defined. One can easily verify that s is a homomorphism such that π ○ s = id M . Thus N is the direct sum of N ∩ (K n × {0}) and s(M ), so it is maximal by Lemma 2.46. Moreover, [a 1 , . . . , a n , x] is a representation matrix for N , as required.

Proposition 2.50. Let K be a maximal valued field. Let M, N ⊆ K be O-submodules. For any O-homomorphism h ∶ M → K/N there is some a ∈ K such that for any x ∈ M , h(x) = ax + N .

Proof. By Fact 2.47, M = bI where I is a copy of K, O or an (integral or fractional) ideal of O. It is sufficient to prove the statement for b = 1. Let S I = {v(y) ∣ y ∈ I} be the end-segment induced by I. Let {γ α ∣ α ∈ κ} be a co-initial decreasing sequence in S I . Choose an element x α ∈ K such that v(x α ) = γ α ; then for each α < β < κ, x β O ⊆ x α O and I = ⋃ α∈κ x α O.

Claim 2.50.1. For each α ∈ κ there is an element a α ∈ K such that for all x ∈ x α O we have h(x) = a α x + N .

Proof. For each α choose an element y α such that h(x α ) = y α + N and let a α = x −1 α y α . Fix an element x ∈ x α O; then, since x −1 α x ∈ O:

h(x) = h(x α (x −1 α x)) = (x −1 α x) ⋅ h(x α ) = (x −1 α x) ⋅ (a α x α + N ) = a α x + N .

Claim 2.50.2. Given β < α < κ, then a β − a α ∈ x −1 β N .

Proof. Note that x β ∈ x β O ⊆ x α O, so by Claim 2.50.1 we have h(x β ) = a α x β + N = a β x β + N , then (a α − a β )x β ∈ N . Hence, (a α − a β ) ∈ x −1 β N .

Claim 2.50.3. Without loss of generality we may assume that for any α < κ there is some α < α ′ < κ such that for any α ′ < α ′′ < κ, a α − a α ′′ ∉ x −1 α ′′ N .

Proof. Suppose the statement is false. Then there is some α such that for any α < α ′ we can find α ′ < α ′′ such that a α − a α ′′ ∈ x −1 α ′′ N . Define:

h * ∶ I → K/N , x ↦ a α x + N .

We will show that for any x ∈ I, h(x) = h * (x). Fix an element x ∈ I; since < γ α ∣ α ∈ κ > is co-initial and decreasing in S I we can find an element α ′ > α such that v(x) > γ α ′ , so x ∈ x α ′ O ⊆ x α ′′ O.
Then

(a α − a α ′ )x = (a α − a α ′′ )x + (a α ′′ − a α ′ )x, where (a α − a α ′′ )x ∈ x −1 α ′′ xN ⊆ N and (a α ′′ − a α ′ )x ∈ x −1 α ′ xN ⊆ N .

We conclude that (a α − a α ′ )x ∈ N . By Claim 2.50.1 we have h(x) = a α ′ x + N , thus h * (x) = h(x) and h * witnesses the conclusion of the statement.

Claim 2.50.4. There is a subsequence < b α ∣ α ∈ cof(κ) > of < a α ∣ α ∈ κ > that is pseudo-convergent.

Proof. Let g ∶ cof(κ) → κ be a cofinal function in κ, i.e. for any δ ∈ κ there is some α ∈ cof(κ) such that g(α) > δ. We construct the desired sequence by transfinite recursion on cof(κ), building a strictly increasing function f ∶ cof(κ) → κ satisfying the following conditions:

1. for each α < cof(κ) we have b α = a f (α) and f (α) > g(α);
2. for any α < cof(κ) the sequence (b η ∣ η < α) is pseudo-convergent, this is, given η 1 < η 2 < η 3 < α, v(b η3 − b η2 ) > v(b η2 − b η1 );
3. for each α < cof(κ): for any η < α and f (α) < η ′ < κ, v(a η ′ − b α ) > v(b η − b α ) and a η ′ − b α ∉ x −1 η ′ N .

For the base case, set b 0 = a 0 and f (0) = g(0) + 1. Suppose that for µ < cof(κ), f ↾ µ has been defined and < b η ∣ η < µ > has been constructed. Let µ * = sup{f (η) ∣ η < µ}; by Claim 2.50.3 (applied to α = max{µ * , g(µ)}) there is some max{µ * , g(µ)} < v < κ satisfying the following property: for any v < η ′ < κ, a α − a η ′ ∉ x −1 η ′ N . Set f (µ) = v and b µ = a v . We continue by verifying that the three conditions are satisfied. The first condition, b µ = a f (µ) and f (µ) > g(µ), follows immediately by construction. We continue checking that (b η ∣ η ≤ µ) is a pseudo-convergent sequence. Fix η 1 < η 2 < µ; we must show that v(b µ − b η2 ) > v(b η2 − b η1 ). By construction b µ = a f (µ) = a v and f (µ) = v > µ * ≥ f (η 2 ). Since the third condition holds for η 2 we must have v(a v − b η2 ) > v(b η2 − b η1 ), as required. Lastly, we verify that the third condition holds for µ. Let η < µ and v = f (µ) < η ′ ; we aim to show v(a η ′ − b µ ) > v(b µ − b η ).
Suppose by contradiction that this inequality does not hold; then (b µ − b η )/(a η ′ − b µ ) ∈ O. Because the third condition holds for η and, by construction, v = f (µ) > f (η), we have that b µ − b η = a v − a f (η) ∉ x −1 v N . By Claim 2.50.2, a η ′ − b µ = a η ′ − a v ∈ x −1 v N , then:

b µ − b η = ((b µ − b η )/(a η ′ − b µ )) ⋅ (a η ′ − b µ ) ∈ x −1 v N , since (b µ − b η )/(a η ′ − b µ ) ∈ O,

which leads us to a contradiction. It is only left to show that for any f (µ) < η ′ < κ we have that a η ′ − b µ ∉ x −1 η ′ N . By construction, we have chosen b µ = a v where α = max{g(µ), µ * } < v < κ and for any v < η ′ < κ we have: a α − a η ′ ∉ x −1 η ′ N . As α < f (µ), by Claim 2.50.2, a α − a f (µ) ∈ x −1 α N ⊆ x −1 η ′ N . Fix η ′ > v = f (µ); then a η ′ − b µ = a η ′ − a f (µ) ∉ x −1 η ′ N . Otherwise,

a η ′ − a α = (a η ′ − a f (µ) ) + (a f (µ) − a α ) ∈ x −1 η ′ N ,

since both summands lie in x −1 η ′ N and x −1 η ′ N is an O-submodule of K, which leads us to a contradiction.

Since K is maximal there is some a ∈ K that is a pseudo-limit of < b α ∣ α ∈ cof(κ) >. We aim to prove that h(x) = ax + N for x ∈ I. Fix an element x ∈ I. The function f is cofinal in κ because of the first condition combined with the fact that g is cofinal in κ. We can find some α ∈ cof(κ) such that x ∈ x f (α) O ⊆ I. By Claim 2.50.1, h(x) = a f (α) x + N , hence it is sufficient to prove that (a − a f (α) )x ∈ N . As x ∈ x f (α) O it is enough to show that (a − a f (α) ) = (a − b α ) ∈ x −1 f (α) N . Let α < β < κ; by Claim 2.50.2, (b β − b α ) = (a f (β) − a f (α) ) ∈ x −1 f (α) N . Also, v(a − a f (α) ) = v(a f (β) − a f (α) ), thus (a − a f (α) ) = u(a f (β) − a f (α) ) for some u ∈ O × , and therefore (a − a f (α) ) ∈ x −1 f (α) N , as desired.

Valued vector spaces

We introduce valued vector spaces and some facts that will be required throughout this paper. An avid and curious reader can consult [15, Section 2.3] for a more exhaustive presentation. Throughout this section we fix a valued field (K, Γ, v) and a K-vector space V .

Definition 2.51.
A tuple (V, Γ(V ), val, +) is a valued vector space structure if:
1. Γ(V ) is a linear order,
2. there is an action + ∶ Γ × Γ(V ) → Γ(V ) which is order preserving in each coordinate,
3. val ∶ V → Γ(V ) is a map such that for all v, w ∈ V and α ∈ K we have:
• val(v + w) ≥ min{val(w), val(v)},
• val(αv) = v(α) + val(v).

The following fact is [12, Remark 1.2].

Fact 2.52. Let V be a finite dimensional valued vector space over K. Then the action of Γ(K) over Γ(V ) has finitely many orbits. In fact, ∣Γ(V )/Γ(K)∣ ≤ dim K (V ).

Definition 2.53. Let (V, Γ(V ), val, +) be a valued vector space:
1. Let a ∈ V and γ ∈ Γ(V ). A ball in V is a set of the form Ball ≥γ (a) = {x ∈ V ∣ val(x − a) ≥ γ} or Ball >γ (a) = {x ∈ V ∣ val(x − a) > γ}.
2. We say that (V, Γ(V ), val, +) is maximal if every nested family of balls in V has non-empty intersection.

Definition 2.54. Let (V, Γ(V ), val, +) be a valued vector space and let W be a subspace of V . Then (W, Γ(W ), val, +) is also a valued vector space, where Γ(W ) = {val(w) ∣ w ∈ W }. We say that:
1. W is maximal in V if every family of nested balls {Ball γ (x γ ) ∣ γ ∈ S}, where S ⊆ Γ(W ) and x γ ∈ W for each γ ∈ S, that has non-empty intersection in V also has non-empty intersection in W .
2. W ≤ V has the optimal approximation property if for any v ∈ V ∖ W the set {val(v − w) ∣ w ∈ W } attains a maximum.

The following is a folklore fact.

Fact 2.55. Let (V, Γ(V ), val, +) be a valued vector space, and W a subspace of V . The following statements are equivalent:
1. W is maximal in V ,
2. W has the optimal approximation property in V .
Additionally, if W is maximal then it is maximal in V .

We conclude this subsection with the definition of a separated basis.

Definition 2.56. Let (V, Γ(V ), val, +) be a valued vector space. Assume that V is a K-vector space of dimension n. A basis {v 1 , . . . , v n } ⊆ V is a separated basis if for any α 1 , . . . , α n ∈ K we have that:

val( ∑ i≤n α i v i ) = min{val(α i v i ) ∣ i ≤ n}.
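Definition 2.56 is easy to probe in dimension two. The following computation is ours and is included only for illustration; it equips K 2 with the minimum valuation and exhibits one basis that is separated and one that is not.

```latex
% Illustration (not from the text): separated vs. non-separated bases in K^2.
\begin{example}
  Equip $V = K^2$ with $\operatorname{val}(x,y) = \min\{v(x), v(y)\}$, so that
  $\Gamma(V) = \Gamma$ with the translation action. The standard basis
  $\{e_1, e_2\}$ is separated, since
  $\operatorname{val}(\alpha_1 e_1 + \alpha_2 e_2)
    = \min\{v(\alpha_1), v(\alpha_2)\}
    = \min\{\operatorname{val}(\alpha_i e_i) \mid i \le 2\}$.
  By contrast, pick $t \in K$ with $v(t) > 0$. The basis $\{e_1,\ e_1 + t e_2\}$
  is \emph{not} separated: taking $\alpha_1 = 1$ and $\alpha_2 = -1$,
  \[
    \operatorname{val}\bigl(e_1 - (e_1 + t e_2)\bigr)
      = \operatorname{val}(0, -t) = v(t)
      \;>\; 0 = \min\{\operatorname{val}(e_1),\ \operatorname{val}(e_1 + t e_2)\},
  \]
  so the defining equality fails.
\end{example}
```

Roughly speaking, the failure above, where a combination of basis vectors jumps to a strictly larger value, is what a careful choice of representatives in quotient constructions is designed to rule out.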
In this section we study definable O-submodules in henselian valued fields of equicharacteristic zero.

Corollary 3.1. Let (F, v) be a henselian valued field of equicharacteristic zero and N be a definable O-submodule of F n . Then N is definably isomorphic to a direct sum of copies of F , O, or (integral or fractional) ideals of O. Moreover, if N ≅ ⊕ i≤n I i there is some upper triangular basis {a 1 , . . . , a n } of F n such that [a 1 , . . . , a n ] is a representation matrix of N .

Proof. By Fact 2.45 we can find F ′ an elementary extension of F that is maximal, so we can apply Theorem 2.49. As the statement that we are trying to show is first order expressible, it must hold as well in F .

Corollary 3.2. Let (F, v) be a henselian valued field of equicharacteristic zero and let M, N ⊆ F be O-submodules. For any O-homomorphism h ∶ M → F /N there is some a ∈ F such that for any x ∈ M , h(x) = ax + N .

Proof. By Fact 2.45 we can find an elementary extension F ≺ F ′ that is maximal. The statement follows by applying Proposition 2.50, because it is first order expressible.

Definable modules in valued fields of equicharacteristic zero with residue field algebraically closed and value group with bounded regular rank

Let (K, v) be a henselian valued field of equicharacteristic zero with residue field algebraically closed and value group with bounded regular rank. Let O be its valuation ring and T be the complete L-first order theory of (K, v). In this section we study the definable O-modules and torsors. Let I ′ be the complete family of O-submodules of K described in Fact 2.37. From now on we fix a complete family F = I ′ ∖ {0, K}.

Remark 3.3. If K ⊧ T and N is a definable O-submodule of K n , then N ≅ ⊕ i≤n I i , where each I i ∈ F ∪ {0, K}. This follows because F is a complete family of O-modules.

Definition 3.4. Let K ⊧ T . A definable torsor U is a coset in K n of a definable O-submodule of K n ; if n = 1 we say that U is a 1-torsor. Let U be a definable 1-torsor. We say that U is:
1. closed if it is a translate of a submodule of K of the form aO;
2. open if it is either K or a translate of a submodule of the form aI for some a ∈ K, where I ∈ F ∖ {O}.

Definition 3.5. Let (I 1 , . . .
, I n ) ∈ F n be a fixed tuple.
1. An O-module M ⊆ K n is of type (I 1 , . . . , I n ) if M ≅ ⊕ i≤n I i .
2. An O-module M ⊆ K n of type (O, . . . , O) is said to be an O-lattice of rank n.
3. A torsor Z is of type (I 1 , . . . , I n ) if Z = d̄ + M , where M ⊆ K n is an O-submodule of K n of type (I 1 , . . . , I n ).

Proposition 3.6. Let Z be a torsor of type (I 1 , . . . , I n ). Then there is some O-module L ⊆ K n+1 of type (I 1 , . . . , I n , O) such that ⌜Z⌝ and ⌜L⌝ are interdefinable.

Proof. Let N ⊆ K n be the O-submodule and take d̄ ∈ K n such that Z = d̄ + N . Let N 2 = N × {0}, which is an O-submodule of K n+1 , and let b̄ = (d̄, 1). Define the O-module of K n+1 :

L d̄ ∶= N 2 + b̄O = {(n̄ + d̄r, r) ∣ r ∈ O, n̄ ∈ N }.

By a standard computation, one can verify that the definition of L d̄ is independent of the choice of d̄, i.e. if d̄ − d̄ ′ ∈ N then L d̄ = L d̄ ′ . So we can denote L = L d̄ , and we aim to show that L and Z are interdefinable. It is clear that ⌜L⌝ ∈ dcl eq (⌜Z⌝), while ⌜Z⌝ ∈ dcl eq (⌜L⌝) because Z = π 1≤n (L ∩ (K n × {1})), where π 1≤n ∶ K n+1 → K n is the projection onto the first n coordinates.

Definable 1-O-modules

In this subsection we study the quotient modules of 1-dimensional modules.

Notation 3.7. Let M ⊆ K be a definable O-module. We denote by S M ∶= {v(x) ∣ x ∈ M } the end-segment induced by M . We recall as well that we write F to denote the complete family of O-submodules of K previously fixed.

Definition 3.8. A definable 1-O-module is an O-module which is definably isomorphic to a quotient of a definable O-submodule of K by another, i.e. something of the form aI/bJ where a, b ∈ K and I, J ∈ F ∪ {0, K}.

The following operation between O-modules will be particularly useful in our setting.

Definition 3.9. Let N, M be O-submodules of K. We define the colon module Col(N ∶ M ) = {x ∈ K ∣ xM ⊆ N }. It is a well known fact from commutative algebra that Col(N ∶ M ) is also an O-module.

Lemma 3.10. Let K ⊧ T . Let A be a 1-definable O-module.
Suppose that A = A 1 /A 2 , where A 2 ≤ A 1 are O-submodules of K. Then the O-module Hom O (A, A) is definably isomorphic to the 1-definable O-module

(Col(A 1 ∶ A 1 ) ∩ Col(A 2 ∶ A 2 )) / Col(A 2 ∶ A 1 ).

Proof. By Fact 2.45, without loss of generality we may assume K to be maximal, because the statement is first order expressible. Let B = {f ∶ A 1 → A 1 /A 2 ∣ f is a homomorphism and A 2 ⊆ ker(f )}. B is canonically in one-to-one correspondence with Hom O (A, A). By Corollary 3.2, for every homomorphism f ∈ B there is some b f ∈ K satisfying that for any x ∈ A 1 , f (x) = b f x + A 2 , and we say that b f is a linear representation of f .

Claim 3.10.1. Let f ∈ B. If b f is a linear representation of f , then b f ∈ Col(A 1 ∶ A 1 ) ∩ Col(A 2 ∶ A 2 ).

Proof. First we verify that b f ∈ Col(A 1 ∶ A 1 ). Let x ∈ A 1 ; by hypothesis f (x) = b f x + A 2 ∈ A 1 /A 2 . Then there is some y ∈ A 1 such that b f x + A 2 = y + A 2 and therefore b f x − y ∈ A 2 ⊆ A 1 . Consequently, b f x ∈ y + A 1 = A 1 , and as x is an arbitrary element we conclude that b f ∈ Col(A 1 ∶ A 1 ). We check now that b f ∈ Col(A 2 ∶ A 2 ), and we fix an element x ∈ A 2 . By hypothesis, b f x + A 2 = A 2 so b f x ∈ A 2 , and as x ∈ A 2 is an arbitrary element we conclude that b f ∈ Col(A 2 ∶ A 2 ).

Claim 3.10.2. Let f ∈ B. If b f , b ′ f are linear representations of f , then b f − b ′ f ∈ Col(A 2 ∶ A 1 ).

Proof. Let x ∈ A 1 ; by hypothesis f (x) = b f x + A 2 = b ′ f x + A 2 , so (b f − b ′ f )x ∈ A 2 . Because x is arbitrary in A 1 we have that (b f − b ′ f ) ∈ Col(A 2 ∶ A 1 ).

We consider the map

φ ∶ B → (Col(A 1 ∶ A 1 ) ∩ Col(A 2 ∶ A 2 )) / Col(A 2 ∶ A 1 )

that sends an O-homomorphism f to the coset b f + Col(A 2 ∶ A 1 ). By Claim 3.10.2 this map is well defined. By a standard computation φ is an injective O-homomorphism. To show that φ is surjective, let b ∈ Col(A 1 ∶ A 1 ) ∩ Col(A 2 ∶ A 2 ), and consider f b ∶ A 1 → A 1 /A 2 , the map that sends the element x to bx + A 2 .
Because b ∈ Col(A 2 ∶ A 2 ), for any x ∈ A 2 we have that bx ∈ A 2 , thus A 2 ⊆ ker(f b ). Consequently, f b ∈ B and φ(f b ) = b + Col(A 2 ∶ A 1 ).

Lemma 3.11. Let n ∈ N ≥2 and M ⊆ K n be an O-module.
1. Let π n−1 ∶ K n → K n−1 be the projection onto the first (n − 1) coordinates and B n−1 = π n−1 (M ). Take A 1 ⊆ K to be the O-module such that ker(π n−1 ↾ M ) = M ∩ ({0} n−1 × K) = {0} n−1 × A 1 .
2. Let π n ∶ K n → K be the projection onto the last coordinate and B 1 = π n (M ). Let A n−1 ⊆ K n−1 be the O-module such that ker(π n ↾ M ) = M ∩ (K n−1 × {0}) = A n−1 × {0}.

Then A n−1 ≤ B n−1 and both lie in K n−1 , and A 1 ≤ B 1 and both lie in K. The map φ ∶ B n−1 → B 1 /A 1 given by b ↦ a + A 1 , where (b, a) ∈ M , is a well defined homomorphism of O-modules whose kernel is A n−1 . In particular, B n−1 /A n−1 ≅ B 1 /A 1 . Furthermore, if M is definable then φ is also definable.

Proof. Let m̄ ∈ A n−1 ; then (m̄, 0) ∈ M , thus π n−1 (m̄, 0) = m̄ ∈ B n−1 . We conclude that A n−1 is a submodule of B n−1 . Likewise A 1 ≤ B 1 . For the second part of the statement, it is a straightforward computation to verify that the map φ ∶ B n−1 → B 1 /A 1 (defined as in the statement) is a well defined surjective homomorphism of O-modules whose kernel is A n−1 . Lastly, the definability of φ follows immediately from the definability of M .

The stabilizer sorts

An abstract criterion to eliminate imaginaries

We start by recalling Hrushovski's criterion. The following is [16, Lemma 1.17].

Theorem 4.1. Let T be a first order theory with home sort K (meaning that M eq = dcl eq (K)). Let G be some collection of sorts. If the following conditions all hold, then T has weak elimination of imaginaries in the sorts G.
1. Density of definable types: for every non-empty definable set X ⊆ K there is an acl eq (⌜X⌝)-definable type in X.
2. Coding definable types: every definable type in K n has a code in G (possibly infinite).
This is, if p is any (global) definable type in K n , then the set ⌜p⌝ of codes of the definitions of p is interdefinable with some (possibly infinite) tuple from G.

Proof. A very detailed proof can be found in [12, Theorem 6.3]. The first part of the proof shows weak elimination of imaginaries, as it is shown that for any imaginary element e we can find a tuple a ∈ G such that e ∈ dcl eq (a) and a ∈ acl eq (e). ◻

We start by describing the sorts that are required to be added to apply this criterion, and show that any valued field of equicharacteristic zero, with residue field algebraically closed and value group with bounded regular rank, admits weak elimination of imaginaries.

1. { ∑ i≤n x i e i ∣ x i ∈ I i }; we refer to this module as the canonical O-submodule of K n of type (I 1 , . . . , I n ).
2. We denote by B n (K) the multiplicative group of n × n upper triangular invertible matrices.
5. Let U n ⊆ (K n ) n be the set of n-tuples (b̄ 1 , . . . , b̄ n ) such that B = [b̄ 1 , . . . , b̄ n ] is an invertible upper triangular matrix. We define the equivalence relation E (I1,...,In) on U n as: E (I1,...,In) (ā 1 , . . . , ā n ; b̄ 1 , . . . , b̄ n ) holds if and only if (ā 1 , . . . , ā n ) and (b̄ 1 , . . . , b̄ n ) generate the same O-module of type (I 1 , . . . , I n ), i.e. { ∑ 1≤i≤n x i ā i ∣ x i ∈ I i } = { ∑ 1≤i≤n x i b̄ i ∣ x i ∈ I i }.
6. We denote by ρ̃ (I1,...,In) the canonical projection map:

ρ̃ (I1,...,In) ∶ U n → U n /E (I1,...,In) , (ā 1 , . . . , ā n ) ↦ [(ā 1 , . . . , ā n )] E (I1,...,In) .

Remark 4.3.
1. The set {⌜M ⌝ ∣ M ∈ Λ (I1,...,In) } can be canonically identified with B n (K)/ Stab (I1,...,In) . Indeed, by Corollary 3.1, given any O-module M of type (I 1 , . . . , I n ) we can find an upper triangular basis {ā 1 , . . . , ā n } of K n such that [a 1 , . . . , a n ] is a matrix representation of M . The code ⌜M ⌝ is interdefinable with the coset [a 1 , . . . , a n ] Stab (I1,...,In) .
2. Fix some n ∈ N ≥2 and let (I 1 , . . . , I n ) be a fixed tuple.
The sort B n (K)/ Stab (I1,...,In) is in definable bijection with the set of equivalence classes U n /E (I1,...,In) . In fact we can consider the ∅-definable map:

f ∶ U n /E (I1,...,In) → B n (K)/ Stab (I1,...,In) , [(ā 1 , . . . , ā n )] E (I1,...,In) ↦ [ā 1 , . . . , ā n ] Stab (I1,...,In) .

We denote by ρ (I1,...,In) ∶ U n → B n (K)/ Stab (I1,...,In) the composition ρ (I1,...,In) = f ○ ρ̃ (I1,...,In) .

The language L G consists of the following sorts and maps:
1. We equip the value group with the multi-sorted language L bq introduced in Subsection 2.2.1.
2. For each n ∈ N we consider the parametrized family of sorts B n (K)/ Stab (I1,...,In) and maps ρ (I1,...,In) ∶ U n → B n (K)/ Stab (I1,...,In) , where (I 1 , . . . , I n ) ∈ F n .

We refer to the sorts in the language L G as the stabilizer sorts. We denote by G their union, i.e.

K ∪ k ∪ Γ ∪ {Γ/∆ ∣ ∆ ∈ RJ(Γ)} ∪ {Γ/(∆ + nΓ) ∣ ∆ ∈ RJ(Γ), n ∈ N ≥2 } ∪ {B n (K)/ Stab (I1,...,In) ∣ n ∈ N, (I 1 , . . . , I n ) ∈ F n }.

For each Λ ∈ S n , let res(Λ) = Λ ⊗ O k = Λ/MΛ, which is a k-vector space of dimension n. Let T n = ⋃ Λ∈Sn res(Λ) = {(Λ, x) ∣ Λ ∈ S n , x ∈ res(Λ)}.

An explicit description of the stabilizer sorts

In this subsection we state an explicit description of the subgroups Stab (I1,...,In) .

O × ∆ S I = {x ∈ K ∣ v(x) ∈ ∆ S I }.

Proof. This is an immediate consequence of Fact 2.25.

Stab (I1,...,In) = {(a i,j ) 1≤i,j≤n ∈ B n (K) ∣ a ii ∈ O × ∆ S I i and a ij ∈ Col(I i ∶ I j ) for each 1 ≤ i < j ≤ n}.

Proof. This is a straightforward computation and it is left to the reader.

Weak elimination of imaginaries for henselian valued fields with value group with bounded regular rank

Let (K, v) be a henselian valued field of equicharacteristic zero, with residue field algebraically closed and value group with bounded regular rank. Let T be its complete L G -first order theory and M its monster model. In this section we show that both conditions required by Hrushovski's criterion to obtain weak elimination of imaginaries down to the stabilizer sorts hold.
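To make the displayed description of the stabilizer subgroups concrete, here is the rank-2 lattice case. The computation below is ours, included only for illustration:

```latex
% Illustration (not from the text): the stabilizer of the canonical rank-2 lattice.
\begin{example}
  Take $n = 2$ and $(I_1, I_2) = (\mathcal{O}, \mathcal{O})$, so the canonical
  module is the lattice $\mathcal{O}^2$. Here
  $\operatorname{Col}(\mathcal{O} : \mathcal{O})
    = \{x \in K \mid x\mathcal{O} \subseteq \mathcal{O}\} = \mathcal{O}$,
  and a direct computation gives
  \[
    \operatorname{Stab}_{(\mathcal{O},\mathcal{O})}
      = \left\{ \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \in B_2(K)
        \;\middle|\; a, d \in \mathcal{O}^{\times},\ b \in \mathcal{O} \right\}.
  \]
  Indeed, the columns $(a,0)$ and $(b,d)$ generate $\mathcal{O}^2$ over
  $\mathcal{O}$ exactly when $a, d \in \mathcal{O}^{\times}$ and
  $b \in \mathcal{O}$. A coset $M \cdot \operatorname{Stab}_{(\mathcal{O},\mathcal{O})}$
  with $M \in B_2(K)$ then codes the lattice $M\mathcal{O}^2$, as in Remark~4.3.
\end{example}
```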
Density of definable types

In this section we prove density of definable types for 1-definable sets X ⊆ M. There are two ways to tackle this problem. One can use the quantifier elimination (see Corollary 2.33) to obtain a canonical decomposition of X into nice sets T i ∈ acl eq (⌜X⌝) and then build a global type p(x) ⊢ x ∈ T i which is acl eq (⌜T i ⌝)-definable. This approach was successfully achieved by Holly in [25] for the case of ACV F and real closed valued fields, and her work essentially gives a way to code one-definable sets in the main field down to the geometric sorts. It is worth pointing out that finding a canonical decomposition is often detailed technical work. Instead of following this strategy, we follow a different approach that exploits the power of generic types, which are definable partial types.

Definition 5.1. Let U ⊆ M be a definable 1-torsor, and let Σ gen U (x) = {x ∈ U } ∪ {x ∉ B ∣ B ⊊ U is a proper sub-torsor of U }. This is a ⌜U ⌝-definable partial type.

Proposition 5.2. Let U ⊆ M be a definable closed 1-torsor. Then there is a unique complete global type p(x) extending Σ gen U (x), i.e. Σ gen U (x) ⊆ p(x). Moreover, p(x) is ⌜U ⌝-definable.

Proof. Let a ∈ U and Y a = {v(x − a) ∣ x ∈ U } ⊆ Γ. As U is a closed 1-torsor, Y a has a minimum element γ. For any other element b ∈ U , we have that γ = min(Y a ) = min(Y b ), thus γ ∈ dcl eq (⌜U ⌝). By quantifier elimination (see Corollary 2.33) it is sufficient to show that Σ gen U (x) also determines the congruence and coset formulas. Let c be a realization of Σ gen U (x); then for any a ∈ U (M) we have that v(c − a) = γ. Let p(x) = tp(c/M), ∆ ∈ RJ(Γ), ℓ ∈ N and β ∈ Γ. If a ∈ U (M) then:

⊧ v ∆ (c − a) − ρ ∆ (β) + k ∆ ∈ ∆ if and only if ⊧ φ k ∆ (β) ∶= γ − ρ ∆ (β) + k ∆ ∈ ∆, and

⊧ v ∆ (c − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ/∆) if and only if ⊧ ψ k ∆ (β) ∶= γ − ρ ∆ (β) + k ∆ ∈ ℓ(Γ/∆).
We observe that ψ k ∆ (β) and φ k ∆ (β) are L(dcl eq (⌜U ⌝))-formulas, and their definition is completely independent of the choice of c. Now suppose that a ∉ U (M); then for any b ∈ U (M) we have that v(c − a) = v(b − a). Therefore

⊧ v ∆ (c − a) − ρ ∆ (β) + k ∆ ∈ ∆ if and only if ⊧ ǫ k ∆ (a, β) ∶= ∃b ∈ U (v ∆ (b − a) − ρ ∆ (β) + k ∆ ∈ ∆), and

⊧ v ∆ (c − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ/∆) if and only if ⊧ η k ∆ (a, β) ∶= ∃b ∈ U (v ∆ (b − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ/∆)).

Both formulas ǫ k ∆ (a, β) and η k ∆ (a, β) are L(⌜U ⌝)-definable and completely independent of the choice of c. We conclude that p(x) is a ⌜U ⌝-definable type. Furthermore, for any possible realization c ⊧ Σ gen U (x) we obtain the same scheme of definition. Hence, there is a unique extension p(x) of Σ gen U (x).

Theorem 5.3. For every non-empty definable set X ⊆ M, there is an acl eq (⌜X⌝)-definable global type p(x) ⊢ x ∈ X.

Proof. Let X ⊆ M be a 1-definable set.

Claim 5.3.1. There is a 1-torsor U such that ⌜U ⌝ ∈ acl eq (⌜X⌝) and the partial type Σ gen U (x) ∪ {x ∈ X} is consistent.

Proof. Let F be the family of closed balls B such that B ∩ X ≠ ∅. We say that B 1 ∼ B 2 if and only if B 1 ∩ X = B 2 ∩ X. This is a ⌜X⌝-definable equivalence relation over F . Let π ∶ F → F /∼ be the natural ⌜X⌝-definable map sending a closed ball to its class [B] ∼ . For each class µ ∈ F /∼ the set U µ = ⋂ B∈F ,π(B)=µ B is a µ-definable 1-torsor. Moreover, for any B ∈ F , B ∩ X = U µ ∩ X if and only if π(B) = µ. In particular, if B is a proper closed subball of U µ , then π(B) ≠ µ. The set F /∼ admits a partial ⌜X⌝-definable order defined as: µ 1 ◁ µ 2 if and only if B 1 ∩ X ⊊ B 2 ∩ X, where π(B 1 ) = µ 1 and π(B 2 ) = µ 2 . The set (F /∼, ◁) is a tree with a maximal element µ 0 ∈ acl eq (⌜X⌝). This class is obtained by taking the projection of a ball B 0 such that B 0 ∩ X = X. By quantifier elimination (see Corollary 2.33) X is a finite union of nice sets, thus such a ball B 0 exists.
For each µ ∈ F /∼, we write P (µ) to denote the set of immediate predecessors of µ (if they exist), that is, P (µ) ∶= {β ∈ F /∼ ∣ β ◁ µ and ¬∃z(β ◁ z ◁ µ)}. If Σ gen Uµ (x) ∪ {x ∈ X} is inconsistent then P (µ) is finite and has size at least 2. Indeed, Σ gen Uµ (x) ∪ {x ∈ X} is consistent if and only if {x ∈ U µ } ∪ {x ∉ B ∣ B ⊊ U µ is a closed ball} ∪ {x ∈ X} is consistent. Hence, if Σ gen Uµ (x) ∪ {x ∈ X} is inconsistent, by compactness we can find finitely many disjoint closed balls B 1 , . . . , B k such that B i ∩ X ≠ ∅ and U µ ∩ X ⊆ ⋃ i≤k B i . Let β i = π(B i ) ◁ µ. Then P (µ) = {β i ∣ i ≤ k} ⊆ acl eq (⌜X⌝, µ).

We now start looking for the 1-torsor U ∈ acl eq (⌜X⌝) such that Σ gen U (x) ∪ {x ∈ X} is consistent. Let µ 0 ∈ acl eq (⌜X⌝) be the maximal element of (F /∼, ◁); if Σ gen Uµ 0 (x) ∪ {x ∈ X} is consistent, the torsor U µ0 satisfies the required conditions. We may assume that Σ gen Uµ 0 (x) ∪ {x ∈ X} is inconsistent, thus µ 0 has finitely many predecessors P (µ 0 ) ⊆ acl eq (⌜X⌝). For each β ∈ P (µ 0 ) exactly one of the following cases holds:
1. Σ gen U β (x) ∪ {x ∈ X} is consistent, and then the torsor U β satisfies the required conditions of the claim; or
2. Σ gen U β (x) ∪ {x ∈ X} is inconsistent, and β has finitely many predecessors P (β) ⊆ acl eq (⌜X⌝, β) ⊆ acl eq (⌜X⌝).

By iterating this process for each of the predecessors, we build a discrete tree T ⊆ F /∼ of finite ramification. Hence, it is sufficient to argue that every path in this tree is finite. Suppose by contradiction that some path is infinite; then we can find an infinite decreasing sequence < γ i ∣ i ∈ N > of elements in F /∼ such that U γ0 = U µ0 , and:
1. for each i ∈ N, P (γ i ) is finite and of size at least 2; given η 1 ≠ η 2 ∈ P (γ i ) we have that U η1 ∩ U η2 = ∅, and U η1 is a proper subtorsor of U γi ;
2. for each µ ∈ F /∼, either U µ ⊆ U γi for some i ∈ N, or there is some i ∈ N such that U µ ⊆ U η for some η ∈ P (γ i ) ∖ {γ i+1 }.
[Figure: a strictly decreasing chain of 1-torsors U γ0 ⊋ U γ1 ⊋ U γ2 ⊋ U γ3 , each containing the point a.]

By compactness we can find an element a ∈ M such that a ∈ ⋂ i∈N U γi . We note that {U µ ∣ µ ∈ F /∼} is a uniform definable family of 1-torsors. Then we can define the set D = {x ∈ K ∣ ∃µ ∈ F /∼ (x ∈ U µ and a ∉ U µ )}, but this set is not a finite union of nice sets (by the conditions in 5.1), which leads us to a contradiction.

If U is a closed 1-torsor, we let c be a realization of Σ gen U (x) ∪ {x ∈ X}. By Proposition 5.2 the type p(x) = tp(c/M) ⊢ x ∈ X is ⌜U ⌝-definable. The statement follows as ⌜U ⌝ ∈ acl eq (⌜X⌝). We may assume that U is an open torsor. We observe that for any realization c ⊧ Σ gen U (x), given a ≠ a ′ ∈ U (M) we have that v(c − a) = v(c − a ′ ). Let π ∶ N → N × N ≥1 be a fixed bijection. We build an increasing sequence of partial consistent types (Σ k (x) ∣ k ∈ N) by induction:

• Stage 0: Let Σ 0 (x) ∶= Σ gen U (x) ∪ {x ∈ X}.

• Stage k + 1: Let π(k) = (n, ℓ). At this stage we decide the congruence modulo ∆ n + ℓΓ. To simplify the notation we will assume that ℓ ≥ 2; otherwise the argument follows in a similar manner (instead of working with ℓ(Γ/∆ n ) we argue with Γ/∆ n ). Let

Λ k (x) ∶= Σ k (x) ∪ {v ∆n (x − a) − ρ ∆n (β) ∉ ℓ(Γ/∆ n ) ∣ a ∈ U (M), β ∈ Γ}.

If the partial type Λ k (x) is consistent, then we set Σ k+1 (x) = Λ k (x). Otherwise, let

A k = {µ ∈ Γ/(∆ n + ℓΓ) ∣ Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = µ ∣ a ∈ U (M)} is consistent};

A k is a finite set. For some fixed µ ∈ A k we set Σ k+1 (x) ∶= Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = µ ∣ a ∈ U (M)}.

Let J = {k ∈ N ≥1 ∣ Λ k (x) is inconsistent}.

Claim 5.3.2. For all k ∈ N and any automorphism σ ∈ Aut(M/ acl eq (⌜X⌝)), we have σ(Σ k (x)) = Σ k (x), and if k ∈ J then σ(A k ) = A k . In particular, A k ⊆ acl eq (⌜X⌝) for all k ∈ J .

Proof. We proceed by induction. For the base case k = 0 the statement follows because ⌜U ⌝ ∈ acl eq (⌜X⌝). We assume that for any σ ∈ Aut(M/ acl eq (⌜X⌝)) we have that σ(Σ k (x)) = Σ k (x). We fix τ ∈ Aut(M/ acl eq (⌜X⌝)) and we aim to show that τ (Σ k+1 (x)) = Σ k+1 (x).
If Λ k (x) is consistent, then:

τ (Σ k+1 (x)) = τ (Σ k (x) ∪ {v ∆n (x − a) − ρ ∆n (β) ∉ ℓ(Γ/∆ n ) ∣ a ∈ U (M), β ∈ Γ}) = Σ k (x) ∪ {v ∆n (x − τ (a)) − ρ ∆n (τ (β)) ∉ ℓ(Γ/∆ n ) ∣ a ∈ U (M), β ∈ Γ} = Σ k+1 (x).

If Λ k (x) is not consistent then k ∈ J , and we first argue that τ (A k ) = A k . By definition of A k , given µ ∈ A k the partial type Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = µ ∣ a ∈ U (M)} is consistent. Because τ is an isomorphism,

τ (Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = µ ∣ a ∈ U (M)}) = Σ k (x) ∪ {π ℓ ∆n (v(x − τ (a))) = τ (µ) ∣ a ∈ U (M)} = Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = τ (µ) ∣ a ∈ U (M)}

is consistent, hence τ (µ) ∈ A k . We conclude that τ (A k ) = A k , and because τ is an arbitrary element of Aut(M/ acl eq (⌜X⌝)) we conclude that A k ⊆ acl eq (acl eq (⌜X⌝)) = acl eq (⌜X⌝). In particular, for any µ ∈ A k , τ (µ) = µ. Consequently,

τ (Σ k+1 (x)) = Σ k (x) ∪ {π ℓ ∆n (v(x − τ (a))) = µ ∣ a ∈ U (M)} = Σ k (x) ∪ {π ℓ ∆n (v(x − a)) = µ ∣ a ∈ U (M)} = Σ k+1 (x),

as required.

Let Σ ∞ (x) ∶= ⋃ k∈N Σ k (x); by construction this is a consistent partial type, acl eq (⌜X⌝)-definable, and Σ ∞ (x) ⊢ x ∈ X. By quantifier elimination, Σ ∞ (x) determines a complete global type p(x) ⊢ x ∈ X. This type p(x) is acl eq (⌜X⌝)-definable as Σ ∞ (x) is.

Coding definable types

In this subsection we prove that any definable type can be coded in the stabilizer sorts G. Let x = (x 1 , . . . , x k ) be a tuple of variables in the main field sort. By quantifier elimination any definable type p(x) over a model K is completely determined by boolean combinations of formulas of the form:
1. Q 1 (x) = 0,
2. v ∆ (Q 1 (x)) ≤ v ∆ (Q 2 (x)),
3. v ∆ (Q 1 (x)/Q 2 (x)) − k ∆ ∈ n(Γ/∆),
4. v ∆ (Q 1 (x)/Q 2 (x)) = k ∆ ,
where Q 1 (x), Q 2 (x) ∈ K[X 1 , . . . , X k ], n ∈ N ≥2 , ∆ ∈ RJ(Γ), k ∈ Z and k ∆ = k ⋅ 1 ∆ , where 1 ∆ is the minimal element of Γ/∆ if it exists.
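To fix ideas, the four kinds of formulas can be instantiated in the simplest value group. The instantiation below is ours, for illustration only; it assumes Γ = Z (e.g. K = C((t)) with the t-adic valuation), where the only convex subgroups are {0} and Γ.

```latex
% Illustration (not from the text): the four kinds of formulas when Gamma = Z.
\begin{example}
  Suppose $\Gamma = \mathbb{Z}$, so the only proper convex subgroup is
  $\Delta = \{0\}$, for which $v_\Delta = v$, $\Gamma/\Delta = \mathbb{Z}$ and
  $1_\Delta = 1$, whence $k_\Delta = k$. The four kinds of formulas become, for
  $k \in \mathbb{Z}$ and $n \in \mathbb{N}_{\ge 2}$:
  \[
    Q_1(x) = 0, \qquad
    v(Q_1(x)) \le v(Q_2(x)), \qquad
    v\!\left(\frac{Q_1(x)}{Q_2(x)}\right) \equiv k \pmod{n}, \qquad
    v\!\left(\frac{Q_1(x)}{Q_2(x)}\right) = k .
  \]
  For $\Delta = \Gamma$ the quotient $\Gamma/\Delta$ is trivial, so the
  corresponding conditions carry no information.
\end{example}
```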
We will approximate such a type by considering, for each l ∈ N, the definable vector space D l /I l , where D l is the set of polynomials of degree at most l and I l is the subspace of D l of polynomials Q(x) such that Q(x) = 0 is a formula in p(x). The formulas of the second kind essentially give D l /I l a valued vector space structure with all the coarsened valuations, while the formulas of the third and fourth kind simply impose some binary relations on the linear order Γ(D l /I l ). This philosophy reduces the problem of coding definable types to finding a way to code the possible valuations that could be induced over some power of K, while taking care as well of the congruences. The following is [12, Lemma 3.3].

Fact 5.4. Let K be any field. Let V be a subspace of K n . Then V can be coded by a tuple of K, and V and K n /V have a ⌜V ⌝-definable basis.

We start by coding the O-submodules of K n .

Definition 5.6 (Valued relation). Let K ⊧ T , and Γ be its value group. Let V be some finite dimensional K-vector space and R ⊆ V × V be a definable subset that defines a total pre-order. We say that R is a valued relation if there is an interpretable valued vector space structure (V, Γ(V ), val, +) in K such that (v, w) ∈ R if and only if val(v) ≤ val(w). In fact, given a relation R ⊆ V × V that defines a total pre-order satisfying that:
• for all v, w ∈ V , (v, v + w) ∈ R or (w, v + w) ∈ R,
• for all v ∈ V , (v, v) ∈ R,
• for all v, w ∈ V and α ∈ K, if (v, w) ∈ R then (αv, αw) ∈ R,
we can define an equivalence relation E R over V as E R (v, w) ↔ (v, w) ∈ R ∧ (w, v) ∈ R. The set Γ(V ) = V /E R is therefore interpretable in K and we call it the linear order induced by R. Let val ∶ V → Γ(V ) be the canonical projection map that sends each vector to its class. We can naturally define an action of Γ over Γ(V ) as:

+ ∶ Γ × Γ(V ) → Γ(V ), (α, [v] E R ) ↦ [av] E R , where a ∈ K satisfies v(a) = α.

This is a well defined map by the third condition imposed on R.
The structure (V, Γ(V ), val, +) is an interpretable valued vector space structure over V and we refer to it as the valued vector space structure induced by R.

Lemma 5.7. Let K be a model of T and let R ⊆ K n × K n be a binary relation inducing a valued vector space structure (K n , Γ(K n ), val, +) over K n . Then we can find a basis {v 1 , . . . , v n } of K n such that:
1. It is a separated basis for val; this is, given any set of coefficients λ 1 , . . . , λ n ∈ K, val( ∑ i≤n λ i v i ) = min{v(λ i ) + val(v i ) ∣ i ≤ n}.
2. For each i ≤ n, γ i = val(v i ) ∈ dcl eq (⌜R⌝).

Proof. Because the statement we are proving is first order expressible, by Fact 2.45 we may assume that K is maximal. We proceed by induction on n. For the base case, note that K = span K {1}, then γ = val(1) ∈ dcl eq (⌜R⌝). We assume the statement for n and we want to prove it for n + 1. Let W = K n × {0}, val W = val ↾ W , Γ(W ) = {val(w) ∣ w ∈ W }, and R W = R ∩ (W × W ). Then (W, Γ(W ), val W , +) is a valued vector space structure over W and ⌜R W ⌝ ∈ dcl eq (⌜R⌝). The subspace W admits an ∅-definable basis, so it can be canonically identified with K n . By the induction hypothesis we can find {w 1 , . . . , w n } a separated basis of W such that val W (w i ) ∈ dcl eq (⌜R W ⌝) ⊆ dcl eq (⌜R⌝). As W is finite dimensional it is maximal by Lemma 2.46. By Fact 2.55, W has the optimal approximation property in K n+1 . We can therefore define the valuation over the quotient space K n+1 /W as follows:

val K n+1 /W ∶ K n+1 /W → Γ(K n+1 ), v + W ↦ max{val(v + w 0 ) ∣ w 0 ∈ W }.

Define R K n+1 /W = {(w 1 + W, w 2 + W ) ∣ val K n+1 /W (w 1 + W ) ≤ val K n+1 /W (w 2 + W )}, which is a valued relation over the quotient space K n+1 /W . As K n+1 /W = K n+1 /(K n × {0}) is definably isomorphic over ∅ to K, we can find a non-zero coset v + W such that val K n+1 /W (v + W ) ∈ dcl eq (⌜R K n+1 /W ⌝) ⊆ dcl eq (⌜R⌝). Let w * ∈ W be a vector where the maximum of {val(v + w) ∣ w ∈ W } is attained, i.e.
val K n+1 W (v+W ) = val(v+w * ). It is sufficient to show that {w 1 , . . . , w n , v + w * } is a separated basis for K n+1 . Let α ∈ K, we show that for any w ∈ W val((v + w * ) + αw) = min{val(v + w * ), val(αw)}. If val(v + w * ) ≠ val(αw) then val((v + w * ) + αw) = min{val(v + w * ), val(αw)}. So let's assume that γ = val(v + w * ) = val(αw), by the ultrametric inequality val((v + w * ) + αw) ≥ γ. By the maximal choice of w * , we have that val((v + w * ) + αw) ≤ val(v + w * ) = γ. So val((v + w * ) + αw) = min{val(v + w * ), val(αw)} as required. Theorem 5.8. Let K be a model of T and Γ its value group. Let R be a definable valued relation over K n and (K n , Γ(K n ), val, +) be the valued vector space structure induced by R. Then ⌜R⌝ is interdefinable with a tuple of elements in the stabilizer sorts and there is an ⌜R⌝definable bijection Γ(K n ) and finitely many disjoint copies of Γ (all contained in Γ s , where s is the number of Γ-orbits over Γ(K n )). Proof. As the statement that we are trying to prove is first order expressible, without loss of generality we may assume that K is maximal. Let R be a valued relation over K n and let (K n , Γ(K n ), val, +) be the valued vector space structure induced by R. By Lemma 5.7, we can find a separated basis {v 1 , . . . , v n } of K n , such that for each i ≤ n, val(v i ) ∈ dcl eq (⌜R⌝). Let {γ 1 , . . . , γ s } ⊆ {val(v i ) i ≤ n} be a complete set of representatives of the orbits of Γ over the linear order Γ(K n ), this is: Γ(K n ) =⋃ i≤s Γ + γ i . For each i ≤ s, we define B i ∶= {x ∈ K n val(x) ≥ γ i }. Each B i is an O-submodule of K n , so by Lemma 5.5 ⌜B i ⌝ is interdefinable with a tuple in the stabilizer sorts. The valued vector space structure over K n is completely determined by the closed balls containing 0, and each of these ones is of the form αB i for some α ∈ K and i ≤ s. Thus the code ⌜R⌝ is interdefinable with the tuple (⌜B 1 ⌝, . . . , ⌜B s ⌝). 
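Separatedness in the sense of Lemma 5.7 is easy to test numerically. The following toy check (ours, with the 5-adic valuation on Q² and the coordinate-minimum valuation) confirms that {(1,0),(0,5)} is a separated basis, while {(1,1),(1,6)} is not: the combination (1,1) − (1,6) = (0,−5) has valuation 1, strictly above the predicted minimum 0, because of cancellation.

```python
from fractions import Fraction

def vp(x, p=5):
    # 5-adic valuation of a nonzero rational; None plays the role of +infinity
    x = Fraction(x)
    if x == 0:
        return None
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def val(vec):
    # valuation on Q^2: minimum of the coordinate valuations
    vs = [vp(c) for c in vec if vp(c) is not None]
    return min(vs) if vs else None

def separated(b1, b2, coeffs):
    # separatedness on the sample: val(l1*b1 + l2*b2) equals
    # min(v(l1) + val(b1), v(l2) + val(b2)) for all tested nonzero l1, l2
    for l1, l2 in coeffs:
        combo = tuple(l1 * a + l2 * b for a, b in zip(b1, b2))
        predicted = min(vp(l1) + val(b1), vp(l2) + val(b2))
        if val(combo) != predicted:
            return False
    return True

coeffs = [(1, -1), (5, 2), (25, -3), (2, 2)]
assert separated((1, 0), (0, 5), coeffs)       # a separated basis of Q^2
assert not separated((1, 1), (1, 6), coeffs)   # fails: (1,1)-(1,6) = (0,-5)
```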
We conclude that ⌜R⌝ can be coded in the stabilizer sorts. For the second part of the statement, consider the map: ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ f ∶⋃ i≤s Γ + γ i → Γ s α + γ i ↦ (0, . . . , 0, α i−th coordinate , 0, . . . , 0). As {γ 1 , . . . , γ s } ⊆ dcl eq (⌜R⌝) this is a ⌜R⌝-definable bijection between Γ(K n ) to finitely many disjoint copies of Γ, contained in Γ s . Theorem 5.9. Let p(x) be a definable global type in M n . Then p(x) can be coded in G ∪ Γ eq . Proof. Let p(x) be a definable global type, and let K be a small model where p(x) is defined. Let q(x) = p(x) ↾ K it is sufficient to code q(x). For each ℓ ∈ N let D ℓ be the space of polynomials in K[X 1 , . . . , X n ] of degree less or equal than ℓ. This is a finite dimensional K-vector space with an ∅-definable basis. Let I ℓ ∶= {Q(x) ∈ D ℓ Q(x) = 0 ∈ q(x)}, this is a subspace of D ℓ . Let R ℓ ∶= {(Q 1 (x), Q 2 (x)) ∈ D ℓ × D ℓ v(Q 1 (x)) ≤ v(Q 2 (x) ) ∈ q(x)}, this relation induces a valued vector space structure on the quotient space V ℓ = D ℓ I ℓ . Let (V ℓ , Γ(V ℓ ), val ℓ , + ℓ ) be the valued vector space structure induced by R ℓ over V ℓ . For each ∆ ∈ RJ(Γ) and k ∈ Z, a formula of the form v ∆ (Q 1 (x)) = v ∆ (Q 2 (x)) + k ∆ determines a definable relation φ k ∆ ⊆ Γ(V ℓ ) 2 , defined as: (val ℓ (Q 1 (x)), val ℓ (Q 2 (x)) ∈ φ k ∆ if and only if v ∆ (Q 1 (x)) = v ∆ (Q 2 (x)) + k ∆ ∈ q(x). Similarly, for each ∆ ∈ RJ(Γ), k ∈ Z and n ∈ N ≥2 we consider the definable binary relation ψ n,k ∆ ⊆ Γ(V ℓ ) 2 determined as: (val ℓ (Q 1 (x)), val ℓ (Q 2 (x))) ∈ ψ k,n ∆ if and only if v ∆ (Q 1 (x)) − v ∆ (Q 2 (x)) + k ∆ ∈ n(Γ ∆) ∈ q(x). Likewise, for each ∆ ∈ RJ(Γ) and k ∈ Z we consider the definable binary relations θ k ∆ ⊆ Γ(V ℓ ) 2 defined as: (val ℓ (Q 1 (x)), val ℓ (Q 2 (x))) ∈ θ k ∆ if and only if v ∆ (Q 1 (x)) < v ∆ (Q 2 (x)) + k ∆ ∈ q(x). 
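The relation R ℓ of the proof can be made concrete once the type is realized: fixing a point c and comparing v(Q₁(c)) ≤ v(Q₂(c)) gives a total pre-order on the space of polynomials of bounded degree. The sketch below (illustrative only; the encoding of polynomials as coefficient lists is ours) takes c = 1/5 in Q with the 5-adic valuation and checks the pre-order and ultrametric properties on a small sample.

```python
from fractions import Fraction
from itertools import product

def vp(x, p=5):
    # 5-adic valuation of a nonzero rational; None plays the role of +infinity
    x = Fraction(x)
    if x == 0:
        return None
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

c = Fraction(1, 5)            # a "realization" of the type, our choice

def ev(poly):                 # poly as a coefficient list, low degree first
    return sum(a * c**i for i, a in enumerate(poly))

def le(P, Q):                 # (P, Q) in R_l  iff  v(P(c)) <= v(Q(c))
    vP, vQ = vp(ev(P)), vp(ev(Q))
    return vQ is None or (vP is not None and vP <= vQ)

polys = [[0, 1], [0, 0, 1], [1, 1], [1, 0, 1], [5, 1]]  # x, x^2, x+1, x^2+1, x+5
for P, Q in product(polys, repeat=2):
    assert le(P, Q) or le(Q, P)                  # totality of the pre-order
    S = [a + b for a, b in zip(P + [0] * len(Q), Q + [0] * len(P))]
    assert le(P, S) or le(Q, S)                  # v(P+Q) >= min(v(P), v(Q))

assert le([0, 0, 1], [0, 1])   # v(c^2) = -2 <= v(c) = -1
```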
Let S ℓ = {φ k ∆ ∆ ∈ RJ(Γ), k ∈ Z} ∪ {ψ k,n ∆ ∆ ∈ RJ(Γ), k ∈ Z, n ∈ N ≥2 } ∪ {θ k ∆ ∆ ∈ RJ(Γ), k ∈ Z} We denote as V ℓ = (V ℓ , Γ(V ℓ ) , val ℓ , + ℓ , S ℓ ) the valued vector space over V ℓ with the enriched structure over the linear order Γ(V ℓ ). By quantifier elimination (see Corollary 2.33), the type q(x) is completely determined by boolean combinations of formulas of the form: • Q 1 (x) = 0, • v ∆ (Q 1 (x)) < v ∆ (Q 2 (x)), • v ∆ Q1(x) Q2(x) − k ∆ ∈ n(Γ ∆), • v ∆ Q1(x) Q2(x) = k ∆ . where Q 1 (x), Q 2 (x), ∈ K[X 1 , . . . , X k ], n ∈ N ≥2 , ∆ ∈ RJ(Γ), k ∈ Z and k ∆ = k ⋅ 1 ∆ where 1 ∆ is the minimum positive element of Γ ∆ if it exists. Hence the type p(x) is entirely determined (and determines completely) by the sequence of valued vector spaces with enriched structure over the linear order (V ℓ ℓ ∈ N). By Fact 5.4 for each ℓ ∈ N we can find codes ⌜I ℓ ⌝ in the home sort for the I ′ ℓ s. After naming these codes, each quotient space V ℓ = D ℓ I ℓ has a definable basis, so it can be definably identified with some power of K. Therefore, without loss of generality we may assume that the underlying set of the valued vector space with enriched structure V ℓ is some power of K. By Theorem 5.8, the relation R ℓ admits a code ⌜R ℓ ⌝ in the stabilizer sorts. Moreover, there is a ⌜R ℓ ⌝ definable bijection f ∶ Γ(V ℓ ) → Γ s , where s ∈ N ≥2 is the number of Γ-orbits over Γ(V ℓ ). In particular, for each ∆ ∈ RJ(Γ), n ∈ N and k ∈ Z the definable relations φ k ∆ , ψ k,n ∆ and θ k ∆ are interdefinable over ⌜R⌝ with f (φ k ∆ ), f (ψ k,n ∆ ) and f (θ k ∆ ), all subsets of Γ 2s . Consequently, the type q(x) can be coded in the sorts Γ ∪ Γ eq , as every definable subset D in some power of Γ admits a code in Γ eq . Theorem 5.10. Let K be a valued field of equicharacteristic zero, residue field algebraically closed and value group with bounded regular rank. Then K admits weak elimination of imaginaries in the language L G , where the stabilizer sorts are added. Proof. 
By Theorem 4.1, K admits weak elimination of imaginaries down to the sorts G ∪ Γ eq , where G are the stabilizer sorts. In fact, Hrushovski's criterion requires us to verify the following two conditions: 1. the density of definable types, which is Theorem 5.3, and 2. the coding of definable types, which is Theorem 5.9. By Corollary 2.3 the value group Γ is stably embedded. By Theorem 2.18, the ordered abelian group with bounded regular rank Γ admits weak elimination of imaginaries once one adds the quotient sorts {Γ/∆ ∆ ∈ RJ(Γ)} ∪ {Γ/(∆ + nΓ) ∆ ∈ RJ(Γ), n ∈ N ≥2 }. We conclude that K admits weak elimination of imaginaries down to the stabilizer sorts G. 6 Elimination of imaginaries for henselian valued fields with dp-minimal value group Let (K, v) be a henselian valued field of equicharacteristic zero, residue field algebraically closed and dp-minimal value group. We see K as a multi-sorted structure in the language L̃ extending the language L G (described in Definition 4.4), where the value group is equipped with the language L dp described in Subsection 2.3.1. Let I ′ be the complete family of O-submodules of K described in Fact 2.37. From now on we fix a complete family F = I ′ ∖ {0, K}. We refer to these sorts as the stabilizer sorts and we denote their union G = K ∪ k ∪ Γ ∪ {Γ/∆ ∆ ∈ RJ(Γ)} ∪ {B n (K)/ Stab (I 1 , . . . , I n ) (I 1 , . . . , I n ) ∈ I n }. Remark 6.1. If we work with the complete family I of end-segments given by Remark 2.32, each of the O-modules in I is definable over the empty set. In this setting we are adding a finite set of constants Ω n in Γ, choosing representatives of nΓ in Γ for each n ∈ N. The results we obtain in this section will hold in the same manner if we work with this language instead. Our main goal is the following theorem. Theorem 6.2. Let K be a henselian valued field of equicharacteristic zero, residue field algebraically closed and dp-minimal value group.
Then K eliminates imaginaries in the language L̃, where the stabilizer sorts are added. Definition 6.3. We say that a multi-sorted first order theory T codes finite sets if for every model M ⊧ T and every finite subset S ⊆ M , the code ⌜S⌝ is interdefinable with a tuple of elements in M . The following is a folklore fact (see for example [30]). Fact 6.4. Let T be a complete multi-sorted theory. If T has weak elimination of imaginaries and codes finite sets then T eliminates imaginaries. In view of Theorem 5.10 and Fact 6.4 it is only left to show that any finite set can be coded in G. Definition 6.5. 1. An equivalence relation E on a set X is said to be proper if it has at least two different equivalence classes. It is said to be trivial if for any x, y ∈ X we have E(x, y) if and only if x = y. 2. A finite set F is primitive over A if there is no proper non-trivial (⌜F ⌝∪A)-definable equivalence relation on F . If F is primitive over ∅ we just say that it is primitive. To code finite sets we need numerous smaller results. This section is organized as follows: 1. Subsection 6.1: we analyze the stable and stably embedded multi-sorted structure V S k,C , consisting of the k-vector spaces red(s), where s is some O-lattice definable over C, an arbitrary imaginary set of parameters. This structure has elimination of imaginaries by results of Hrushovski in [27]. 2. Subsection 6.2: we introduce the notion of the germ of a definable function f over a definable type p. We prove that germs can be coded in the stabilizer sorts. 3. Subsection 6.3: we show that the code of any O-submodule M ⊆ K n is interdefinable with the code of its projection to the last coordinate and the germ of the function describing each of the fibers. We show that the same statement holds for torsors. 4. Subsection 6.4: we prove several results on coding finite sets in the one-dimensional case, e.g. if F is a primitive finite set of 1-torsors then it can be coded in G. 5.
Subsection 6.5: we carry out a simultaneous induction to prove that any finite set F ⊆ G r can be coded in the stabilizer sorts, and any definable function f ∶ F → G admits a code in the stabilizer sorts. 6. Subsection 6.6: we state the result on full elimination of imaginaries down to the stabilizer sorts. The multi-sorted structure of k-vector spaces By Corollary 2.3 the residue field k is stably embedded and it is a strongly minimal structure, because it is an algebraically closed field. This enables us to construct, over any imaginary base set of parameters C, a part of the structure that naturally inherits stability-theoretic properties from the residue field. Given an O-lattice s ⊆ K n , red(s) = s/Ms is a k-vector space. Definition 6.6. For any imaginary set of parameters C, we let V S k,C be the many-sorted structure whose sorts are the k-vector spaces red(s) where s ⊆ K n is an O-lattice of rank n definable over C. Each sort red(s) is equipped with its k-vector space structure. In addition, V S k,C has any C-definable relation on products of the sorts. Definition 6.7. A definable set D is said to be internal to the residue field if there is a finite set of parameters F ⊆ G such that D ⊆ dcl eq (kF ). Each of the structures red(s) is internal to the residue field, and the parameters needed to witness the internality lie in red(s), so in particular each of the k-vector spaces red(s) is stably embedded. The entire multi-sorted structure V S k,C is also stably embedded and stable, and in this subsection we will prove that it eliminates imaginaries. Notation 6.8. We recall that given an O-submodule M of K, we write S M to denote the end-segment induced by M , i.e. {v(x) x ∈ M }. We recall some definitions from [27] to show that V S k,C eliminates imaginaries. Definition 6.9. Let t be a theory of fields (possibly with additional structure).
A t-linear structure A is a structure with a sort k for a model of t, and additional sorts (V i i ∈ I) denoting finite-dimensional vector spaces. Each V i has (at least) a k-vector space structure, and dim V i < ∞. We assume that: 1. k is stably embedded, 2. the induced structure on k is precisely given by t, 3. the V i are closed under tensor products and duals. Moreover, we say it is flagged if for any finite dimensional vector space V there is a filtration V 1 ⊆ V 2 ⊆ ⋅ ⋅ ⋅ ⊆ V n = V by subspaces, with dim V i = i and each V i one of the distinguished sorts. The following is [27, Lemma 5.2]. Lemma 6.10. If k is an algebraically closed field and A is a flagged k-linear structure, then A admits elimination of imaginaries. Notation 6.11. Let A be an O-module. Let MA = {xa x ∈ M, a ∈ A}; we write red(A) for the quotient O-module A/MA. We observe that red(A) = A/MA is canonically isomorphic to A ⊗ O k. Proof. This is a straightforward computation and it is left to the reader. Remark 6.13. Given O-lattices A ⊆ K n and B ⊆ K m , there is some O-lattice C ⊆ K mn such that A ⊗ O B is canonically identified with C. This isomorphism induces as well a one to one correspondence between red(A ⊗ O B) and red(C). Proof. Given two vector spaces K n and K m , the tensor product K n ⊗ K m is a K-vector space whose basis is {e i ⊗ e j i ≤ n, j ≤ m} and it is canonically identified with K nm via a linear map φ that extends the bijection between the bases sending e i ⊗ e j to e ij . Given O-lattices A ⊆ K n and B ⊆ K m , then A ⊗ O B is an O-lattice of K n ⊗ K m and we set C = φ(A ⊗ O B). This map induces as well an identification between red(A ⊗ O B) and red(C) making the following diagram commute: [commutative square relating A ⊗ O B and C via φ, and red(A ⊗ O B) and red(C) via the induced map]. Moreover, for any f ∈ Hom O (A, B) and a ∈ A the induced map on reductions is φ̄(f + M Hom O (A, B)) ∶ red(A) → red(B), a + MA ↦ f (a) + MB. Proof. This is a straightforward computation and it is left to the reader. Proof. Let A be an O-lattice of K n .
By linear algebra K n can be identified with its dual space (K n ) * . Let A * = {T ∈ (K n ) * for all a ∈ A, T (a) ∈ O}. A * is canonically identified with Hom O (A, O) via the map that sends a transformation T to T ↾ A . Also A * is isomorphic to some O-lattice C of K n , as there is a canonical isomorphism between K n and its dual space. So we have a definable O-isomorphism φ between Hom O (A, O) and C, and this correspondence induces an identificationφ between red (Hom O (A, O)) and red(C) making the following diagram commute: Hom O (A, O) C red(Hom O (A, O)) red(C) φ red red φ Remark 6.16. Let A ⊆ K n be an O-lattice. There is a sequence of O-lattices < A i i ≤ n > such that < red(A i ) i ≤ n > is a flag of red(A) and for each i ≤ n, ⌜A i ⌝ ∈ dcl eq (⌜A⌝). Proof. We proceed by induction on n, the base case is trivial. Let A ⊆ K n+1 , and π n+1 ∶ K n+1 → K be the projection into the last coordinate. Let B ⊆ K n be the O-lattice such that ker(π n+1 ) = B × {0} = A ∩ (K n × {0}). We observe that ⌜B⌝, ⌜π n+1 (A)⌝ ∈ dcl eq (⌜A⌝). By Corollary 3.1 B is a direct summand of A, so we have the exact splitting sequence 0 → B → A → π n+1 (A) → 0. Consequently, 0 → MB → MA → Mπ n+1 (A) → 0 and 0 → red(B) → red(A) → red(π n+1 (A)) → 0 are exact sequences that split. By the induction hypothesis, there is a sequence {0} ≤ A 1 ≤ ⋅ ⋅ ⋅ ≤ A n = B such that < red(A i ) i ≤ n > is a flag of red(B), dim(red(A i )) = i and ⌜A i ⌝ ∈ dcl eq (⌜B⌝) ⊆ dcl eq (⌜A⌝). Let A n+1 = A, the sequence < A i i ≤ n + 1 > satisfies the required conditions. Theorem 6.17. Let C ⊆ K eq , then V S k,C has elimination of imaginaries. Proof. The sorts red(s) where s is a O-lattice of K n and dcl eq (C)-definable form the multi-sorted structure VS k,C . Each red(s) carries a k-vector space structure. V S k,C is closed under tensor operation by Remark 6.13 and Fact 6.12. It is closed under duals by Remark 6.15 and Fact 6.14. 
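The identification of Hom O (A, O) with a lattice in Remark 6.15 can be seen already in rank one: for A = 5O inside Q with the 5-adic valuation, an O-linear map A → K is multiplication by some a, and it lands in O exactly when v(a) ≥ −1, so the dual lattice is 5⁻¹O. A quick check (our own toy computation; the test on finitely many unit multiples is only a sample, not a proof):

```python
from fractions import Fraction

def vp(x, p=5):
    # 5-adic valuation of a nonzero rational
    x = Fraction(x)
    if x == 0:
        return None
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def maps_A_into_O(a, p=5):
    # does T_a(x) = a*x send A = 5O into O?  Sample generators 5*u, u a unit:
    # v(a*5*u) = v(a) + 1 >= 0  iff  v(a) >= -1
    return all(vp(Fraction(a) * p * u) >= 0 for u in (1, 2, 3, Fraction(1, 2)))

assert maps_A_into_O(Fraction(1, 5))        # v = -1: lies in the dual lattice
assert not maps_A_into_O(Fraction(1, 25))   # v = -2: does not
```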
By Remark 6.16 each sort red(s) where s is an O-lattice admits a complete filtration by C-definable vector spaces. Therefore, VS k,C is a flagged k-linear structure, so the statement is immediate consequence of 6.10. Germs of functions In this subsection we show how to code the germ of a definable function f over a definable type p(x) in the stabilizer sorts. Definition 6.18. Let T be a complete first order theory and M ⊧ T . Let B ⊆ M and p be a B-definable type whose solution set is P . Let f be an M -definable function whose domain contains P . Suppose that f = f c is defined by the formula φ(x, y, c) (so f c (x) = y). We say that f c and f c ′ have the same germ on P if the formula f c (x) = f c ′ (x) lies in p. By the definability of p the equivalence relation E φ (c, c ′ ) that states f c and f c ′ have the same germ on P is definable over B. The germ of f c on P is defined to be the class of c under the equivalence relation E φ (y, z), which is an element in M eq . We write germ(f, p) to denote the code for this equivalence class. Definition 6.19. Let p be a global type definable over B and let C a set of parameters. We say that a realization a of p is sufficiently generic over BC if a ⊧ p ↾ BC . We start proving some results that will be required to show how to code the germs of a definable function f over a definable type p in the stabilizer sorts. Let U ⊆ K be a 1-torsor, we recall Definition 5.1, where we defined the ⌜U ⌝-definable partial type. Σ gen U (x) = {x ∈ U } ∪ {x ∉ B B ⊊ U is a proper subtorsor of U }. We refer to this type as the generic type of U . When considering complete extensions of Σ gen U (x) one finds an important distinction between the closed and the open case. In Proposition 5.2 we proved that whenever U is a closed 1-torsor then Σ gen U (x) admits a unique complete extension. The open case inherits a higher level of complexity. Proposition 6.20. 
Let U be an open 1-torsor, then any completion of the generic type of U is ⌜U ⌝-definable. Proof. Let c ⊧ Σ gen U (x), it is sufficient to prove that p(x) = tp(c M) is ⌜U ⌝-definable. Let ∆ ∈ RJ(Γ), ℓ ∈ N, β ∈ Γ, k ∈ Z. First, we observe that for any a, a ′ ∈ U (M) we have that v(c − a) = v(c − a ′ ), because c realizes the generic type of U . In particular, v ∆ (x − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) ∈ p(x) if and only ifv ∆ (x − a ′ ) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) ∈ p(x). Pick some element δ ∈ Γ such that ρ ∆ (δ) = k ∆ , and let µ = π ℓ ∆ (v(c − a) + δ) ∈ dcl eq (∅). Then, for any a ∈ U (M) we have: v ∆ (x − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) ∈ p(x) if and only if π ℓ ∆ (β) = µ. If a ∉ U (M), then v(c − a) = v(b − a) for any b ∈ U (M). Hence: v ∆ (x − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) ∈ p(x) if and only if ∃b ∈ U v ∆ (b − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) φ(a,β) . Let ψ(a, β) ∶= a ∈ U ∧ π ℓ ∆ (β) = µ ∨ a ∉ U ∧ φ(a, β)). Hence, v ∆ (x − a) − ρ ∆ (β) + k ∆ ∈ ℓ(Γ ∆) ∈ p(x) if and only if ψ(a, β). Note that ⌜ψ(x, z)⌝ ∈ dcl eq (⌜U ⌝). We continue showing a dcl eq (⌜U ⌝)-definable scheme for the coset formulas. Let a ∈ U (M) and consider the definable end-segment S a = {v(x − a) x ∈ U }. For any a ≠ a ′ , S a = S a ′ , thus we write S to denote this set. Note that ⌜S⌝ ∈ dcl eq (⌜U ⌝) and let ∆ ∈ RJ(Γ). We recall that we denote by S ∆ the set ρ ∆ (S), which is a definable end-segment in Γ ∆. If S ∆ has a minimum element γ ∈ dcl eq (S ∆ ) ⊆ dcl eq (⌜S⌝) then for any a ∈ U (M) we have: v ∆ (x − a) = ρ ∆ (β) + k ∆ ∈ p(x) if and only if ρ ∆ (β) + k ∆ = γ. If S ∆ does not have a minimum, then for any a ∈ U (M), v ∆ (x − a) = ρ ∆ (β) + k ∆ ∈ p(x) if and only if β ≠ β. Finally, for a ∉ U (M) we have that v(c − a) = v(b − a) for any b ∈ U (M), therefore for ∆ ∈ RJ(Γ) and k ∈ Z we have that: v ∆ (x − a) = ρ ∆ (β) + k ∆ ∈ p(x) if and only if ∃b ∈ U v ∆ (b − a) = ρ ∆ (β) + k ∆ . 
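The coarsened comparisons v ∆ manipulated in this proof can be pictured on the value group alone. If Γ = Z × Z with the lexicographic order and ∆ = {0} × Z is a convex subgroup, then ρ ∆ simply forgets the second coordinate, and a v ∆ -comparison only sees that coarse coordinate. A minimal sketch (our own illustration):

```python
# Gamma = Z x Z with the lexicographic order; Delta = {0} x Z is convex.
def lex_le(g, h):
    return g <= h        # Python tuple comparison IS lexicographic

def rho_delta(g):
    return g[0]          # the quotient map Gamma -> Gamma/Delta = Z

g, h = (1, 7), (1, -3)
assert lex_le(h, g) and not lex_le(g, h)     # g > h in Gamma
assert rho_delta(g) == rho_delta(h)          # but rho_Delta identifies them
assert rho_delta((0, 100)) < rho_delta((1, -100))  # coarse comparison survives
```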
Consequently, for each quantifier free formula φ(x, y), we have shown the existence of a formula dφ(y) such that ⌜dφ(y)⌝ ∈ dcl eq (⌜U ⌝) and φ(x, b) ∈ p(x) if and only if ⊧ dφ(b). By quantifier elimination (see Corollary 2.33), the type p(x) is completely determined by the quantifer free formulas, we conclude that p(x) is ⌜U ⌝-definable. Corollary 6.21. Let U be a definable 1-torsor, then each completion p(x) of Σ gen U (x) is ⌜U ⌝-definable. Proof. This follows immediately by combining Proposition 5.2 and Proposition 6.20. Proof. Let c be a realization of the type p(x), a ∈ M (M) and d = c + a. As Σ gen M (x) ⊆ p(x), c ∈ A and c ∉ U for any proper subtorsor U ⊆ A. First we argue that d ⊧ Σ gen M (x). Because A is a O-submodule of K (in particular closed under addition) we have d ∈ A. And if there is a subtorsor U ⊊ A such that d ∈ U , then c ∈ −a + U ⊊ A contradicting that c ⊧ Σ gen A (x). For any ∆ ∈ RJ(Γ), element z ∈ M (M) and realization b ⊧ Σ gen A (x) we have v ∆ (z − b) = v ∆ (b) . Thus for any n ∈ N and β ∈ Γ ∆: v ∆ (b − z) − β ∈ n(Γ ∆) if and only if v ∆ (b) − β ∈ n(Γ ∆). We conclude that d and c must satisfy the same congruence and coset formulas, because c is a realization of the generic type of M , a ∈ M (M) and v ∆ (d) Proof. Let c be a realization of p(x) and fix a ∈ M (M). By Proposition 6.22, d = c − a is also a realization of p(x). The statement now follows because a = c − d. = v ∆ (c + a) = v ∆ (c − (−a)) = v ∆ (c). Proposition 6.24. Let (I 1 , . . . , I n ) ∈ I n , for every O-module M of type (I 1 , . . . , I n ) we can find a type p M (x 1 , . . . ,x n ) ∈ S n×n (K) such that: 1. p M (x) is definable over ⌜M ⌝, 2. A realization of p M (x) is a matrix representation of M . This is if (d 1 , . . . ,d n ) ⊧ p M (x) then [d 1 , . . . ,d n ] is a representation matrix for M . Proof. Let M be the monster model. Step 1: We define a partial type Σ (I1,...,In) satisfying condition i) and ii) for the canonical module C (I1,...,In) . 
Such a type is left-invariant under the action of Stab(I 1 , . . . , I n ). We consider the set J = {(i, j) 1 ≤ i, j ≤ n} , and we equip it with a linear order defined as: (i, j) < (i ′ , j ′ ) if and only if j < j ′ or j = j ′ ∧ i ′ > i. And we consider an enumeration of J = {v 1 , . . . , v n 2 } such that v 1 < v 2 < ⋅ ⋅ ⋅ < v n 2 . By Proposition 4.8 Stab (I1,...,In) = {((a i,j ) 1≤i,j≤n ∈ B n (K) a ii ∈ O × ∆ S I i ∧ a ij ∈ Col(I i , I j ) for each 1 ≤ i < j ≤ n }. Hence, for each 1 ≤ m ≤ n 2 let: p vm (x) = ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ tp(0) if v m = (i, j) where 1 ≤ j < i ≤ n, Σ gen O∆ S I i (x) if v m = (i, i) for some 1 ≤ i ≤ n, Σ gen Col(Ii,Ij ) (x) if v m = (i, j) where 1 ≤ i < j ≤ n. Consider the partial definable type Σ C (I 1 ,...,In ) = p v n 2 ⊗ ⋅ ⋅ ⋅ ⊗ p v1 . Given a realization of this type (b v n 2 , . . . , b v1 ) ⊧ Σ C (I 1 ,...,In ) let B = ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ b vn b v2n . . . b v n 2 ⋮ ⋮ ⋮ ⋮ b v1 b vn+1 . . . b v n(n−1)+1 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ By construction B is an upper triangular matrix such that (B) i,j ∈ Col(I i , I j ) for 1 ≤ i < j ≤ n and (B) ii ∈ O × ∆ S I i , thus its column vectors constitute a basis for the canonical module. To check left invariance, it is sufficient to take A ∈ Stab (I1,...,In) (M) and argue that for each 1 ≤ m ≤ n 2 the element (AB) vm is a realization of generic type p vm sufficiently generic over M ∪ {(AB) v k k < m}. Suppose that v m = (i, j), then (AB) vm = (AB) ij = j k=i a ik b kj = a ii b ij + ⋅ ⋅ ⋅ + a ij b jj . In the fixed enumeration we guarantee that b ij is chosen sufficiently generic over M ∪ {b kj i < k ≤ j}. For each i < k ≤ j, a ik b kj ∈ Col(I i , I j ), thus we have that v(a ii b ij ) = v((AB) ij ). Consequently, (AB) ij is a realization of p vm generic over M together with all the elements b kl where (k, l) appears earlier in the enumeration than (i.j). Step 2: For any O-module M ⊆ K n of type (I 1 , . . . 
, I n ) there is an ⌜M ⌝-definable type p M , such that any realization of p M is a representation matrix for M . Let T = M n → M n be a linear transformation whose representation matrix is upper triangular and T sends the canonical module C (I1,...,In) to M . And let Σ M = T (Σ C (I 1 ,...,In ) ), its definition is independent from the choice of T , because given two linear transformations with upper triangular representation matrices Theorem 6.25. Let X be a definable subset of K n and let p(x) ⊢ x ∈ X be a global type definable over ⌜X⌝. Let f = X → G be a definable function. Then the p-germ of f is coded in G over ⌜X⌝. Proof. We first assume that f ∶ X → B n (K) Stab (I1,...,In) . Let B = dcl G (germ(f, p), ⌜X⌝) = dcl eq (germ(f, p), ⌜X⌝) ∩ G. Suppose that f is c-definable, and let q = tp(c B) and Q be its set of realizations. Fix some c ′ ∈ Q. We denote by f ′ the function obtained by replacing the parameter c by c ′ in the formula defining f . Let M be a small model containing Bcc ′ . Step Because tp(c B) = tp(c ′ B) we can find an automorphism σ ∈ Aut(M B) sending c to c ′ . Then u f ′ (a) = σ(u f (a) ), which is a definable type over ⌜f ′ (a)⌝. Let d ′ be a realization of u f ′ (a) ↾ M . Let r ′ (x, y) = tp(a, d ′ M ), then σ(r(x, y)) = r ′ (x, y) = r(x, y) by the B-invariance of r(x, y), so tp(a, d M ) = tp(a, d ′ M ). Since f (a) ∈ dcl eq (d) and f ′ (a) ∈ dcl eq (d ′ ), we must have that tp(a, f (a) M ) = tp(a, f ′ (a) M ) and since f and f ′ are both definable over M this implies that f (a) = f ′ (a). Step 2: The germ(f, p) is coded in the stabilizer sorts G over ⌜X⌝. Firstly, note that for any a ⊧ p(x) ↾ Bcc ′ it is the case that f (a) = f ′ (a). In fact, by Step 1 f (x) = f ′ (x) ∈ tp(a M ) and f (x) = f ′ (x) is a formula in tp(a Bcc ′ ) . Then f and f ′ both have the same p-germ. Since p(x) is definable over B = dcl G (B) the equivalence relation E stating that f and f ′ have both the same p-germ is B-definable. 
Since for any realization a ⊧ p(x) ↾ Bcc ′ it is the case that f (a) = f ′ (a), the class E(x, c) is B-invariant, therefore germ(f, p) is definable over B = dcl G (B). We continue arguing that the statement for f = X → B n (K) Stab (I1,...,In) is sufficient to conclude the entire result. For each ∆ ∈ RJ(Γ) there is a canonical isomorphism Γ ∆ ≅ K × O × ∆ , where O ∆ is the valuation ring of the coarsened valuation v ∆ induced by ∆. The functions whose image lie in Γ ∆ are being considered in the previous case, because Stab(O ∆ ) = O × ∆ . By Proposition 3.6 any definable function f = X → k = O M can be seen as a function whose image lies in B 2 (K) Stab (M,O) . It is only left to consider the case where the target set is K. The proof follows in a very similar manner as the case for f ∶ X → B n (K) Stab (I1,...,In) . Let a ⊧ p(x). We let a ⊧ p(x) and let r(x, y) ∶= tp(a, f (a) M ), this is a B-definable type by Theorem 5.9, in particular B-invariant. Likewise, r ′ (x, y) ∶= tp(a, f ′ (a) M ) is B-invariant, thus tp(a, f (a) M ) = tp(a, f ′ (a) M ). Since f and f ′ are both definable over M , this implies that f (a) = f ′ (a). The rest of the proof follows exactly as in the second step. Some useful lemmas In this subsection we prove several lemmas that will be required to code finite sets. Notation 6.26. Let U ⊆ K be a 1-torsor, U = a + bI where I ∈ I. Let A = bI, we consider the definable equivalence relation over U given by: E(b, b ′ ) if and only if b − b ′ ∈ MA. We write red(U ) to denote the definable quotient U E. We write p U (x) to denote some type centered in U extending the generic type of U Σ gen U (x) which is ⌜U ⌝definable. If U is closed such type is unique (see Proposition 5.2) and for the open case there are several choices for this type, but all of them are ⌜U ⌝ -definable by Proposition 6.20. Proof. We focus first on the construction of the type q, and later we show that it satisfies the required conditions. 
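The quotient red(U) of Notation 6.26 specializes, for U = A = O, to the residue map O → O/MO = k: two elements are E-equivalent exactly when their difference has positive valuation. Over Q with the 5-adic valuation this is reduction mod 5 on 5-integral rationals. A toy check (our own encoding; `pow(d, -1, p)` needs Python ≥ 3.8):

```python
from fractions import Fraction

def vp(x, p=5):
    # 5-adic valuation of a nonzero rational
    x = Fraction(x)
    if x == 0:
        return None
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def E(b1, b2, p=5):
    # b1 ~ b2  iff  b1 - b2 lies in MA = 5O, i.e. vp(b1 - b2) >= 1
    d = Fraction(b1) - Fraction(b2)
    return d == 0 or vp(d, p) >= 1

def red(b, p=5):
    # class of b in O/5O = F_5: for b = m/n with vp(n) = 0, m * n^{-1} mod 5
    b = Fraction(b)
    return b.numerator * pow(b.denominator, -1, p) % p

assert E(7, 2) and not E(7, 3)
assert red(7) == red(2) == 2
assert red(Fraction(1, 2)) == 3   # 1/2 = 3 in F_5 since 2*3 = 6 = 1
```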
Suppose that each 1-torsor B i = c i + b i I i for some I i ∈ I. By transitivity all the balls are of the same type I ∈ I and for all 1 ≤ i, j ≤ n we have that v(b i ) = v(b j ). Hence, we may assume that each B i is of the form c i + bI for some fixed c i , b ∈ K and I ∈ I. We argue by cases: 1. Case 1: All the 1-torsors B i are closed. For each i ≤ n, let p Bi (x) be the unique ⌜B i ⌝-definable type given by Proposition 5.2. Define r(x 1 , . . . , x n ) = p B1 (x 1 ) ⊗ ⋅ ⋅ ⋅ ⊗ p Bn (x n ), this is ⌜W * ⌝-definable type. Let (a 1 , . . . , a n ) ⊧ r(x 1 , . . . , x n ) and let q = tp(⌜{a 1 , . . . , a n }⌝ M). This type is well defined independently of the choice of the order, because each type p Bi (x) is generically stable, thus it commutes with any definable type by [32][Proposition 2.33]. The type q is ⌜W * ⌝-definable and centered at W * . Let S bI = v(b) + S I = {v(b) + v(x) x ∈ I}, this is a definable end-segment of Γ with no minimal element. Let r(y) be the ⌜S bI ⌝ definable type given by Fact 2.31, extending the partial generic type Σ gen S bI (y). Fix elements a = {a 1 , . . . , a n } ∈ W (M) and δ ⊧ r(y), we define C(a, δ) = {C 1 (a), . . . , C n (a)}, where each C i (a) is the closed ball around a i of radius δ. For each i ≤ n we take p Ci(a) (x) the unique extension of the generic type of C i (a) given by Proposition 5.2, this type is ⌜C i (a)⌝-definable. Let q a δ be the symmetrized generic type of C 1 (a) × ⋅ ⋅ ⋅ × C n (a), i.e. we take tp(⌜{b 1 , . . . , b n }⌝ Mδ) where (b 1 , . . . , b n ) is a realization of the generically stable type p C1(a) ⊗ ⋅ ⋅ ⋅ ⊗ p Cn(a) . Let q a be the definable global type satisfying that d ⊧ q a if and only if there is some δ ⊧ r(y) and d ⊧ q a δ . Claim 6.27.1. The type q a does not depend on the choice of a. Proof. Let a ′ = {a ′ 1 , . . . , a ′ n } ∈ W (M) and δ ⊧ r(y). For each i ≤ n, a i , a ′ i ∈ B i meaning that a i − a ′ i ∈ bI i.e. v(a i − a ′ i ) ∈ S bI = v(b) + S I and note that v(a i − a ′ i ) ∈ Γ(M) . 
By construction, δ ∈ S bI and δ < v(a i − a ′ i ), thus the closed ball of radius δ concentrated on a i is the same closed ball of radius δ concentrated on a ′ i . As the set of closed balls C(a, δ) = C(a ′ , δ) we must have that q a δ = q a ′ δ , and since this holds for any δ ⊧ r(y) we conclude that q a does not depend on the choice of a and we simply denote it as q. This type q is ⌜W * ⌝-definable and it is centered in W * . This finalizes the construction of the type q that we are looking for. We continue checking that the type q that we have constructed satisfies the other properties that we want. In both cases, by construction, if b * is a sufficiently generic realization of q over C and B is the finite set coded by b * if we take b i the unique element of B that lies on B i then b i realizes the generic type Σ gen Bi (x). By Corollary 6.21, the type tp(b i M) is ⌜B i ⌝-definable. If the torsors are closed, then the types p Bi (x) are all compatible under the action of Aut(M ⌜F ⌝) as there is a unique complete extension of the generic type of B i , this is guaranteed by Proposition 5.2. We now work the details for the open case, let's fix σ ∈ Aut(M ⌜F ⌝) and assume that σ(B i ) = B j . The type r(y) is ⌜F ⌝-definable, thus σ(r(y)) = r(y). By construction, for all k ≤ n the type p B k (x) that we are fixing is the unique extension of generic type of some closed ball C δ (a k ) where a k ∈ B i and δ ⊧ r(y). And for any a, a ′ ∈ B i and δ, δ ′ ⊧ r(y), C δ (a) = C δ ′ (a ′ ). If σ(B i ) = B j , then σ(b i ) ∈ B j and σ(C δ (b i )) = C σ(δ) (σ(b i )) = C δ (b j ). By Proposition 5.2 there is a unique complete extension of the generic type of the closed ball C δ (a k ) for each k ≤ n, thus σ(p Bi (x)) = p Bj (x) as desired. Notation 6.28. Let M ⊆ K n be a non-trivial definable O-module and let Z =d + M be a torsor. Let π n = K n → K be the projection into the last coordinate. 
Consider the function that describes the fiber in Z of each element at the projection, this is h Z (x) = {y ∈ K n−1 (y, x) ∈ Z}. Fact 6.29. Let M be a O-submodule of K n . Then for any x, z ∈ π n (M ) we have that h M (x) + h M (y) = h M (x + y). Furthermore, if Z =b + M ∈ K n M is a torsor, then for any d 1 , d 2 ∈ π n (Z) we have that d 1 − d 2 ∈ π n (N ) and: h N (d 1 − d 2 ) = h Z (d 1 ) − h Z (d 2 ). Proof. This is a straightforward computation and it is left to the reader. Let c be a realization of the type p A (x) sufficiently generic over ⌜M 1 ⌝⌜M 2 ⌝ and d = c − y. By Proposition 6.22 d is a realization of p A (x) sufficiently generic over ⌜M 1 ⌝⌜M 2 ⌝, and y = c − d. As germ(h M1 , p A ) = germ(h M2 , p A ), we have that h M1 (c) = h M2 (c) and h M1 (d) = h M2 (d). By Fact 6.29, h M1 (y) = h M1 (c) − h M1 (d) = h M2 (c) − h M2 (d) = h M2 (y). Consequently, M 1 = M 2 as desired. Corollary 6.31. Let n ≥ 2 be a natural number and N ⊆ K n be a definable O-submodule. Let Z =b + N be a torsor, then ⌜Z⌝ is interdefinable with (⌜π n (Z)⌝, germ(h Z , p πn(Z) )), where p πn (Z) is any global type containing the generic type of π n (Z). Proof. We first show that ⌜Z⌝ is interdefinable with ⌜π n (Z)⌝, ⌜N ⌝, germ(h Z , p πn(Z) ) . Let Z 1 =b 1 + N and Z 2 =b 2 + N torsors, and suppose that A = π n (Z 1 ) = π n (Z 2 ). Let c be a realization of the type p A (x) sufficiently generic over ⌜Z 1 ⌝⌜Z 2 ⌝, then h Z1 (c) = h Z2 (c). If Z 1 ≠ Z 2 , then they must be disjoint because they are different cosets of N . But if h Z1 (c) = h Z2 (c) then Z 1 ∩ Z 2 ≠ ∅, so Z 1 = Z 2 . We continue showing that N is definable over (⌜π n (Z)⌝, germ(h Z , p πn(Z) )). We will find a global type p πn(N ) (x) extending the generic type of π n (N ) such that (⌜π n (N )⌝, germ(h N , p πn(N ) )) ∈ dcl eq (⌜π n (Z)⌝, germ(h Z , p πn(Z) )). By Lemma 6.30, this guarantees that ⌜N ⌝ ∈ dcl eq (⌜π n (Z)⌝, germ(h Z , p πn(Z) )). First, let y ′ ∈ π n (Z), then π n (N ) = {y − y ′ y ∈ π n (Z)}. 
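The additivity of the fiber map h M in Fact 6.29 can be verified in a toy case. Take M = O·(1,1) + O·(5,0) ⊂ K² with coordinates (y, x), so that (y, x) ∈ M iff x ∈ O and y − x ∈ 5O; then h M (x) = x + 5O, a coset of 5O, and the identity h M (x₁) + h M (x₂) = h M (x₁ + x₂) becomes addition of residues mod 5 (our own encoding of cosets by their residue):

```python
# Encode the coset y0 + 5O (inside O, 5-adically) by the residue y0 mod 5.
def h(x):
    # fiber of M = O(1,1) + O(5,0) over x: {y : (y, x) in M} = x + 5O
    return x % 5

for x1, x2 in [(2, 4), (3, 3), (7, 9)]:
    # coset addition (x1 + 5O) + (x2 + 5O) = (x1 + x2) + 5O matches h(x1 + x2)
    assert (h(x1) + h(x2)) % 5 == h(x1 + x2)
```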
As this definition is independent of the choice of y′, we have ⌜π_n(N)⌝ ∈ dcl^eq(⌜π_n(Z)⌝).

Proof. We proceed by contradiction and assume the existence of some proper M-definable subtorsor B ⊆ π_n(N) such that d_2 − d_1 ∈ B. Then d_2 ∈ d_1 + B ⊊ π_n(Z), and d_1 + B is a proper Md_1-definable subtorsor of π_n(Z), but this contradicts that d_2 is a sufficiently generic realization of Σ^gen_{π_n(Z)}(x) over Md_1.

Let p_{π_n(N)}(x) = tp(d_2 − d_1 ∣ M); we observe that this type is independent of the choices of d_1 and d_2, as the congruence and coset formulas are completely determined in the type q(x_2, x_1). It is only left to show that germ(h_N, p_{π_n(N)}) ∈ dcl^eq(⌜π_n(Z)⌝, germ(h_Z, p_{π_n(Z)})). Let σ ∈ Aut(M/(⌜π_n(Z)⌝, germ(h_Z, p_{π_n(Z)}))); we will show that h_N(x) = h_{σ(N)}(x) ∈ p_{π_n(N)}(x). Because p_{π_n(N)}(x) is ⌜π_n(N)⌝-definable, σ(p_{π_n(N)}(x)) = p_{π_n(N)}(x). As σ(germ(h_Z, p_{π_n(Z)})) = germ(h_Z, p_{π_n(Z)}), we have h_Z(x) = h_{σ(Z)}(x) ∈ p_{π_n(Z)}. Let C = {⌜Z⌝, ⌜σ(Z)⌝, ⌜N⌝, ⌜σ(N)⌝}. In particular, if d_1 ⊧ p_{π_n(Z)} ↾ C and d_2 ⊧ p_{π_n(Z)} ↾ Cd_1, then h_Z(d_1) = h_{σ(Z)}(d_1) and h_Z(d_2) = h_{σ(Z)}(d_2). By Fact 6.29,

h_N(d_2 − d_1) = h_Z(d_2) − h_Z(d_1) = h_{σ(Z)}(d_2) − h_{σ(Z)}(d_1) = h_{σ(N)}(d_2 − d_1).

Consequently σ(germ(h_N, p_{π_n(N)})) = germ(h_N, p_{π_n(N)}). Because σ is arbitrary, we conclude that germ(h_N, p_{π_n(N)}) ∈ dcl^eq(⌜π_n(Z)⌝, germ(h_Z, p_{π_n(Z)})), as required.

Some coding lemmas

Lemma 6.32. Let A be a definable O-lattice in K^n and let U ∈ K^n/A be a torsor. Let B be the O-lattice in K^{n+1} that is interdefinable with U (given by Proposition 3.6). Then there is a ⌜U⌝-definable injection:

f ∶ red(U) → red(B), b + MA ↦ (b, 1) + MB.

Proof. We recall how the construction of B was achieved. Given any d ∈ U, we can represent B = A_2 + (d, 1)O, where A_2 = A × {0}. For any b, b′ ∈ U, b − b′ ∈ MA if and only if (b, 1) − (b′, 1) ∈ MB. This shows that the map f is a ⌜U⌝-definable injection.

Lemma 6.33.
Let F be a primitive finite set of 1-torsors; then F can be coded in G.

Proof. If ∣F∣ = 0 or ∣F∣ = 1 the statement clearly follows, so we may assume that ∣F∣ > 1. By primitivity all the torsors in F are translates of the same O-submodule of K. Indeed, there are some b ∈ K and I ∈ I such that for any t ∈ F there is some a_t ∈ K satisfying t = a_t + bI. Moreover, there is some δ ∉ v(b) + S_I such that for any two different torsors t, t′ ∈ F, if x ∈ t and y ∈ t′ then v(x − y) = δ. Let T = ⋃_{t∈F} t. We define

J_F = {Q(x) ∈ K[x] ∣ Q(x) has degree at most ∣F∣ and for all x ∈ T, v(Q(x)) ∈ v(b) + (∣F∣ − 1)δ + S_I}.

Step 1: ⌜J_F⌝ is interdefinable with ⌜F⌝. Observe that J_F is definable over ⌜F⌝, because v(b), ⌜T⌝ and δ lie in dcl^eq(⌜F⌝). Hence, it is sufficient to prove that we can recover F from J_F. For this we will show that, given a monic polynomial Q(x) ∈ K[x] with exactly ∣F∣ different roots in K, each of multiplicity one, we have Q(x) ∈ J_F if and only if Q(x) satisfies the following condition:

Condition: Let {β_1, …, β_{∣F∣}} ⊆ K be the set of all the roots of Q(x) (note that all of them are different). For each 1 ≤ i ≤ ∣F∣ there is some t ∈ F such that β_i ∈ t, and all the roots of Q(x) lie in different torsors, i.e. if i ≠ j and t, t′ ∈ F are such that β_i ∈ t and β_j ∈ t′, then t ≠ t′.

We first show that a monic polynomial Q(x) with exactly ∣F∣ different roots in K, each of multiplicity one, satisfying the condition above belongs to J_F. Let R = {β_1, …, β_{∣F∣}} ⊆ K be the set of all the (different) roots of Q(x). Let x ∈ T; then there is some t ∈ F such that x ∈ t. Let β_i be the root of Q(x) that belongs to t; then x, β_i ∈ t, so v(x − β_i) ∈ v(b) + S_I. For any other index j ≠ i, let t′ ∈ F be such that β_j ∈ t′; because t ≠ t′, v(x − β_j) = δ. Summarizing, we have

v(Q(x)) = v(∏_{k≤∣F∣}(x − β_k)) = v(x − β_i) + ∑_{j≠i} v(x − β_j),

where v(x − β_i) ∈ v(b) + S_I and ∑_{j≠i} v(x − β_j) = (∣F∣ − 1)δ, so v(Q(x)) ∈ v(b) + (∣F∣ − 1)δ + S_I. Consequently, Q(x) ∈ J_F.
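Before turning to the converse, it may help to see the defining condition of J_F in a concrete toy case. The following example is ours, not taken from the source; we assume, for illustration only, a model whose value group is Γ = ℤ.

```latex
% Toy example (ours): \Gamma = \mathbb{Z}, I = \mathcal{O}, so S_I = \Gamma_{\geq 0};
% take b = 1 (so v(b) = 0) and \delta = -1 \notin v(b) + S_I.
% Two torsors: t_1 = 0 + \mathcal{O} and t_2 = a + \mathcal{O} with v(a) = -1,
% so that x \in t_1, y \in t_2 gives v(x - y) = \min(v(x), v(a)) = -1 = \delta.
% The monic polynomial Q(x) = x(x - a) has one root in each torsor, and indeed:
\begin{align*}
x \in t_1 &:\quad v(Q(x)) = v(x) + v(x - a) \in S_I + \delta
            = \Gamma_{\geq -1} = v(b) + (\lvert F\rvert - 1)\delta + S_I,\\
x \in t_2 &:\quad v(Q(x)) = v(x) + v(x - a) \in \delta + S_I = \Gamma_{\geq -1},
\end{align*}
% so Q \in J_F.  By contrast, Q(x) = x^2 puts both roots in t_1, and for
% x \in t_2 we get v(Q(x)) = 2\delta = -2 \notin \Gamma_{\geq -1}, so x^2 \notin J_F.
```

The second computation is exactly the failure mode that Claim 6.33.1 exploits: a polynomial whose roots do not spread over all the torsors picks up valuation at most ∣F∣δ somewhere on T, which falls outside v(b) + (∣F∣ − 1)δ + S_I.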
For the converse, let Q(x) ∈ J_F be a monic polynomial with exactly ∣F∣ different roots R = {β_1, …, β_{∣F∣}} ⊆ K. We show that Q(x) satisfies the condition, i.e. each root belongs to some torsor t ∈ F and any two different roots belong to different torsors of F.

Claim 6.33.1. Given any torsor t ∈ F, there is a unique root β ∈ R such that for all elements x ∈ t, v(x − β) > δ.

Proof. Let t ∈ F be a fixed torsor. We first show the existence of some root β ∈ R such that for any x ∈ t we have v(x − β) > δ. We argue by contradiction: let t ∈ F and assume that there is no root β ∈ R such that v(x − β) > δ for all x ∈ t. Then for each element x ∈ t we have:

v(Q(x)) = v(∏_{i≤∣F∣}(x − β_i)) = ∑_{i≤∣F∣} v(x − β_i) ≤ ∣F∣δ.

In this case Q(x) ∉ J_F, because ∣F∣δ ∉ v(b) + (∣F∣ − 1)δ + S_I, as δ ∉ v(b) + S_I. This concludes the proof of existence.

For uniqueness, let {t_1, …, t_{∣F∣}} be some fixed enumeration of F. Let β_i ∈ R be such that for all x ∈ t_i we have v(x − β_i) > δ. We first argue that for any i ≠ j we must have β_i ≠ β_j. Suppose by contradiction that β_i = β_j = β, and let x ∈ t_i and y ∈ t_j; then:

δ = v(x − y) = v((x − β) + (β − y)) ≥ min{v(x − β), v(y − β)} > δ.

The uniqueness now follows because ∣F∣ = ∣R∣.

By Claim 6.33.1, we can fix an enumeration {t_i ∣ i ≤ ∣F∣} of F such that for any x ∈ t_i, v(x − β_i) > δ. We note that if j ≠ i, then for any x ∈ t_i we have v(x − β_j) = δ. In fact, fix some y ∈ t_j; as v(y − β_j) > δ we have:

v(x − β_j) = v((x − y) + (y − β_j)) = min{v(x − y), v(y − β_j)} = δ,

since v(x − y) = δ and v(y − β_j) > δ.

Claim 6.33.2. For each i ≤ ∣F∣ we have β_i ∈ t_i.

Proof. Fix some i ≤ ∣F∣. For any x ∈ t_i:

v(Q(x)) = ∑_{k≤∣F∣} v(x − β_k) = v(x − β_i) + ∑_{j≠i} v(x − β_j) = v(x − β_i) + (∣F∣ − 1)δ.

Because Q(x) ∈ J_F, we must have v(x − β_i) ∈ v(b) + S_I. Thus β_i ∈ t_i. Moreover, by construction, if i ≠ j then t_i ≠ t_j.

Step 2: F admits a code in the geometric sorts. By the first step, F is interdefinable with J_F.
The latter is an O-module, so by Lemma 5.5 it admits a code in the stabilizer sorts G.

Lemma 6.34. Let F be a primitive finite set of 1-torsors such that ∣F∣ > 1. There is a ⌜F⌝-definable O-lattice s ⊆ K² and a ⌜F⌝-definable injective map g ∶ F → VS_{k,⌜s⌝}.

Proof. Let F be a primitive finite set of 1-torsors. By primitivity, there are some d ∈ K and I ∈ I such that for any t ∈ F there is some a_t ∈ K satisfying t = a_t + dI. Moreover, there is some δ ∈ Γ ∖ (v(d) + S_I) such that for any pair of different torsors t, t′ ∈ F and any x ∈ t, y ∈ t′ we have v(x − y) = δ. Let T = ⋃_{t∈F} t, and take elements c ∈ T and b ∈ K such that v(b) = δ. Let U = c + bO. Then U is the smallest closed 1-torsor that contains all the elements of F. Note that U is definable over ⌜F⌝. Let h be the map sending each element of F to the unique class of red(U) that contains it.

Coding of finite sets of tuples in the stabilizer sorts

We start by recalling some terminology from previous sections for the sake of clarity.

Notation 6.36. Let M ⊆ K^n be an O-module, and let (I_1, …, I_n) ∈ I be such that M ≅ ⊕_{i≤n} I_i. For any torsor Z = d + M ∈ K^n/M we say that Z is of type (I_1, …, I_n) and that it has complexity n. We denote by π_n ∶ K^n → K the projection to the last coordinate, and for a torsor Z = d + M ∈ K^n/M we write A_Z = π_n(Z). We recall as well the notation introduced in 6.28 for the function that describes the fiber in Z over each element of the projection: h_Z(x) = {y ∈ K^{n−1} ∣ (y, x) ∈ Z}.

Definition 6.37. Let F be a finite set of torsors; the complexity of F is the maximum complexity of the torsors t ∈ F.

The following is a very useful fact that we will use repeatedly.

Fact 6.38. Let F be a finite set of torsors. Then there is a finite set F′ ⊆ G such that ⌜F⌝ and ⌜F′⌝ are interdefinable. In particular, any definable function f ∶ F → P, where P is a finite set of torsors or P ⊆ G, is interdefinable with a function g ∶ F′ → G, where F′ ⊆ G.

Proof. The statement follows immediately from Proposition 3.6.
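To make the torsor-to-lattice identification of Proposition 3.6 concrete in the lowest-dimensional case, here is a sketch for a single closed 1-torsor. This specialization is ours, and the ordering of the two coordinates is an assumption on our part:

```latex
% Sketch (ours): a closed 1-torsor U = d + b\mathcal{O} \subseteq K corresponds to the
% rank-2 lattice obtained by adjoining one extra coordinate:
B \;=\; (b\mathcal{O} \times \{0\}) + (d,1)\mathcal{O}
  \;=\; \{(\lambda d + \mu b,\ \lambda) \mid \lambda, \mu \in \mathcal{O}\} \subseteq K^2.
% B does not depend on the chosen d: if d' = d + \mu_0 b \in U, then
% (d',1) = (d,1) + \mu_0(b,0) \in B, so both choices generate the same module.
% Conversely, U is recovered from B as the fiber over 1 in the last coordinate:
U \;=\; \{x \in K \mid (x,1) \in B\}.
```

On this reading, the parameter d needed to name the torsor is absorbed into the module B, which is coded in the stabilizer sorts; Lemma 6.32 then refines the picture to the reductions, sending the class b + MA to (b, 1) + MB.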
The main goal of this section is the following theorem.

Theorem 6.39. For every m ∈ N_{≥1} the following hold:
• I_m: For every r > 0 and every finite set F ⊆ G^r of size m, the code ⌜F⌝ is interdefinable with a tuple of elements in G.
• II_m: For every F ⊆ G of size m and every definable function f ∶ F → G, the code ⌜f⌝ is interdefinable with a tuple in G.

We will prove this statement by induction on m; we note that for m = 1 the statements I_m and II_m follow trivially. We now assume that I_k and II_k hold for each k ≤ m and we want to show I_{m+1} and II_{m+1}. In order to keep the steps of the proof easier to follow we break it into smaller steps, and we write each step as a proposition to make the document more readable.

Proposition 6.40. Let F be a finite set of torsors of size at most m + 1; then ⌜F⌝ is interdefinable with a tuple of elements in G. Furthermore, any definable function f ∶ S → F, where S is a finite set of at most m + 1 many 1-torsors and F is a finite set of torsors, can be coded in G.

Proof. We will start by proving the following statements by simultaneous induction on n:
• A_n: Any set F of torsors of size at most m + 1 and of complexity at most n can be coded in G.
• B_n: Every definable function f ∶ S → F, where S is a finite set of at most m + 1 many 1-torsors and F is a finite set of torsors of complexity at most n, can be coded in G.

We first observe that in A_n we may assume that F is a primitive set of size m + 1. If ∣F∣ ≤ m the statement follows immediately from Fact 6.38 combined with I_k for each k ≤ m. So we may assume that F has m + 1 elements. If F is not primitive, then we can find a non-trivial equivalence relation E definable over ⌜F⌝; let C_1, …, C_l be the equivalence classes. For each i ≤ l we have ∣C_i∣ ≤ m, so by Fact 6.38, and because I_k holds for each k ≤ m, ⌜C_i⌝ is interdefinable with a tuple c_i of elements in G. Because l ≤ m and I_l holds, we can find a code c in the stabilizer sorts for the set {c_1, …, c_l}.
The code ⌜F⌝ is interdefinable with c ∈ G. Likewise, for B_n we may assume that S is primitive over ⌜f⌝. Otherwise, there is a (⌜f⌝ ∪ ⌜S⌝)-definable equivalence relation E on S; let C_1, …, C_l be the equivalence classes of this relation. For each i ≤ l we have ∣C_i∣ ≤ m; let f_i = f ↾ C_i. By Fact 6.38, for each i ≤ l the code ⌜f_i⌝ is interdefinable with a map g_i ∶ S_i → G, where S_i ⊆ G and ∣S_i∣ ≤ m. Because II_k holds for each k ≤ m, f_i admits a code c_i in G. Because I_l holds, we can find a code c for the finite set {c_1, …, c_l}. The codes ⌜f⌝ and c are interdefinable.

We continue with the base case n = 1. The statement A_1 holds by Lemma 6.33, while B_1 is given by Lemma 6.35(3). We now assume that A_n and B_n hold and we prove A_{n+1} and B_{n+1}.

First we prove that A_{n+1} holds. Let F be a primitive finite set of torsors of size m + 1. By primitivity all the torsors in F are of the same type. For each Z ∈ F we write A_Z to denote the projection of Z onto the last coordinate. By primitivity of F the projections to the last coordinate are either all equal or all different. We argue by cases:

For each x* ∈ W* the functions f_{x*} and l_{x*} can be coded in G.

[Figure: the torsors Z, Z′, Z″ over the common projection, with the fibers l_{x*}(A_Z), l_{x*}(A_{Z′}), l_{x*}(A_{Z″}) above x*.]

Proof. We argue first for the function f_{x*}. If S is primitive over ⌜f_{x*}⌝ the statement follows by Lemma 6.35(2). If S is not primitive over ⌜f_{x*}⌝ then there is an equivalence relation E definable over S ∪ ⌜f_{x*}⌝; let C_1, …, C_l be the equivalence classes of E. For each i ≤ l we have ∣C_i∣ ≤ m; let f^i_{x*} = f_{x*} ↾ C_i ∶ C_i → K. For each i ≤ l the code ⌜f^i_{x*}⌝ is interdefinable with a tuple c_i of elements in G; this follows by combining Fact 6.38 and II_k for each k ≤ m. Because I_l holds, the set {c_1, …, c_l} admits a code c in the stabilizer sorts. Then ⌜f_{x*}⌝ and c are interdefinable. For the function l_{x*}, the statement follows immediately by the induction hypothesis B_n.
By the induction hypothesis B_n, for each x ∈ A the function g_x can be coded in G, because its range is of lower complexity. By compactness we can uniformize such codes, so we can define the function r ∶ A → G sending x ↦ ⌜g_x⌝. By Theorem 6.25 the germ of r over p_A(x) can be coded in G over ⌜A⌝. By Lemma 6.33 the set S admits a code ⌜S⌝ in the stabilizer sorts.

[Figure: the torsors t, t′, t″ of S and their images f(t), f(t′), f(t″), all with the common projection A.]

Claim 6.40.4. The code ⌜f⌝ is interdefinable with (⌜A⌝, ⌜S⌝, germ(r, p_A)), and the latter is a sequence of elements in G.

Proof. It is clear that (⌜A⌝, ⌜S⌝, germ(r, p_A)) ∈ dcl^eq(⌜f⌝). We want to show that ⌜f⌝ ∈ dcl^eq(⌜A⌝, ⌜S⌝, germ(r, p_A)). Let σ ∈ Aut(M/(⌜A⌝, ⌜S⌝, germ(r, p_A))). By Corollary 6.31, for each torsor Z ∈ F = {f(t) ∣ t ∈ S} the code ⌜Z⌝ is identified with the tuple (⌜A⌝, germ(h_Z, p_A)). Thus, the function f is interdefinable over ⌜A⌝ with the function f′ ∶ S → G that sends t ↦ germ(h_{f(t)}, p_A). So, it is sufficient to argue that σ(⌜f′⌝) = ⌜f′⌝. Let B be the set of parameters required to define all the objects that have been mentioned so far. For any realization c of p_A(x) sufficiently generic over B we must have r(c) = σ(r)(c), because germ(r, p_A) = σ(germ(r, p_A)) = germ(σ(r), p_A). By definition, r(c) = ⌜g_c⌝ and σ(r)(c) = ⌜σ(g)_c⌝, where σ(g)_c ∶ S → G is the function that sends t ↦ ⌜h_{σ(f)(t)}(c)⌝. For any torsor t ∈ S there must be a unique element t′ ∈ S such that σ(t′) = t and h_{f(t)}(c) = h_{σ(f)(σ(t′))}(c), as g_c = σ(g)_c. The latter implies that germ(h_{f(t)}, p_A) = germ(h_{σ(f)(σ(t′))}, p_A). We conclude that σ(t′, germ(h_{f(t′)}, p_A)) = (t, germ(h_{f(t)}, p_A)), meaning that σ acts as a bijection among the elements of the graph of f′. Therefore, σ(⌜f′⌝) = ⌜f′⌝, as desired. This completes the proof of the first case.

2. Case 2: All the projections are different, i.e. A_Z ≠ A_{Z′} for all Z ≠ Z′ ∈ F.

Proof.
Let f ∶ S → F be a definable injective function, where S is a finite set of 1-torsors primitive over ⌜f⌝. We consider the definable function that sends each torsor t ∈ S to the code of the projection onto the last coordinate of the torsor f(t) ∈ F; more explicitly:

π_{n+1} ∘ f ∶ S → G, t ↦ ⌜π_{n+1}(f(t))⌝.

By Lemma 6.35(3), π_{n+1} ∘ f can be coded in G, and by A_{n+1} the finite set F is coded by a tuple in G. It is sufficient to show the following claim:

Claim 6.40.5. The code ⌜f⌝ is interdefinable with the tuple (⌜π_{n+1} ∘ f⌝, ⌜F⌝), which is a tuple in the stabilizer sorts.

Proof. Clearly (⌜π_{n+1} ∘ f⌝, ⌜F⌝) ∈ dcl^eq(⌜f⌝). Note that ⌜S⌝ ∈ dcl^eq(⌜π_{n+1} ∘ f⌝), because S is the domain of the given function. We can describe the function f ∶ S → F by sending t ↦ ⌜Z_t⌝, where Z_t is the unique torsor in F such that ⌜π_{n+1}(Z_t)⌝ = (π_{n+1} ∘ f)(t); we conclude that ⌜f⌝ ∈ dcl^eq(⌜π_{n+1} ∘ f⌝, ⌜F⌝). As a consequence, f is coded in G by the tuple (⌜π_{n+1} ∘ f⌝, ⌜F⌝). This finishes the proof of the second case.

Consequently, A_n and B_n hold for all n ∈ N. The statement follows.

We continue by arguing that I_{m+1} holds for r = 1.

Proposition 6.41. Let F ⊆ G be a finite set of size m + 1; then F admits a code in G.

Proof. If F is not primitive, we show that ⌜F⌝ can be coded in G by using Fact 6.38 and the induction hypothesis I_k for k ≤ m. We may therefore assume that F is a primitive set, so all the elements of F lie in the same sort. If F is contained in the main field or in the residue field, then F is coded by a tuple of elements of the same field, because fields uniformly code finite sets. If F ⊆ Γ/∆ for some ∆ ∈ RJ(Γ), the statement follows, as there is a definable order on the elements of F. If F ⊆ B_n(K)/Stab_{(I_1,…,I_n)} for some n ≥ 2, then by Proposition 6.40 F admits a code in G (indeed, O-modules are in particular torsors).

We continue by showing that II_{m+1} holds; we first prove the following statement.

Proposition 6.42.
Let F be a finite set of torsors of size m + 1 and let f ∶ F → P be a definable bijection, where P is a finite set of torsors. Suppose that F is primitive over ⌜f⌝; then ⌜f⌝ is interdefinable with a tuple of elements in G.

Proof. We proceed by induction on the complexity of the torsors in F. The base case follows directly from Proposition 6.40. We assume the statement for any set of torsors F of complexity n and we prove it for complexity n + 1. By primitivity, the projections onto the last coordinate are either all equal or all distinct. For each torsor Z ∈ F we denote by A_Z the projection of Z onto the last coordinate. We argue by cases:

1. Case 1: All the projections are equal; let A = A_Z for all Z ∈ F. For each x ∈ A, let I_x = {⌜h_Z(x)⌝ ∣ Z ∈ F}, which describes the set of fibers at x. We define B = {x ∈ A ∣ ∣I_x∣ = ∣F∣}, which is a ⌜F⌝-definable set. For each y ∈ B we consider the map g_y ∶ I_y → P defined by sending h_Z(y) ↦ f(Z); this is the function that sends each fiber to the image of the torsor under f. By the induction hypothesis we can find a code ⌜g_y⌝ in G, and by compactness we can uniformize such codes. Therefore we can define the function r ∶ B → G sending y ↦ ⌜g_y⌝.

Let p_A(x) be a global complete type containing the generic type of A; it is ⌜A⌝-definable by Corollary 6.21. By Corollary 6.31, p_A(x) ⊢ x ∈ B. In fact, if we fix a realization c of p_A(x) sufficiently generic over {⌜Z⌝ ∣ Z ∈ F} and Z ≠ Z′ ∈ F, then the fibers h_Z(c) and h_{Z′}(c) must be different. By Theorem 6.25 the germ of r over p_A(x) can be coded in G over ⌜A⌝.

Claim 6.42.1. The code ⌜f⌝ is interdefinable with (germ(r, p_A), ⌜F⌝), which is a tuple in the stabilizer sorts G.

Proof. Clearly (germ(r, p_A), ⌜F⌝) ∈ dcl^eq(⌜f⌝). We will argue that for any automorphism σ ∈ Aut(M/(⌜F⌝, germ(r, p_A))) we have σ(⌜f⌝) = ⌜f⌝.
As each torsor Z ∈ F is identified with the tuple (⌜A⌝, germ(h_Z, p_A)), and ⌜A⌝ ∈ dcl^eq(⌜F⌝), it is sufficient to argue that:

σ({(germ(h_Z, p_A), f(Z)) ∣ Z ∈ F}) = {(germ(h_Z, p_A), f(Z)) ∣ Z ∈ F}.

For any Z ∈ F there is a unique torsor Z′ ∈ F such that σ(Z′) = Z, because σ(⌜F⌝) = ⌜F⌝. Let D be the set of parameters required to define all the objects that have been mentioned so far. For any realization c of the type p_A(x) sufficiently generic over D we have r(c) = σ(r)(c), because σ(germ(r, p_A)) = germ(σ(r), p_A). Consequently r(c) = ⌜g_c⌝ = ⌜σ(g)_c⌝ = σ(r)(c). In particular, h_{σ(Z′)}(c) = h_Z(c), which implies that germ(h_{σ(Z′)}, p_A) = germ(h_Z, p_A). In addition, σ(f)(σ(Z′)) = σ(g)_c(h_{σ(Z′)}(c)) = g_c(h_Z(c)) = f(Z). Therefore,

σ({(germ(h_Z, p_A), f(Z)) ∣ Z ∈ F}) = {(germ(h_Z, p_A), f(Z)) ∣ Z ∈ F},

as desired.

2. Case 2: All the projections are different, i.e. for all Z ≠ Z′ ∈ F we have A_Z ≠ A_{Z′}. By Proposition 6.40 we can find a code in the stabilizer sorts for F, and ⌜F⌝ ∈ dcl^eq(⌜f⌝), as F is the domain of this function. Let S = {A_Z ∣ Z ∈ F} and define the function g ∶ S → F by sending A_Z ↦ Z, where Z is the unique torsor in F satisfying π_{n+1}(Z) = A_Z. Clearly g is a ⌜F⌝-definable bijection. We consider the map f ∘ g ∶ S → P that sends A_Z ↦ f(Z). By Proposition 6.40, the function f ∘ g admits a code in the stabilizer sorts.

Claim 6.42.2. The code ⌜f⌝ is interdefinable with the tuple (⌜f ∘ g⌝, ⌜F⌝), which is a tuple in the stabilizer sorts.

Proof. It is clear that (⌜f ∘ g⌝, ⌜F⌝) ∈ dcl^eq(⌜f⌝). For the converse, note that S is definable over ⌜f ∘ g⌝, as it is its domain. As F is given, we can define the function π ∶ F → S that sends Z ↦ A_Z; this is the map that sends each torsor to its projection onto the last coordinate. We observe that f = (f ∘ g) ∘ π; in fact f(Z) = (f ∘ g)(A_Z). So ⌜f⌝ ∈ dcl^eq(⌜f ∘ g⌝, ⌜F⌝).

Proposition 6.43.
For every finite set F ⊆ G of size m + 1 and every definable function f ∶ F → G, the code ⌜f⌝ is interdefinable with a tuple of elements in G.

Proof. Without loss of generality we may assume that F is primitive over ⌜f⌝. Otherwise, there is a (⌜F⌝ ∪ ⌜f⌝)-definable equivalence relation on F; let C_1, …, C_l be the equivalence classes. For each i ≤ l we have ∣C_i∣ ≤ m; let f_i = f ↾ C_i. By the induction hypothesis II_k for each k ≤ m, the code ⌜f_i⌝ is interdefinable with a tuple c_i ∈ G. Because l ≤ m and I_l holds, the set {c_1, …, c_l} admits a code c ∈ G. Then ⌜f⌝ and c are interdefinable. Hence, we may assume that F is primitive over ⌜f⌝.

By primitivity f is either constant or injective. If f is constant, equal to some c, then ⌜f⌝ is interdefinable with the tuple (⌜F⌝, c), which lies in the stabilizer sorts by Proposition 6.41. Summarizing, we may assume that f is an injective function and F is primitive over ⌜f⌝. By primitivity all the torsors of F lie in the same sort. If F is contained in the residue field, then F is interdefinable with the code of a finite set of 1-torsors of type M and the statement follows by Proposition 6.42. If F ⊆ B_n(K)/Stab_{(I_1,…,I_n)} for some n ≥ 2, the statement follows by Proposition 6.42, because O-modules are torsors. If F ⊆ Γ/∆ for some ∆ ∈ RJ(Γ), then we can list the elements of F in increasing order γ_1 < ⋯ < γ_{m+1}, and the tuple (γ_i, f(γ_i))_{1≤i≤m+1} lies in the stabilizer sorts and is interdefinable with the code of f.

It is therefore left to consider the case where F ⊆ K. We may assume that ⌜F⌝ is a tuple of elements in the main field, as fields code finite sets. Let U be the smallest closed torsor that contains all the elements of F; this is a ⌜F⌝-definable set. Let g be the function that sends each element x ∈ F to the unique class of red(U) that contains it. Let s be the O-lattice whose code is interdefinable with ⌜U⌝, and let h ∶ red(U) → red(s) be the map given by Lemma 6.32.
Let D = h ∘ g(F), which is a ⌜F⌝-definable finite subset of red(s). By Proposition 6.42, the composition f ∘ g^{−1} ∘ h^{−1} ∶ D → G can be coded in the stabilizer sorts G. As h ∘ g ∶ F → D is a ⌜F⌝-definable bijection, f is interdefinable with the tuple (⌜F⌝, ⌜f ∘ g^{−1} ∘ h^{−1}⌝), which is a sequence of elements in G.

Finally, we conclude by proving that I_{m+1} holds for r > 0.

Proposition 6.44. For any r > 0, let F ⊆ G^r be a finite set of size m + 1. Then F can be coded in G.

Proof. Let r > 0 and let F be a finite subset of G^r of size m + 1. Suppose that F is not primitive; that means we can find a non-trivial equivalence relation E definable over ⌜F⌝, with classes C_1, …, C_l. For each i ≤ l we have ∣C_i∣ ≤ m, so because I_k holds for each k ≤ m we can find a code c_i ∈ G. As l ≤ m and I_l holds, we can find a code c in the stabilizer sorts for the set {c_1, …, c_l}. The code ⌜F⌝ is interdefinable with c.

We may thus assume that F is a primitive set. Let π_i ∶ G^r → G be the projection onto the i-th coordinate. By primitivity of F, each projection π_i is either constant or injective. As ∣F∣ > 1, there must be an index 1 ≤ i_0 ≤ r such that π_{i_0} is injective and F_0 = π_{i_0}(F) is a primitive finite subset of G. By Proposition 6.41 we can find a code ⌜F_0⌝ in G. For each other index i ≠ i_0, by Proposition 6.43 the map π_i ∘ π_{i_0}^{−1} ∶ F_0 → G can be coded in the stabilizer sorts. Then ⌜F⌝ is interdefinable with the tuple (⌜F_0⌝, (⌜π_i ∘ π_{i_0}^{−1}⌝)_{i≠i_0}), which is a tuple in the stabilizer sorts, as required.

This completes the induction on the cardinality of the set F. Because I_m holds for each m ∈ N, we can conclude with the following statement.

Theorem 6.45. Let r > 0 and let F ⊆ G^r be a finite set; then ⌜F⌝ is interdefinable with a tuple of elements in G.

Putting everything together

We conclude this section with our main theorem.

Theorem 6.46.
Let K be a henselian valued field of equicharacteristic zero, with algebraically closed residue field and dp-minimal value group. Then K eliminates imaginaries in the language L_G, where the stabilizer sorts are added.

Proof. By Theorem 5.10, K has weak elimination of imaginaries down to the stabilizer sorts. By Fact 6.4 it is sufficient to show that finite sets can be coded; this is guaranteed by Theorem 6.45.

Theorem 2.7. The possible completions of the theory of regular groups are: 1. the theory of discrete regular groups, and 2. the completions of the theory of dense regular groups T_χ, where χ ∶ Primes → N ∪ {∞} is a function specifying the index χ(p) = [Γ ∶ pΓ].

Fact 2.35. There is a one-to-one correspondence between the O-submodules of K and the end-segments of Γ. Given an O-submodule M ⊆ K, we have that S_M ∶= {v(x) ∣ x ∈ M} is an end-segment of Γ. We refer to S_M as the end-segment induced by M. And given an end-segment S ⊆ Γ, the set M_S

Fact 2.37. Let F = {M_S ∣ S ∈ S}, where S is the complete family of definable end-segments described in Corollary 2.24. Then F is a complete family of O-submodules of K.

Definition 2.38. 1. Let w ∶ K → Γ_w be a valuation, γ ∈ Γ_w and a ∈ K. The closed ball of radius γ centered at a according to the valuation w is the set of the form B̄_γ(a) = {x ∈ K ∣ γ ≤ w(x − a)}, and the open ball of radius γ centered at a according to the valuation w is the set of the form B_γ(a) = {x ∈ K ∣ γ < w(x − a)}. 2. A swiss cheese according to the valuation w is a set of the form A ∖ (B_1 ∪ ⋯ ∪ B_n), where for each i ≤ n, B_i ⊊ A, and the B_i and A are balls according to the original valuation w ∶ K → Γ_w.

Fact 2.41. Let (K, w) be a henselian valued field of equicharacteristic zero, and Q

Fact 2.47. Let N ⊆ K be a non-trivial O-submodule. Let n ∈ N ∖ {0}; then N = nI, where I is a copy of K, O or an (integral or fractional) ideal of O.

Theorem 2.49. Let K be a maximal valued field and n ∈ N_{≥1}.
Let N ⊆ K^n be an O-submodule; then N is maximal, and N is definably isomorphic to a direct sum of copies of K, O and (integral or fractional) ideals of O. Moreover, if N ≅ ⊕_{i≤n} I_i, where each I_i is either a copy of K, O or an (integral or fractional) ideal of O, one can find an upper triangular basis {a_1, .

Corollary 3.2. Let (F, v) be a henselian valued field of equicharacteristic zero and let N, M ⊆ F be definable O-submodules. Then for any definable O-homomorphism h ∶ M → K/N there is some b ∈ F satisfying that for any y ∈ M, h(y) = by + N.

Definition 4.2. For each n ∈ N, let {e_1, …, e_n} be the canonical basis of K^n and (I_1, …, I_n) ∈ F^n. 1. Let C_{(I_1,…,I_n)} = { ∑_{1≤i≤n}

3. We define the subgroup Stab_{(I_1,…,I_n)} = {A ∈ B_n(K) ∣ A C_{(I_1,…,I_n)} = C_{(I_1,…,I_n)}}. 4. Let Λ_{(I_1,…,I_n)} ∶= {M ∣ M ⊆ K^n is an O-module of type (I_1, …, I_n)}.

Definition 4.4. [The stabilizer sorts] We consider the language L_G extending the three-sorted language L (defined in Subsection 2.3.1), where:

Remark 4.5. The geometric sorts in the case of ACVF are a particular instance of the stabilizer sorts. Let S_n denote the set of O-lattices of K^n of rank n; these are simply the O-modules of type (O, …, O) of these torsors is considered in the stabilizer sorts as the code of an O-module of type (M, …, M, O), because any torsor of the form a + MΛ for some Λ ∈ S_n can be identified with an O-module of type (M, …, M, O), with M repeated n times (see Proposition 3.6).

Notation 4.6. For each ∆ ∈ RJ(Γ) we denote by O_∆ the valuation ring of K with respect to the coarsened valuation v_∆ ∶ K^× → Γ/∆ induced by ∆.

Fact 4.7. Let I ∈ F and let S_I = {v(x) ∣ x ∈ I}. Then Stab(I) = O^×

Proposition 4.8. Let n ∈ N and (I_1, …, I_n) ∈ F^n. Then

If a ∉ U(M), then for any

Lemma 5.5. Let K ⊧ T and let M ⊆ K^n be a definable O-submodule. Then the code ⌜M⌝ can be coded in the stabilizer sorts.

Proof.
Let V^+ be the span of M and V^− the maximal K-subspace of K^n contained in M. By Fact 5.4 the subspaces V^+ and V^− can be coded by a tuple c in K, and the quotient vector space V^+/V^− admits a c-definable basis. Hence V^+/V^− can be identified over c with some power K^m, and ⌜M⌝ is interdefinable over c with the code of the image M/V^− in K^m. But this image is an O-submodule of K^m of type (I_1, …, I_m) ∈ F^m, so it admits a code in B_m(K)/Stab_{(I_1,…,I_m)}. So M admits a code in the stabilizer sorts, as required.

Fact 6.12. Let A ⊆ K^n and B ⊆ K^m be O-lattices. Then red(A) ⊗_k red(B) can be canonically identified with red(A ⊗_O B).

Fact 6.14. Let A ⊆ K^n and B ⊆ K^m be O-lattices. Then there is an isomorphism φ ∶ red(Hom_O(A, B)) → Hom_k(red(A), red(B)),

Remark 6.15. Given an O-lattice A ⊆ K^n, Hom_O(A, O) can be canonically identified with some O-module C of K^n. So there is a correspondence between red(Hom_O(A, O)) and red(C).

Proposition 6.22. Let M ⊆ M be a definable O-module and let p(x) be a global type containing the generic type of M. Then p(x) is stabilized additively by M(M), i.e. if c is a realization of p(x) and a ∈ M(M) then a + c is a realization of p(x).

Corollary 6.23. Let M ⊆ M be a definable O-module. Let p(x) be a global type containing the generic type of M. Let a ∈ M(M); then a is the difference of two realizations of p(x), i.e. we can find c, d ⊧ p(x) such that a = c − d.

Given the maps [T] and [T′], which send the canonical O-module of type (I_1, …, I_n) to M, we have that [T′]^{−1}[T] ∈ Stab_{(I_1,…,I_n)}, and the type Σ_{C_{(I_1,…,I_n)}} is left invariant under the action of this group. Thus, Σ_M is ⌜M⌝-definable and, given B ⊧ Σ_M, the type tp(B ∣ M) is still ⌜M⌝-definable by Corollary 6.21.

Claim 1: For any realization a ⊧ p(x) ↾ M we have f(a) = f′(a). Let a be a realization of p(x) ↾ M. Let u_{f(a)}(y) be the definable type over ⌜f(a)⌝ given by Proposition 6.24.
Given any realization d = (d_1, …, d_n) ⊧ u_{f(a)}(y), the matrix [d_1, …, d_n] is a representation matrix for the module f(a). In particular, f(a) = ρ_{(I_1,…,I_k)}(d_1, …, d_k) ∈ dcl^eq(d). Let d be a realization of u_{f(a)}(y) ↾ M and let r(x, y) = tp(a, d ∣ M); then the type r(x, y) is B-definable, and therefore B-invariant.

Lemma 6.27. Let F = {B_1, …, B_n} be a primitive finite set of 1-torsors. Let W = {{x_1, …, x_n} ∣ x_i ∈ B_i} and W* = {⌜{x_1, …, x_n}⌝ ∣ {x_1, …, x_n} ∈ W}. Then there is a ⌜W*⌝-definable type q concentrated on W*. Furthermore, given b* a realization of q sufficiently generic over a set of parameters C, if we take B the finite set coded by b*, then if b_i ∈ B is the element that belongs to B_i, then b_i is a sufficiently generic realization of some type p_{B_i}(x), which is ⌜B_i⌝-definable and extends the generic type of B_i. Lastly, the types p_{B_i}(x) are compatible under the action of Aut(M/⌜F⌝), meaning that if σ ∈ Aut(M/⌜F⌝) and σ(B_i) = B_j then σ(p_{B_i}(x)) = p_{B_j}(x).

2. Case 2: All the 1-torsors B_i are open, i.e. I ∈ I ∖ {O}.

Lemma 6.30. Let n ≥ 2 be a natural number and M ⊆ K^n a definable O-submodule. Then ⌜M⌝ is interdefinable with (⌜π_n(M)⌝, germ(h_M, p_{π_n(M)})), where p_{π_n(M)} is any complete extension of the generic type of π_n(M).

Proof. Let M_1 and M_2 be O-modules of the same type. Suppose that A = π_n(M_1) = π_n(M_2) and germ(h_{M_1}, p_A) = germ(h_{M_2}, p_A). We must show that M_1 and M_2 are the same O-module.

Claim 6.31.1. Let q(x_2, x_1) = p_{π_n(Z)}(x_2) ⊗ p_{π_n(Z)}(x_1) and (d_2, d_1) ⊧ q(x_2, x_1). Then Σ^gen_{π_n(N)}(y) ⊆ tp(d_2 − d_1 ∣ M).

This definition is independent of the choice of d. We consider the ⌜U⌝-definable injection φ ∶ U → B that sends each element b to (b, 1). The interpretable sets red(U) and red(B) = B/MB are both ⌜U⌝-definable. It follows by a standard computation that for any

Fact 2.25.
Let S ⊆ Γ be a definable end-segment. Then ∆_S is a definable convex subgroup of Γ, therefore {S_{∆_n} ∣ n ∈ N_{≥2}} is a complete family.

Proof. It is an immediate consequence of Proposition 2.22, Fact 2.21 and Remark 2.23.

The following is [26, Fact 4.1].

Lemma 2.46. Let K be a maximal valued field; then any (integral or fractional) ideal I of O is maximal as an O-submodule of K. Moreover, any finite direct sum of maximal O-modules is also maximal.

By [8, Lemma 4.30] there is some maximal immediate extension K ⊆ F ⊆ C. By [8, Theorem 7.12], K ≺ F.

The following is [14, Lemma 5].

This is the function that sends each torsor t ∈ S to the fiber at x of the torsor f(t) ∈ F. [See Figure 1.] By construction, such a map must be injective and it is ⌜F⌝-definable.

…of F to the unique class that contains it in red(U). Let s be the O-lattice in K², whose code is interdefinable with ⌜U⌝ (given by Proposition…

…of F to the unique class that contains it in red(U). By construction, such a map must be injective and it is ⌜F⌝-definable. Let s be the O-lattice in K², whose code is interdefinable with ⌜U⌝ (given by Proposition…
if f(F) ⊆ K then f is coded in G, 3. if f(F) is a finite set of 1-torsors of the same type I ∈ I, then f is coded in G. Proof. In all the three cases, we may assume that |F| > 1, otherwise the statement clearly follows. Also, by primitivity of F over ⌜f⌝, f is either constant or injective. If it is constant and equal to c, the tuple (⌜F⌝, c) is a code for f. By Lemma 6.33 ⌜F⌝ admits a code in G, so (⌜F⌝, c) is interdefinable with a tuple in the stabilizer sorts. In the following arguments we assume that f is an injective function and that |F| ≥ 2. By Lemma 6.33 ⌜F⌝ ∈ G. Let s be the O-lattice of K^2 and g ∶ F → VS_{k,⌜s⌝} the injective map given by Lemma 6.34. Both s and g are ⌜F⌝-definable. Let F* = g(F) ⊆ VS_{k,⌜s⌝}; the map f ○ g^{-1} ∶ F* → VS_{k,⌜s⌝} can be coded in G by Theorem 6.17. Hence, the tuple (⌜f ○ g^{-1}⌝, ⌜F⌝) is a code of f over C, because g is a ⌜F⌝-definable bijection, and (⌜f ○ g^{-1}⌝, ⌜F⌝) is interdefinable with a tuple of elements in G.
Let D = f(F) ⊆ K; this is a finite set in the main field so it can be coded by a tuple d of elements in K. Because F is primitive over ⌜f⌝, D is a primitive set. Thus, there is some δ ∈ Γ such that for any pair of different elements x, y ∈ D, v(x − y) = δ. Let b ∈ K be such that v(b) = δ, take x ∈ D and let U = x + bO; this is the smallest closed 1-torsor containing D. The elements of D all lie in different classes of red(U), and we let g ∶ D → red(U) be the definable map sending each element x ∈ D to the unique element in red(U) that contains x. Both U and g are ⌜D⌝-definable, and therefore d-definable. By Proposition 3.6, there is an O-lattice s ⊆ K^2, whose code is interdefinable with ⌜U⌝. Let h ∶ red(U) → red(s) be the ⌜U⌝-definable injective map given by Lemma 6.32. Both U and h are d-definable. By (1) of this statement the function h ○ g ○ f ∶ F → VS_{k,⌜s⌝} can be coded in G. Since f and h ○ g ○ f are interdefinable over d, the statement follows. Let D = f(F); then D must be a primitive set of 1-torsors because F is primitive over ⌜f⌝ (in particular,
there are I ∈ I and b ∈ K and elements a_t ∈ K such that for each t ∈ f(F) we have t = a_t + bI). By Lemma 6.33, we may assume ⌜D⌝ is a tuple in the stabilizer sorts. Let s ⊆ K^2 and g ∶ D → red(s) ⊆ VS_{k,⌜s⌝} be the injective map given by Lemma 6.34. Both s and g are ⌜D⌝-definable. By part (1) of this statement the composition g ○ f can be coded in G, and as g is a ⌜D⌝-definable bijection the tuple (⌜g ○ f⌝, ⌜D⌝) is interdefinable with ⌜f⌝. Case 1: All the projections are equal, i.e. A = A_Z for all Z ∈ F. Proof. For each x ∈ A, the set of fibers {h_Z(x) | Z ∈ F} is a finite set of torsors of size at most m + 1 of complexity at most n. By the induction hypothesis A_n it admits a code in the stabilizer sorts. By compactness we can uniformize such codes, and we can define the function g ∶ A → G by sending the element x to the code ⌜{⌜h_Z(x)⌝ | Z ∈ F}⌝. This is a ⌜F⌝-definable function.
Let p_A(x) be a global type extending the generic type of A; it is ⌜A⌝-definable by Corollary 6.21. By Theorem 6.25 the germ of g over p_A can be coded in G over ⌜A⌝. By Corollary 6.31, for any Z ∈ F the code ⌜Z⌝ is interdefinable with the tuple (⌜A⌝, germ(h_Z, p_A)), then ⌜F⌝ is interdefinable with (⌜A⌝, ⌜{germ(h_Z, p_A) | Z ∈ F}⌝). Claim 6.40.1. germ(g, p_A) is interdefinable with the code ⌜{germ(h_Z, p_A) | Z ∈ F}⌝ over ⌜A⌝. Proof. We first prove that germ(g, p_A) ∈ dcl^eq(⌜A⌝, ⌜{germ(h_Z, p_A) | Z ∈ F}⌝). Let σ ∈ Aut(M/⌜{germ(h_Z, p_A) | Z ∈ F}⌝, ⌜A⌝); we want to show that σ(germ(g, p_A)) = germ(σ(g), p_A) = germ(g, p_A). Let B be the set of all the parameters required to define all the objects that have been mentioned so far.
It is therefore sufficient to argue that for any realization c of p_A(x) sufficiently generic over B we have σ(g)(c) = g(c), where σ(g) ∶ A → G is the function given by sending the element x to the code ⌜{⌜h_{σ(Z)}(x)⌝ | Z ∈ F}⌝. Note that σ({germ(h_Z, p_A) | Z ∈ F}) = {germ(h_{σ(Z)}, p_A) | Z ∈ F} = {germ(h_Z, p_A) | Z ∈ F}, because σ(⌜{germ(h_Z, p_A) | Z ∈ F}⌝) = ⌜{germ(h_Z, p_A) | Z ∈ F}⌝. As a result, for any realization c of p_A(x) sufficiently generic over B we must have that {⌜h_Z(c)⌝ | Z ∈ F} = {⌜h_{σ(Z)}(c)⌝ | Z ∈ F}, so g(c) = σ(g)(c), as desired. For the converse, let σ ∈ Aut(M/⌜A⌝, germ(g, p_A)); we want to show that σ(⌜{germ(h_Z, p_A) | Z ∈ F}⌝) = ⌜{germ(h_Z, p_A) | Z ∈ F}⌝. Let c be a realization of p_A(x) sufficiently generic over B; by hypothesis g(c) = σ(g)(c). Then: g(c) = ⌜{⌜h_Z(c)⌝ | Z ∈ F}⌝ = ⌜{⌜h_{σ(Z)}(c)⌝ | Z ∈ F}⌝ = σ(g)(c). Therefore, for each Z ∈ F there is some Z' ∈ F such that h_Z(c) = h_{σ(Z')}(c), and this implies that germ(h_Z, p_A) = germ(h_{σ(Z')}, p_A). Thus σ({germ(h_Z, p_A) | Z ∈ F}) = {germ(h_{σ(Z)}, p_A) | Z ∈ F} = {germ(h_Z, p_A) | Z ∈ F}. We conclude that σ(⌜{germ(h_Z, p_A) | Z ∈ F}⌝) = ⌜{germ(h_Z, p_A) | Z ∈ F}⌝, as desired. Consequently, F is coded by the tuple (⌜A⌝, germ(g, p_A)), which is a sequence of elements in G. Case 2: All the projections are different, i.e. A_Z ≠ A_{Z'} for all Z ≠ Z' ∈ F.
Proof. To simplify the notation, fix some enumeration of the projections {A_Z | Z ∈ F}, say {A_1, . . . , A_n}. Let W = {{x_1, . . . , x_n} | x_i ∈ A_i}; such a set is independent from the choice of the enumeration. Each set {x_1, . . . , x_n} ∈ W admits a code in the home sort K, because fields uniformly code finite sets. We denote by W* = {⌜{x_1, . . . , x_n}⌝ | {x_1, . . . , x_n} ∈ W}, i.e. the set of all these codes. For each x* ∈ W*, we define the function f_{x*} ∶ S → K that sends A_Z ↦ x_Z, where x_Z is the unique element in the set coded by x* that belongs to A_Z. Let l_{x*} ∶ S → G be the function given by sending A_Z ↦ ⌜h_Z(f_{x*}(A_Z))⌝. This map sends the projection A_Z to the code of the fiber in the module Z at the point x_Z, which is the unique point in the set coded by x* that belongs to A_Z [see Figure 2]. By compactness we can uniformize all such codes, so we can define the function g ∶ W* → G by sending x* ↦ (⌜f_{x*}⌝, ⌜l_{x*}⌝).
By Lemma 6.27 there is some ⌜W*⌝-definable type q(x*) ⊢ x* ∈ W*. The second part of Lemma 6.27 also guarantees that, given d* a generic realization of q over a set of parameters B, if we take Y the set coded by d* and b is the element in Y that belongs to A_Z, then b is a sufficiently generic realization over B of some type p_{A_Z}(x) which is ⌜A_Z⌝-definable and extends the generic type of A_Z. We recall as well that the types p_{A_Z}(x) given by Lemma 6.27 are all compatible under the action of Aut(M/⌜F⌝), that is, for any σ ∈ Aut(M/⌜F⌝), if σ(Z) = Z' then σ(p_{A_Z}(x)) = p_{A_{σ(Z)}}(x). By Theorem 6.25 the germ of g over q can be coded in the stabilizer sorts G over ⌜W*⌝ ∈ dcl^eq(⌜S⌝). By Lemma 6.34 we may assume ⌜S⌝ ∈ G. Claim 6.40.3. The tuple (germ(g, q), ⌜S⌝) ∈ G is interdefinable with ⌜F⌝. Proof. It is clear that (germ(g, q), ⌜S⌝) ∈ dcl^eq(⌜F⌝).
For the converse, let σ ∈ Aut(M/germ(g, q), ⌜S⌝); we want to show that σ(F) = F. By Corollary 6.31 the code of each torsor Z ∈ F is interdefinable with the pair (A_Z, germ(h_Z, p_{A_Z})). Hence it is sufficient to argue that: σ({(A_Z, germ(h_Z, p_{A_Z})) | Z ∈ F}) = {(A_Z, germ(h_Z, p_{A_Z})) | Z ∈ F}. We have that σ(⌜W*⌝) = ⌜W*⌝ because σ(S) = S. Therefore σ(germ(g, q)) = germ(σ(g), q) = germ(g, q). Let B be the set of parameters required to define all the objects that have been mentioned so far. For any realization d* of the type q sufficiently generic over B we have g(d*) = σ(g)(d*), where σ(g) is the function sending an element x* in W* to the tuple (σ(f)_{x*}, σ(l)_{x*}). As a result, (⌜f_{d*}⌝, ⌜l_{d*}⌝) = (⌜σ(f)_{d*}⌝, ⌜σ(l)_{d*}⌝). Since each d_t ∈ t, σ sends the pair (t, d_t) to (σ(t), d_{σ(t)}), where d_{σ(t)} is a realization of p_{σ(t)}(x) sufficiently generic over B.
By assumption, we also have that ⌜l_{d*}⌝ = ⌜σ(l)_{d*}⌝, thus the action of σ is a bijection among the elements of the graph of l_{d*}. Consequently, for any t ∈ S there is some unique t' ∈ S such that σ((t, h_Z(d_t))) = (σ(t), h_{σ(Z)}(d_{σ(t)})) = (t', h_{σ(Z)}(d_{t'})). Thus σ(Z) is a torsor whose projection is t ∈ S, and d_t ∈ t is a realization of the type p_t(x) sufficiently generic over B. As a result, σ((t, germ(h_Z, p_t))) = (t', germ(h_{σ(Z)}, p_{t'})). We conclude that: σ({(A_Z, germ(h_Z, p_{A_Z})) | Z ∈ F}) = σ({(t, germ(h_Z, p_t)) | t ∈ S}) = {(t', germ(h_{σ(Z)}, p_{t'})) | t' ∈ S} = {(A_Z, germ(h_Z, p_{A_Z})) | Z ∈ F}, as desired. f is either constant or injective; if it is constant equal to c, then ⌜f⌝ is interdefinable with (⌜S⌝, c). Case 1: The projections are all equal, i.e. there is a torsor A such that A = A_Z for all Z ∈ F. Proof. We fix p_A(x) to be some global type extending the generic type of A; it is ⌜A⌝-definable by Corollary 6.21. Let f ∶ S → F be a definable injective map. For each x ∈ A we define the function g_x ∶ S → G by sending t ↦ ⌜h_{f(t)}(x)⌝.

References
C. Ealy, D. Haskell and J. Marikova, Residue field domination in real closed fields, Notre Dame J. Formal Logic, Volume 60, Number 3 (2019), 333-351.
D. Haskell, E. Hrushovski and D. Macpherson, Stable Domination and Independence in Algebraically Closed Valued Fields (Lecture Notes in Logic), Cambridge: Cambridge University Press (2007).
D. Haskell, E. Hrushovski and D. Macpherson, Definable sets in algebraically closed valued fields: elimination of imaginaries, Journal für die reine und angewandte Mathematik, 597 (2006), 175-236.
J. Pas, On the angular component map modulo p, J. Symbolic Logic 55.3 (1990), pp. 1125-1129.
S. A. Basarab, Relative elimination of quantifiers for Henselian valued fields, Ann. Pure Appl. Logic 53.1 (1991), pp. 51-74.
A. Robinson and E. Zakon, Elementary properties of ordered Abelian groups, Trans. Amer. Math. Soc. 96 (1960), 222-236.
F. V. Kuhlmann, Quantifier elimination for Henselian fields relative to additive and multiplicative congruences, Israel J. Math. 85 (1994), pp. 277-306.
L. van den Dries, Lectures on the Model Theory of Valued Fields, in: Model Theory in Algebra, Analysis and Arithmetic, Lecture Notes in Mathematics, vol. 2111, Springer, Berlin, Heidelberg (2014).
R. Cluckers, Presburger sets and p-minimal fields, J. Symbolic Logic 68 (2003), no. 1, 153-162. doi:10.2178/jsl/1045861509.
V. Weispfenning, Elimination of quantifiers for certain ordered and lattice-ordered abelian groups, Bull. Soc. Math. Belg. Sér. B 33 (1981), no. 1, 131-155.
O. Belegradek, Poly-regular ordered abelian groups, in: Yi Zhang (Ed.), Logic and Algebra (Contemp. Math. 302), AMS, Providence, RI (2003).
W. Johnson, On the proof of elimination of imaginaries in algebraically closed valued fields, Notre Dame Journal of Formal Logic, 61(3), 363-381 (2020).
J. Flenner, Relative decidability and definability in henselian valued fields, The Journal of Symbolic Logic, 76(4) (2011), 1240-1260. doi:10.2178/jsl/1318338847.
I. Kaplansky, Modules Over Dedekind Rings and Valuation Rings, Transactions of the American Mathematical Society, 72(2) (1952), 327-340. doi:10.2307/1990759.
M. Aschenbrenner, L. van den Dries and J. van der Hoeven, Asymptotic Differential Algebra and Model Theory of Transseries, Princeton University Press (2017).
E. Hrushovski, Imaginaries and definable types in algebraically closed valued fields, in: Valuation theory in interaction, EMS Ser. Congr. Rep., Eur. Math. Soc., Zürich (2014), 297-319.
M. Hils, M. Kamensky and S. Rideau, Imaginaries in separably closed valued fields, Proc. Lond. Math. Soc. (3), 116(6) (2018), pp. 1457-1488. doi:10.1112/plms.12116. arXiv:1612.02142.
E. Hrushovski, B. Martin, S. Rideau and R. Cluckers, Definable equivalence relations and zeta functions of groups, Journal of the European Mathematical Society, 20(10) (2018), 2467-2537. https://doi.org/10.4171/JEMS/817.
S. Rideau, Imaginaries and invariant types in existentially closed valued differential fields, Journal für die reine und angewandte Mathematik (Crelles Journal), vol. 2019, no. 750 (2019), pp. 157-196.
R. Farré, Strong ordered abelian groups and dp-rank, preprint: http://arxiv.org/abs/1706.05471.
R. Cluckers and I. Halupczok, Quantifier elimination in ordered abelian groups, Confluentes Math. 3 (2011), no. 4, 587-615.
Y. Gurevich and P. H. Schmitt, The theory of ordered abelian groups does not have the independence property, Trans. Amer. Math. Soc. 284 (1984), no. 1, 171-182.
J. E. Holly, Canonical Forms for Definable Subsets of Algebraically Closed and Real Closed Valued Fields, The Journal of Symbolic Logic, 60(3) (1995), 843-860. https://doi.org/10.2307/2275760.
M. Vicaria, Elimination of imaginaries in ordered abelian groups with bounded regular rank, in preparation.
E. Hrushovski, Groupoids, imaginaries and internal covers, Turkish Journal of Mathematics, 36(2) (2012), 173-198. Retrieved from https://dergipark.org.tr/en/pub/tbtkmath/issue/12216/145807.
F. Jahnke, P. Simon and E. Walsberg, Dp-minimal valued fields, The Journal of Symbolic Logic, 82(1), 151-165. doi:10.1017/jsl.2016.15.
M. Aschenbrenner, A. Chernikov, A. Gehret and M. Ziegler, Distality in valued fields and related structures, arXiv preprint https://arxiv.org/pdf/2008.09889.pdf.
B. Poizat, Une Théorie de Galois Imaginaire, The Journal of Symbolic Logic, 48(4) (1983), 1151-1170. https://doi.org/10.2307/2273680.
P. Schmitt, Model theory of ordered abelian groups, Habilitation Thesis, Ruprecht-Karl-Universität (1982).
P. Simon, A Guide to NIP Theories, Lecture Notes in Logic, Cambridge: Cambridge University Press (2015). doi:10.1017/CBO9781107415133.
[]
[ "Efficient Nearly-Fair Division with Capacity Constraints", "Efficient Nearly-Fair Division with Capacity Constraints" ]
[ "Hila Shoshan [email protected] \nAriel University Ariel\nIsrael\n", "Noam Hazon [email protected] \nAriel University\nArielIsrael\n", "Erel Segal-Halevi \nAriel University\nArielIsrael\n" ]
[ "Ariel University Ariel\nIsrael", "Ariel University\nArielIsrael", "Ariel University\nArielIsrael" ]
[]
We consider the problem of fairly and efficiently allocating indivisible items (goods or bads) under capacity constraints. In this setting, we are given a set of categorized items. Each category has a capacity constraint (the same for all agents), that is an upper bound on the number of items an agent can receive from each category. Our main result is a polynomial-time algorithm that solves the problem for two agents with additive utilities over the items. When each category contains items that are all goods (positively evaluated) or all chores (negatively evaluated) for each of the agents, our algorithm finds a feasible allocation of the items, which is both Pareto-optimal and envy-free up to one item. In the general case, when each item can be a good or a chore arbitrarily, our algorithm finds an allocation that is Pareto-optimal and envy-free up to one good and one chore.
10.48550/arxiv.2205.07779
[ "https://export.arxiv.org/pdf/2205.07779v2.pdf" ]
248,811,190
2205.07779
7d6511fbcba84504d12a8073d526398104bdb0bf
Efficient Nearly-Fair Division with Capacity Constraints
Hila Shoshan ([email protected]), Ariel University, Ariel, Israel
Noam Hazon ([email protected]), Ariel University, Ariel, Israel
Erel Segal-Halevi, Ariel University, Ariel, Israel
ACM Reference Format: Hila Shoshan, Noam Hazon, and Erel Segal-Halevi. 2023. Efficient Nearly-Fair Division with Capacity Constraints. In Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), London, United Kingdom, May 29 - June 2, 2023, IFAAMAS, 13 pages.
Keywords: Fair division; Indivisible items; Mixed manna; Capacity constraints
We consider the problem of fairly and efficiently allocating indivisible items (goods or bads) under capacity constraints. In this setting, we are given a set of categorized items. Each category has a capacity constraint (the same for all agents), that is an upper bound on the number of items an agent can receive from each category. Our main result is a polynomial-time algorithm that solves the problem for two agents with additive utilities over the items. When each category contains items that are all goods (positively evaluated) or all chores (negatively evaluated) for each of the agents, our algorithm finds a feasible allocation of the items, which is both Pareto-optimal and envy-free up to one item. In the general case, when each item can be a good or a chore arbitrarily, our algorithm finds an allocation that is Pareto-optimal and envy-free up to one good and one chore.
Introduction
The problem of how to fairly divide a set of items among agents with different preferences has been investigated by many mathematicians, economists, political scientists and computer scientists. Most of the earlier work focused on how to fairly divide goods, i.e., items with non-negative utility.
In recent years, several works have considered the division of chores, i.e., items with non-positive utility, and a few works also considered the division of a mixture of goods and chores (for example, Aziz et al. [3] and Bérczi et al. [7]). Indeed, items may be considered as goods for one agent and as chores for another agent. For example, consider a project that has to be completed by a team of students. It consists of several tasks that should be divided among the students, such as: programming tasks, user-interface tasks and algorithm development tasks. One student may evaluate the programming tasks as items with negative utilities and the UI and algorithmic tasks as items with positive utilities, while another student may evaluate them the other way around. Often, there is a constraint by which the items are partitioned into categories, and each category has an associated capacity, which defines the maximum number of items in this category that may be assigned to each agent. Considering again the student project example, the mentor of the project may want all students to be involved in all aspects of the project. Therefore, the mentor may partition the project tasks into three categories: programming, UI, and algorithms, setting a capacity for each category. For example, if the team consists of two students, and there are 5 programming tasks, 6 UI tasks and 4 algorithm tasks, then a capacity of 3 on programming and UI tasks and a capacity of 2 on algorithm tasks would ensure that both students are involved in about the same number of tasks from each category. Clearly, the capacity constraints should be large enough so that all of the items in a given category could be assigned to the agents. An allocation satisfying all capacity constraints is called feasible. 
Note that, without capacity constraints, if one agent evaluates an item as a good, while another agent evaluates it as a chore, we can simply give it to the agent who evaluates it as a good, as done by Aziz et al. [3]. However, with capacities it may not be possible, which shows that the combination of capacities and mixed valuations is more difficult than each of these on its own. Two important considerations in item allocation are efficiency and fairness. As an efficiency criterion, we use Pareto optimality (PO), which means that no other feasible allocation is at least as good for all agents and strictly better for some agent. As fairness criteria, we use two relaxations of envy-freeness (EF). The stronger one is envy-freeness up to one item (EF1), which was introduced by Budish [17], and adapted by Aziz et al. [3] for a mixture of goods and chores. Intuitively, an allocation is EF1 if for each pair of agents i, j, after removing the most difficult chore (for i) from i's bundle, or the most valuable good (for i) from j's bundle, i would not be jealous of j. With capacity constraints, an EF1 allocation may not exist. For example, consider a scenario with one category with two items, o_1 and o_2, and a capacity constraint of 1. o_1 is a good for both agents (e.g., u_1(o_1) = u_2(o_1) = 1), and o_2 is a chore for both agents (e.g., u_1(o_2) = u_2(o_2) = −1). Clearly, in every feasible allocation, one agent must receive the good and the other agent must receive the chore (due to the capacity constraint), and thus the allocation is not EF1. Therefore, we introduce a natural relaxation of it, which we call envy-freeness up to one good and one chore (EF[1,1]). It means that, for each pair of agents i, j, there exists a chore in i's bundle, and a good in j's bundle, such that both are in the same category, and after removing them, i would not be jealous of j.
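The impossibility in the two-item example can be checked mechanically. The following brute-force sketch is illustrative only (the item names o1, o2 and the utility numbers follow the example above): it enumerates both feasible allocations and tests the EF1 condition for each.

```python
# Two-item example: one category {o1, o2}, capacity 1 per agent,
# so each agent receives exactly one item in any feasible allocation.
utils = {0: {"o1": 1, "o2": -1},
         1: {"o1": 1, "o2": -1}}

def value(agent, bundle):
    return sum(utils[agent][o] for o in bundle)

def is_ef1(bundles):
    # Agent i is EF1 toward j if removing a single item -- a chore from
    # i's own bundle, or a good from j's bundle -- eliminates i's envy.
    for i, j in ((0, 1), (1, 0)):
        if value(i, bundles[i]) >= value(i, bundles[j]):
            continue  # i does not envy j
        fixable = any(
            value(i, bundles[i]) - utils[i][o] >= value(i, bundles[j])
            for o in bundles[i]
        ) or any(
            value(i, bundles[i]) >= value(i, bundles[j]) - utils[i][o]
            for o in bundles[j]
        )
        if not fixable:
            return False
    return True

# The only two feasible allocations: one item to each agent.
feasible = [({"o1"}, {"o2"}), ({"o2"}, {"o1"})]
print([is_ef1(b) for b in feasible])  # → [False, False]
```

In both allocations the agent holding o_2 envies the other, and no single-item removal helps: dropping o_2 only brings the envious agent to 0 while the other agent still holds value 1.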
In the special case in which, for each agent and category, either all items are goods or all items are chores (as in the student project example above), EF[1,1] is equivalent to EF1. We call this special case a same-sign instance; note that it is still more general than only-goods or only-chores settings. We focus on allocation problems between two agents. This case is practically important: for example, student projects are often done in teams of two, and household chores are often carried out by the two partners. Fair allocation among two agents is the focus of various papers on fair division [2,7,13,14,28,33,34,38]. We prove the existence of PO and EF[1,1] allocations with capacity constraints for two agents with arbitrary (positive or negative) utilities over the items. The proof is constructive: we provide a polynomial-time algorithm that, for two agents, returns an allocation that is both PO and EF[1,1]. In a same-sign instance, the returned allocation is PO and EF1. Our focus on the case of two agents allows us to simultaneously make two advancements over the state of the art in capacity-constrained fair allocation [10,21]. First, we handle a mixture of goods and chores, rather than just goods; as we show in Appendix A, standard techniques used for goods are not applicable to mixed utilities. Second, we attain an allocation that is not only fair but also PO. Before this work, it was not even known whether a PO and EF1 allocation of goods with capacity constraints always exists. Our algorithm is based on the following ideas. The division problem can be viewed as a matching problem on a bipartite graph, in which one side represents the agents and the other side represents the items. We add dummy items and clones of agents such that every matching satisfies the capacity constraints. We assign a positive weight to each agent.
We assign, to each edge between an agent and an item, a weight which is the product of the agent's weight and the agent's valuation of the item. A maximum-weight matching in this graph represents a feasible allocation that maximizes a weighted sum of utilities. Every allocation that maximizes a weighted sum of utilities, with positive agent weights, is Pareto-optimal.¹ Our algorithm first computes a maximum-weight matching that is also envy-free (EF) for one of the agents. It then tries to make it EF[1,1], while maintaining it as a maximum-weight matching, by identifying pairs of items that can be exchanged between the agents, based on a ratio that captures how much one agent prefers an item relative to the other agent's preferences. Every exchange of items is equivalent to increasing the jealous agent's weight and decreasing the other agent's weight.

¹ In fact, maximizing a weighted sum of utilities is stronger than Pareto-optimality. When allocating goods without capacity constraints, maximizing a weighted sum of utilities is equivalent to a stronger efficiency notion called fractional Pareto-optimality [6,32,39].

Related Work

Fair division problems vary according to the nature of the objects being divided, the preferences of the agents, and the fairness criteria. Many algorithms have been developed to solve fair division problems; for details see the surveys [15], [31], [12], [11]. In this paper we consider a new setting, which combines goods, chores, capacity constraints and Pareto-optimality. Note that even ignoring PO, goods, or both, our result is new.

Mixtures of Goods and Chores

Bérczi et al. [7] present a polynomial-time algorithm for finding an EF1 allocation for two agents with arbitrary utility functions (positive or negative). Chen and Liu [20] proved that the leximin solution is EFX (a property stronger than EF1) for combinations of goods and chores, for agents with identical valuations. Gafni et al.
[23] present a setting that generalizes both goods and chores, by considering items that may have several copies. None of these works considers efficiency. Efficiency in a setting with goods and chores is studied by Aziz et al. [3]. They use the round-robin technique for finding an EF1 and PO division of combinations of goods and chores between two agents. Similarly, Aziz et al. [4] find an allocation that is PROP1 (a property weaker than EF1) and PO for goods and chores. Aleksandrov and Walsh [1] prove that, with tertiary utilities, EFX and PO allocations always exist for mixed items. However, none of these works handles capacity constraints.

Constraints

When all agents have weakly additive utilities, the round-robin protocol finds a complete EF1 division in which all agents receive approximately the same number of goods [18]. This technique, together with the envy-graph, has been used for finding a fair division of goods under capacity constraints [10]. This work has been extended to heterogeneous capacity constraints [21], and to maximin-share fairness [26]. Fair allocation of goods of different categories has been studied by Mackin and Xia [30]; there, the constraint is that each agent must receive at least one item per category. Sikdar et al. [36] consider an exchange market in which each agent holds multiple items of each category and should receive a bundle with exactly the same number of items of each category. Nyman et al. [35] study a similar setting (they call the categories "houses" and the items "rooms"), but with monetary transfers ("rent"). Several other constraints have been considered. For example, Bilò et al. [9] study the fair division of goods such that each bundle needs to be connected on an underlying graph. Igarashi and Peters [27] study PO allocation of goods with connectivity constraints. An overview of the different types of constraints that have been considered can be found in [37].
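Before turning to further related work, the weighted-matching idea sketched in the introduction can be illustrated on a toy instance. All names and utilities below are ours, and brute force over permutations stands in for a real matching algorithm:

```python
from itertools import permutations

# Toy illustration of the weighted-matching idea: one category with 4 items
# and capacity 2, so each agent has two "slots" (agent clones). The edge
# weight between a slot of agent i and item o is w_i * u_i(o).
items = ["a", "b", "c", "d"]
u = [{"a": 3, "b": 1, "c": -2, "d": 0},    # agent 1
     {"a": 2, "b": 2, "c": -1, "d": -3}]   # agent 2
w = (0.5, 0.5)                             # positive weights
slots = [0, 0, 1, 1]                       # two slots per agent

def weighted_sum(assign):
    # assign[t] is the item matched to slot t
    return sum(w[slots[t]] * u[slots[t]][o] for t, o in enumerate(assign))

# Brute-force maximum-weight perfect matching (fine for tiny instances; a
# real implementation would use a polynomial-time matching algorithm).
best = max(permutations(items), key=weighted_sum)
bundles = [{o for t, o in enumerate(best) if slots[t] == i} for i in (0, 1)]
assert bundles == [{"a", "d"}, {"b", "c"}]
```

Because every item contributes w_i * u_i(o) when matched to agent i, any allocation maximizing this sum with positive weights is Pareto-optimal.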
Efficiency and Fairness

There are several techniques for finding a division of goods that is EF1 and PO. For example, the Maximum Nash Welfare algorithm selects a complete allocation that maximizes the product of utilities. It assumes that the agents' utilities are additive, and the resulting allocation is both EF1 and PO [18,41]. In the context of fair cake-cutting (fair division of a continuous resource), Weller [40] proved the existence of an EF and PO allocation by considering the set of all allocations that maximize a weighted sum of utilities. We adapt this technique to the setting with indivisible items and capacity constraints. Barman et al. [6] present a price-based mechanism that finds an EF1 and PO allocation of goods in pseudo-polynomial time. Similarly, Barman and Krishnamurthy [5] use a price-based approach to show that fair and efficient allocations can be computed in strongly polynomial time. The price-based approach can be seen as a "dual" of our weight-based approach. Garg et al. [24] present an algorithm for EF1 and PO allocation of chores when agents have bivalued preferences. With general additive preferences, the existence of a PO and EF1 allocation of chores for three agents (without capacity constraints) was proved only very recently by Garg et al. [25]. The authors note that "the case of chores turns out to be much more difficult to work with, resulting in relatively slow progress despite significant efforts by many researchers". Indeed, for four or more agents, existence is still open even for only-chores instances without capacity constraints.

Alternative Techniques

Our setting combines a mixture of goods and chores, capacity constraints, and a guarantee of both fairness and efficiency. These three issues have been studied separately, but not all simultaneously. Although previous works have developed useful techniques, these do not work for our setting. For example, using the top-trading graph presented by Bhaskar et al.
[8] for dividing chores does not work when there are capacity constraints. The reason is that if we allocate an item to the "sink" agent (i.e., an agent that does not envy any agent) on the top-trading graph, we may exceed the capacity constraints. As another example, consider the maximum-weight matching algorithm of Brustle et al. [16]. It is not hard to modify the algorithm to work with chores, but adding capacity constraints on each category might not maintain the EF1 property between the categories. See Appendix A for more details. Therefore, in this paper we develop a new technique for finding a PO and EF1 (or EF[1,1]) allocation of the set of items (goods and chores) that also maintains the capacity constraints. Table 1 summarizes some of the previous results mentioned in this section, which are close to our setting.

Notations

An instance of our problem is a tuple I = (N, M, C, s, u), where:
• N = [n] is the set of agents.
• M is the set of items.
• C = (C_1, ..., C_k) is a partition of M into k categories.
• s = (s_1, ..., s_k) are the capacities: each agent may receive at most s_h items of category C_h, where |C_h|/n ≤ s_h ≤ |C_h| and s_h ∈ ℕ. The lower bound is needed to ensure that we can divide all the items, and not "throw" anything away; the upper bound is a trivial bound used for computing the run-time.
• u is an n-tuple of utility functions u_i : M → ℝ. We assume additive utilities, that is, u_i(B) := ∑_{o∈B} u_i(o) for every B ⊆ M.

In a general mixed instance, each utility can be any real number (positive, negative or zero). A same-sign instance is an instance in which, for each agent i ∈ N and category h ∈ [k], C_h contains only goods for i or only chores for i. That is, either u_i(o) ≥ 0 for all o ∈ C_h, or u_i(o) ≤ 0 for all o ∈ C_h. Note that, even in a same-sign instance, it is possible that an agent evaluates different categories as goods or chores, and that different agents evaluate the same item differently. An allocation is a vector A := (A_1, A_2, ..., A_n), with A_i ∩ A_j = ∅ for all i ≠ j ∈ [n], and ∪_{i∈[n]} A_i = M. A_i is called "agent i's bundle". An allocation is called feasible if, for all i ∈ [n], the bundle A_i contains at most s_h items of each category C_h, for each h ∈ [k]. Definition 3.1 (Due to Aziz et al. [3]).
An allocation A is called Envy-Free up to one item (EF1) if for all i, j ∈ N, at least one of the following holds:
• ∃B ⊆ A_i with |B| ≤ 1, s.t. u_i(A_i ∖ B) ≥ u_i(A_j).
• ∃B ⊆ A_j with |B| ≤ 1, s.t. u_i(A_i) ≥ u_i(A_j ∖ B).
We also define a slightly weaker fairness notion, which we need for handling general mixed instances, in which an EF1 allocation is not guaranteed to exist, as shown in the Introduction. Definition 3.2. An allocation A is called Envy-Free up to one good and one chore (EF[1,1]) if for all i, j ∈ N, there exists a set T ⊆ A_i with |T| ≤ 1, and a set G ⊆ A_j with |G| ≤ 1, such that G and T are of the same category, and u_i(A_i ∖ T) ≥ u_i(A_j ∖ G). The uncategorized setting of Aziz et al. [3] can be reduced to our setting by putting each item in its own category, with a capacity of 1. An allocation is EF[1,1] in the categorized instance if-and-only-if it is EF1 (by Definition 3.1) in the original instance. Throughout the paper, any result that is valid for mixed instances with EF[1,1] is also valid for same-sign instances with EF1. This follows from the following lemma. LEMMA 3.3. In a same-sign instance, EF[1,1] is equivalent to EF1. PROOF. Suppose that some allocation A for a same-sign instance is EF[1,1]. Then, for all i, j ∈ N, ∃T ⊆ A_i with |T| ≤ 1, and ∃G ⊆ A_j with |G| ≤ 1, such that G and T are of the same category C_h, and u_i(A_i ∖ T) ≥ u_i(A_j ∖ G). If |T| = 0 or |G| = 0, then A is EF1 by definition. So assume that |T| = |G| = 1. G and T are in the same category C_h, and in a same-sign instance, for each agent i ∈ [n] and category h ∈ [k], C_h contains only goods for i or only chores for i. If C_h is a category of goods for agent i, then u_i(A_i) ≥ u_i(A_i ∖ T) ≥ u_i(A_j ∖ G), so A is EF1 for agent i (by removing G from A_j). If C_h is a category of chores for agent i, then u_i(A_i ∖ T) ≥ u_i(A_j ∖ G) ≥ u_i(A_j), so again A is EF1 for agent i (by removing T from A_i). □ Remark 3.4. Our new EF[1,1] is reminiscent of another guarantee called EF_1^1, that is, envy-freeness up to adding a good to one agent and removing a good from another agent [5].
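Definitions 3.1 and 3.2 and Lemma 3.3 can be spot-checked mechanically for two agents. The checkers below are our own direct implementations; the randomized loop exercises Lemma 3.3 on small same-sign instances:

```python
import random
from itertools import product

def val(u_i, B):
    return sum(u_i[o] for o in B)

def is_ef1(A, u):
    for i, j in [(0, 1), (1, 0)]:
        opts = [val(u[i], A[i] - T) >= val(u[i], A[j])
                for T in [set()] + [{t} for t in A[i]]]
        opts += [val(u[i], A[i]) >= val(u[i], A[j] - {g}) for g in A[j]]
        if not any(opts):
            return False
    return True

def is_ef11(A, u, cat):
    """Definition 3.2: remove at most one item T from A_i and one item G
    from A_j, with T and G in the same category (vacuous if one is empty)."""
    for i, j in [(0, 1), (1, 0)]:
        Ts = [set()] + [{t} for t in A[i]]
        Gs = [set()] + [{g} for g in A[j]]
        ok = any((not T or not G or cat[min(T)] == cat[min(G)]) and
                 val(u[i], A[i] - T) >= val(u[i], A[j] - G)
                 for T, G in product(Ts, Gs))
        if not ok:
            return False
    return True

# The introduction's example: EF[1,1] holds although EF1 is impossible.
u = [{"o1": 1, "o2": -1}, {"o1": 1, "o2": -1}]
cat = {"o1": "h1", "o2": "h1"}
assert is_ef11([{"o1"}, {"o2"}], u, cat) and not is_ef1([{"o1"}, {"o2"}], u)

# Lemma 3.3 spot-check: in same-sign instances, EF[1,1] coincides with EF1.
random.seed(1)
cat = {"g1": "h1", "g2": "h1", "c1": "h2", "c2": "h2"}
for _ in range(200):
    u = [{"g1": random.randint(0, 5), "g2": random.randint(0, 5),
          "c1": -random.randint(0, 5), "c2": -random.randint(0, 5)}
         for _ in range(2)]
    A = [{"g1", "c1"}, {"g2", "c2"}]
    assert is_ef11(A, u, cat) == is_ef1(A, u)
```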
But Lemma 3.3 implies that EF[1,1] is stronger. The reason is that if there are only goods, it is enough to remove one good from an agent's bundle, and there is no need to also add a good to the envious agent's bundle. EF[1,1] can be seen as a generalization of EF1 as defined by Aziz et al. [3] to the case of categorized items (simply define one category for every item, with an upper bound equal to one). Remark 3.5. The restriction in Definition 3.2 that G and T should be of the same category is essential for Lemma 3.3. To see this, denote by EF[1,1,U] the unrestricted variant of EF[1,1], which allows removing one chore and one good from any category. Suppose that there are two categories: one of them contains a good (for both agents) and the other contains a chore (for both agents). If one agent gets the good and the other agent gets the chore, the allocation is EF[1,1,U], and it is a same-sign instance, but it is not EF1. Any EF[1,1] allocation is clearly EF[1,1,U]. Therefore, proving that our algorithm returns an EF[1,1] allocation implies two things at once: in general instances, it returns an EF[1,1,U] allocation; and in same-sign instances, our algorithm returns an EF1 allocation.

Table 1: Previous results close to our setting.
Ref. | Agents | Utilities | Goods | Chores | Capacities | Fairness | PO | Technique
[20] | any | identical | v | v | - | EFX | - | the leximin solution
[23] | any | leveled | v | v | - | EFX | - | existence proof
[3] | 2 | arbitrary | v | v | - | EF1 | v | round-robin technique
[4] | any | arbitrary | v | v | - | PROP1 | v | polynomial-time algorithm
[1] | any | tertiary | v | v | - | EFX | v | existence

Finally, we recall two definitions. Definition 3.6. Given an allocation A for n agents, the envy graph of A is a graph with n nodes, each representing an agent, and there is a directed edge i → j iff i envies j in allocation A. A cycle in the envy graph is called an envy cycle. Our efficiency criterion is defined next. Definition 3.7. Given an allocation A, another allocation A′ is a Pareto-improvement of A if u_i(A′_i) ≥ u_i(A_i) for all i ∈ N, and u_i(A′_i) > u_i(A_i) for some i ∈ N.
A feasible allocation A is Pareto-Optimal (PO) if no feasible allocation is a Pareto-improvement of A.

Finding a PO and EF[1,1] Division

In this section, we present some general notions that can be used for any number of agents. Then, we present our algorithm, which finds in polynomial time a feasible PO allocation for two agents. In any mixed instance, this allocation is also EF[1,1]; in a same-sign instance, it is also EF1, according to Lemma 3.3.

Preprocessing

We preprocess the instance such that, in any feasible allocation, all bundles have the same cardinality. To achieve this, we add to each category C_h with capacity constraint s_h some n·s_h − |C_h| dummy items with a value of 0 to all agents. In the new instance, each bundle must contain exactly s_h items from each category C_h. From now on, without loss of generality, we assume that |M| = m = ∑_{h∈[k]} n·s_h. This implies that, in every feasible allocation A, we have |A_i| = m/n for all i ∈ [n].

Maximizing a Weighted Sum of Utilities

Our algorithm is based on searching the space of PO allocations. Particularly, we consider allocations that maximize a weighted sum of utilities w_1·u_1 + w_2·u_2 + ... + w_n·u_n, where each agent i is associated with a weight w_i ∈ [0,1], and w_1 + w_2 + ... + w_n = 1. Such allocations can be found by solving a maximum-weight matching problem in a weighted bipartite graph. We denote the vector of agents' weights by w = (w_1, w_2, ..., w_n). Definition 4.1. For any real numbers (weights) w = (w_1, w_2, ..., w_n), such that w_i ∈ [0,1] for all i ∈ [n] and w_1 + w_2 + ... + w_n = 1, let G_w be a bipartite graph (V_1 ∪ V_2, E) with |V_1| = |V_2| = m. V_2 contains all m items (of all categories, including dummies). V_1 contains m/n copies of each agent i ∈ [n]. For each category h ∈ [k], we choose s_h distinct copies of each agent and add an undirected edge from each of them to all the items of C_h.
Each edge {a, o} ∈ E, where a ∈ V_1 is a copy of agent i and o ∈ V_2 is an item, has a weight weight(a, o), where:

weight(a, o) := w_i · u_i(o)

An allocation is called w-maximal if it corresponds to a maximum-weight matching among the maximum-cardinality matchings in G_w. PROPOSITION 4.2. Every w-maximal allocation, where w_1, w_2, ..., w_n ∈ (0, 1), is PO. PROOF. Every w-maximal allocation A = (A_1, A_2, ..., A_n) maximizes the sum w_1·u_1(A_1) + w_2·u_2(A_2) + ... + w_n·u_n(A_n). Every Pareto-improvement would increase this sum. Therefore, there can be no Pareto-improvement, so A is PO. □

Exchanging Pairs of Items

Our algorithm starts with a w-maximal allocation, and repeatedly exchanges pairs of items between the agents in order to find an allocation that is also EF[1,1]. To determine which pairs to exchange, we need some definitions and lemmas. A pair of items o_i ∈ A_i, o_j ∈ A_j, for agents i ≠ j, such that both items belong to the same category, is called an exchangeable pair: exchanging the two items keeps the allocation feasible. Additionally, in a same-sign instance, for each agent, o_i and o_j are of the same "type", that is, both goods or both chores. In this paper, we work a lot with exchangeable pairs, so we use o_i, o_j ∈ A_i, A_j as a shorthand for "o_i ∈ A_i and o_j ∈ A_j".

Finding a Fair Allocation

The following two lemmas deal with fairness while exchanging exchangeable pairs in a w-maximal allocation. LEMMA 4.4. Let A be a w-maximal feasible allocation, and let A′ be another feasible allocation, resulting from A by exchanging an exchangeable pair (o_i, o_j) between some two agents i ≠ j. Then there exists some ordering of the agents, l_1, ..., l_n, such that for all q > p, the EF[1,1] condition is satisfied for agent l_q with respect to agent l_p in both allocations A and A′. That is, l_q envies l_p by at most one good and one chore in both allocations. In particular, there is at least one agent (agent l_n) for whom both A and A′ are EF[1,1]. PROOF. Let A = (A_1, ..., A_n) and A′ = (A′_1, ..., A′_n). Let C_h be the category that contains both items o_i, o_j. By the preprocessing step, every bundle in A contains at least one item from C_h. So we can write every bundle A_l, for all l ∈ [n], as A_l = B_l ∪ {x_l} for some x_l ∈ C_h. After the exchange, we have A′_l = A_l = B_l ∪ {x_l} for all l ≠ i, j, whereas A′_i = B_i ∪ {o_j} and A′_j = B_j ∪ {o_i}.
Consider the envy graph representing the partial allocation (B_1, B_2, ..., B_n). We claim that it contains no cycle. Suppose that it contained an envy cycle. If we replaced the bundles according to the direction of the edges in the cycle, we would get another feasible allocation which is a Pareto-improvement of the current allocation A, which is w-maximal. Contradiction! Therefore, the envy graph of (B_1, B_2, ..., B_n) has a topological ordering. Let l_1, ..., l_n be such an ordering, so that for all q > p, agent l_q prefers B_{l_q} over B_{l_p}. In both allocations A and A′, the bundles of both l_q and l_p are derived from B_{l_q} and B_{l_p} by adding a single good or chore of category C_h. Therefore, in both A and A′, the EF[1,1] condition is satisfied for agent l_q with respect to agent l_p. In particular, for agent l_n, both these allocations are EF[1,1]. □

Lemma 4.4 considered a single exchange. Now, we consider a sequence of exchanges. The following lemma works only for two agents; we could not yet extend it to more than two agents.

LEMMA 4.5. Let A^1, ..., A^T be a sequence of feasible allocations with the following properties:
• ∀t ∈ [T], the allocation A^t = (A^t_1, A^t_2) is w^t-maximal, where w^t = (w_{1,t}, w_{2,t}) for some w_{1,t}, w_{2,t} ∈ (0, 1).
• A^1 is EF for agent 1 and A^T is EF for agent 2.
• ∀t ∈ [T−1], A^{t+1} is obtained from A^t by a single exchange of an exchangeable pair between the agents.
Then, for some t ∈ [T], the allocation A^t is PO and EF[1,1].
PROOF. Every A^t is PO by Proposition 4.2. Therefore, it is never possible for the two agents to envy each other simultaneously. Since at A^1 agent 1 is not jealous and at A^T agent 2 is not jealous, there must be some t ∈ [T−1] in which A^t is EF for agent 1, and A^{t+1} is EF for agent 2. Because A^{t+1} results from A^t by exchanging an exchangeable pair between the agents, by Lemma 4.4 there is an agent for whom both A^t and A^{t+1} are EF[1,1]. If this agent is agent 2, then A^t is EF for agent 1 and EF[1,1] for agent 2; otherwise, A^{t+1} is EF for agent 2 and EF[1,1] for agent 1. Either way, one of these allocations is PO and EF[1,1]. □

To apply Lemma 4.5, we need a way to choose the pair of exchangeable items in each step of the sequence, so that the next allocation in the sequence remains w-maximal. We use the following definition. Definition 4.6. For an exchangeable pair o_1, o_2 ∈ A_1, A_2 with u_1(o_1) ≠ u_1(o_2), the difference-ratio is

r_{2/1}(o_1, o_2) := (u_2(o_1) − u_2(o_2)) / (u_1(o_1) − u_1(o_2))

The Properties of a w-maximal Allocation

The following lemma is proved in Appendix C. LEMMA 4.7.
For any n agents, for any w = (w_1, w_2, ..., w_n) such that w_1, w_2, ..., w_n ∈ (0, 1), and an allocation A = (A_1, ..., A_n), the following are equivalent:
(i) A is w-maximal.
(ii) No exchange-cycle increases the weighted sum of utilities. That is, for all l ≥ 2, every subset of agents {i_1, ..., i_l} ⊆ [n], and every set of items o_1, ..., o_l, such that all are in the same category and o_j ∈ A_{i_j} for all j ∈ [l]:

w_{i_1} u_{i_1}(o_1) + w_{i_2} u_{i_2}(o_2) + ... + w_{i_l} u_{i_l}(o_l) ≥ w_{i_1} u_{i_1}(o_l) + w_{i_2} u_{i_2}(o_1) + ... + w_{i_l} u_{i_l}(o_{l−1})

The following lemma follows from Lemma 4.7, but only for two agents. LEMMA 4.8. Suppose there are n = 2 agents. For any w_1, w_2 ∈ (0, 1) and an allocation A = (A_1, A_2), the following are equivalent:
(i) A is w-maximal, for w = (w_1, w_2).
(ii) For any exchangeable pair o_1, o_2 ∈ A_1, A_2, exactly one of the following holds:
u_1(o_1) > u_1(o_2) and w_1/w_2 ≥ r_{2/1}(o_1, o_2), or
u_1(o_1) = u_1(o_2) and u_2(o_2) ≥ u_2(o_1), or
u_1(o_1) < u_1(o_2) and w_1/w_2 ≤ r_{2/1}(o_1, o_2).
PROOF. The only exchange-cycle in a 2-agent allocation is a replacement of an exchangeable pair o_1, o_2 ∈ A_1, A_2 between the agents. Then, according to Lemma 4.7, A is w-maximal iff for any exchangeable pair o_1, o_2 ∈ A_1, A_2:

w_1 u_1(o_1) + w_2 u_2(o_2) ≥ w_1 u_1(o_2) + w_2 u_2(o_1)   (1)
w_1 u_1(o_1) − w_2 u_2(o_1) ≥ w_1 u_1(o_2) − w_2 u_2(o_2)   (2)
w_1 [u_1(o_1) − u_1(o_2)] ≥ w_2 [u_2(o_1) − u_2(o_2)]   (3)

The claim in (ii) is an algebraic manipulation of (3), so (ii) ⟺ (3). And since (i) ⟺ (3), also (i) ⟺ (ii). □ LEMMA 4.9. For any n agents, in any w-maximal allocation (with positive weights), for any i, j, and an exchangeable pair o_i, o_j ∈ A_i, A_j, the following implications hold:

u_j(o_i) ≥ u_j(o_j) ⟹ u_i(o_i) ≥ u_i(o_j)
u_j(o_i) > u_j(o_j) ⟹ u_i(o_i) > u_i(o_j)

PROOF. By Lemma 4.7, since A is a w-maximal allocation, no exchange-cycle increases the weight of the matching. In particular, for l = 2, if we define i_1 = i, i_2 = j, o_1 = o_i, o_2 = o_j, we have:

w_i u_i(o_i) + w_j u_j(o_j) ≥ w_i u_i(o_j) + w_j u_j(o_i)

which is equivalent to:

w_i [u_i(o_i) − u_i(o_j)] ≥ w_j [u_j(o_i) − u_j(o_j)]

w_i and w_j are both positive, so if the right-hand side is non-negative or positive, the left-hand side must be non-negative or positive too, respectively.
□ Lemma 4.9 implies that, for any exchangeable pair o_i, o_j ∈ A_i, A_j in a w-maximal allocation, there are two cases: (a) both agents prefer the same item (o_i or o_j); (b) agent i prefers o_i and agent j prefers o_j. In case (a), we say that the exchangeable pair has a preferred item. LEMMA 4.11. In a w-maximal allocation, if agent i envies agent j, then there exists an exchangeable pair o_i, o_j ∈ A_i, A_j whose preferred item is o_j. PROOF. If i envies j, then u_i(A_j) > u_i(A_i). Since both A_i and A_j contain the same number of items in each category, there must be a category in which, for some item pair o_i, o_j ∈ A_i, A_j, agent i prefers o_j to o_i. By Lemma 4.9, agent j too prefers o_j to o_i. So o_j is a preferred item. □

Maintaining the w-maximality

The following lemma shows that, by exchanging items, we can move from one w-maximal allocation A to another w′-maximal allocation A′ (for a possibly different weight-vector w′). This lemma, too, works only for two agents. LEMMA 4.12. Suppose there are n = 2 agents. Let A be a w-maximal allocation, for w = (w_1, w_2). Suppose there is an exchangeable pair o_1, o_2 ∈ A_1, A_2 such that:
(1) u_2(o_1) > u_2(o_2), that is, o_1 is the preferred item.
(2) Among all exchangeable pairs in which o_1 is the preferred item, this pair has a largest difference-ratio r_{2/1}(o_1, o_2).
Let A′ be the allocation resulting from exchanging o_1 and o_2 in A. Then, A′ is w′-maximal for some w′ = (w′_1, w′_2) with w′_1 ≤ w_1, w′_2 ≥ w_2, w′_1 ∈ (0, 1), w′_2 ∈ (0, 1). PROOF SKETCH. The lemma can be proved by using Lemmas 4.8 and 4.9, the maximality condition in the lemma [condition (2)], and Definition 4.6. The idea of the proof is to define w′_1, w′_2 ∈ (0, 1) such that w′_1/w′_2 = r_{2/1}(o_1, o_2) and w′_1 + w′_2 = 1. Then, 0 < w′_1/w′_2 ≤ w_1/w_2, and w′_1 ≤ w_1, w′_2 ≥ w_2.
Then we look at all the exchangeable pairs (o*_1, o*_2) in the new allocation A′ resulting from the exchange, and show that they satisfy all the conditions of Lemma 4.8(ii) with w′_1, w′_2, which are:
(a) u_1(o*_1) > u_1(o*_2) and r_{2/1}(o_1, o_2) ≥ r_{2/1}(o*_1, o*_2), or
(b) u_1(o*_1) = u_1(o*_2) and u_2(o*_2) ≥ u_2(o*_1), or
(c) u_1(o*_1) < u_1(o*_2) and r_{2/1}(o_1, o_2) ≤ r_{2/1}(o*_1, o*_2).
The exchangeable pairs in A′ can be divided into four types:
(1) The exchangeable pairs (o*_1, o*_2) that have not moved.
(2) The pair (o_2, o_1).
(3) Pairs of the form (o*_1, o_1), o*_1 ∈ A′_1, o*_1 ≠ o_2.
(4) Pairs of the form (o_2, o*_2), o*_2 ∈ A′_2, o*_2 ≠ o_1.
We show that each pair of each type satisfies its own condition out of (a), (b) and (c). Therefore, by Lemma 4.8, A′ is a w′-maximal allocation, for w′ = (w′_1, w′_2). The complete proof with all the technical arguments can be found in Appendix D. □

Table 2: Utilities of the agents in the example.

Algorithm for Two Agents

Throughout this subsection we consider general mixed instances, for simplicity. By Lemma 3.3, for same-sign instances all the results hold with EF1 instead of EF[1,1]. Let us start with an intuitive description of the algorithm for two agents. Since w_2 = 1 − w_1 is a function of w_1, consider the line w_1 + w_2 = 1, w_1 ≥ 0, w_2 ≥ 0, which describes the collection of all pairs of non-negative weights w_1, w_2 ∈ [0, 1] whose sum is 1. Each point on this line represents a w′-maximal allocation, for some weight-vector w′. In every such allocation, there are no envy-cycles in the envy graph, so there is at most one envious agent. The algorithm starts with an initial allocation which is a maximum-weight matching in the graph G_w, where w = (0.5, 0.5), corresponding to the center of the line. This initial allocation is PO (by Proposition 4.2) and EF for at least one agent. If it is EF for both agents, then we are done. Otherwise, depending on the envious agent, the algorithm decides which side of the line to go to.
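The characterization behind Lemma 4.8 can be spot-checked numerically. With two agents and a single category, w-maximality is equivalent to the pairwise swap condition w_1·(u_1(o_1) − u_1(o_2)) ≥ w_2·(u_2(o_1) − u_2(o_2)) for every exchangeable pair (utilities below are hypothetical):

```python
from itertools import combinations

# One category, capacity 2: an allocation maximizes w1*u1(A1) + w2*u2(A2)
# iff every exchangeable pair satisfies the pairwise inequality of
# Lemma 4.8 (the weighted objective is separable over items).
items = ["a", "b", "c", "d"]
u1 = {"a": 4, "b": 1, "c": -2, "d": 0}
u2 = {"a": 3, "b": 2, "c": 1, "d": -1}
w1, w2 = 0.6, 0.4

def wsum(A1, A2):
    return w1 * sum(u1[o] for o in A1) + w2 * sum(u2[o] for o in A2)

def condition_ii(A1, A2):
    # Swapping (o1, o2) must not increase the weighted sum.
    return all(w1 * (u1[o1] - u1[o2]) >= w2 * (u2[o1] - u2[o2]) - 1e-12
               for o1 in A1 for o2 in A2)

allocs = [(set(c), set(items) - set(c)) for c in combinations(items, 2)]
best = max(wsum(A1, A2) for A1, A2 in allocs)
for A1, A2 in allocs:
    assert (abs(wsum(A1, A2) - best) < 1e-9) == condition_ii(A1, A2)
```

The equivalence holds here because a multi-item exchange decomposes into single swaps whose gains add up, so pairwise non-improvement implies global maximality.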
If agent 2 envies, we need to increase agent 2's weight, so we go towards (0, 1). If agent 1 envies, we need to go towards (1, 0). Therefore, as long as the allocation is not EF[1,1], the algorithm swaps an exchangeable pair chosen according to Lemma 4.12, thus keeping the search within the space of w-maximal allocations. Note that, since the items of the exchanged pair are both in the same category, the capacity constraints are also maintained. Lemma 4.5 implies that some point on the line gives a feasible EF[1,1] and PO division. Specifically, the exchanged pairs are determined as follows. For each item o we can define a linear function of w_1:

w_1 u_1(o) − w_2 u_2(o) = w_1 u_1(o) − (1 − w_1) u_2(o) = w_1 u_1(o) − u_2(o) + w_1 u_2(o) = (u_1(o) + u_2(o)) w_1 − u_2(o)

If we draw all those functions in one coordinate system, each pair of lines intersects at most once. In total there are O(m²) intersections, where m = ∑_{h∈[k]} |C_h| is the total number of items in all categories (including the dummies). For example, consider the same-sign instance I = (N, M, C, s, u) where N = [2], C = {C_1, C_2}, C_1 = {o_1, o_2, o_3, o_4}, C_2 = {o_5, o_6}, s = (2, 1), and u is shown in Table 2. The corresponding lines for the items are depicted in Figure 1. Each intersection point is a possible switching point for these two items between the agents. Clearly, a replacement only takes place between exchangeable pairs, i.e., items in the same category which are in different agents' bundles at the time of the intersection. According to Definition 4.6, at the intersection point of the lines of o_i and o_j, the weight ratio w_1/w_2 equals the difference-ratio r_{2/1}(o_i, o_j).

Algorithm 1: Finding a PO and EF[1,1] allocation for two agents.
// Step 1: Find an initial w-maximal allocation, for w = (0.5, 0.5), that is EF for agent 1:
...
6:     replace the names of agent 1 and agent 2
7: end if
// We can now assume that agent 2 is jealous.
// Step 2: Build a set of item-pairs whose replacement increases agent 2's utility:
8: item-pairs ← all the exchangeable pairs o_1, o_2 ∈ A_1, A_2 for which u_2(o_1) > u_2(o_2).
9: current-pair ← (o_1, o_2) where r_{2/1}(o_1, o_2) is maximal.
// Step 3: Switch items in order until an EF[1,1] allocation is found:
10: while A = (A_1, A_2) is not EF[1,1] do
11:     Switch current-pair between the agents.
12:     Update the item-pairs list and current-pair (Steps 8, 9).
13: end while
14: return A

In this example, the algorithm starts with the allocation A = (A_1, A_2) at the point (0.5, 0.5), which is A_1 = {o_1, o_2, o_6}, A_2 = {o_3, o_4, o_5}. Note that, for each category, agent 1's items are the top lines. In this initial allocation, agent 2 envies by more than one item, so we start exchanging items in order to increase w_2. The first intersecting pair (when we go left) is o_5, o_6. It is an exchangeable pair, so we exchange it and update the allocation to A_1 = {o_1, o_2, o_5}, A_2 = {o_3, o_4, o_6}. This is an EF1 allocation, so we are done. If at some point there are multiple intersections of exchangeable pairs, we swap the pairs in an arbitrary order. LEMMA 4.13. If Algorithm 1 exchanges the last exchangeable pair in the item-pairs list (that is initialized in Step 8), then the resulting allocation is envy-free for agent 2. PROOF. After the last exchange, there is no exchangeable pair (o_1, o_2), o_1, o_2 ∈ A_1, A_2, for which o_1 is the preferred item. Therefore, by Lemma 4.11, agent 2 is not jealous. □ THEOREM 4.14. Algorithm 1 always returns an allocation that is w-maximal with positive weights (and thus PO), and satisfies the capacity constraints. The allocation is EF[1,1], and EF1 for a same-sign instance. PROOF. A matching in the graph G_w always gives each agent s_h items of category C_h. Thanks to the dummy items, all possible allocations that satisfy the capacity constraints can be obtained by a matching. The first allocation that the algorithm checks is some w-maximal allocation, where w = (w_1, w_2), w_1, w_2 ∈ (0, 1), so by Proposition 4.2, this is a PO allocation.
At each iteration, the algorithm exchanges an exchangeable pair (o_1, o_2) such that u_2(o_1) > u_2(o_2), and among all the exchangeable pairs with u_2(o_1) > u_2(o_2) it has the largest r_{2/1}(o_1, o_2); so by Lemma 4.12, the resulting allocation is also w′-maximal for some w′ = (w′_1, w′_2) with w′_1, w′_2 > 0. In addition, since the items are in the same category, the allocation remains feasible. The first allocation in the sequence is, by Step 1, envy-free for agent 1. By Lemma 4.13, the last allocation in the sequence is envy-free for agent 2. So by Lemma 4.5, there exists some iteration in which the allocation is PO and EF[1,1], and EF1 for a same-sign instance. □ THEOREM 4.15. The runtime of Algorithm 1 is O(m⁴). PROOF. Step 1 can be done by finding a maximum-weight matching in the bipartite graph G_w. Its time complexity is O(|V|³) (Fredman and Tarjan [22]), where |V| = 2m is the number of vertices in the graph. Thus, O(m³) is the time complexity of Step 1. At Step 2 we go through all the categories h ∈ [k]; for each we create groups O_{1,h}, O_{2,h}, which contain agent 1's and agent 2's items from C_h in A. This can be done in time 2|C_h|. Now we have |O_{1,h}| = |O_{2,h}| = s_h. Then, we iterate over all the pairs o_1, o_2 ∈ O_{1,h}, O_{2,h}, and add them to the list, which takes s_h² time. In total, building the item-pairs list takes ∑_{h∈[k]} (|C_h| + s_h²) = O(m²), since ∑_{h∈[k]} |C_h| = m and ∑_{h∈[k]} s_h² ≤ m². The item-pairs list size is ∑_{h∈[k]} s_h² = O(m²), and finding its maximum takes O(m²). In total, Step 2 takes O(m²) time. The upper bound on the number of iterations of the while-loop in Step 3 is the number of intersection points between item lines, which is at most O(m²). At each iteration we switch one exchangeable pair (o_1, o_2) and update the pairs list. The only pairs that should be updated (deleted or added) are those that contain o_1 or o_2. There are at most 2·s_h = O(m) such pairs. Finding the maximum takes O(m²). In total, Step 3 takes O(m⁴) time. Overall, the time complexity of the algorithm is O(m⁴) (because m ≥ n necessarily).
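The item lines and their intersection points counted in the proof above can be computed directly. With w_2 = 1 − w_1, two items' lines cross exactly where w_1/w_2 equals their difference-ratio; the utilities below are hypothetical, since Table 2 is not reproduced here:

```python
# Each item o defines a line f_o(w1) = (u1(o) + u2(o)) * w1 - u2(o),
# which equals w1*u1(o) - w2*u2(o) when w2 = 1 - w1.
# Hypothetical utilities for two items of the same category:
u1 = {"o5": 3, "o6": 1}
u2 = {"o5": 2, "o6": -2}

def f(o, w1):
    return (u1[o] + u2[o]) * w1 - u2[o]

def intersection_w1(oi, oj):
    """Solve f(oi, w1) = f(oj, w1) for w1."""
    num = u2[oi] - u2[oj]
    den = (u1[oi] - u1[oj]) + (u2[oi] - u2[oj])
    return num / den

w1 = intersection_w1("o5", "o6")
w2 = 1 - w1
r = (u2["o5"] - u2["o6"]) / (u1["o5"] - u1["o6"])  # difference-ratio
assert abs(f("o5", w1) - f("o6", w1)) < 1e-9
assert abs(w1 / w2 - r) < 1e-9                     # crossing at w1/w2 = r
```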
□

Conclusion and Future Work

We presented the first algorithm for efficient nearly-fair allocation of mixed goods and chores with capacity constraints. We believe that our paper provides a good first step in understanding fair division of mixed resources under cardinality constraints. Our proofs are modular, and some of our lemmas can be used in more general settings.

Three or More Agents

The most interesting challenge is to generalize our algorithm to three or more agents. Proposition 4.2 and Lemmas 4.4, 4.7, 4.9, 4.11 work for any number of agents, but the other lemmas currently work only for two agents. Algorithm 1 essentially scans the space of w-maximal allocations: it starts with one w-maximal allocation, and then moves in the direction that increases the utility of the envious agent. To extend it to n agents, we can similarly start with a w-maximal allocation corresponding to w = (1/n, ..., 1/n), i.e., identical weights for each of the n agents. These weights represent a point in an n-dimensional space. Then, we can exchange items to benefit an envious agent, in order to increase their weight and improve their utility. In case there are several envious agents, we can select one that is at the "bottom" of the envy chain. For example, in the SWAP algorithm of Biswas and Barman [10], the swap is done in a way that benefits the envious agent with the smallest utility. Similarly, in the envy-graph algorithm of Lipton et al. [29], the next item is given to an agent with no incoming edges in the envy-graph (an agent who is not envied by any other agent). The exchanges should be done in an order that preserves the w-maximality and ensures we reach an EF[1,1] allocation. The two main lemmas that should be extended to ensure these two conditions are Lemma 4.12 and Lemma 4.5. We have not yet been able to develop such a method and prove its correctness. Finding an EF1+PO allocation for n = 3 agents seems hard even when there is a single category with only goods.
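As a concrete baseline for such generalizations, the two-agent procedure can be sketched end-to-end. The sketch below uses our own names, hypothetical utilities, and a brute-force per-category search as a stand-in for the maximum-weight matching of Definition 4.1:

```python
from itertools import combinations, product

def val(u_i, B):
    return sum(u_i[o] for o in B)

def is_ef11(A, u, cat):
    for i, j in [(0, 1), (1, 0)]:
        Ts = [set()] + [{t} for t in A[i]]
        Gs = [set()] + [{g} for g in A[j]]
        if not any((not T or not G or cat[min(T)] == cat[min(G)]) and
                   val(u[i], A[i] - T) >= val(u[i], A[j] - G)
                   for T, G in product(Ts, Gs)):
            return False
    return True

def w_maximal(categories, u, w):
    # Valid stand-in because the weighted objective is separable by category.
    A = [set(), set()]
    for C_h in categories.values():
        B1 = max((set(c) for c in combinations(sorted(C_h), len(C_h) // 2)),
                 key=lambda B: w[0] * val(u[0], B) + w[1] * val(u[1], C_h - B))
        A[0] |= B1
        A[1] |= C_h - B1
    return A

def algorithm1(categories, u, cat):
    A = w_maximal(categories, u, (0.5, 0.5))
    if val(u[0], A[0]) < val(u[0], A[1]):      # Step 1: agent 1 must be EF
        A, u = [A[1], A[0]], [u[1], u[0]]
    while not is_ef11(A, u, cat):              # Steps 2-3: swap by max ratio
        pairs = [(o1, o2) for o1 in A[0] for o2 in A[1]
                 if cat[o1] == cat[o2] and u[1][o1] > u[1][o2]]
        o1, o2 = max(pairs, key=lambda p: (u[1][p[0]] - u[1][p[1]]) /
                                          (u[0][p[0]] - u[0][p[1]]))
        A[0].symmetric_difference_update({o1, o2})
        A[1].symmetric_difference_update({o1, o2})
    return A, u

categories = {"h1": set("abcd")}
cat = {o: "h1" for o in "abcd"}
u = [{"a": 100, "b": 50, "c": 1, "d": 1},      # agent 1
     {"a": 10, "b": 9, "c": 0, "d": 0}]        # agent 2
A, u_final = algorithm1(categories, u, cat)
assert is_ef11(A, u_final, cat)
```

By Lemma 4.9, the denominator in the ratio is nonzero whenever the numerator is positive and the current allocation is w-maximal, which is the invariant the swap rule of Lemma 4.12 preserves.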
More General Constraints

Another possible generalization is to more general constraints. Capacity constraints are a special case of matroid constraints, by which each bundle should be an independent set of a given matroid (see [10] for the definitions). Lemmas 4.2, 4.4, 4.5, 4.9 and 4.12 do not use categories, and should work for general matroids. The other lemmas should be adapted. Finally, we assumed that both agents have the same capacity constraints. We do not know if our results can be extended to agents with different capacity constraints (e.g. agent 1 can get at most 7 items while agent 2 can get at most 3 items). Specifically, the proof of Lemma 4.4 does not work: if (X_1, X_2) is feasible, then (X_2, X_1) might be infeasible.

Appendix A  Methods that Do Not Work

In this section, we present some of our attempts to find an EF1 and PO allocation for an instance with chores and capacity constraints, using ideas from previous works. These attempts failed. This shows that the problem is not trivial, and that the new tools developed in this paper are required.

Table 4: Top-trading algorithm counterexample

         o_{1,1}  o_{1,2}  o_{1,3}  o_{1,4}  o_{2,1}  o_{2,2}
Agent 1    -1        0        0        0       -2       -4
Agent 2    -1        0        0        0       -1       -3

A.1 Iterated Matching Algorithm

The Iterated Matching algorithm, presented by Brustle et al. [16], finds an EF1 allocation of indivisible goods in the case of additive valuations. This is done by using the valuation graph, which is a complete bipartite graph on the vertex sets N (the agents) and M (the goods), with weights that represent the agents' utilities. The algorithm proceeds in rounds, where each agent is matched to exactly one item in each round by a maximum weighted matching on a subgraph of all the remaining goods, until all items have been allocated. This algorithm can easily be applied to chores with one category, by adding at the beginning some dummy chores (with utility of 0 to each agent) so that the total number of chores becomes a multiple of n. However, with more than one category, we need to add an external loop that runs over all the categories, and at each iteration executes the algorithm for chores.
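As a rough illustration of the procedure just described, here is a sketch under our own naming (not code from [16]); it brute-forces the per-round maximum-weight matching with `itertools.permutations`, which is fine only for tiny instances:

```python
from itertools import permutations

def iterated_matching(n, items, u):
    """Sketch of the Iterated Matching idea adapted to chores in a single
    category: pad with dummy chores worth 0 to everyone, then in each round
    give every agent exactly one of the remaining items via a maximum-weight
    matching.  u[a][o] is agent a's utility for chore o."""
    items = list(items)
    while len(items) % n:
        items.append(None)                      # dummy chore, worth 0 to all
    value = lambda a, o: 0 if o is None else u[a][o]
    bundles = [[] for _ in range(n)]
    while items:
        # maximum-weight assignment of n of the remaining items, one per agent
        best = max(permutations(items, n),
                   key=lambda sel: sum(value(a, o) for a, o in enumerate(sel)))
        for a, o in enumerate(best):
            if o is not None:
                bundles[a].append(o)
            items.remove(o)
    return bundles
```

Running this per category (with the Table 3 utilities) reproduces the allocation in the example below, whose second round breaks EF1 for agent 1.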
While it maintains the capacity constraints (because in each iteration all the agents get chores, similarly to the round-robin procedure), it may not necessarily maintain the EF1 requirement, as we show in the following example. Denote by o_{c,j} the j-th item of category c. Table 3 presents the utilities of the agents over the items. After allocating category C_1, the allocation is X_1 = {o_{1,2}}, X_2 = {o_{1,1}}, so u_1(X_1) = −2, u_1(X_2) = 0, u_2(X_1) = −4, u_2(X_2) = 0; then agent 1 envies agent 2 up to one item (her only item), and agent 2 is not jealous. Then we allocate the second category, which changes the allocation to X_1 = {o_{1,2}, o_{2,1}}, X_2 = {o_{1,1}, o_{2,2}}. Now u_1(X_1) = −4, u_1(X_2) = −1, so agent 1 envies by more than one item (her worst chore has utility of −2). If there was always an agent who was not jealous, we could have assigned her the new chore, but this is not guaranteed. An envy-cycle may be created, and we know that the envy-cycle elimination algorithm may fail EF1 for additive chores (Bhaskar et al. [8]).

A.2 Top-trading Envy Cycle Elimination Algorithm

Bhaskar et al. [8] considered fair allocation of chores, and suggested to use cycle elimination on the top-trading graph, instead of the usual envy-graph. The top-trading graph for a division X is a directed graph on the vertex set N, with a directed edge from agent i to agent j if u_i(X_j) = max_{k∈[n]} u_i(X_k) and u_i(X_j) > u_i(X_i), i.e. X_j is a most preferred bundle for agent i in X, and she strictly prefers it over her own bundle. In their paper [8], they show that resolving a top-trading envy cycle preserves EF1. Indeed, every agent involved in the top-trading exchange receives her most preferred bundle after the swap, and therefore does not envy anyone else in the next round. They also define a sink agent as an agent with no out-going edges in the envy graph, that is, an agent who does not envy anybody. In addition, they prove that if the usual envy-graph does not have a sink, then the top-trading envy graph has a cycle.
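These graph definitions are easy to operationalize for additive utilities. The sketch below uses our own naming (u[a][o] is agent a's utility for item o); it is illustrative only, not code from [8]:

```python
def envy_graph(bundles, u):
    """Edge i -> j iff agent i strictly prefers bundle j to her own."""
    n = len(bundles)
    val = lambda a, b: sum(u[a][o] for o in bundles[b])
    return {i: [j for j in range(n) if j != i and val(i, j) > val(i, i)]
            for i in range(n)}

def top_trading_graph(bundles, u):
    """Edge i -> j iff bundle j is a most-preferred bundle for agent i and
    she strictly prefers it to her own bundle (the definition above)."""
    n = len(bundles)
    val = lambda a, b: sum(u[a][o] for o in bundles[b])
    graph = {}
    for i in range(n):
        best = max(val(i, j) for j in range(n))
        graph[i] = [j for j in range(n)
                    if val(i, j) == best and val(i, j) > val(i, i)]
    return graph

def sinks(graph):
    """Agents with no outgoing edges, i.e. agents who envy nobody."""
    return [i for i, out in graph.items() if not out]
```

For instance, on the Table 4 instance at the intermediate state X_1 = {o_{1,1}}, X_2 = {o_{1,2}, o_{1,3}} discussed in the example in the text, agent 2 is the only sink in both graphs.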
In their algorithm, for each chore, they construct the envy-graph. If there is no sink in it, they eliminate cycles on the top-trading envy-graph, which guarantees the existence of a sink agent in the envy graph, and then allocate the chore to a sink agent. This method does not work in the setting with capacity constraints, because we cannot simply assign the new chore to the sink agent: she may have reached the maximum allowed number of chores from this category. For example, consider an instance with two categories, C_1 = {o_{1,1}, o_{1,2}, o_{1,3}, o_{1,4}} and C_2 = {o_{2,1}, o_{2,2}}, with capacity constraints s = {2, 1}, and the utility functions presented in Table 4. At the beginning of the algorithm, both allocations are empty, so agent 1 and agent 2 are both sinks. Say that agent 1 was selected to get o_{1,1}, and now X_1 = {o_{1,1}}, X_2 = ∅. Now agent 1 is jealous, so the only sink agent is 2, and o_{1,2} is allocated to 2. Since u_2(o_{1,2}) = u_2(o_{1,3}) = 0, agent 2 remains a sink in the two following iterations. Then, the new allocation is X_1 = {o_{1,1}}, X_2 = {o_{1,2}, o_{1,3}}, and the only sink is agent 2. According to the algorithm, we should assign o_{1,4} to agent 2, but this violates the capacity constraints.

A.3 Greedy Round-robin with Cycle Elimination

Biswas and Barman [10] solved the problem of allocating goods under capacity constraints. Their algorithm first determines an arbitrary ordering of the n agents, π, and then for each category: uses the Greedy Round-Robin algorithm to allocate the goods of this category, eliminates the cycles on the envy-graph, and updates π to be a topological ordering of the envy-graph. As already mentioned, this algorithm will not work for chores, because eliminating cycles in the usual envy-graph may violate EF1 [8]. In addition, if we use the top-trading graph instead, we cannot use it to determine the topological ordering, since this ordering should be based on the envy-graph.
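To see how the two graphs can disagree, here is a small self-contained check. Since Table 5's entries are not reproduced in this extract, the utilities below are hypothetical, chosen by us to exhibit the same phenomenon: a four-agent instance whose envy graph contains a cycle while its top-trading graph is acyclic.

```python
def graphs(bundle_vals):
    """bundle_vals[i][j] = u_i(X_j).  Returns (envy graph, top-trading graph)
    as adjacency dicts, directly from the two definitions."""
    n = len(bundle_vals)
    envy, top = {}, {}
    for i, row in enumerate(bundle_vals):
        envy[i] = [j for j in range(n) if row[j] > row[i]]
        top[i] = [j for j in range(n) if row[j] == max(row) and row[j] > row[i]]
    return envy, top

def has_cycle(g):
    """Depth-first search for a directed cycle."""
    state = {v: 0 for v in g}            # 0 = new, 1 = on stack, 2 = done
    def dfs(v):
        state[v] = 1
        for w in g[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[v] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in g)
```

With bundle values [[0, 1, 0, 2], [0, 0, 1, 2], [1, 0, 0, 2], [-1, -1, -1, 0]] (agent i holds bundle i), agents 1, 2, 3 envy in a cycle but every top-trading edge points to agent 4's bundle, so the top-trading graph is acyclic, as in the paper's Figures 2 and 3.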
It is possible that the top-trading graph is cycle-free, while the envy-graph has cycles. For example, consider an instance with 4 agents, one category with 4 items, C_1 = {o_1, o_2, o_3, o_4}, a capacity constraint of 1, and the utilities presented in Table 5. Consider the allocation X = (X_1, X_2, X_3, X_4), where X_1 = {o_1}, X_2 = {o_2}, X_3 = {o_3}, X_4 = {o_4}. The envy graph of allocation X is shown in Figure 2, and its top-trading graph is shown in Figure 3. Note that the top-trading graph is cycle-free, while the envy-graph has a cycle (1 → 2 → 3), so we do not have a topological ordering on it.

A.4 Pareto-improve an EF1 Allocation

We examined the approach of finding an EF1 allocation, which is not necessarily PO, and applying Pareto improvements to it until an EF1 and PO allocation is obtained. However, the following proposition shows that this approach is inadequate, even with two agents.

PROPOSITION A.1. Not every Pareto-improvement of an EF1 allocation yields an EF1 allocation, even with two agents.

PROOF. Let X = (X_1, X_2) be an EF1 allocation for two agents. Suppose that all the items are chores. Define a Pareto improvement as a replacement between two subsets of chores, Z_1 ⊆ X_1 and Z_2 ⊆ X_2 (one of the subsets may be empty), such that the change harms no one and benefits at least one agent. In particular, for each i ∈ {1, 2}: u_i(Z_{3−i}) ≥ u_i(Z_i), i.e., the agent prefers (or is indifferent to) what she received over what she gave. The following example proves the proposition. Consider an instance with one category with 8 chores, a capacity constraint of 8, and two agents with the valuations presented in Table 6. Suppose that the EF1 allocation X is X_1 = {o_1, o_5, o_6, o_7}, X_2 = {o_2, o_3, o_4, o_8}. The utilities of the agents in X are:
• u_1(X_1) = −10, u_1(X_2) = −7
• u_2(X_1) = −2, u_2(X_2) = −4
Clearly, the two agents are jealous of each other, but their envy is up to one chore because u_1(X_1 ∖ {o_1}) ≥ u_1(X_2) and u_2(X_2 ∖ {o_3}) ≥ u_2(X_1). In addition, X is not PO because there is an envy-cycle in the envy-graph.
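The arithmetic of this counterexample can be verified mechanically. Below is a small sketch (our own helper names, purely illustrative): EF1 for chores is checked by dropping the worst chore from the agent's own bundle, and the swap of Z_1 = {o_1} for Z_2 = {o_2, o_3, o_4} considered next in the proof is checked to break EF1.

```python
def u_bundle(u, bundle):
    """Additive utility of a bundle; u[o] is the agent's utility for chore o."""
    return sum(u[o] for o in bundle)

def ef1_for_chores(ui, mine, other):
    """For chores, an agent is EF1-satisfied iff removing her worst (most
    negative) chore from her own bundle eliminates the envy."""
    return u_bundle(ui, mine) - min(ui[o] for o in mine) >= u_bundle(ui, other)
```

On the Table 6 instance, both agents are EF1 in X, yet after the Pareto-improving swap agent 1's worst remaining chore is only worth −2, which no longer covers her envy.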
Consider the following Pareto-improvement: Z_1 = {o_1}, Z_2 = {o_2, o_3, o_4}. It does not harm agent 1 and benefits agent 2. After the replacement, the utilities of agent 1 do not change, that is, u_1(X_1) = −10, u_1(X_2) = −7. However, the most difficult chore in agent 1's bundle is now worth only −2, which is not enough for her to eliminate the envy. So the Pareto-improvement is not EF1. □

PROOF. By adding … to inequality (4), we get:
… − … ≤ … − … ⟺ r_{·/·}(·, ·) ≤ r_{·/·}(·, ·)   (8)
… − … ≤ … − … ⟺ r_{·/·}(·, ·) ≤ r_{·/·}(·, ·)   (9)

PROOF. For each i in {5, 6, 7}, divide inequality (i) by the two terms with the "…" subscript to get inequality (i + 3):
… − … ≥ … − … ⟺ r_{·/·}(·, ·) ≥ r_{·/·}(·, ·)   (11)
… − … ≥ … − … ⟺ r_{·/·}(·, ·) ≥ r_{·/·}(·, ·)   (12)

C  A Complete Proof for Lemma 4.7

LEMMA 4.7. For any n agents, for any w = (w_1, w_2, ..., w_n) such that w_1, w_2, ..., w_n ∈ (0, 1), and an allocation X = (X_1, ..., X_n), the following are equivalent:
(i) X is w-maximal.
(ii) Every exchange-cycle does not increase the weighted sum of utilities. That is, for all k ≥ 2, every subset of agents {i_1, ..., i_k} ⊆ [n], and every set of items x_1, ..., x_k such that all are in the same category and ∀j ∈ [k]: x_j ∈ X_{i_j}:

w_{i_1} u_{i_1}(x_1) + w_{i_2} u_{i_2}(x_2) + ... + w_{i_k} u_{i_k}(x_k) ≥ w_{i_1} u_{i_1}(x_k) + w_{i_2} u_{i_2}(x_1) + ... + w_{i_k} u_{i_k}(x_{k−1})

Algorithm 2 (steps 2-12):
2: G ← an empty graph, in which the nodes are the agents.
3: Choose an item o such that o ∈ X_i, o ∈ X'_j, for some i ≠ j ∈ [n]
4: if there is no such item then
5:     return result
6: end if
7: Add a directed edge (i, j) to G with name o.  // Since j gets a new item (o), and in every feasible allocation there is the same number of items from each category, j must give someone an item from the same category.
8: o' ← the item from o's category such that o' ∈ X_j, o' ∈ X'_k, for some k ∈ [n]
9: Add a directed edge (j, k) to G with name o'.
10: if G has a cycle then
11:     Append the cycle to result.
12:     X ← the allocation after exchanging items in X according to this cycle.

… + w_{i_k} u_{i_k}(x_{k−1}). So we can switch those items in a cycle, and get another feasible allocation with a weighted sum greater than that of X. Thus X is not w-maximal, a contradiction.

(ii) ⇒ (i): Denote by X' = (X'_1, ..., X'_n) some feasible allocation, different from X.
We claim that it is possible to transform X to X' using a sequence of pairwise-disjoint exchange-cycles. One way to find these exchange-cycles is presented in Algorithm 2. Note that each cycle in the sequence places the items involved in the exchange exactly where they should be according to the allocation X'. Therefore, any item involved in one cycle cannot be involved in any other cycle; that is, the cycles have no items in common. Therefore, all cycles found by Algorithm 2 exist in allocation X, so according to the assumption in (ii), each of them does not increase the weighted sum of the allocation. Therefore, …

• N is a set of agents.
• M = (o_1, ..., o_m) is a set of items.
• C = (C_1, C_2, ..., C_k) is a set of categories. The categories are pairwise-disjoint and M = ⋃ C_h.
• s = (s_1, s_2, ..., s_k) is a list of size k, containing the capacity constraint of each category. We assume that ∀h ∈ [k]: …

Definition 4.3. Given a feasible allocation X = (X_1, X_2, ..., X_n), an exchangeable pair is a pair (x_i, x_j) of items, x_i ∈ X_i and x_j ∈ X_j, i, j ∈ [n], i ≠ j, such that x_i and x_j are in the same category. This implies that X_i ∖ {x_i} ∪ {x_j} and X_j ∖ {x_j} ∪ {x_i} are both feasible.

LEMMA 4.5. Suppose there are n = 2 agents. Suppose there is a sequence of feasible allocations X^1, …

Definition 4.6. For a pair of agents i, j ∈ [n] s.t. i ≠ j, and a pair of items (x, y), the difference ratio, denoted by r_{i/j}(x, y), is defined as:

r_{i/j}(x, y) := (u_i(x) − u_i(y)) / (u_j(x) − u_j(y))

If u_i(x) = u_i(y), then the ratio is always 0. If u_j(x) = u_j(y) (and u_i(x) ≠ u_i(y)), then the ratio is defined as +∞ if u_i(x) > u_i(y), or −∞ if u_i(x) < u_i(y).

Definition 4.10. Consider a w-maximal allocation X and an exchangeable pair x_i ∈ X_i, x_j ∈ X_j, for some i, j ∈ [n]. x_i is called a preferred item in the exchangeable pair (x_i, x_j) if both u_i(x_i) > u_i(x_j) and u_j(x_i) > u_j(x_j).

LEMMA 4.11. For any n agents, in any w-maximal allocation X, if an agent i envies some agent j, then there is an exchangeable pair x_i ∈ X_i, x_j ∈ X_j, and x_i is the preferred item.

Figure 1: The corresponding lines for the items in the example.
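The cycle decomposition behind Algorithm 2 can be sketched as follows. This is an illustrative reimplementation of the idea only (our own naming, not the paper's pseudocode); it ignores categories for brevity, and assumes, as holds for feasible allocations here, that corresponding bundles in X and X' have equal sizes.

```python
def exchange_cycles(X, Xp):
    """Decompose the difference between two allocations X and Xp into
    exchange-cycles.  Each cycle is a list of (giver, item, receiver)
    triples; applying all cycles transforms X into Xp."""
    X = [set(b) for b in X]
    result = []
    while True:
        # item -> (current holder in X, holder in Xp), for misplaced items
        mis = {o: (i, j)
               for i in range(len(X)) for o in sorted(X[i])
               for j, bp in enumerate(Xp) if o in bp and j != i}
        if not mis:
            return result
        o = next(iter(mis))
        path, seen = [], {}
        while True:
            giver, receiver = mis[o]
            if giver in seen:                  # the walk closed a cycle
                cycle = path[seen[giver]:]
                break
            seen[giver] = len(path)
            path.append((giver, o, receiver))
            # the receiver gains an item, so (equal bundle sizes) she also
            # holds some item that belongs to someone else in Xp
            o = next(p for p, (a, _) in mis.items() if a == receiver)
        for giver, item, _ in cycle:           # apply the cycle to X
            X[giver].remove(item)
        for _, item, receiver in cycle:
            X[receiver].add(item)
        result.append(cycle)
```

Every item moved by a cycle lands exactly where Xp wants it, so it never reappears among the misplaced items; this is the disjointness property the proof relies on.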
Figure 2: The envy graph of the allocation X.
Figure 3: The top-trading graph of the allocation X.

LEMMA B.1. For any six real numbers …, the following inequalities are equivalent: …

LEMMA B.2. If … > … > …, then the following are equivalent: … ⟺ r_{·/·}(·, ·) ≤ r_{·/·}(·, ·)

LEMMA B.3. Lemma B.1 is still true if we reverse all the inequality directions, and so is Lemma B.2. That is, if … > … > …, then the following are equivalent: … ⟺ r_{·/·}(·, ·) ≥ r_{·/·}(·, ·)

Algorithm 2  Transforming one allocation to another one using edge-disjoint exchange-cycles
Input: Two feasible allocations X = (X_1, ..., X_n), X' = (X'_1, ..., X'_n)
Output: A sequence of exchange-cycles leading from X to X'.
1: result ← an empty sequence.

(i) ⇒ (ii): Suppose X is a w-maximal allocation. Suppose toward a contradiction that (ii) is not true, that is, there exists a set of indices {i_1, ..., i_k} ⊆ [n] and a set of items in the same category x_1, ..., x_k, such that: w_{i_1} u_{i_1}(x_1) + w_{i_2} u_{i_2}(x_2) + …

Table 1: Summary of some works on fair allocation of indivisible items

paper | agents | utilities | goods | chores | constraints | fairness | PO | result
[7]   | 2      | arbitrary | v     | v      | -           | EF1      | -  | polynomial-time algorithm

… by Lemma 4.4, there exists an agent i ∈ [2] such that both X^t and X^{t+1} are EF[1,1] for i. If both are EF[1,1] for agent 1, then X^{t+1} is an EF[1,1] allocation. If both are EF[1,1] for agent 2, then X^t is an EF[1,1] allocation. □

Table 3: Iterated matching algorithm counterexample

         o_{1,1}  o_{1,2}  o_{2,1}  o_{2,2}
Agent 1     0       -2       -2       -1
Agent 2     0       -4       -4        0

Table 5: Cycle elimination algorithm counterexample

Table 6: Pareto improvements counterexample

         o_1  o_2  o_3  o_4  o_5  o_6  o_7  o_8
Agent 1   -5   -2   -1   -2   -2   -2   -1   -2
Agent 2   -1   -1   -2   -1   -1    0    0    0

… w_1 u_1(X'_1) + … + w_n u_n(X'_n) ≤ w_1 u_1(X_1) + … + w_n u_n(X_n). This holds for every feasible allocation X', so X is w-maximal by definition. □

D  A Complete Proof for Lemma 4.12

LEMMA 4.12. Suppose there are n = 2 agents. Let X be a w-maximal allocation, for w = (w_1, w_2).
Suppose there is an exchangeable pair x_1 ∈ X_1, x_2 ∈ X_2 such that:
(1) u_2(x_1) > u_2(x_2), that is, x_1 is the preferred item.
(2) Among all exchangeable pairs with a preferred item, this pair has the largest difference-ratio r_{2/1}(x_1, x_2).

(In fact, the result holds not only for an exchange of two items, but also for any permutation of items of the same category, one item per agent. The proof is the same.)

… w'_1 / w'_2 = (u_2(x_1) − u_2(x_2)) / (u_1(x_1) − u_1(x_2)) = r_{2/1}(x_1, x_2) holds. The largest value is obtained on the right side of the graph, and as we progress to the left side its value decreases.

Acknowledgments. This research has been partly supported by the Ministry of Science, Technology & Space (MOST), Israel.

We now look at all the exchangeable pairs (x_1*, x_2*) after the exchange, and see that they satisfy all the conditions of Lemma 4.8(ii) with w' = (w'_1, w'_2), which can be written as: …

(1) Pairs of items that have not moved: Lemma 4.8 implies that …; after the exchange, they still satisfy the same conditions.
(2) The pair (x_2, x_1): …, and the pair (x_2, x_1) fits condition (c), which says that the item in agent 1's bundle is worth less for both agents. …
(b) It is not possible that u_2(x_1) < u_2(x_1*), because it implies u_2(x_2) < u_2(x_1*), and by Lemma 4.9, u_1(x_2) < u_1(x_1*). By the values of the ratios' numerators and denominators, we get r_{2/1}(x_1*, x_2) > r_{2/1}(x_1, x_2), which contradicts the maximality of r_{2/1}(x_1, x_2). Therefore, u_2(x_1) ≥ u_2(x_1*).
(c) If u_1(x_1*) < u_1(x_1), it is also not possible that u_2(x_1) < u_2(x_1*), as explained in (b). It is also not possible that u_2(x_1) = u_2(x_1*), because then u_2(x_1*) > u_2(x_2), and by Lemma 4.9, since X is (w_1, w_2)-maximal and (x_1*, x_2) is an exchangeable pair in it, u_1(x_1*) > u_1(x_2). Therefore, r_{2/1}(x_1*, x_2) > r_{2/1}(x_1, x_2), a contradiction. So in that case, necessarily u_2(x_1*) < u_2(x_1). Based on that, we now show that r_{2/1}(x_1, x_2) ≤ r_{2/1}(x_1*, x_1). …
Since u_1(x_2) < u_1(x_1) and u_1(x_1*) < u_1(x_1), there are two options: … By Lemma 4.8 we know that …, and by Lemma B.3 (with …) …

Pairs in the form (x_2, x_2*), x_2* ∈ X'_2, x_2* ≠ x_1: exactly the same arguments as in case 3 (but with x_2, x_2*) prove this case. Therefore, by Lemma 4.8(ii), X' is (w'_1, w'_2)-maximal. □

References

[1] Martin Aleksandrov and Toby Walsh. 2019. Greedy algorithms for fair division of mixed manna. arXiv preprint arXiv:1911.11005 (2019).
[2] Haris Aziz. 2015. A note on the undercut procedure. Social Choice and Welfare 45, 4 (2015), 723-728.
[3] Haris Aziz, Ioannis Caragiannis, Ayumi Igarashi, and Toby Walsh. 2022. Fair allocation of indivisible goods and chores. Autonomous Agents and Multi-Agent Systems 36, 1 (2022), 1-21.
[4] Haris Aziz, Hervé Moulin, and Fedor Sandomirskiy. 2020. A polynomial-time algorithm for computing a Pareto optimal and almost proportional allocation. Operations Research Letters 48, 5 (2020), 573-578.
[5] Siddharth Barman and Sanath Kumar Krishnamurthy. 2019. On the proximity of markets with integral equilibria. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 1748-1755.
[6] Siddharth Barman, Sanath Kumar Krishnamurthy, and Rohit Vaish. 2018. Finding fair and efficient allocations. In Proceedings of the 2018 ACM Conference on Economics and Computation. 557-574.
[7] Kristóf Bérczi, Erika R Bérczi-Kovács, Endre Boros, Fekadu Tolessa Gedefa, Naoyuki Kamiyama, Telikepalli Kavitha, Yusuke Kobayashi, and Kazuhisa Makino. 2020. Envy-free relaxations for goods, chores, and mixed items. arXiv preprint arXiv:2006.04428 (2020).
[8] Umang Bhaskar, AR Sricharan, and Rohit Vaish. 2021. On Approximate Envy-Freeness for Indivisible Chores and Mixed Resources. arXiv preprint arXiv:2012.06788 (2021).
[9] Vittorio Bilò, Ioannis Caragiannis, Michele Flammini, Ayumi Igarashi, Gianpiero Monaco, Dominik Peters, Cosimo Vinci, and William S Zwicker. 2022. Almost envy-free allocations with connected bundles. Games and Economic Behavior 131 (2022), 197-221.
[10] Arpita Biswas and Siddharth Barman. 2018. Fair Division Under Cardinality Constraints.
In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. 91-97.
[11] Sylvain Bouveret, Yann Chevaleyre, and Nicolas Maudet. 2016. Fair Allocation of Indivisible Goods. In Handbook of Computational Social Choice, Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia (Eds.). Cambridge University Press, 284-310.
[12] Steven J Brams. 2007. Mathematics and democracy: Designing better voting and fair-division procedures. Princeton University Press.
[13] Steven J Brams, D Marc Kilgour, and Christian Klamler. 2012. The undercut procedure: an algorithm for the envy-free division of indivisible items. Social Choice and Welfare 39, 2 (2012), 615-631.
[14] Steven J Brams, Marc Kilgour, and Christian Klamler. 2014. Two-person fair division of indivisible items: An efficient, envy-free algorithm. Notices of the AMS 61, 2 (2014), 130-141.
[15] Steven John Brams and Alan D Taylor. 1996. Fair Division: From cake-cutting to dispute resolution. Cambridge University Press.
[16] Johannes Brustle, Jack Dippel, Vishnu V. Narayan, Mashbat Suzuki, and Adrian Vetta. 2020. One Dollar Each Eliminates Envy. In Proceedings of the 21st ACM Conference on Economics and Computation. 23-39.
[17] Eric Budish. 2011. The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy 119, 6 (2011), 1061-1103.
[18] Ioannis Caragiannis, David Kurokawa, Hervé Moulin, Ariel D Procaccia, Nisarg Shah, and Junxing Wang. 2019. The unreasonable fairness of maximum Nash welfare. ACM Transactions on Economics and Computation (TEAC) 7, 3 (2019), 1-32.
[19] Bhaskar Ray Chaudhury, Jugal Garg, and Kurt Mehlhorn. 2020. EFX Exists for Three Agents. In Proceedings of the 21st ACM Conference on Economics and Computation (EC '20). Association for Computing Machinery, New York, NY, USA, 1-19. https://doi.org/10.1145/3391403.3399511
[20] Xingyu Chen and Zijie Liu. 2020. The fairness of leximin in allocation of indivisible chores. arXiv preprint arXiv:2005.04864 (2020).
[21] Amitay Dror, Michal Feldman, and Erel Segal-Halevi. 2021. On Fair Division under Heterogeneous Matroid Constraints. In Proceedings of the AAAI Conference on Artificial Intelligence. 5312-5320.
[22] Michael L. Fredman and Robert Endre Tarjan. 1987. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. J. ACM 34, 3 (1987), 596-615. https://doi.org/10.1145/28869.28874
[23] Yotam Gafni, Xin Huang, Ron Lavi, and Inbal Talgam-Cohen. 2021. Unified Fair Allocation of Goods and Chores via Copies. arXiv preprint arXiv:2109.08671 (2021).
[24] Jugal Garg, Aniket Murhekar, and John Qin. 2022. Fair and efficient allocations of chores under bivalued preferences. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 5043-5050.
[25] Jugal Garg, Aniket Murhekar, and John Qin. 2022. Improving Fairness and Efficiency Guarantees for Allocating Indivisible Chores. arXiv preprint arXiv:2212.02440 (2022).
[26] Halvard Hummel and Magnus Lie Hetland. 2021. Guaranteeing Half-Maximin Shares Under Cardinality Constraints.
arXiv preprint arXiv:2106.07300 (2021).
[27] Ayumi Igarashi and Dominik Peters. 2019. Pareto-optimal allocation of indivisible goods with connectivity constraints. In Proceedings of the AAAI Conference on Artificial Intelligence. 2045-2052.
[28] D Marc Kilgour and Rudolf Vetschera. 2018. Two-player fair division of indivisible items: Comparison of algorithms. European Journal of Operational Research 271, 2 (2018), 620-631.
[29] Richard J Lipton, Evangelos Markakis, Elchanan Mossel, and Amin Saberi. 2004. On approximately fair allocations of indivisible goods. In Proceedings of the 5th ACM Conference on Electronic Commerce. 125-131.
[30] Erika Mackin and Lirong Xia. 2016. Allocating indivisible items in categorized domains. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. 359-365.
[31] Hervé Moulin. 2004. Fair division and collective welfare. MIT press.
[32] Takashi Negishi. 1960.
Welfare economics and existence of an equilibrium for a competitive economy. Metroeconomica 12, 2-3 (1960), 92-97.
[33] Antonio Nicolò and Rodrigo A Velez. 2017. Divide and compromise. Mathematical Social Sciences 90 (2017), 100-110.
[34] Antonio Nicolò and Yan Yu. 2008. Strategic divide and choose. Games and Economic Behavior 64, 1 (2008), 268-289.
[35] Kathryn Nyman, Francis Edward Su, and Shira Zerbib. 2020. Fair division with multiple pieces. Discrete Applied Mathematics 283 (2020), 115-122.
[36] Sujoy Sikdar, Sibel Adalı, and Lirong Xia. 2019. Mechanism Design for Multi-Type Housing Markets with Acceptable Bundles. In Proceedings of the AAAI Conference on Artificial Intelligence. 2165-2172.
[37] Warut Suksompong. 2021. Constraints in fair division. ACM SIGecom Exchanges 19, 2 (2021), 46-61.
[38] Jamie Tucker-Foltz and Richard Zeckhauser. 2022. Playing Divide-and-Choose Given Uncertain Preferences. arXiv preprint arXiv:2207.03076 (2022).
[39] Hal R Varian. 1976. Two problems in the theory of fairness. Journal of Public Economics 5, 3-4 (1976), 249-260.
[40] Dietrich Weller. 1985. Fair division of a measurable space.
Journal of Mathematical Economics 14, 1 (1985), 5-17.
[41] Xiaowei Wu, Bo Li, and Jiarui Gan. 2021. Budget-feasible Maximum Nash Social Welfare is Almost Envy-free. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Zhi-Hua Zhou (Ed.). International Joint Conferences on Artificial Intelligence Organization, 465-471. https://doi.org/10.24963/ijcai.2021/65 Main Track.
[]
Learning Conditional Invariance through Cycle Consistency

Maxim Samarin, Vitali Nesterov, Mario Wieser ([email protected]), Aleksander Wieczorek, Volker Roth ([email protected])
Department of Mathematics and Computer Science, University of Basel, Spiegelgasse 1, 4051 Basel, Switzerland

Sonali Parbhoo ([email protected])
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, 150 Western Ave, Boston, MA 02134, USA

arXiv:2111.13185 · DOI: 10.1007/978-3-030-92659-5_24

Abstract. Identifying meaningful and independent factors of variation in a dataset is a challenging learning task frequently addressed by means of deep latent variable models. This task can be viewed as learning symmetry transformations preserving the value of a chosen property along latent dimensions. However, existing approaches exhibit severe drawbacks in enforcing the invariance property in the latent space. We address these shortcomings with a novel approach to cycle consistency. Our method involves two separate latent subspaces for the target property and the remaining input information, respectively. In order to enforce invariance as well as sparsity in the latent space, we incorporate semantic knowledge by using cycle consistency constraints relying on property side information. The proposed method is based on the deep information bottleneck and, in contrast to other approaches, allows using continuous target properties and provides inherent model selection capabilities. We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models with improved invariance properties.
Keywords: Sparsity · Cycle Consistency · Invariance · Deep Variational Information Bottleneck · Variational Autoencoder · Model Selection

1 Motivation

Understanding the nature of a generative process for observed data typically involves uncovering explanatory factors of variation responsible for the observations. But the relationship between these factors and our observations usually remains unclear.
A common assumption is that the relevant factors can be expressed by a low-dimensional latent representation Z [25]. Therefore, popular machine learning methods involve learning appropriate latent representations to disentangle factors of variation.

(Both authors contributed equally.)

Learning disentangled representations is often considered in an unsupervised setting which does not rely on prior knowledge about the data such as labels [7,8,13,17,24]. However, it was shown that an inductive bias on the dataset and learning approach is necessary to obtain disentanglement [25]. Inductive biases allow us to express assumptions about the generative process and to prioritise different solutions not only in terms of disentanglement [5,13,21,35,44], but also in terms of constrained latent space structures [15,16], preservation of causal relationships [40], or interpretability [45].

We consider a supervised setting where semantic knowledge about the input data allows structuring the latent representation into disjoint subspaces Z_0 and Z_1 of the latent space Z by enforcing conditional invariance. In such supervised settings, disentanglement can be viewed as an extraction of level sets or symmetries inherent to our data X which leave a specified property Y invariant. An important application in that direction is the generation of diverse molecular structures with similar chemical properties [44]. The goal is to disentangle the factors of variation relevant for the property. Typically, level sets L_y are defined implicitly through

L_y(f) = {(x_1, ..., x_d) | f(x_1, ..., x_d) = y}

for a property y, which implicitly describes the level curve or surface w.r.t. the inputs (x_1, ..., x_d) ∈ R^d. The topic of this paper is to identify a sparse parameterisation of level sets which encodes conditional invariances and thus selects a correct model.
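As a concrete illustration of the implicit definition above, the following sketch (the helper names and the choice of an ellipse as the property function f are illustrative, not from the paper) checks membership in a level set L_y(f):

```python
import numpy as np

def f_ellipse(x, a=1.0, b=0.5):
    """A property function whose level sets are ellipses: f(x1, x2) = (x1/a)^2 + (x2/b)^2."""
    x = np.atleast_2d(x)
    return (x[:, 0] / a) ** 2 + (x[:, 1] / b) ** 2

def on_level_set(points, y, f=f_ellipse, tol=1e-9):
    """Membership test for L_y(f) = {x : f(x) = y}, up to numerical tolerance."""
    return np.abs(f(points) - y) < tol

# Points generated with a fixed "radius" all lie on one level set:
phi = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.stack([1.0 * np.cos(phi), 0.5 * np.sin(phi)], axis=1)
print(on_level_set(pts, 1.0).all())  # True: the whole curve shares the property value y = 1
```

Traversing the curve at fixed y corresponds, in the paper's terms, to moving through the invariant subspace while the property code stays fixed.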
Several techniques have been developed to steer model selection by sparsifying the number of features, e.g. [38,39], or by compressing features into a low-dimensional feature space, e.g. [4,33,42]. These methods improve generalisation by focusing on only a subset of relevant features and using these to explain a phenomenon. Existing methods for including such prior knowledge in the model usually do not include dimensionality reduction techniques and perform a hand-tuned selection [15,21,44].

In this paper, we introduce a novel approach to cycle consistency, relying on property side information Y as our semantic knowledge, to provide conditional invariance in the latent space. By this we mean that conditioning on part of the latent space, i.e. Z_0, allows property-invariant sampling in the latent space Z_1. By ensuring that our method performs consistently on generated samples when they are fed back into the network, we achieve more disentangled and sparser representations. Our work builds on [42], where a general sparsity constraint on latent representations is provided, and on [21,44], where conditional invariance is obtained through adversarial training. We show that our approach addresses some drawbacks of previous approaches and allows us to identify more meaningful factors for learning better models and to achieve improved invariance performance. Our contributions may thus be summarised as follows:

- We propose a novel approach for supervised disentanglement where conditional invariance is enforced by a novel cycle consistency on property side information. This facilitates the guided exploration of the latent space and improves sampling with a fixed property.
- Our model inherently favours sparse solutions, leading to more interpretable latent dimensions, and facilitates built-in model selection.
- We demonstrate that our method improves on the state-of-the-art performance for conditional invariance as compared to existing approaches on both synthetic and molecular benchmark datasets.

2 Related Work

Deep Generative Latent Variable Models and Disentanglement

Because of its flexibility, the variational autoencoder (VAE) [20,34] is a popular deep generative latent variable model in many areas such as fairness [27], causality [26], semi-supervised learning [19], and the design and discovery of novel molecular structures [11,22,28]. The VAE is closely related to the Information Bottleneck (IB) principle [4,39]. Various approaches exploit this relation, e.g. the deep variational information bottleneck (DVIB) [2,4]. Further extensions were proposed in the context of causality [9,29,30] or archetypal analysis [15,16]. The β-VAE [13] extends the standard VAE approach and allows unsupervised disentanglement. In unsupervised settings, there exists a great variety of approaches based on VAEs and generative adversarial networks (GANs) to achieve disentanglement, such as FactorVAE [17], β-TCVAE [7] or InfoGAN [8,24]. Partitioning the latent space into subspaces is inspired by the multi-level VAE [5], where the latent space is decomposed into a local feature space that is only relevant for a subgroup and a global feature space. In supervised settings, several approaches such as [10,21,23,44] achieve disentanglement by applying adversarial information elimination to select a model with partitioned feature and property space. In such a setting, different from unsupervised disentanglement, our goal is supervised disentanglement with respect to a particular target property.

Another important line of research employs the idea of cycle consistency for learning disentangled representations. Presumably the work most closely related to this study is conducted by [14,43,44].
Here, the authors employ a cycle-consistent loss on the latent representations to learn symmetries and disentangled representations in weakly supervised settings, respectively. Moreover, in [44], the authors use adversarial training and mutual information estimation to learn symmetry transformations instead of explicitly modelling them. In contrast, our work replaces adversarial training by cycle consistency.

Model Selection via Sparsity

Several works perform model selection by introducing sparsity constraints which penalise the model complexity. A common sparsity constraint is the Least Absolute Shrinkage and Selection Operator (LASSO) [38]. Extensions of the LASSO propose a log-penalty to obtain even sparser solutions in the compressed IB setting [33] and generalise it further to deep generative models [42]. Furthermore, the LASSO has been extended to the group LASSO, where combinations of covariates are set to zero, the sparse group LASSO [36], and the Bayesian group LASSO [32]. Perhaps most closely related to our work is the oi-VAE [3], which incorporates a group LASSO prior in deep latent variable models. These methods employ a general sparsity constraint to achieve a sparse representation. Our model extends these ideas and imposes a semantic sparsity constraint in the form of cycle consistency that performs regularisation based on prior knowledge.

3 Preliminaries

Deep Variational Information Bottleneck

We focus on the DVIB [4], which is a method for information compression based on the IB principle [39]. The objective is to compress a random variable X into a random variable Z while remaining able to predict a third random variable Y. The DVIB is closely related to the VAE [20,34]. The optimal compression is achieved by solving the parametric problem

min_{φ,θ} I_φ(Z; X) − λ I_{φ,θ}(Z; Y),   (1)

where I is the mutual information between two random variables. Hence, the DVIB objective balances maximisation of I_{φ,θ}(Z; Y), i.e.
Z being informative about Y, and minimisation of I_φ(Z; X), i.e. compression of X into Z. We assume a parametric form of the conditionals p_φ(Z|X) and p_θ(Y|Z), with φ and θ representing the parameters of the encoder and decoder network, respectively. The parameter λ controls the degree of compression and is closely related to β in the β-VAE [13]. The relationship to the VAE becomes more apparent with the definition of the mutual information terms:

I_φ(Z; X) = E_{p(X)} [ D_KL(p_φ(Z|X) ‖ p(Z)) ],   (2)

I_{φ,θ}(Z; Y) ≥ E_{p(X,Y)} E_{p_φ(Z|X)} [ log p_θ(Y|Z) ] + h(Y),   (3)

with D_KL being the Kullback-Leibler divergence and h(Y) the entropy. Note that we write Eq. (3) as an inequality, which uses the insight of [41] that the RHS is in fact a lower bound to I_θ(Z; Y); see [41] for details.

Cycle Consistency

We use the notion of cycle consistency similar to [14,46]. The CycleGAN [46] performs unsupervised image-to-image translation, where a data point is mapped back to its initial position after being transferred to a different space. For instance, suppose that domain X consists of summer landscapes, while domain Y consists of winter landscapes (see Appendix Fig. A1). A function f(x) may be used to transform a summer landscape x to a corresponding winter landscape y. Similarly, a function g(y) maps y back to the domain X. The goal of cycle consistency is to learn a mapping to x̂ which is close to the initial x. In most cases, there is a discrepancy between x and x̂, referred to as the cycle consistency loss. In order to obtain an almost invertible mapping, the loss ‖g(f(x)) − x‖_1 is minimised.

Fig. 1: (a) Firstly, we learn a sparse representation Z from our input data X, which we separate into a property space Z_0 and an invariant space Z_1. Given this representation, we try to predict the property Ŷ and reconstruct our input X̂. Grey arrows indicate that Ŷ = dec_Y(Z_0) instead of Z_0 is used for decoding X̂ (see Sec. 4.3).
(b) Secondly, we sample new data in two ways: (i) uniformly in Z to get new data points X̃ and Ỹ (orange data); (ii) uniformly in Z_1 with fixed Z_0 to get X̃ at fixed Ŷ (cyan data). We concatenate the respective decoder outputs. (c) Lastly, we feed the concatenated input batch X_c into our model and calculate the cycle consistency loss between the properties.

4 Model

Our model is based on the DVIB to learn a compact latent representation. The input X and the output Y may be complex objects and take continuous values, such as molecules with their respective molecular properties. Unlike the standard DVIB, we do not only want to predict Y from an input X, but also want to generate new X̃ by sampling from our latent representation. As a consequence, we add a second decoder that reconstructs X from Z (similar to [11] for decoder Y in the VAE setting), leading to the adjusted parametric objective

min_{φ,θ,τ} I_φ(Z; X) − λ [ I_{φ,θ}(Z; Y) + I_{φ,τ}(Z; X) ],   (4)

where φ are the encoder parameters, and θ and τ describe the network parameters for decoding Y and X, respectively.

4.1 Learning a Compact Representation

Formulating our model as a DVIB allows leveraging properties of the mutual information with respect to learning compact latent representations. To see this, first assume that X and Y are jointly Gaussian-distributed, which leads to the Gaussian Information Bottleneck [6], where the solution Z can be found analytically and is provably Gaussian. In particular, for X ∼ N(0, Σ_X), the optimal Z is a noisy projection of X: Z = AX + ξ, where ξ ∼ N(0, I). The mutual information between X and Z is then equal to

I(X; Z) = (1/2) log |A Σ_X Aᵀ + I|.   (5)
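Eq. (5) can be checked numerically. The sketch below (with an arbitrarily chosen example covariance Σ_X, not from the paper) shows that it is the rank-deficient choice of A, i.e. switching latent dimensions off, that actually lowers the information rate:

```python
import numpy as np

def mi_gaussian(A, Sigma_X):
    """Eq. (5): I(X; Z) = 0.5 * log|A Sigma_X A^T + I| for Z = A X + xi, xi ~ N(0, I)."""
    sign, logdet = np.linalg.slogdet(A @ Sigma_X @ A.T + np.eye(A.shape[0]))
    return 0.5 * logdet

Sigma_X = np.diag([4.0, 1.0, 0.25])   # an arbitrary example covariance
A_full = np.eye(3)                    # full-rank (diagonal) projection
A_sparse = np.diag([1.0, 1.0, 0.0])   # rank-deficient: third latent dimension switched off

print(mi_gaussian(A_full, Sigma_X))   # ~1.263 nats
print(mi_gaussian(A_sparse, Sigma_X)) # ~1.151 nats: strictly less information is kept
```

Zeroing a diagonal entry of A removes one dimension's contribution to the determinant, which is exactly the compression mechanism the text describes.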
(1) may be parameterised by neural networks with X and Z as input. The diagonality constraint on A does not cause any loss of generality of the DVIB solution as long as the neural network encoder f φ makes it possible to diagonalise Af φ (X)f φ (X) A (see [42] for more details). In the following, we consider A to be diagonal and define the sparse representation as the dimensions of the latent space Z selected by the non-zero entries of A. Recalling Eq. (5), this allows us to approximate the mutual information for the encoder in Eq. (2) in a sparse manner I φ (X; Z) = 1 2 log |diag(f φ (X)f φ (X) ) + 1|,(6) where 1 is the all-one vector and the diagonal elements of A are subsumed in the encoder f φ . Conditional Invariance and Informed Sparsity A general sparsity constraint is not sufficient to ensure that latent dimensions indeed represent independent factors. In a supervised setting, our target Y conveys semantic knowledge about the input X, e.g. a chemical property of a molecule. To incorporate semantic knowledge into our model, we require a mechanism that partitions the representation such that it encodes the semantic meaning not only sparsely but preferably independently of other information concerning the input. To this end, the central element of our approach is cycle consistency with respect to target property Y , which is illustrated in steps (b) and (c) in Fig. 1. The idea is, that reconstructedX or newly sampledX with associated prediction Y andỸ are expected to provide matching predictionsŶ andỸ whenX and X are used as an input to the network. This means, if we perform another cycle through the network with sampled or reconstructed inputs, the property prediction should stay consistent. The partitioning of the latent space Z in the property subspace Z 0 and the invariant subspace Z 1 is crucial. The property Y is predicted from Z 0 , while the input is reconstructed from the full latent space Z. 
Ensuring cycle consistency with respect to the property allows putting property-relevant information into the property subspace Z_0. Furthermore, the latent space is regularised by drawing samples which adhere to cycle consistency and provide additional sparsity. If information about Y is encoded in Z_1, this will lead to a higher cycle consistency loss. In this way, cycle consistency enforces invariance in subspace Z_1. By fixing coordinates in Z_0, and thus fixing a property, sampling in Z_1 results in newly generated X̃ with the same property Ỹ. More formally, fixing Z_0 renders the random variables X and Y conditionally independent, i.e. X ⊥⊥ Y | Z_0 (see Appendix Fig. A2).

We ensure conditional invariance with a particular sampling scheme: we fix the Z_0 coordinates and sample in Z_1 to obtain generated X̃, all with a fixed property Ŷ. Using these inputs allows us to obtain a new prediction Ỹ, which should be close to the fixed target property Ŷ. We choose the L_2 norm for convenience and define the full cycle consistency loss by

L_cycle = ‖Ŷ − Ŷ′‖_2 + ‖Ỹ − Ỹ′‖_2 + ‖Ŷ − Ỹ‖_2,   (7)

where primed quantities denote the predictions obtained after the second pass through the network.

4.3 Proposed Framework

The resulting model in Eq. (8) combines sparse DVIBs with a partitioned latent space and a novel approach to cycle consistency, which drives conditional invariance and informed sparsity in the latent structure. This allows latent dimensions in Z_0 relevant for the prediction of Y to disentangle from latent dimensions in Z_1 which encode the remaining input information of X:

L = I_φ(X; Z) − λ [ I_{φ,τ}(Z_0, Z_1; X) + I_{φ,θ}(Z_0; Y) ] + β [ ‖Ŷ − Ŷ′‖_2 + ‖Ỹ − Ỹ′‖_2 + ‖Ŷ − Ỹ‖_2 ].   (8)

The proposed model performs model selection as it inherently favours sparser latent representations. This in turn facilitates easier interpretation of latent factors because of the built-in conditional independence between the property space Z_0 and the invariant space Z_1.
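The cycle consistency loss is a sum of L2 distances between first-pass and second-pass property predictions. A sketch with hypothetical prediction arrays (the argument names are illustrative: "hat" for predictions on reconstructions, "tilde" for predictions on new samples, the "2" suffix for the second pass):

```python
import numpy as np

def cycle_loss(y_hat, y_hat2, y_tilde, y_tilde2):
    """Sum of L2 distances: predictions should survive a second pass through the
    network (y_hat vs y_hat2, y_tilde vs y_tilde2), and samples drawn at fixed Z_0
    should reproduce the fixed property (y_hat vs y_tilde)."""
    l2 = lambda u, v: np.linalg.norm(np.asarray(u) - np.asarray(v))
    return l2(y_hat, y_hat2) + l2(y_tilde, y_tilde2) + l2(y_hat, y_tilde)

y = np.array([0.5, 1.0])
print(cycle_loss(y, y, y, y))        # 0.0: a perfectly consistent cycle
print(cycle_loss(y, y + 0.1, y, y))  # > 0: the second pass drifted
```

A non-zero value signals that property information leaked into the invariant subspace, which is precisely what the penalty suppresses during training.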
These adjustments address some of the issues of the STIB [44], which relies on adversarial training, mutual information estimation (which can be difficult in high dimensions [37]) and a bijective mapping, all of which can make training challenging. In contrast to the work of [14], we impose a novel cycle consistency loss on the predicted outputs Y instead of the latent representation Z. A reason to consider Y rather than Z is that varying latent dimensionality leads to severe problems in the optimisation process, as it requires an adaptive rescaling of the different loss weights. To overcome this drawback, we close the full cycle and define the loss on the outputs. Appendix Sec. A.3 and Algorithm A.1 provide more information on the implementation.

As an implementation detail, we choose to concatenate the decoded Z_0 code with Z_1 in order to decode X̂, i.e. X̂ = dec_X(Z_1, Ŷ = dec_Y(Z_0)). This is an additional measure to ensure that Z_0 contains the information relevant for the property prediction Y and to prevent superfluous remaining information about the input X in the property space Z_0.

5 Experimental Evaluation

We evaluate the effectiveness of our proposed method w.r.t. (i) selection of a sparse representation with meaningful factors of variation (i.e. model selection) and (ii) enforcing conditional independence in the latent space between these factors. To this end, we conduct experiments on a synthetic dataset with knowledge about appropriate parameterisations to highlight the differences to existing models. Additionally, we evaluate our model on a real-world application with a focus on conditional invariance and the generation of novel samples.
To assess the performance of our model, we compare our approach to two state-of-the-art baselines: (i) the β-VAE [13], which is a typical baseline model in disentanglement studies, and (ii) the symmetry-transformation information bottleneck (STIB) [44], which ensures conditional invariance through adversarial training and is the direct competitor to our model. We adapt the β-VAE by adding a decoder for the property Y (similar to [11]) which takes only the subspace Z_0 as input. The latent space of the adapted β-VAE is split into two subspaces, as in the STIB and our model, but it has no explicit mechanism to enforce invariance. This setup can be viewed as an ablation study in which the β-VAE is the basis model of our approach without cycle consistency and sparsity constraints. The STIB provides an alternative approach for the same goal but with a different mechanism.

The objective of the supervised disentanglement approach is to ensure disentanglement of a fixed property with respect to variations in the invariant space Z_1. This is a slightly different setting than standard unsupervised disentanglement, and therefore standard disentanglement metrics might be less insightful. Instead, in order to test the property invariance, we first encode the inputs of the test set and fix the coordinates in the property subspace Z_0, which provides the prediction Ŷ. Then we sample uniformly at random in Z_1 (plus/minus one standard deviation), decode the generated X̃ and perform a cycle through the network to obtain Ỹ. This provides the predicted property for the generated X̃. If conditional invariance between X and Y at a fixed Z_0 is warranted, the mean absolute error (MAE) between Ŷ and Ỹ should be close to zero. Thus, all models are trained to attain similar MAEs for reconstructing X and, in particular, predicting Y, to ensure a fair comparison.

Synthetic Dataset

In the first experiments, we focus on learning level sets of ellipses and ellipsoids mapped into five dimensions.
We consider these experiments as they allow a clear interpretation and visualisation of fixing a property, i.e. choosing the ellipse curve or ellipsoid surface, and known low-dimensional parameterisations are readily available. To this end, we sample data points X_original uniformly at random from U([−1, 1]^{d_X}) and calculate as the corresponding one-dimensional properties Y_original the ellipse curves (d_X = 2) and ellipsoid surfaces (d_X = 3) rotated by 45° in the X_1X_2-plane. In addition, we add Gaussian noise to the property Y_original.

Level sets are usually defined implicitly (see Appendix Eq. (A.9)). Common parameterisations consider polar coordinates (x, y) = (r cos φ, r sin φ) for the ellipse and spherical coordinates (x, y, z) = (r cos φ sin θ, r sin φ sin θ, r cos θ) for the ellipsoid, with radius r ∈ [0, ∞), (azimuth) angle φ ∈ [0, 2π) in the X_1X_2-plane, and polar angle θ ∈ [0, π] measured from the X_3 axis. The goal of our experiment is to identify a low-dimensional parameterisation which captures the underlying radial and angular components, i.e. to identify latent dimensions which correspond to the parameters (r, φ) and (r, φ, θ).

In a real-world scenario, we typically do not have access to the underlying generating process providing X_original and the property Y_original, but only a transformed view of these quantities. To reflect this, we map the input X_original into a five-dimensional space (d_X = 5), i.e. X_original ∈ [−1, 1]^{N×d_X} → X ∈ R^{N×5}, and the property Y_original into a three-dimensional space (d_Y = 3), i.e. Y_original ∈ R^{N×1}_+ → Y ∈ R^{N×3}. Details on the architecture and training can be found in Appendix Sec. A.4. We use fully-connected layers for our encoder and decoder networks. Note that in our model the noise level is fixed at σ_noise = 1 w.l.o.g. (see Sec. 4.1). We choose an 8-dimensional latent space, with 3 dimensions reserved for the property subspace Z_0 and 5 dimensions for the invariant subspace Z_1.
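The construction above can be sketched for the ellipse case (d_X = 2). The semi-axes, the noise scale, and the linear lifts into the observed five- and three-dimensional spaces are placeholders here; the paper's exact choices are in its Appendix A.4:

```python
import numpy as np

rng = np.random.default_rng(0)
N, a, b = 1000, 1.0, 0.5  # sample count and hypothetical semi-axes

# Inputs uniform in [-1, 1]^2; property = ellipse level evaluated after a 45-degree rotation.
X_original = rng.uniform(-1.0, 1.0, size=(N, 2))
t = np.pi / 4.0
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X_rot = X_original @ R.T
Y_original = (X_rot[:, 0] / a) ** 2 + (X_rot[:, 1] / b) ** 2
Y_noisy = Y_original + rng.normal(scale=0.01, size=N)  # noise scale is an assumption

# Lift to the observed spaces; random linear maps stand in for the paper's mappings.
X = X_original @ rng.normal(size=(2, 5))        # X in R^{N x 5}
Y = Y_noisy[:, None] @ rng.normal(size=(1, 3))  # Y in R^{N x 3}
print(X.shape, Y.shape)  # (1000, 5) (1000, 3)
```

The learning task is then to recover, from (X, Y) alone, one latent dimension for the radius and one for the angle.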
We consider a generous latent space with d_{Z_1} = d_X = 5 and d_{Z_0} = d_Y = 3 to evaluate the sparsity and model selection.

Results: All models attain similar MAEs for the X reconstruction and the Y prediction, but differ in the property invariance, as summarised in Table 1. Our model learns more invariant representations, with a difference of several factors w.r.t. the property invariance in both experiments. In Fig. 2(a), signal vs. noise for the different models is presented. The standard deviation σ_signal is calculated as the sample standard deviation of the learned means in the respective latent dimension. The sampling noise σ_noise is optimised as a free parameter during training. We consider a latent dimension to be informative or selected if the signal exceeds the noise. The sparsest solution is obtained by our model, with one latent dimension selected in the property subspace Z_0 and one in the invariant subspace Z_1.

In Fig. 2(b), we examine the obtained solution more closely in the original data space by mapping back from d_X = 5 to d_X = 2 dimensions. We consider ten equidistant values in the selected Z_0 dim. 1 and sample points in the selected Z_1 dim. 8. The different colours represent fixed values in Z_0, with latent traversal in Z_1 dim. 8 reconstructing the full ellipse. This means that the selected latent dim. 8 contains all relevant information at a given coordinate in Z_0, while dims. 4 to 7 do not contain any relevant information. We can relate the selected dim. 1 in Z_0 to the radius r and dim. 8 in Z_1 to the angle φ.

For the ellipsoid (d_X = 3) we obtain qualitatively the same results as for the ellipse. Again, only our model selects the correct number of latent factors, with one in Z_0 and two in Z_1 (see Fig. 2(c)). The latent traversal results in Fig. 2(d) are more intricate to interpret. For latent dim. 6, we obtain a representation which can be interpreted as encoding the polar angle θ. Traversal in latent dim. 8 yields closed curves in three dimensions which can be viewed as an orthogonal representation to dim. 6 and be interpreted as an encoding of the azimuth angle φ. In both Fig. 2(b) and 2(d), the last plot shows sampling in the selected Z_1 dimensions for fixed Z_0 (i.e. fixed property Y) and reconstructs the full ellipse and ellipsoid.

(Fig. 2 caption, continued: The last plot (red borders) samples in all selected dimensions, which reconstructs the full ellipse and ellipsoid, respectively. We intentionally did not sample the ellipsoid surfaces completely to allow seeing the surfaces underneath.)

Although the β-VAE and STIB perform equally well at reconstructing and predicting on the test set, these models do not consistently lead to sparse and easily interpretable representations which allow direct traversal on the level sets, as shown for our model. The presented results remain qualitatively the same for reruns of the models.

Small Organic Molecules (QM9)

As a more challenging example, we consider the QM9 dataset [31], which includes 133,885 organic molecules. The molecules consist of up to nine heavy atoms (C, O, N, and F), not counting hydrogen. Each molecule comes with corresponding chemical properties computed with Density Functional Theory methods. In our experiments, we select a subset with a fixed stoichiometry (C₇O₂H₁₀) which consists of 6,093 molecules. We choose the band gap energy as the property. Details on the architecture and training can be found in Appendix Sec. A.5. We use fully-connected layers for our encoder and decoder. For the input X we use the bag-of-bonds [12] descriptor as a translation-, rotation-, and permutation-invariant representation of molecules, which involves 190 dimensions. The latent space size is 17, where Z_0 is 1-dimensional and Z_1 is 16-dimensional. To evaluate the invariance, we first adjust the regularisation loss weights for a fair comparison of the models.
The weights for the irrelevance loss in the STIB and the invariance loss terms in our model were increased until a drop in reconstruction and prediction performance compared to the β-VAE results was noticeable.

Results: Table 1 summarises the results. On a test set of 300 molecules, all models achieve a similar MAE of 0.01 for the reconstruction of X. For the prediction of the band gap energies Y, a MAE of approx. 4 kcal mol⁻¹ is achieved. The invariance is computed on the basis of 25 test molecules and 400 samples generated for each reference molecule. Similarly to the synthetic experiments, the STIB model performs almost twice as well as the β-VAE, while our model yields a distinctly better invariance of 1.34 kcal mol⁻¹ than both models. With this result, we can generate novel molecules which are very close to a fixed property. This capability is illustrated in Fig. 3. For two reference molecules in the test set, we generate 2,000 new molecules by sampling uniformly at random within one standard deviation in the invariant subspace Z_1 and keeping the reference property value, i.e. fixed Z_0 coordinates. We show three such examples in Fig. 3a and 3b.
It allows for semantically structuring the latent space on the basis of the semantic knowledge about property Y. Finally, these results highlight that our method is able to inherently select the correct model. Although the β-VAE and STIB are capable of attaining similar reconstruction and prediction errors, reconstructing level sets in these models requires a more complicated combination of latent dimensions and hinders interpretation. Therefore, only our model makes an interpretation of the learned latent representation feasible.

Cycle consistency enforces conditional invariance. Table 1 shows that for all experiments, our model exhibits the best property invariance at otherwise similar reconstruction and prediction errors. The β-VAE has no mechanism to ensure invariance and thus performs worst. Although STIB relies on adversarial training to minimise the mutual information (MI) between Z_1 and Y, the alternating training and MI estimation can pose practical obstacles, especially in cases with high-dimensional latent spaces. Our cycle-consistency-based approach has the same benefits and is more practical. In particular, our approach can operate on arbitrarily large latent spaces in both Z_0 and Z_1 because of the inherent sparsity of the solution. Typically, an upper limit for the size of the property subspace Z_0 and the invariant subspace Z_1 can be defined by the dimensionality of the property Y and the input X (see Fig. 2). Notably, although our model is trained and tested on data in the interval [−1, 1]^{d_X}, d_X ∈ {2, 3}, the results generalise well beyond this interval, as long as a part of the level curve or surface was encountered during training (see Fig. 2(b)). This can be directly attributed to the regularisation of the latent space through additional sampling and the cycle consistency of generated samples.
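A toy numerical illustration of this cycle-consistency penalty on generated samples (the linear decode/encode pair and all names are hypothetical stand-ins, not the paper's trained networks): latent codes are decoded, the generated inputs are re-encoded, and any drift in the property code Z_0 is penalised.

```python
import random

# Toy latent space: 1-D property code z0, 2-D invariant code z1.
def decode(z0, z1):
    # x has 3 dims; the first channel carries the property information.
    return [2.0 * z0, z1[0] + z1[1], z1[0] - z1[1]]

def encode_z0(x):
    return x[0] / 2.0  # exact inverse of the property channel of `decode`

def cycle_consistency_loss(decoder, n_samples=200, seed=0):
    """Average drift of the property code after a decode -> re-encode cycle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z0 = rng.uniform(-1.0, 1.0)
        z1 = [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
        x_gen = decoder(z0, z1)        # generate a sample from the latent code
        z0_cycled = encode_z0(x_gen)   # re-encode the generated sample
        total += abs(z0_cycled - z0)   # penalise drift in the property code
    return total / n_samples

# A decoder that breaks the inverse relation accumulates a positive penalty:
def broken_decode(z0, z1):
    return [2.5 * z0, z1[0] + z1[1], z1[0] - z1[1]]

print(cycle_consistency_loss(decode))        # 0.0 for the exactly invertible pair
print(cycle_consistency_loss(broken_decode))
```

Minimising such a penalty during training pushes the decoder and encoder toward agreement on Z_0 for generated samples, which is the mechanism behind the conditional invariance discussed above.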
These mechanisms impose conditional invariance which, in turn, facilitates generalisation and exploration of new samples that share the same level set or symmetry-conserved property.

Conditional invariance improves targeted molecule discovery. Conditional invariance is of great importance for the generative potential of our model. In Fig. 3 we explore, by way of example, the molecular structures for two reference molecules. By sampling in the invariant space Z_1, we discover molecular structures with property values very close to the fixed targets, i.e. the mean absolute deviation is below the model's prediction error. This experiment demonstrates the ability to generate molecules with self-consistent properties, relying on the improved conditional invariance provided by our model, and thereby facilitates the discovery of novel molecules with desired chemical properties.

In conclusion, we demonstrated on synthetic and real-world use cases that our method allows selecting a correct model and improves interpretability as well as exploration of the latent representation. In our synthetic study, we focused on simple cases of connected and convex level sets. To generalise these findings, more general level sets should be investigated in order to relate to further real-world scenarios. In addition, our approach could be applied to medical applications, where the selection of interpretable models is of particular relevance.

Fig. 1: Model illustration. N data points and dimensions d_X ∈ {2, 3}. See Appendix Sec. A.4 for more details and Fig. A3 for an illustration of the dataset.

Fig. 2: Results for ellipse and ellipsoid in original input space (d_X ∈ {2, 3}). (a,c) Illustration of the standard deviation in the different latent dimensions, where the property subspace Z_0 spans dimensions 1-3 and the invariant subspace Z_1 spans dimensions 4-8. Grey bars indicate the sampling noise σ_noise and orange bars the sample standard deviation σ_signal in the respective dimension.
We consider a latent dimension to be selected if the signal exceeds the noise, i.e. orange bars are visible. Only our model selects the expected number of parameters. (b,d) Illustration of latent traversal in latent dimensions 4 to 8 of our model in the original input space, for fixed values in property space dimension 1 (different colours). (b) The selected dimension 8 represents the angular component ϕ and reconstructs the full ellipse curves. (d) The selected dimension 6 represents the polar angle θ, while dimension 8 can be related to the azimuth angle ϕ.

Fig. 3: Illustration of the generative capability of our model for two reference molecules (rows). (a) The first molecule is the reference molecule with a fixed reference band gap energy. We display three samples and their predicted band gap energies out of 2,000 samples. (b) Boxplots for the distribution of the predicted property. The star symbol marks the fixed reference band gap energy. The shaded background depicts the prediction error range of the model.

We select the nearest neighbours in the test set for visualisation of the molecular structure. For all samples, the boxplots in Fig. 3b illustrate the distribution of the predicted property values. The spread of predicted property values is generally smaller than the model prediction error of 4.06 kcal mol−1, and the predicted property of a majority of samples is close to the target property value.

Table 1: Mean absolute errors (MAE) for reconstruction of input X, prediction of property Y, and property invariance. Ellipse/Ellipsoid: MAEs on 5-dim. input X and 3-dim. property Y are depicted. Molecules: MAEs on input X and property Y as the band gap energy in kcal mol−1.

        |      Ellipse       |      Ellipsoid     |      Molecules
Model   |  X     Y    Invar. |  X     Y    Invar. |  X     Y    Invar.
β-VAE   | 0.03  0.25  0.058  | 0.02  0.25  0.153  | 0.01  4.01  5.66
STIB    | 0.03  0.25  0.027  | 0.03  0.25  0.083  | 0.01  4.08  3.05
Ours    | 0.04  0.25  0.006  | 0.05  0.25  0.006  | 0.01  4.06  1.34

Implementation: https://github.com/bmda-unibas/CondInvarianceCC

References

[1] Abadi, M., et al.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), https://www.tensorflow.org/
[2] Achille, A., Soatto, S.: Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2018)
[3] Ainsworth, S.K., Foti, N.J., Lee, A.K.C., Fox, E.B.: oi-VAE: Output interpretable VAEs for nonlinear group factor analysis. In: Proceedings of the 35th International Conference on Machine Learning (2018)
[4] Alemi, A.A., Fischer, I., Dillon, J.V., Murphy, K.: Deep variational information bottleneck. In: International Conference on Learning Representations (2017)
[5] Bouchacourt, D., Tomioka, R., Nowozin, S.: Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In: AAAI Conference on Artificial Intelligence (2018)
[6] Chechik, G., Globerson, A., Tishby, N., Weiss, Y.: Information bottleneck for gaussian variables. Journal of Machine Learning Research (2005)
[7] Chen, R.T., Li, X., Grosse, R., Duvenaud, D.: Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942 (2018)
[8] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657 (2016)
[9] Chicharro, D., Besserve, M., Panzeri, S.: Causal learning with sufficient statistics: an information bottleneck approach. arXiv preprint arXiv:2010.05375 (2020)
[10] Creswell, A., Mohamied, Y., Sengupta, B., Bharath, A.A.: Adversarial information factorization (2018)
[11] Gómez-Bombarelli, R., Wei, J.N., Duvenaud, D., Hernández-Lobato, J.M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T.D., Adams, R.P., Aspuru-Guzik, A.: Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science 4(2), 268-276 (2018)
[12] Hansen, K., Biegler, F., Ramakrishnan, R., Pronobis, W., von Lilienfeld, O.A., Müller, K.R., Tkatchenko, A.: Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space. The Journal of Physical Chemistry Letters 6(12), 2326-2331 (2015)
[13] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., Lerchner, A.: beta-VAE: Learning basic visual concepts with a constrained variational framework. In: International Conference on Learning Representations (2017)
[14] Jha, A.H., Anand, S., Singh, M.K., Veeravasarapu, V.S.R.: Disentangling factors of variation with cycle-consistent variational auto-encoders. In: European Conference on Computer Vision (2018)
[15] Keller, S.M., Samarin, M., Torres, F.A., Wieser, M., Roth, V.: Learning extremal representations with deep archetypal analysis. International Journal of Computer Vision 129(4), 805-820 (2021)
[16] Keller, S.M., Samarin, M., Wieser, M., Roth, V.: Deep archetypal analysis. In: German Conference on Pattern Recognition, pp. 171-185. Springer (2019)
[17] Kim, H., Mnih, A.: Disentangling by factorising. In: International Conference on Machine Learning, pp. 2649-2658. PMLR (2018)
[18] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015)
[19] Kingma, D.P., Mohamed, S., Rezende, D.J., Welling, M.: Semi-supervised learning with deep generative models. In: Advances in Neural Information Processing Systems, pp. 3581-3589 (2014)
[20] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: International Conference on Learning Representations (2014)
[21] Klys, J., Snell, J., Zemel, R.: Learning latent subspaces in variational autoencoders. In: Advances in Neural Information Processing Systems (2018)
[22] Kusner, M.J., Paige, B., Hernández-Lobato, J.M.: Grammar variational autoencoder. In: International Conference on Machine Learning (2017)
[23] Lample, G., Zeghidour, N., Usunier, N., Bordes, A., Denoyer, L., Ranzato, M.: Fader networks: Manipulating images by sliding attributes. In: Advances in Neural Information Processing Systems (2017)
[24] Lin, Z., Thekumparampil, K., Fanti, G., Oh, S.: InfoGAN-CR and ModelCentrality: Self-supervised model training and selection for disentangling GANs. In: International Conference on Machine Learning, pp. 6127-6139. PMLR (2020)
[25] Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., Bachem, O.: Challenging common assumptions in the unsupervised learning of disentangled representations. In: International Conference on Machine Learning, pp. 4114-4124. PMLR (2019)
[26] Louizos, C., Shalit, U., Mooij, J.M., Sontag, D., Zemel, R., Welling, M.: Causal effect inference with deep latent-variable models. In: Advances in Neural Information Processing Systems 30, pp. 6446-6456. Curran Associates, Inc. (2017)
[27] Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R.S.: The variational fair autoencoder. In: International Conference on Learning Representations (2016)
[28] Nesterov, V., Wieser, M., Roth, V.: 3DMolNet: A generative network for molecular structures (2020)
[29] Parbhoo, S., Wieser, M., Roth, V., Doshi-Velez, F.: Transfer learning from well-curated to less-resourced populations with HIV. In: Proceedings of the 5th Machine Learning for Healthcare Conference (2020)
[30] Parbhoo, S., Wieser, M., Wieczorek, A., Roth, V.: Information bottleneck for estimating treatment effects with systematically missing covariates. Entropy 22(4), 389 (2020)
[31] Ramakrishnan, R., Dral, P.O., Rupp, M., von Lilienfeld, O.A.: Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data 1(1), 1-7 (2014)
[32] Raman, S., Fuchs, T.J., Wild, P.J., Dahl, E., Roth, V.: The bayesian group-lasso for analyzing contingency tables. In: Proceedings of the 26th Annual International Conference on Machine Learning (2009)
[33] Rey, M., Roth, V., Fuchs, T.: Sparse meta-gaussian information bottleneck. In: International Conference on Machine Learning, pp. 910-918. PMLR (2014)
[34] Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: International Conference on Machine Learning (2014)
[35] Robert, T., Thome, N., Cord, M.: DualDis: Dual-branch disentangling with adversarial learning (2019)
[36] Simon, N., Friedman, J., Hastie, T., Tibshirani, R.: A sparse-group lasso. Journal of Computational and Graphical Statistics (2013)
[37] Song, J., Ermon, S.: Understanding the limitations of variational mutual information estimators. arXiv preprint arXiv:1910.06222 (2019)
[38] Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B) (1996)
[39] Tishby, N., Pereira, F.C., Bialek, W.: The information bottleneck method. In: Allerton Conference on Communication, Control and Computing (1999)
[40] Wieczorek, A., Roth, V.: Causal compression. arXiv preprint arXiv:1611.00261 (2016)
[41] Wieczorek, A., Roth, V.: On the difference between the information bottleneck and the deep information bottleneck. Entropy 22(2), 131 (2020)
[42] Wieczorek, A., Wieser, M., Murezzan, D., Roth, V.: Learning sparse latent representations with the deep copula information bottleneck. In: International Conference on Learning Representations (2018)
[43] Wieser, M.: Learning Invariant Representations for Deep Latent Variable Models. Ph.D. thesis, University of Basel (2020)
[44] Wieser, M., Parbhoo, S., Wieczorek, A., Roth, V.: Inverse learning of symmetries. In: Advances in Neural Information Processing Systems (2020)
[45] Wu, M., Hughes, M.C., Parbhoo, S., Zazzi, M., Roth, V., Doshi-Velez, F.: Beyond sparsity: Tree regularization of deep models for interpretability (2017)
[46] Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: International Conference on Computer Vision (2017)
Locus: LiDAR-based Place Recognition using Spatiotemporal Higher-Order Pooling

Kavisha Vidanapathirana, Peyman Moghadam, Ben Harwood, Muming Zhao, Sridha Sridharan, Clinton Fookes

DOI: 10.1109/icra48506.2021.9560915
arXiv: 2011.14497

Abstract: Place Recognition enables the estimation of a globally consistent map and trajectory by providing non-local constraints in Simultaneous Localisation and Mapping (SLAM). This paper presents Locus, a novel place recognition method using 3D LiDAR point clouds in large-scale environments. We propose a method for extracting and encoding topological and temporal information related to components in a scene and demonstrate how the inclusion of this auxiliary information in place description leads to more robust and discriminative scene representations. Second-order pooling along with a nonlinear transform is used to aggregate these multi-level features to generate a fixed-length global descriptor, which is invariant to the permutation of input features. The proposed method outperforms state-of-the-art methods on the KITTI dataset. Furthermore, Locus is demonstrated to be robust across several challenging situations such as occlusions and viewpoint changes in 3D LiDAR point clouds. The open-source implementation is available at: https://github.com/csiro-robotics/locus
I. INTRODUCTION

Place Recognition (PR) is an essential capability required by autonomous robots and driverless cars, which enables the recognition of previously visited places under changing viewpoints and environmental conditions. Place recognition is crucial for various applications such as loop closure detection for large-scale, global data association in Simultaneous Localization and Mapping (SLAM) or metric localization within known maps [1], [2]. In this paper, we consider the problem of place recognition based on 3D LiDAR point clouds.
The majority of current state-of-the-art LiDAR-based place recognition methods extract representations of 3D point clouds based on either local or global descriptors. Local descriptors such as keypoint features [3] often suffer from low repeatability under noise and minor changes in the environment. The demand for extracting discriminative and generalizable global descriptors from point clouds has led to the development of several different techniques [4]-[6]. However, the majority of global descriptors are not robust to viewpoint variations, occlusions and dynamic scenes [7]. A potential reason for this fragility is that these methods are inherently not capable of capturing topological and temporal interactions between different components in the scene. Such auxiliary information is vital for the construction of robust and discriminative representations, since a higher-level understanding is needed to distinguish between different scenes of similar structure.

To this end, we propose a novel place recognition method (named Locus) which effectively exploits multi-level features for global place description using 3D LiDAR data. We define multiple levels of feature representations for each point cloud frame, where a point cloud frame is defined as the points accumulated from one sweep of the LiDAR [8]. First, segment features are extracted to encode structural information of each point cloud frame. This segment-based representation combines the advantages of both local and global representations while not suffering from their individual drawbacks: compared to local keypoint-based representations, segments are more descriptive and more robust to noise and changing environments. Next, spatial feature pooling encodes topological relationships between segments within a point cloud frame. Capturing topological relationships between components of a scene enables discrimination between scenes of similar composition but different arrangement.
Finally, temporal feature pooling encodes co-occurrences of segments in the past k_t point cloud frames, providing robustness to sensor motion and dynamic objects. Once multi-level features are extracted, second-order pooling aggregates information of local features over a point cloud frame to form a holistic representation for place recognition. Second-order pooling captures the multiplicative interactions between the multi-level features and outputs a fixed-length descriptor which is invariant to the number and permutation of the input features. Furthermore, the fixed dimension of the global descriptor keeps the computational complexity bounded.

Our main contributions are summarised as follows:
• We introduce multi-level features which encode structural appearance, topological relationships and temporal correspondences related to components in a scene.
• We formulate the generation of a global descriptor which encodes these multi-level features into a single viewpoint-invariant representation using second-order pooling, and demonstrate how these multi-level features contribute to place recognition performance.
• Our proposed method (Locus) outperforms the state-of-the-art place recognition methods on the KITTI dataset.
• We quantitatively evaluate the robustness of our Locus method on a variety of challenging scenarios such as viewpoint changes and occlusion.

Fig. 1: The overall framework of the proposed Locus method. Segments extracted from a LiDAR point cloud frame are described using two complementary sets of features. One feature describes the structural appearance of a segment while the other encodes topological and temporal information related to a segment. The two sets of features are aggregated using second-order pooling (O2P) followed by a Power-Euclidean transform (PE) to obtain a global descriptor of the point cloud.

II. RELATED WORK

A. Point cloud representation and descriptor generation

To address the challenge of place recognition, the scene-level point cloud is often encoded in one of three ways: a set of local descriptors, a single global descriptor, or a set of object/segment descriptors. Local descriptor based methods first detect a set of keypoints in the point cloud and then form local descriptors by encoding information in each keypoint neighbourhood [3], [9]. Local descriptors and keypoint detection suffer from low repeatability in noisy point clouds and changing environments.

Global descriptors aim to describe the entire point cloud by a single vector representation. M2DP [4] generates a global descriptor by projecting the points into multiple 2D planes and calculating density signatures. PointNetVLAD [5] extracts a global descriptor using an end-to-end process based on PointNet and NetVLAD [10]. PointNetVLAD ignores the spatial distribution of features and hence lacks descriptive power. LPD-Net [11] addresses this limitation of PointNetVLAD by using adaptive local feature extraction along with a graph-based neighborhood aggregation module. Recently, DH3D [12] proposed a network which learns both the keypoint detection and the local description, and generates a global descriptor using NetVLAD. The aforementioned global descriptors do not demonstrate rotational invariance and often fail upon reverse revisits.

The current state of the art includes several rotation-invariant global descriptors. ScanContext [6] records the maximum height of points in a 2D grid-map of the scene and computes pairwise descriptor distances by comparing distances to all column-shifted variants of the query descriptor to achieve rotational invariance. Intensity ScanContext [13] extends ScanContext by including the intensity return of the LiDAR sensor.
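As an illustration of the ScanContext idea summarised above (a simplified sketch, not the reference implementation of [6]; the grid resolution and distance function are illustrative):

```python
import numpy as np

def scan_context(points, num_rings=8, num_sectors=60, max_range=80.0):
    """Simplified ScanContext: a polar (ring x sector) grid storing the
    maximum point height per bin."""
    desc = np.zeros((num_rings, num_sectors))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi            # map angle to [0, 2*pi)
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    for i in range(points.shape[0]):
        desc[ring[i], sector[i]] = max(desc[ring[i], sector[i]], z[i])
    return desc

def sc_distance(d1, d2):
    """Min over all column shifts of the mean column-wise cosine distance."""
    num_sectors = d1.shape[1]
    best = np.inf
    for shift in range(num_sectors):
        d2s = np.roll(d2, shift, axis=1)
        num = (d1 * d2s).sum(axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0)
        valid = den > 0
        if valid.any():
            best = min(best, 1.0 - np.mean(num[valid] / den[valid]))
    return best
```

A rotation of the sensor about the vertical axis shifts the descriptor's columns, so taking the minimum over all column shifts makes the distance approximately rotation invariant, at the cost of the expensive comparison the text notes.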
The ScanContext and its variants are not capable of capturing scene composition or topological relationships and rely on expensive distance calculation to ensure rotational-invariance. Recently, Kong et al. [14] represented point clouds as a semantic graph and recognize places using a graph similarity network. Their network is capable of capturing topological and semantic information from the point cloud and also achieves rotational-invariance along with state-of-the-art performance. However, due to the use of semantic segmentation, their method suffers from the following two bottlenecks. First, it is dependant on the existence of pre-defined semantic classes in the test dataset (i.e., domain gap problem). Second, since each segment is only represented by its class label, the method is not capable of differentiating between two segments of the same class and thus loses valuable information related to intra-class variations. Euclidean segment based representations [7] are less affected by the aforementioned drawbacks while still being able to capture topological and semantic information of the components in a scene. SegMatch [7] and SegMap [15] address the challenges of local and global descriptors by constructing a global map along the trajectory and performs recognition via segment-wise kNN retrieval on this map followed by a geometric consistency check. In this paper, we also leverage from segment descriptors for point cloud encoding but we avoid the construction of a global map and instead treat place recognition as a retrieval problem similar to recent LiDAR-based place recognition research [5], [6], [13], [14]. In this sense, we avoid segment-wise kNN search and construct a global descriptor through the aggregation of multi-level segment features. B. 
B. Feature aggregation

Bag-of-Words (BoW) [16] and its higher-order variants (Vector of Locally Aggregated Descriptors (VLAD) [17], NetVLAD [10], Fisher Vector (FV) [18]) are popular methods in many retrieval tasks which construct global descriptors by aggregating local descriptors using zeroth, first or second order statistics, respectively. These codebook-based aggregation methods require a preprocessing step to learn the codebook (a vocabulary in standard BoW, cluster centers in VLAD, and Gaussian Mixture Models (GMM) in FV) and do not transfer well to unseen environments. Other higher-order aggregation methods such as Second Order Pooling (O2P) [19] do not require such preprocessing steps and hence can be used for online place recognition in unseen environments. They also tend to be more efficient at run time, since the description stage does not require finding the nearest codebook word for each feature. Additionally, O2P methods allow the aggregation of complementary features [20] while maintaining a fixed-length descriptor without violating permutation invariance. In this paper, we propose to use second-order pooling to aggregate multiple levels of feature representations (structural, topological and temporal) into a fixed-length global descriptor for place recognition.

C. Incorporating temporal information

In place recognition, temporal information has been used almost exclusively in the retrieval stage. In visual place recognition (VPR), SeqSLAM [21] incorporated sequential information by comparing feature similarity over time and demonstrated dramatic performance improvement when dealing with changing appearance. Lynen et al. [22] modeled sequential retrieval as a probability density estimation in votes versus travel distance space. More recently, SeqLPD [23] extended LPD-Net with SeqSLAM for LiDAR-based place recognition. Our method differs from these sequential retrieval methods by incorporating temporal information in the place description stage instead.
Recently, in VPR, Delta Descriptors [24] outlined how a sequential visual representation provides inherent robustness to variations in camera motion and environmental changes.

III. METHOD

This section describes our proposed method (named Locus) for LiDAR-based place description, including the segment based representation of the point cloud, the generation of the multi-level features, and their second-order aggregation (see Fig. 1).

A. Segment-based representation

Segments are defined as Euclidean clusters of points in the point cloud representation. Our segment extraction is performed similarly to the SegMatch implementation [7], by first removing the ground plane and then extracting Euclidean clusters of points. The maximum Euclidean distance between two neighbouring points in the same segment is set to 0.2 m, and the segments contain a minimum of 100 and a maximum of 15000 points. Thus, each input point cloud $P \in \mathbb{R}^{N \times 3}$ is represented by a set of segments $S = \{s_1, \ldots, s_m\}$, where $s_i \in \mathbb{R}^{N_i \times 3}$, and the number of segments $m$ varies depending on the environment and range of the point cloud. For each segment, a compact deep feature which encodes the structural appearance of the segment is obtained using the SegMap-CNN network proposed in [15]. The network represents each segment in a voxel grid of 32x32x16, of which the voxel sizes (0.1 m by default) are scaled in order to fit larger segments. The description network consists of three 3D convolutional layers plus max-pool layers, followed by two fully connected layers. For a given set of $m$ segments $S$, it outputs a set of compact features $F_a = \{f_{a1}, \ldots, f_{am}\}$ (where $f_{ai} = f_a(s_i) \in \mathbb{R}^d$, $d = 64$) which discriminate segments based on structural appearance.

B. Spatiotemporal Pooling

Incorporating topological and temporal information in place description has many advantages when dealing with changing environments and varying sensor motion.
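The Euclidean clustering of Section III-A can be sketched as follows. This is a brute-force, single-linkage stand-in (no ground-plane removal); the 0.2 m gap and the 100-15000 point bounds come from the text, while the toy cloud and the relaxed `min_pts` in the example are purely illustrative:

```python
import numpy as np

def euclidean_segments(points, max_gap=0.2, min_pts=100, max_pts=15000):
    """Single-linkage Euclidean clustering (brute force): two points fall in
    the same segment if a chain of neighbours at most `max_gap` apart links
    them; segments outside the [min_pts, max_pts] size bounds are dropped."""
    n = len(points)
    adj = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1) <= max_gap
    visited = np.zeros(n, dtype=bool)
    segments = []
    for seed in range(n):
        if visited[seed]:
            continue
        visited[seed] = True
        stack, members = [seed], []
        while stack:                       # flood-fill one connected component
            i = stack.pop()
            members.append(i)
            for j in np.nonzero(adj[i] & ~visited)[0]:
                visited[j] = True
                stack.append(int(j))
        if min_pts <= len(members) <= max_pts:
            segments.append(sorted(members))
    return segments

# Toy example (min_pts lowered so the tiny cloud is not filtered out):
cloud = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [5.0, 0, 0], [5.1, 0, 0]])
print(euclidean_segments(cloud, min_pts=2))  # [[0, 1, 2], [3, 4]]
```

A kd-tree (e.g. `scipy.spatial.cKDTree`) would replace the dense distance matrix for realistic cloud sizes.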
With the aim of encoding this information, we compute a complementary set of features $F_b \in \mathbb{R}^{m \times d}$ for each point cloud frame $P_n$, in addition to the structural appearance features $F_a \in \mathbb{R}^{m \times d}$ described in Section III-A. This is achieved via two stages of feature pooling: spatial and temporal. Feature pooling computes a new feature for a segment by taking the weighted average of the features of all related segments. Encoding topological information is achieved via feature pooling based on spatial relationships within a point cloud frame. Temporal information is encoded via pooling features based on temporal correspondences across multiple point cloud frames. For a query segment $s_i^{P_n}$ from the current point cloud frame $P_n$, we find topological relationships and temporal correspondences with all other segments $s_j^{P_l} \in \mathcal{S}_{n,k_t}$ from the current frame and $k_t$ frames into the immediate past,

$$\mathcal{S}_{n,k_t} = \{S^{P_n}, \ldots, S^{P_{n-k_t}}\}, \quad S^{P_l} = \{s_1^{P_l}, \ldots, s_{m_l}^{P_l}\}. \qquad (1)$$

If a segment $s_j^{P_l}$ is spatially or temporally related to the query segment $s_i^{P_n}$, its structural appearance feature $f_a(s_j^{P_l})$ is included in the feature pooling of the query segment.

1) Spatial Feature Pooling: We define spatial relationships between segments through a directed graph $G = (V, E)$. Vertices $V = \{1, \ldots, m\}$ represent the segments $S = \{s_1, \ldots, s_m\}$ in the current point cloud frame, and edges $E \subseteq V \times V$ represent which segments relate to which other segments. $G$ is constructed as a kNN graph where each segment is connected to its $k_s$ nearest neighbours ($k_s = 5$), and the distance between segments is calculated using the minimum translational distance (MTD) [25] between the convex hulls of the segments. We use the QuickHull algorithm [26] to compute convex hulls.
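A minimal sketch of the two ingredients just described: the segment-to-segment distance and the softmax weighting used for pooling. The paper evaluates the distance over QuickHull vertices; brute-forcing over all points, as below, can only return an equal or smaller value, which is adequate for illustration:

```python
import numpy as np

def mtd(seg_i, seg_j):
    """Minimum distance between two non-overlapping segments: the smallest
    Euclidean distance over all point pairs (the paper restricts this to
    convex-hull vertices for efficiency)."""
    d = np.linalg.norm(seg_i[:, None, :] - seg_j[None, :, :], axis=-1)
    return float(d.min())

def pooling_weights(dists, beta=0.1):
    """Softmax over negatively scaled distances: nearer neighbours in the
    kNN graph receive larger weights in the pooled average."""
    w = np.exp(-beta * np.asarray(dists, dtype=float))
    return w / w.sum()

seg_a = np.array([[0.0, 0, 0], [1.0, 0, 0]])
seg_b = np.array([[3.0, 0, 0], [4.0, 0, 0]])
print(mtd(seg_a, seg_b))  # 2.0
w = pooling_weights([2.0, 8.0])
# The spatially pooled feature is then w @ F_neighbours (a weighted average).
```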
Since the segment extraction process in Section III-A guarantees that segments do not overlap, the MTD $D$ can be computed as follows,

$$D(s_i, s_j) = \min\{\, \|p_{ix} - p_{jy}\| : \forall\, p_{ix} \in \hat{s}_i,\ p_{jy} \in \hat{s}_j \,\}, \qquad (2)$$

where $\|p_{ix} - p_{jy}\|$ represents the Euclidean distance between points $p_{ix}, p_{jy} \in \mathbb{R}^3$ and $\hat{s} = \mathrm{QuickHull}(s)$. The spatial feature pooling for segment $s_i$ is then carried out as,

$$\Phi(s_i) = \sum_{j : (i,j) \in E} \phi(i,j)\, f_a(s_j), \qquad (3)$$

$$\phi(i,j) = \mathrm{softmax}\{D(s_i, s_j)\} = \frac{\exp(-\beta \cdot D(s_i, s_j))}{\sum_{k : (i,k) \in E} \exp(-\beta \cdot D(s_i, s_k))}, \qquad (4)$$

where $\beta = 0.1$ is a smoothing factor. The spatially-pooled feature $\Phi(s_i)$ is essentially a weighted average of the structural-appearance features ($f_a$) of the 5 closest segments to segment $s_i$. This captures information on the immediate neighbourhood of $s_i$ and thus contributes towards encoding topological information in the scene.

2) Temporal Feature Pooling: Temporal relationships are defined by segment correspondences between frames. The segment $s_i^{P_n}$ in the current point cloud frame $P_n$ will only relate to its corresponding segment in each of the $k_t$ previous frames ($k_t = 3$). Segment correspondence indices $S_c$ are calculated as in Algorithm 1, iteratively, $k_t$ times.

Algorithm 1: Estimate temporal segment correspondences from frame $P_l$ to $P_{l-1}$
  Result: Corresponding segment indices $S_c \in \mathbb{R}^m$
  $S^{P_l}, S^{P_{l-1}}$ // segments;
  $F_a^{P_l}, F_a^{P_{l-1}} \in \mathbb{R}^{m \times d}$ // features;
  $C^{P_l}, C^{P_{l-1}} \in \mathbb{R}^{m \times 4}$ // segment centroids;
  $T_l^{l-1} \in SE(3)$ // relative pose from $P_{l-1}$ to $P_l$;
  for $s_i \in S^{P_l}$ do
    $f_{ai}, c_i^l$ // feature, centroid of segment $s_i$ in $P_l$;
    $F_N \leftarrow kNN(f_{ai}, F_a^{P_{l-1}})$;
    $C_N \leftarrow rNN(T_l^{l-1} c_i^l, C^{P_{l-1}})$;
    $N \leftarrow \mathrm{intersection}(F_N, C_N)$;
    if $\mathrm{len}(N) > 0$ then $S_c[i] \leftarrow \arg\min(N)$ else $S_c[i] \leftarrow \emptyset$;
  end

First, $kNN$ finds the indices of the $k$ nearest neighbours of $f_{ai}$ from the set of features $F_a^{P_{l-1}}$ of the previous frame. Next, $rNN$ finds the indices of the nearest neighbour centroids from the previous frame within a radius $r = 1$ m.
To increase accuracy, the $rNN$ search takes into account the sensor's relative pose across frames, represented by the homogeneous transformation $T_l^{l-1}$. Finally, common elements in both nearest neighbour sets are found using the intersection function, and the function $\arg\min$ finds the index of the segment which minimises both the feature-space and the Euclidean-space distance. For the selected segment $s_i^{P_n}$ in the current frame, the set of corresponding segments can be obtained from $S_c$ as $\{s_i^{P_{n-1}}, \ldots, s_i^{P_{n-k_c}}\} = \{\mathcal{S}_{n,k_t}(j) : j \in S_c\}$. We note that $k_c \le k_t$ since correspondences can be lost between frames (when a segment which minimises both the feature-space and the Euclidean-space distance is not available in the previous frame). The temporal pooling for $s_i^{P_n}$ is then carried out as,

$$\Psi(s_i) = \sum_{j \in S_c} \psi(i,j)\, f_a(s_j), \quad s_j = \mathcal{S}_{n,k_t}(j), \qquad (5)$$

where $\psi(i,j)$ is calculated similarly to $\phi(i,j)$ in Eq. 4, with $D(s_i, s_j)$ replaced by $\|f_a(s_i) - f_a(s_j)\|$ and $k$ sampled from $S_c$. The aggregation of features across multiple sequential frames essentially magnifies the weight of features corresponding to highly repeatable segments (segments which are extracted at every frame in the sequence and mapped to a similar point in the $f_a$ feature space each time) and thus inherently down-weights non-repeatable segments. The final spatiotemporal feature $f_b$ is obtained as the average of the spatially-pooled feature $\Phi(s_i)$ and the temporally-pooled feature $\Psi(s_i)$,

$$f_b(s_i) = (\Phi(s_i) + \Psi(s_i))/2. \qquad (6)$$

C. Second-order pooling

Given a set of segments $S = \{s_1, \ldots, s_m\}$ and two complementary sets of features $F_a \in \mathbb{R}^{m \times d}$ and $F_b \in \mathbb{R}^{m \times d}$, the second-order pooling $F^{O2}$ of the features is defined as,

$$F^{O2} = \{F^{O2}_{xy}\}, \quad F^{O2}_{xy} = \max_{s \in S} f^{o2}_{xy}(s), \qquad (7)$$

where $F^{O2}$ is a matrix with elements $F^{O2}_{xy}$ ($1 \le x, y \le d$) and $f^{o2}(s) = f_a(s) f_b(s)^T \in \mathbb{R}^{d \times d}$ is the outer product of the two complementary features of segment $s$ ($f_a(s), f_b(s) \in \mathbb{R}^d$).
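One query step of the correspondence search in Algorithm 1 can be sketched as below. The centroids are assumed to be already expressed in a common frame (i.e. the relative pose $T$ has been applied), and since the paper's $\arg\min(N)$ is not spelled out, the candidate minimising the sum of the two distances is chosen here:

```python
import numpy as np

def temporal_match(f_q, c_q, F_prev, C_prev, k=3, r=1.0):
    """Match a query segment (feature f_q, centroid c_q) to the previous
    frame: intersect its k nearest neighbours in feature space with the
    centroids lying within radius r, then pick the candidate minimising
    both distances (here: their sum). Returns an index or None."""
    feat_d = np.linalg.norm(F_prev - f_q, axis=1)
    cent_d = np.linalg.norm(C_prev - c_q, axis=1)
    candidates = set(np.argsort(feat_d)[:k]) & set(np.nonzero(cent_d <= r)[0])
    if not candidates:
        return None  # correspondence lost for this frame
    return int(min(candidates, key=lambda i: feat_d[i] + cent_d[i]))

F_prev = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])   # previous-frame features
C_prev = np.array([[0.0, 0, 0], [0.5, 0, 0], [10.0, 0, 0]])  # previous-frame centroids
print(temporal_match(np.array([0.1, 0.0]), np.array([0.2, 0, 0]), F_prev, C_prev, k=2))  # 0
```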
This amounts to taking the element-wise maximum of the second-order features of all segments in the scene. In order to make the scene descriptor matrix $F^{O2}$ more discriminative, it is decomposed into its singular values as $F^{O2} = U \lambda V$ and transformed non-linearly using the Power-Euclidean (PE) transform [27], [28] into $F^{O2}_\alpha$, by raising each of its singular values to the power $\alpha$ as follows,

$$F^{O2}_\alpha = U \tilde{\lambda} V, \quad \tilde{\lambda} = \mathrm{diag}(\lambda_{1,1}^\alpha, \ldots, \lambda_{d,d}^\alpha), \qquad (8)$$

where $\alpha = 0.5$. The matrix $F^{O2}_\alpha$ is flattened and normalized to obtain the final global descriptor vector $g \in \mathbb{R}^{d^2}$. The aggregation of multi-level features using higher-order pooling has multiple advantages for segment-based place description. First, it allows the encoding of complementary features for each segment, thus enabling the incorporation of structural-appearance information along with topological and temporal information. Second, even though different point clouds will consist of a varying number of segments $m$ of varying sizes $N_i$, the output dimension of the aggregated feature is fixed, and therefore computational time can be greatly reduced. Finally, the aggregated feature is invariant to the permutation of its inputs, which results in a viewpoint invariant global descriptor for place recognition.

IV. EXPERIMENTAL SETUP

A. Evaluation criteria

We evaluate performance using the Precision-Recall curve and its scalar metric: the maximum $F_1$ score ($F_{1max}$). Additionally, we use the Extended Precision ($EP$) metric proposed in [29], as it highlights the maximum recall at 100% precision, which is vital for robustness in place recognition. For Precision ($P$) and Recall ($R$), the $F_{1max}$ and $EP$ scores are defined as,

$$F_{1max} = \max_\tau\, 2\, \frac{P_\tau \cdot R_\tau}{P_\tau + R_\tau}, \quad EP = \frac{P_{R0} + R_{P100}}{2}, \qquad (9)$$

where $\tau$ is the threshold for positive prediction, $P_{R0}$ is the Precision at minimum Recall, and $R_{P100}$ is the maximum Recall at 100% Precision. Retrieval is performed based on the comparison of the cosine distance of the query descriptor with a database of descriptors of previously visited places. In line with the evaluation criteria of [14], previous entries adjacent to the query by less than 30 s time difference are excluded from the search to avoid matching to the same instance. The top-1 retrieval is considered a positive if the associated distance to the query is less than the test threshold ($\tau$). A positive retrieval is considered a true positive if it is less than 3 m from the ground truth pose of the query, and a false positive if it is greater than 20 m away, to maintain consistency with the evaluation in [14].

B. Dataset

Evaluation is performed on the KITTI odometry dataset [8], which consists of Velodyne HDL-64E LiDAR scans collected from a moving vehicle in multiple dynamic urban environments in Karlsruhe, Germany. In line with the most recent evaluation on this dataset [14], we evaluate on sequences 00, 02, 05, 06, 07, and 08.

V. RESULTS

In this section, we demonstrate the contribution of each component in our proposed method. We also provide quantitative and qualitative results to validate our method in comparison to state-of-the-art methods, and we quantitatively evaluate the robustness of our proposed method against viewpoint changes and sensor occlusion.

A. Contribution of individual components

We investigate the contributions of each component (ablation study) of our proposed method using $F_{1max}$ and $EP$ on KITTI sequence 08. First, we demonstrate the importance of the Power-Euclidean (PE) non-linear transform in our second-order pooling. For the generation of feature $f_a$, we use the model trained on extracted segments from sequences 05 and 06 [15]. The most basic second-order pooling method (O2P) is the aggregation of second-order statistics of the same feature (Eq. 7 with $f_b = f_a$). This can be extended with the Power-Euclidean (PE) non-linear transform to get O2P+PE (Eq. 8 with $f_b = f_a$).
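The O2P+PE pipeline of Eqs. 7-8 can be sketched directly in NumPy. This is an illustrative implementation, not the authors' code; the toy features below are made up:

```python
import numpy as np

def o2p_pe(Fa, Fb, alpha=0.5):
    """Second-order pooling with the Power-Euclidean transform (sketch of
    Eqs. 7-8). Fa, Fb: (m, d) complementary segment features. Returns the
    flattened, L2-normalised global descriptor g of length d*d."""
    outer = np.einsum('mx,my->mxy', Fa, Fb)   # m outer products f_a f_b^T
    F_o2 = outer.max(axis=0)                  # element-wise max over segments
    U, lam, Vt = np.linalg.svd(F_o2)          # F_o2 = U diag(lam) Vt
    F_pe = (U * lam ** alpha) @ Vt            # raise singular values to alpha
    g = F_pe.ravel()
    return g / np.linalg.norm(g)

# With f_b = f_a this is the O2P+PE ablation variant discussed above:
Fa = np.array([[1.0, 0.0], [0.0, 1.0]])
g = o2p_pe(Fa, Fa)
print(g)  # the pooled matrix is the identity, so g = [1, 0, 0, 1] / sqrt(2)
```

The output length is fixed at $d^2$ regardless of the segment count $m$, and the max over the segment axis makes the descriptor invariant to segment ordering.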
The O2P+PE variant shows a dramatic increase of 44.7% in $F_{1max}$ (from 0.550 to 0.796) and an improvement of 32.4% in $EP$ (from 0.564 to 0.739) when including the non-linear transform. This demonstrates the importance of PE for obtaining discriminative descriptors. The rest of the evaluation uses the O2P+PE feature pooling and compares the contribution of each input feature type. The input $f_a$ is the structural appearance feature, and the input feature $f_b$ is varied across all multi-level feature types. Table I shows the respective performance for each of the following cases: $f_b = f_a$ (structural), $f_b = \Phi$ (structural ⊗ spatial), $f_b = \Psi$ (structural ⊗ temporal), and $f_b = (\Phi + \Psi)/2$ (structural ⊗ spatiotemporal). Table I demonstrates that pooling information from complementary features improves the place recognition performance. Furthermore, incorporating topological or temporal information individually makes the final descriptor more discriminative and improves $F_{1max}$ by 12.4% and 13.1%, respectively. The best performance of our proposed method is achieved when incorporating both topological and temporal information along with the structural appearance (+17% $F_{1max}$).

B. Comparison to State-of-the-Art

In this section, we compare the results of our method against other state-of-the-art results. Table II shows a summary comparison of $F_{1max}$ scores. Locus outperforms all other methods with the highest mean $F_{1max}$ score (0.942). Our method sets a new state-of-the-art place recognition performance on the KITTI dataset. Locus achieves a significant improvement on the challenging sequence 08, which consists of a number of reverse or orthogonal revisits, thereby demonstrating robustness to viewpoint variations. The Precision-Recall curves of Locus across all sequences are presented in Fig. 2. The curves highlight that the performance of Locus on sequence 02 is significantly poorer than on the other sequences. As depicted in Fig.
3, sequence 02 has a long stretch of road with many false negatives. This road consists of non-descriptive point clouds where the segment extraction process fails to obtain a high number of descriptive segments. The qualitative visualization of our method at $R_{P100}$ (i.e., maximum Recall at 100% Precision) is shown in Fig. 3. The figure shows the locations of true positive (red) and false negative (black) retrievals along the trajectory. This demonstrates that our method is capable of obtaining accurate retrievals in a wide range of environments and revisit types. We observe that most failure cases (black dots) occur near intersections.

C. Robustness tests

We evaluate the robustness of our method against the state of the art on a variety of challenging scenarios which simulate real-world adverse conditions. We simulate these adverse conditions by introducing a set of distortions to the input point cloud. Performance is compared based on the $F_{1max}$ metric on the same KITTI sequences.

1) Viewpoint changes: We simulate viewpoint changes by rotating point clouds about the z-axis with a random angle. The change in mean $F_{1max}$ over all sequences on this test compared to the normal results was -0.007 for our method. The respective change for ScanContext was +0.002, and for SemGraph-RN it was -0.003, as presented in [14]. Locus, ScanContext and SemGraph-RN show less than 1% difference, implying that random rotation of the input point cloud does not affect the final performance of these methods. Other methods such as M2DP and PointNetVLAD have been shown not to be rotation invariant, as mentioned in [14].

2) Occlusion: This test aims to simulate occlusion of the LiDAR, where the field of view of the sensor can be greatly reduced due to nearby dynamic objects or self-occlusion.
We extend the occlusion test in [14], which only considered occlusions of 30°, by evaluating performance at various occlusion angles $\theta_{occ}$ from 0° up to 180°. The occlusion test consists of removing all points which lie within a sector of a randomly selected azimuth. We compare our method with ScanContext (SC) [6], which achieved the second-best overall performance in Table II. The mean $F_{1max}$ of ScanContext drops by more than 20% at 45° occlusion and by around 50% at 90° occlusion. Our method shows a small performance degradation of 3.2% at 45° occlusion and 9.98% at 90° occlusion. Our method outperforms ScanContext by a large margin of 45.2% mean $F_{1max}$ at 90° occlusion. Also note that in our method, the $F_{1max}$ of sequences 00 and 05 remains above 80% even at occlusions of 180°, where 50% of the point cloud is removed.

VI. CONCLUSION

In this paper, we presented Locus, a novel LiDAR-based place recognition method for large-scale environments. We presented the advantages of scene representation via the aggregation of multi-level features related to components in a scene. We quantitatively showed how the inclusion of topological and temporal information in the description stage leads to an improvement in final place recognition performance. We formulated the generation of a global descriptor which incorporates all multi-level features without violating rotational invariance. We validated our method through evaluation on the KITTI dataset, where it surpassed the state of the art. Furthermore, we demonstrated the robustness of Locus against viewpoint changes and occlusion.

Fig. 2: Precision-Recall curves of our proposed Locus method on the KITTI dataset.

Fig. 3: Qualitative performance visualization of our method at $R_{P100}$ (zero false positives) along the trajectory. Red: true positives, Black: false negatives, Blue: true negatives (not revisits).

Fig. 4: Robustness of Locus to scan occlusion. The vertical blue line shows when the mean $F_{1max}$ drops by 10%.
Results for all sequences are depicted in Figure 4.

TABLE I: A comparison of Locus performance on KITTI sequence 08 when different input feature types are used.

TABLE II: $F_{1max}$ scores on the KITTI dataset. Results of M2DP, ScanContext, PointNetVLAD, and SemGraph are as presented in [14] with the same evaluation criteria. PointNetVLAD* is the PointNetVLAD method retrained on KITTI [14].

REFERENCES

[1] C. Park, P. Moghadam, S. Kim, A. Elfes, C. Fookes, and S. Sridharan, "Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM," in Proceedings - IEEE International Conference on Robotics and Automation, 2018, pp. 1206-1213.
[2] C. Park, S. Kim, P. Moghadam, J. Guo, S. Sridharan, and C. Fookes, "Robust photogeometric localization over time for map-centric loop closure," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1768-1775, 2019.
[3] S. Salti, F. Tombari, and L. Di Stefano, "SHOT: Unique signatures of histograms for surface and texture description," Computer Vision and Image Understanding, vol. 125, pp. 251-264, 2014.
[4] L. He, X. Wang, and H. Zhang, "M2DP: A novel 3D point cloud descriptor and its application in loop closure detection," in IEEE International Conference on Intelligent Robots and Systems, 2016, pp. 231-237.
[5] M. A. Uy and G. H. Lee, "PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 4470-4479.
[6] G. Kim and A. Kim, "Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map," in IEEE International Conference on Intelligent Robots and Systems, 2018, pp. 4802-4809.
[7] R. Dube, D. Dugas, E. Stumm, J. Nieto, R. Siegwart, and C. Cadena, "SegMatch: Segment based place recognition in 3D point clouds," in Proceedings - IEEE International Conference on Robotics and Automation, 2017, pp. 5266-5272.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[9] J. Guo, P. V. Borges, C. Park, and A. Gawel, "Local Descriptor for Robust Place Recognition Using LiDAR Intensity," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1470-1477, 2019.
[10] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1437-1451, 2018.
[11] Z. Liu, S. Zhou, C. Suo, P. Yin, W. Chen, H. Wang, H. Li, and Y. Liu, "LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis," in IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2831-2840.
[12] J. Du, R. Wang, and D. Cremers, "DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization," in European Conference on Computer Vision (ECCV), 2020, pp. 744-762.
[13] H. Wang, C. Wang, and L. Xie, "Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection," in IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2095-2101.
[14] X. Kong, X. Yang, G. Zhai, X. Zhao, X. Zeng, M. Wang, Y. Liu, W. Li, and F. Wen, "Semantic Graph Based Place Recognition for 3D Point Clouds," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 8216-8223.
[15] R. Dubé, A. Cramariuc, D. Dugas, H. Sommer, M. Dymczyk, J. Nieto, R. Siegwart, and C. Cadena, "SegMap: Segment-based mapping and localization using data-driven descriptors," The International Journal of Robotics Research, vol. 39, no. 2-3, pp. 339-355, 2020.
[16] M. Cummins and P. Newman, "Fab-map: Probabilistic localization and mapping in the space of appearance," The International Journal of Robotics Research, vol. 27, no. 6, pp. 647-665, 2008.
[17] H. Jégou, F. Perronnin, M. Douze, J. Sánchez, P. Pérez, and C. Schmid, "Aggregating local image descriptors into compact codes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1704-1716, 2012.
[18] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek, "Image classification with the fisher vector: Theory and practice," International Journal of Computer Vision, vol. 105, no. 3, pp. 222-245, 2013.
[19] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu, "Semantic segmentation with second-order pooling," in European Conference on Computer Vision (ECCV), vol. 7578, 2012, pp. 430-443.
[20] T. Y. Lin, A. Roychowdhury, and S. Maji, "Bilinear CNN models for fine-grained visual recognition," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1449-1457.
[21] M. J. Milford and G. F. Wyeth, "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights," in Proceedings - IEEE International Conference on Robotics and Automation, 2012, pp. 1643-1649.
[22] S. Lynen, M. Bosse, P. Furgale, and R. Siegwart, "Placeless Place-Recognition," in Proceedings - 2014 International Conference on 3D Vision (3DV), 2015, pp. 303-310.
[23] Z. Liu, C. Suo, S. Zhou, F. Xu, H. Wei, W. Chen, H. Wang, X. Liang, and Y. H. Liu, "SeqLPD: Sequence Matching Enhanced Loop-Closure Detection Based on Large-Scale Point Cloud Description for Self-Driving Vehicles," in IEEE International Conference on Intelligent Robots and Systems, 2019, pp. 1218-1223.
[24] S. Garg, B. Harwood, G. Anand, and M. Milford, "Delta Descriptors: Change-Based Place Representation for Robust Visual Localization," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5120-5127, 2020.
[25] S. Cameron and R. Culley, "Determining the minimum translational distance between two convex polyhedra," in Proceedings. 1986 IEEE International Conference on Robotics and Automation, vol. 3, pp. 591-596.
[26] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, "The Quickhull Algorithm for Convex Hulls," ACM Transactions on Mathematical Software, vol. 22, no. 4, pp. 469-483, 1996.
[27] P. Koniusz, F. Yan, P. Gosselin, and K. Mikolajczyk, "Higher-order occurrence pooling for bags-of-words: Visual concept detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 2, pp. 313-326, 2017.
[28] P. Li, J. Xie, Q. Wang, and W. Zuo, "Is Second-Order Information Helpful for Large-Scale Visual Recognition?" in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2089-2097.
[29] B. Ferrarini, M. Waheed, S. Waheed, S. Ehsan, M. J. Milford, and K. D. McDonald-Maier, "Exploring performance bounds of visual place recognition using extended precision," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1688-1695, 2020.
Code available at: https://github.com/csiro-robotics/locus
doi: 10.1093/imrn/rnad021
arXiv: 2211.10397
PAIRS OF DIAGONAL QUARTIC FORMS: THE ASYMPTOTIC FORMULAE

Jörg Brüdern and Trevor D. Wooley

18 Nov 2022

Abstract. We establish an asymptotic formula for the number of integral solutions of bounded height for pairs of diagonal quartic equations in 26 or more variables. In certain cases, pairs in 25 variables can be handled.

1. Introduction

Once again we are concerned with the pair of Diophantine equations
$$a_1 x_1^4 + a_2 x_2^4 + \ldots + a_s x_s^4 = b_1 x_1^4 + b_2 x_2^4 + \ldots + b_s x_s^4 = 0, \eqno(1.1)$$
wherein the given coefficients $a_j, b_j$ satisfy $(a_j, b_j) \in \mathbb{Z}^2 \setminus \{(0,0)\}$ $(1 \le j \le s)$. While our focus was on the validity of the Hasse principle for such pairs in two precursors of this article [6, 9], we now investigate the asymptotic density of integral solutions. Denote by $N(P)$ the number of solutions in integers $x_j$ with $|x_j| \le P$ $(1 \le j \le s)$ to this system. Then, subject to a natural rank condition on the coefficient matrix, one expects an asymptotic formula for $N(P)$ to hold provided that $s$ is not too small. Indeed, following Hardy and Littlewood [11] in spirit, the quantity $P^{8-s} N(P)$ should tend to a limit that is itself a product of local densities. On a formal level, the densities are readily described. The real density, also known as the singular integral, is defined by
$$I = \lim_{T \to \infty} \int_{-T}^{T} \int_{-T}^{T} \prod_{j=1}^{s} \int_{-1}^{1} e\big((a_j \alpha + b_j \beta) t_j^4\big)\, \mathrm{d}t_j\, \mathrm{d}\alpha\, \mathrm{d}\beta \eqno(1.2)$$
whenever the limit exists. Let $M(q)$ denote the number of solutions $\mathbf{x}$ in $(\mathbb{Z}/q\mathbb{Z})^s$ satisfying (1.1). Then for primes $p$, the $p$-adic density is defined by
$$s_p = \lim_{h \to \infty} p^{(2-s)h} M(p^h), \eqno(1.3)$$
assuming again that this limit exists. In case of convergence, the product $S = \prod_p s_p$ is referred to as the singular series, and the desired asymptotic relation can be presented as the limit formula
$$\lim_{P \to \infty} P^{8-s} N(P) = IS. \eqno(1.4)$$
Note that (1.4) can hold only when in each of the two equations comprising (1.1) there are sufficiently many non-zero coefficients.
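Definition (1.3) can be made concrete by brute force for tiny systems. The sketch below (with a made-up coefficient pair in which every $(a_j, b_j)$ is non-zero) counts $M(q)$ by exhaustive enumeration; the $p$-adic density $s_p$ is then approximated by $p^{(2-s)h} M(p^h)$ for growing $h$:

```python
from itertools import product

def M(a, b, q):
    """Count solutions x in (Z/qZ)^s of the pair (1.1) modulo q by
    exhaustive enumeration (only feasible for tiny s and q)."""
    return sum(
        1
        for x in product(range(q), repeat=len(a))
        if sum(ai * t**4 for ai, t in zip(a, x)) % q == 0
        and sum(bi * t**4 for bi, t in zip(b, x)) % q == 0
    )

a, b = (1, 1, 1), (0, 1, -1)   # illustrative: every pair (a_j, b_j) is nonzero
print(M(a, b, 2))  # 2: the solutions (0,0,0) and (0,1,1) modulo 2
# An approximation to s_2 at level h is then 2**((2 - len(a)) * h) * M(a, b, 2**h).
```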
Of course one may pass from (1.1) to an equivalent system obtained by taking linear combinations of the two constituent equations. Thus, the invariant $q_0 = q_0(\mathbf{a}, \mathbf{b})$, defined by
\[ q_0(\mathbf{a}, \mathbf{b}) = \min_{(c,d) \in \mathbb{Z}^2 \setminus \{(0,0)\}} \mathrm{card}\{1 \le j \le s : ca_j + db_j \ne 0\}, \]
must be reasonably large. Indeed, it follows from Lemmata 3.1, 3.2 and 3.3 in our companion paper [9] that the conditions $s \ge 16$ and $q_0 \ge 12$ ensure that the limits (1.2) and (1.3) all exist, that the product $S$ is absolutely convergent, and that the existence of non-singular solutions to the system (1.1) in each completion of the rationals implies that $IS > 0$. A first result concerning the limit (1.4) is then obtained by introducing the moment estimate
\[ \int_0^1 \Bigl|\sum_{x \le P} e(\alpha x^4)\Bigr|^{14} d\alpha \ll P^{10+\varepsilon}, \tag{1.5} \]
derived as the special case $u = 14$ of Lemma 5.3 below, into a familiar method of Cook [10] (see also [2]). Here we point out that the estimate (1.5) first occurs implicitly in the proof of [15, Theorem 4.1], conditional on the validity of the (now proven) main conjecture in Vinogradov's mean value theorem (for which see [1] and [17, Corollary 1.3]). In this way, one routinely confirms (1.4) when $s \ge 29$ and $q_0 \ge 15$. This result, although not explicitly mentioned in the literature, is certainly familiar to experts in the area, and has to be considered as the state of the art today. It seems worth remarking in this context that, at a time when the estimate (1.5) was not yet available, the authors [3, 5] handled the case $s \ge 29$ with more restrictive rank conditions. The main purpose of this memoir is to make three variables redundant: we establish (1.4) whenever $s \ge 26$ and $q_0 \ge 15$. Relaxing the rank condition $q_0 \ge 15$ appears to be a difficult enterprise, as we now explain. Consider a pair of equations (1.1) with $s \ge 29$, and suppose that $b_i = a_j = 0$ for $1 \le i \le 14 < j \le s$. These two equations are independent and thus $N(P)$ factorises as $N(P) = N_1(P)N_2(P)$, where $N_1(P)$ and $N_2(P)$ denote the number of integral solutions of the respective single equations
\[ a_1y_1^4 + \cdots + a_{14}y_{14}^4 = 0 \tag{1.6} \]
and
\[ b_{15}y_1^4 + \cdots + b_s y_{s-14}^4 = 0, \tag{1.7} \]
with $|y_j| \le P$ in each case.
The equation (1.7) has at least 15 non-zero coefficients, and so a straightforward application of the Hardy–Littlewood method using the mean value (1.5) shows that $P^{18-s}N_2(P)$ tends to a limit as $P \to \infty$, with this limit equal to a product of local densities analogous to $I$ and $s_p$. By choosing $b_j = (-1)^j$ for $15 \le j \le s$, we ensure that this limit is positive, and thus $P^{8-s}N(P)$ tends to a limit as $P \to \infty$ if and only if $P^{-10}N_1(P)$ likewise tends to a limit. From the definitions (1.2) and (1.3), it is apparent that the local densities $I$ and $s_p$ factorise into components stemming from the equations underlying $N_1$ and $N_2$. The relation (1.4) therefore holds for this particular pair of equations if and only if $P^{-10}N_1(P)$ tends to the product of local densities associated with the equation (1.6). In particular, were (1.4) known to hold in any case where $q_0 = 14$ and $s$ is large, then it would follow that $P^{-10}N_1(P)$ tends to the limit suggested by a formal application of the circle method, a result that is not yet known. This shows that relaxing the condition on $q_0$ would imply progress with single diagonal quartic equations. The invariant $q_0$ is a very rough measure for the entanglement of the two equations present in (1.1). This can be refined considerably. The pairs $(a_j, b_j)$ are all non-zero in $\mathbb{Z}^2$, so they define a point $(a_j : b_j) \in \mathbb{P}(\mathbb{Q})$. We refer to indices $i, j \in \{1, 2, \ldots, s\}$ as equivalent if $(a_i : b_i) = (a_j : b_j)$. This defines an equivalence relation on $\{1, 2, \ldots, s\}$. Suppose that there are $\nu$ equivalence classes with $r_1, \ldots, r_\nu$ elements, respectively, where $r_1 \ge r_2 \ge \ldots \ge r_\nu$. On an earlier occasion [5] we named the tuple $(r_1, \ldots, r_\nu)$ the profile of the equations (1.1). Note that $q_0 = s - r_1$, whence our assumed lower bound $q_0 \ge 15$ implies that $r_1 \le s - 15$ and $\nu \ge 2$. If more is known about the profile, then we can save yet another variable.
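The profile and the invariant $q_0 = s - r_1$ depend only on how the points $(a_j : b_j)$ cluster in $\mathbb{P}(\mathbb{Q})$, so both are easy to compute; a small sketch (the example coefficients are hypothetical):

```python
from math import gcd
from collections import Counter

def profile_and_q0(pairs):
    """Profile (r_1 >= ... >= r_nu) of the projective points (a_j : b_j),
    together with the invariant q_0 = s - r_1."""
    def normalise(a, b):
        g = gcd(a, b)
        a, b = a // g, b // g
        # fix the sign to get one canonical representative per point of P(Q)
        if a < 0 or (a == 0 and b < 0):
            a, b = -a, -b
        return (a, b)
    classes = Counter(normalise(a, b) for a, b in pairs)
    r = sorted(classes.values(), reverse=True)
    return r, len(pairs) - r[0]

pairs = [(1, 1), (2, 2), (1, -1), (0, 1), (3, 3)]
print(profile_and_q0(pairs))  # ([3, 1, 1], 2)
```

Here $(1,1)$, $(2,2)$ and $(3,3)$ all represent the same projective point, so $r_1 = 3$, $\nu = 3$ and $q_0 = 5 - 3 = 2$.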
For a pair (1.1) in "general position" one has ν = s and r 1 = 1, and in a quantitative sense easily made precise, such pairs constitute almost all such Diophantine systems. Hence, the conclusion of Theorem 1.2 applies to almost all pairs of equations of the shape (1.1). We pointed out long ago [5] that a diffuse profile can be advantageous. However, even with the estimate (1.5) in hand, the method of [5] only handles cases where s 27 and r 1 and r 2 are not too large. Thus our results improve on all previous work on the subject even if the input to the published versions is enhanced by the newer mean value bound (1.5). It is time to describe the methods, and in particular the new ideas involved in the proofs. Our more recent results specific to systems of diagonal quartic forms [6,8,9] all depend on large values estimates for Fourier coefficients of powers of Weyl sums, and the current communication is no exception. The large values estimates provide upper bounds for higher moments of these Fourier coefficients, and these in turn yield mean value bounds for correlations of Weyl sums. We describe this link here in a setting appropriate for application to pairs of equations. Consider a 1-periodic twice differentiable function h : R → R. Its Fourier expansion h(α) = n∈Zĥ (n)e(αn) (1.8) converges uniformly and absolutely. Hence, by orthogonality, one has 1 0 1 0 h(α)h(β)h(−α − β) dα dβ = n∈Zĥ (n) 3 . (1.9) The methods of [6,8,9] rest on this and closely related identities, choosing h(α) = |g(α)| u with suitable quartic Weyl sums g and a positive real number u. As a service to future scholars, we analyse in some detail the differentiability properties of functions like |g(α)| u in §3. It transpires that when u 2 then the relation (1.9) holds. We use (1.9) with h(α) = |f (α)| u , where now f (α) = x P e(αx 4 ) (1.10) is the ordinary Weyl sum. We then obtain new entangled mean value estimates for smaller values of u. 
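For even exponents the identity (1.9) is plain double counting. With $h = |f|^2$, for instance, the Fourier coefficient of $h$ at $n$ counts representations $x^4 - y^4 = n$, and (1.9) says that the cubes of these counts sum to the number of sextuples with three equal differences of biquadrates. A brute-force sketch for tiny $P$ (illustrative only):

```python
from collections import Counter
from itertools import product

def psi2(P):
    """Fourier coefficients of |f|^2 in Diophantine form:
    psi2(P)[n] = #{1 <= x, y <= P : x^4 - y^4 = n}."""
    return Counter(x**4 - y**4 for x in range(1, P + 1) for y in range(1, P + 1))

def triple_sum(P):
    # right hand side of (1.9) with h = |f|^2
    return sum(c**3 for c in psi2(P).values())

def triple_count(P):
    # left hand side: sextuples with x1^4 - y1^4 = x2^4 - y2^4 = x3^4 - y3^4
    c = 0
    for x1, y1, x2, y2, x3, y3 in product(range(1, P + 1), repeat=6):
        if x1**4 - y1**4 == x2**4 - y2**4 == x3**4 - y3**4:
            c += 1
    return c

print(triple_sum(2), triple_count(2))  # 10 10
```

The point of the paper's method is of course to bound such cubic moments without even exponents being available, via the large values estimates described above.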
This alone is not of strength sufficient to reach the conclusions of Theorem 1.1. As experts in the field will readily recognise, for larger values of u the quality of the aforementioned mean value estimates is diluted by major arc contributions, and one would therefore like to achieve their removal. Thus, if n is a 1-periodic set of real numbers with n ∩ [0, 1) a classical choice of minor arcs and 1 n is the indicator function of n, then one is tempted to apply the function h(α) = 1 n (α)|f (α)| u in place of |f (α)| u within (1.9). However, this function is no longer continuous. We bypass this difficulty by introducing a smoothed Farey dissection in §4. This is achieved by a simple and very familiar convolution technique that should be useful in other contexts, too. In this way, in §5 we obtain a minor arc variant of the cubic moment method developed in our earlier work [6]. Equipped with this and the mean value bounds that follow from it, one reaches the conclusions of Theorem 1.1 in the majority of cases under consideration. Unfortunately, some cases with exceptionally large values of r j stubbornly deny treatment. To cope with these remaining cases, we develop a mixed moment method in §6. The point of departure is a generalisation of (1.9). If h 1 , h 2 , h 3 are functions that qualify for the discussion surrounding (1.8) and (1.9), then by invoking orthogonality once again, we see that 1 0 1 0 h 1 (α)h 2 (β)h 3 (−α − β) dα dβ = n∈Zĥ 1 (n)ĥ 2 (n)ĥ 3 (n). (1.11) By Hölder's inequality, the right hand side here is bounded in terms of the three moments n∈Z |ĥ j (n)| 3 . (1.12) In all cases where h j (α) = |f (α)| u j for some even positive integral exponent u j one hasĥ j (n) 0, so (1.9) can be used in reverse to interpret (1.12) in terms of the number of solutions of a pair of Diophantine equations. The purely analytic description of the method has several advantages. 
First and foremost, one can break away from even numbers u j , and still estimate all three cubic moments (1.12). This paves the way to a complete treatment of pairs of equations (1.1) with s 26 and q 0 15. Beyond this, the identity (1.11) offers extra flexibility for the arithmetic harmonic analysis. Instead of the homogeneous passage from (1.11) to (1.12) one could apply Hölder's inequality with differing weights. As an example of stunning simplicity, we note that the expression in (1.11) is bounded above by n∈Z |ĥ 1 (n)| 2 1/2 n∈Z |ĥ 2 (n)| 4 1/4 n∈Z |ĥ 3 (n)| 4 1/4 . If we apply this idea with h j (α) = |f (α)| u j and u j a positive even integer, then the first factor relates to a single diagonal Diophantine equation while the other two factors concern systems consisting of three diagonal Diophantine equations. This argument is dual (in the sense that we work with Fourier coefficients) to a method that we described as complification in our work on systems of cubic forms [7]. There is, of course, an obvious generalisation of (1.9) to higher dimensional integrals that has been used here. This points to a complex interplay between systems of diagonal equations in which the size parameters (number of variables and number of equations) vary, and need not be restricted to natural numbers. We have yet to explore the full potential of this observation. We briefly comment on the role of the Hausdorff-Young inequality [18, Chapter XII, Theorem 2.3] within this circle of ideas. In the notation of (1.11) this asserts that n∈Z |ĥ j (n)| 3 1 0 |h j (α)| 3/2 dα 2 . Passing through (1.11) and (1.12), one then arrives at the estimate 1 0 1 0 h 1 (α)h 2 (β)h 3 (−α − β) dα dβ 3 j=1 1 0 |h j (α)| 3/2 dα 2/3 . (1.13) However, by Hölder's inequality, one finds 1 0 1 0 h 1 (α)h 2 (β)h 3 (−α − β) dα dβ 1 i<j 3 1 0 |h i h j | 3/2 dα dβ 1/3 , where, on the right hand side, one should read h 1 = h 1 (α), h 2 = h 2 (β) and h 3 = h 3 (−α − β). 
By means of obvious linear substitutions, this also delivers the bound (1.13). This last method is essentially that of Cook [10]. Our approach is superior because the methods are designed to remember the arithmetic source of the Weyl sums when estimating moments of Fourier coefficients. The proof of Theorem 1.2 requires yet another tool that is a development of our multidimensional version of Hua's lemma [3]. This somewhat outdated work is based on Weyl differencing. An analysis of the method shows that whenever a new block of differenced Weyl sums enters the recursive process, a new entry r j to the profile of the underlying Diophantine system is needed. It is here where one imports undesired constraints on the profile, as in Theorem 1.2. However, powered with the new upper bound (1.5), the method just described yields a bound for a two-dimensional entangled mean value over eighteen Weyl sums that outperforms the cubic moments technique by a factor P 1/6 (compare Theorem 6.1 with Theorem 7.2). Within a circle method approach, this mean value is introduced via Hölder's inequality. In the complementary factor, we have available an abundance of Weyl sums. Fortunately the cubic moments technique restricted to minor arcs presses the method home. We point out that our proof of Theorem 1.2 constitutes the first instance in which the cubic moments technique is successfully coupled with the differencing techniques derived from [3]. One might ask whether more restrictive conditions on the profile allow one to reduce the number of variables even further. As we demonstrate at the very end of this memoir it is indeed possible to accelerate the convergence in (1.4), but even the extreme condition r 1 = 1 seems insufficient to save a variable without another new idea. Once the new moment estimates are established, our proofs of Theorems 1.1 and 1.2 are fairly concise. There are two reasons. First, we may import the major arc work, to a large extent, from [9]. 
Second, more importantly, our minor arc treatment rests on a new inequality (Lemma 2.3 below) that entirely avoids combinatorial difficulties associated with exceptional profiles. This allows us to reduce the minor arc work to a single profile with a certain maximality property. We expect this argument to become a standard preparation step in related work, and have therefore presented this material in broad generality. We refer to §2 where the reader will also find comment on previous attempts in this direction.

Notation. Our basic parameter is $P$, a sufficiently large real number. Implicit constants in Vinogradov's familiar symbols $\ll$ and $\gg$ may depend on $s$ and $\varepsilon$ as well as ambient coefficients such as those in the system (1.1). Whenever $\varepsilon$ appears in a statement we assert that the statement holds for each positive real value assigned to $\varepsilon$. As usual, we write $e(z)$ for $e^{2\pi iz}$.

Some inequalities

This section belongs to real analysis. We discuss a number of inequalities for products. As has been familiar for decades, in an attempt to prove results of the type described in Theorems 1.1 and 1.2 via harmonic analysis, it is desirable to simplify to a situation where the profile is extremal relative to the conditions in hand, that is, the multiplicities $r_1, r_2, \ldots$ are as large as possible, and consequently $\nu$ is as small as is possible. In the past, most scholars have applied Hölder's inequality to achieve this objective, often by an ad hoc argument that led to the consideration of several cases separately. The purpose of this section is to make available general inequalities that encapsulate the reduction step in a single lemma of generality sufficient to include all situations that one encounters in practice. The germ of our method is a classical estimate, sometimes referred to as Young's inequality: if $p$ and $q$ are real numbers with $p > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$, then for all non-negative real numbers $u$ and $v$ one has
\[ uv \le \frac{u^p}{p} + \frac{v^q}{q}. \]
(2.1)
This includes the case $r = 2$ of the bound
\[ |z_1 z_2 \cdots z_r| \le \frac{1}{r}\bigl(|z_1|^r + \ldots + |z_r|^r\bigr) \tag{2.2} \]
which holds for all $r \in \mathbb{N}$ and all $z_j \in \mathbb{C}$ $(1 \le j \le r)$. Indeed, the general case of (2.2) follows from (2.1) by an easy induction on $r$. In the following chain of lemmata we are given a number $\nu \in \mathbb{N}$ and integral exponents $m_j$, $M_j$ $(1 \le j \le \nu)$ with
\[ m_1 \ge m_2 \ge \ldots \ge m_\nu \ge 0, \qquad M_1 \ge M_2 \ge \ldots \ge M_\nu \ge 0 \tag{2.3} \]
and
\[ \sum_{l=1}^{L} m_l \le \sum_{l=1}^{L} M_l \quad (1 \le L < \nu), \qquad \sum_{l=1}^{\nu} m_l = \sum_{l=1}^{\nu} M_l. \tag{2.4} \]
Here and below, a weight on $S_\nu$ is a function $w : S_\nu \to [0,1]$ with $\sum_{\sigma \in S_\nu} w(\sigma) = 1$.

Lemma 2.1. Then there is a weight $w$ on $S_\nu$ with the property that for all non-negative real numbers $u_1, u_2, \ldots, u_\nu$ one has
\[ u_1^{m_1} u_2^{m_2} \cdots u_\nu^{m_\nu} \le \sum_{\sigma \in S_\nu} w(\sigma)\, u_{\sigma(1)}^{M_1} u_{\sigma(2)}^{M_2} \cdots u_{\sigma(\nu)}^{M_\nu}. \tag{2.5} \]

Proof. We define $D = \sum_{l=1}^{\nu} |M_l - m_l|$ and proceed by induction on $\nu + D$. In the base case of the induction one has $\nu + D = 1$. In this situation $\nu = 1$ and $D = 0$, and the claim of the lemma is trivially true with $\sigma = \mathrm{id}$ and $w(\sigma) = 1$. Now suppose that $\nu + D > 1$. We consider two cases. First we suppose that there is a number $\nu_1$ with $1 \le \nu_1 < \nu$ and
\[ \sum_{l=1}^{\nu_1} m_l = \sum_{l=1}^{\nu_1} M_l. \]
We put
\[ D_1 = \sum_{l=1}^{\nu_1} |M_l - m_l|, \qquad D_2 = \sum_{l=\nu_1+1}^{\nu} |M_l - m_l|, \qquad \nu_2 = \nu - \nu_1. \]
Then (2.3) and (2.4) are valid with $\nu_1$ in place of $\nu$, and one has $D_1 \le D$. Hence $\nu_1 + D_1 < \nu + D$ so that we may invoke the inductive hypothesis to find a weight $w_1$ on $S_{\nu_1}$ with
\[ u_1^{m_1} u_2^{m_2} \cdots u_{\nu_1}^{m_{\nu_1}} \le \sum_{\sigma \in S_{\nu_1}} w_1(\sigma)\, u_{\sigma(1)}^{M_1} u_{\sigma(2)}^{M_2} \cdots u_{\sigma(\nu_1)}^{M_{\nu_1}}. \tag{2.6} \]
Similarly, in the current situation, the numbers $m_{\nu_1+j}$, $M_{\nu_1+j}$ $(1 \le j \le \nu_2)$ may take the roles of $m_j$, $M_j$ in (2.3) and (2.4) with $\nu_2$ in place of $\nu$. Again, we have $\nu_2 + D_2 < \nu + D$. Now writing $\tau$ for a permutation in $S_{\nu_2}$ acting on the set $\{\nu_1+1, \nu_1+2, \ldots, \nu\}$, we may invoke the inductive hypothesis again to find a weight $w_2$ on $S_{\nu_2}$ with
\[ u_{\nu_1+1}^{m_{\nu_1+1}} u_{\nu_1+2}^{m_{\nu_1+2}} \cdots u_{\nu}^{m_\nu} \le \sum_{\tau \in S_{\nu_2}} w_2(\tau)\, u_{\tau(\nu_1+1)}^{M_{\nu_1+1}} u_{\tau(\nu_1+2)}^{M_{\nu_1+2}} \cdots u_{\tau(\nu)}^{M_\nu}. \tag{2.7} \]
We multiply the inequalities (2.6) and (2.7). It is then convenient to read permutations $\sigma$ on $1, 2, \ldots, \nu_1$ and $\tau$ on $\nu_1+1, \nu_1+2, \ldots, \nu$ as permutations on $1, 2, \ldots, \nu$ with $\sigma(j) = j$ for $j > \nu_1$ and $\tau(j) = j$ for $j \le \nu_1$.
Then, for permutations of the type στ in S ν we put w(στ ) = w 1 (σ)w 2 (τ ), and we put w(φ) = 0 for the remaining permutations φ ∈ S ν . With this function w the product of (2.6) and (2.7) becomes (2.5), completing the induction in the case under consideration. In the complementary case we have L l=1 m l < L l=1 M l (1 L < ν). (2.8) In particular, this shows that m 1 < M 1 . Also, by comparing the case L = ν −1 of (2.8) with the equation corresponding to the case L = ν in (2.4), we see that m ν > M ν , as a consequence of which we have m ν 1. We write m 1 = m ν + r. In view of (2.3), we see that r 0, and so an application of (2.1) with q = r +2 leads to the inequality u r+1 1 u ν r + 1 r + 2 u r+2 1 + 1 r + 2 u r+2 ν . Recall that m ν 1, whence m 1 − r − 1 = m ν − 1 0. It follows that u m 1 1 u mν ν u m 1 −r−1 1 u mν −1 ν r + 1 r + 2 u r+2 1 + 1 r + 2 u r+2 ν , and thus u m 1 1 · · · u mν ν r + 1 r + 2 u m 1 +1 1 u m 2 2 u m 3 3 · · · u m ν−1 ν−1 u mν −1 ν + 1 r + 2 u mν −1 1 u m 2 2 u m 3 3 · · · u m ν−1 ν−1 u m 1 +1 ν . The chain of exponents m 1 + 1, m 2 , m 3 , . . . , m ν−1 , m ν − 1 is decreasing, and we have m 1 + 1 M 1 and m ν − 1 0. Hence, in view of (2.8), the hypotheses (2.3) and (2.4) are still met when we put m 1 + 1 in place of m 1 and m ν − 1 in place of m ν . However, m 1 + 1 is closer to M 1 than is m 1 , and likewise m ν − 1 is closer to M ν than is m ν . The value of D associated with this new chain of exponents therefore decreases, and so we may apply the inductive hypothesis to find a weight W on S ν with u m 1 +1 1 u m 2 2 u m 3 3 · · · u m ν−1 ν−1 u mν −1 ν σ∈Sν W (σ)u M 1 σ(1) u M 2 σ(2) · · · u Mν σ(ν) . Interchanging the roles of u 1 and u ν , and denoting by τ the transposition of 1 and ν, we obtain in like manner the bound u m 1 +1 ν u m 2 2 u m 3 3 · · · u m ν−1 ν−1 u mν −1 1 σ∈Sν W (σ • τ )u M 1 σ(1) u M 2 σ(2) · · · u Mν σ(ν) . 
If we now import the last two inequalities into the inequality preceding them, we find that (2.5) holds with w(σ) = r + 1 r + 2 W (σ) + 1 r + 2 W (σ • τ ), and w is a weight on S ν . This completes the induction in the second case. Lemma 2.2. Suppose that m j , M j (1 j ν) satisfy (2.3) and (2.4). For 1 j ν let h j : R n → [0, ∞) denote a Lebesgue measurable function. Then h m 1 1 h m 2 2 · · · h mν ν dx max σ∈Sν h M 1 σ(1) h M 2 σ(2) · · · h Mν σ(ν) dx. Proof. Choose u j = h j in Lemma 2.1 for 1 j ν and integrate. For applications to systems of diagonal equations or inequalities, functions h j come with an equivalence relation between them. This we encode as a partition of the set of indices j in the final lemma of this section. Then, there exists a tuple (i 1 , . . . , i ν ) and a permutation σ ∈ S ν , with i l ∈ J σ(l) (1 l ν), having the property that h 1 h 2 · · · h s dx h M 1 i 1 h M 2 i 2 . . . h Mν iν dx. (2.9) Proof. For each suffix l with 1 l ν, it follows from (2.2) that j∈J l h j 1 m j j∈J l h m j j . Multiplying these inequalities together yields the bound h 1 h 2 · · · h s 1 m 1 · · · m ν j 1 ∈J 1 · · · jν ∈Jν h m 1 j 1 h m 2 j 2 · · · h mν jν . Now integrate. One then finds that there exists a tuple (j 1 , . . . , j ν ), with j l ∈ J l (1 l ν), for which h 1 h 2 · · · h s dx h m 1 j 1 h m 2 j 2 . . . h mν jν dx. Finally, we apply Lemma 2.2. One then finds that for some σ ∈ S ν the upper bound (2.9) holds with i l = j σ(l) (1 l ν). Smooth Farey dissections In this section we describe a partition of unity that mimics the traditional Farey dissection. With other applications in mind, we work in some generality. Throughout this section we take X and Y to be real numbers with 1 Y 1 2 √ X, and then let N(q, a) denote the interval of all real α satisfying |qα −a| Y X −1 . Define N = N X,Y as the union of all N(q, a) with 1 q Y , a ∈ Z and (a, q) = 1. Note that the intervals N(q, a) comprising N are pairwise disjoint. 
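Membership of a real number $\alpha$ in N can be tested directly by scanning the moduli $q \le Y$; coprimality of $a$ and $q$ may be ignored here, since dividing out a common factor only shrinks $q$ while leaving $|q\alpha - a|$ no larger. A small sketch with hypothetical parameters $X = 100$, $Y = 4$:

```python
def in_arcs(alpha, X, Y):
    """True iff alpha lies in N_{X,Y}: |q*alpha - a| <= Y/X for some integer a
    and some 1 <= q <= Y (equivalently, alpha is within Y/(qX) of a/q)."""
    for q in range(1, int(Y) + 1):
        a = round(q * alpha)
        if abs(q * alpha - a) <= Y / X:
            return True
    return False

print(in_arcs(1/3, 100, 4), in_arcs(0.123, 100, 4))  # True False
```

Thus $1/3$ is the centre of one of the arcs, while $0.123$ keeps distance greater than $Y/(qX)$ from every fraction with denominator at most $4$.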
We also write M = M X,Y for the set N ∩ [0, 1]. For appropriate choices of the parameter Y, the latter is a typical choice of major arcs in applications of the Hardy-Littlewood method. The set N has period 1. Its indicator function 1 N has finitely many discontinuities in [0, 1), implying unwanted delicacies concerning the convergence of the Fourier series of 1 N . We avoid complications associated with this feature by a familiar convolution trick, which we now describe. Define the positive real number κ = 1 −1 exp(1/(t 2 − 1)) dt, and the function K : R → [0, ∞) by K(t) = κ −1 exp(1/(t 2 − 1)) if |t| < 1, 0 if |t| 1. As is well known, the function K(t) is smooth and even. We scale this function with the positive parameter X in the form K X (t) = 4X K(4Xt). Then K X is supported on the interval |t| 1/(4X) and satisfies the important relation ∞ −∞ K X (t) dt = ∞ −∞ K(t) dt = 1. (3.1) We now define the function N X,Y : R → [0, 1] by N X,Y (α) = ∞ −∞ 1 N (α − t)K X (t) dt = ∞ −∞ 1 N (t)K X (α − t) dt. (3.2) The main properties of this function N = N X,Y are listed in the next lemma. Lemma 3.1. The function N = N X,Y is smooth, and for all α ∈ R one has N(α) ∈ [0, 1]. Further, whenever 2 Y 1 4 √ X, the inequalities 1 N X,Y /2 (α) N(α) 1 N X,2Y (α) (3.3) and N ′ (α) ≪ X, N ′′ (α) ≪ X 2 (3.4) hold uniformly in α ∈ R. Proof. The integrands in (3.2) are non-negative, so N(α) 0, while (3.1) shows that N(α) 1. Since K is smooth and compactly supported, the second integral formulation of N in (3.2) shows that N is smooth, and that the derivative is obtained by differentiating the integrand. Thus, we obtain N ′ (α) = N ∂ ∂α K X (α − t) dt, whence |N ′ (α)| 4X 1 −1 |K ′ (t)| dt. This confirms the inequality for the first derivative in (3.4). The bound for the second derivative follows in like manner by differentiating again. We now turn to the task of establishing (3.3). First suppose that α ∈ N X,Y /2 . 
Then, there is a unique pair of integers a ∈ Z and q ∈ N with (a, q) = 1, q 1 2 Y and |qα − a| 1 2 Y X −1 . For |t| (4X) −1 we then have (α − t) − a q 1 4X + Y 2qX Y qX . Thus α − t ∈ N(q, a) ⊆ N X,Y . Since K X is supported on [−1/(4X), 1/(4X)], we deduce from (3.1) and (3.2) that N(α) ∞ −∞ 1 N(q,a) (α − t)K X (t) dt 1 −1 K X (t) dt = 1. It follows that one has N(α) = 1 for all α ∈ N X,Y /2 . However, we know already that N(α) is non-negative for all α ∈ R, and thus we have proved the first of the two inequalities in (3.3). We complete the proof of the lemma by addressing the second inequality in (3.3). Suppose that N(α) > 0. Then, it follows from (3.2) that for some t ∈ R with |t| (4X) −1 , one has α − t ∈ N X,Y . Hence, there exist a ∈ Z and q ∈ N with (a, q) = 1, q Y and |α − t − a/q| Y /(qX). By the triangle inequality, α − a q Y qX + 1 4X 2Y qX . This shows that α ∈ N X,2Y . Since 0 N(α) 1, the second of the inequalities in (3.3) also follows. We consider N = N X,Y as a smooth model of the major arcs N X,Y . It is convenient to define corresponding minor arcs n = n X,Y , with n X,Y = R\N X,Y , and to write m = [0, 1] \ M for the set of minor arcs complementary to M. The smoothed version of n X,Y is the function n X,Y : R → [0, 1] defined by n(α) = ∞ −∞ 1 n (α − t)K X (t) dt. We trivially have 1 N (α) + 1 n (α) = 1 for all α ∈ R, so it is a consequence of (3.1) and (3.2) that n = n X,Y satisfies the identity N(α) + n(α) = 1. (3.5) The properties of n can therefore be deduced from the corresponding facts concerning N. In particular, Lemma 3.1 translates as follows. Lemma 3.2. The function n = n X,Y is smooth, and for all α ∈ R one has n(α) ∈ [0, 1]. Further, whenever 2 Y 1 4 √ X, the inequalities 1 n X,2Y (α) n(α) 1 n X,Y /2 (α) and n ′ (α) ≪ X, n ′′ (α) ≪ X 2 hold uniformly in α ∈ R. Fractional powers of Weyl sums In this section we consider a trigonometric polynomial T (α) = M <n M +N c n e(αn) (4.1) with complex coefficients c n . 
The associated ordinary polynomial P (z) = N n=1 c M +n z n (4.2) is related to T via the identity T (α) = e(Mα)P (e(α)). (4.3) Lemma 4.1. Let k ∈ N. Then, for any real number u > k, the real function Ω u : R → R, defined by Ω u (α) = |T (α)| u , is k times continuously differentiable. Proof. In view of (4.3), we see that it suffices to prove this result in the special case where M = 0. This reduction step noted, we proceed by a succession of elementary exercises. Let u ∈ R. We begin by considering the function θ u : R \ {0} → R defined by θ u (α) = |α| u . This function is differentiable on R \ {0}, and one has θ ′ u (α) = u|α| u α −1 = uθ u (α)α −1 . By induction, it follows that for any l ∈ N the function θ u is l times differentiable, and that the l-th derivative is θ (l) u (α) = u(u − 1) · · · (u − l + 1)θ u (α)α −l . (4.4) Now suppose that u > 0. Then, by putting θ u (0) = 0 we extend θ u to a continuous function on R. More generally, whenever u > l, then lim α→0 θ u (α) α l = 0. By (4.4), this shows that whenever u > l then θ (l) u extends to a continuous function on R by choosing θ (l) u (0) = 0, and that θ (l−1) u is differentiable at 0 with derivative 0. We summarize this last statement as follows: (a) Let k ∈ N and u > k. Then θ u is k times continuously differentiable on R. Next, for u > 0, consider the function ρ u : R → R defined by putting ρ u (α) = | sin πα| u . For α ∈ (0, 1) one has sin πα > 0, whence ρ u (α) = (sin πα) u . Thus ρ u is smooth on (0, 1). But ρ has period 1, so it suffices to examine its differentiability properties at α = 0, a point at which ρ u is continuous. For all real α we have sin πα = παE(α), where E(α) = ∞ j=0 (−1) j (πα) 2j (2j + 1)! . The function E is smooth on R with E(0) = 1. Hence E(α) > 0 in a neighbourhood of 0 where we then also have ρ u (α) = π u |α| u E(α) u . By applying the product rule in combination with our earlier conclusion (a), we therefore conclude as follows: (b) Let k ∈ N and u > k. 
Then ρ u is k times continuously differentiable on R. We now turn to the function T where we suppose that M = 0, as we may. The sum in (4.1) defines a holomorphic function of the complex variable α, and hence the function T : R → C is a smooth map of period 1. The sum T (α) = 1 n Nc n e(−αn) defines another trigonometric polynomial, and for α ∈ R we have T (α) =T (α). Consequently, for real α we have |T (α)| 2 = T (α)T (α), (4.5) whence the function |T | 2 : R → C, given by α → |T (α)| 2 , is smooth on R with d dα |T (α)| 2 = T ′ (α)T (α) + T (α)T ′ (α). (4.6) On noting that T (α) j is again a trigonometric polynomial for all j ∈ N, we see that |T (α)| 2j is smooth. Hence, from now on, we may suppose that u is a real number but not an even natural number. Also, the conclusion of Lemma 4.1 is certainly true in the trivial case where c n = 0 for all n. In the contrary case, the polynomial in (4.2) has at most finitely many zeros. Therefore, the set Z = {α ∈ R : T (α) = 0} is 1-periodic with Z ∩ [0, 1) finite, and consequently R \ Z is open. We next examine the function |T | u : R \ Z → C, given by α → |T (α)| u . (c) When u is real but not an even natural number, the function |T | u is smooth. In order to confirm this assertion, note that |T (α)| u = θ u/2 (|T (α)| 2 ). By applying the chain rule in combination with the preamble to conclusion (a) and (4.6), we find that |T (α)| u is differentiable for α ∈ R \ Z. Indeed, d dα |T (α)| u = θ ′ u/2 (|T (α)| 2 ) T ′ (α)T (α) + T (α)T ′ (α)) = u 2 |T (α)| u−2 T ′ (α)T (α) + T (α)T ′ (α)). (4.7) Since the final factor on the right hand side here is smooth, we may repeatedly apply the product rule to conclude that |T (α)| u is smooth on R\Z, as claimed. Finally, we consider any element α 0 ∈ Z. Then one has P (e(α 0 )) = 0. Since P is not the zero polynomial, there exists r ∈ N and a polynomial Q ∈ C[z] with Q(e(α 0 )) = 0 such that P (z) = (z − e(α 0 )) r Q(z). 
Write $U(\alpha) = Q(e(\alpha))$ for the trigonometric polynomial associated with $Q$. Then $T(\alpha) = (e(\alpha) - e(\alpha_0))^r U(\alpha)$. For $u > 0$ and all real $\alpha$ we then have
\[ |T(\alpha)|^u = |e(\alpha) - e(\alpha_0)|^{ru} |U(\alpha)|^u = |2\sin \pi(\alpha - \alpha_0)|^{ru} |U(\alpha)|^u. \]
There is an open neighbourhood of $\alpha_0$ on which $U(\alpha)$ does not vanish. By our conclusion (c) it is apparent that $|U(\alpha)|^u$ is smooth on this neighbourhood. If $u > k$, then the conclusion (b) implies that the function $|2\sin \pi(\alpha - \alpha_0)|^{ru}$ is $k$ times continuously differentiable. The conclusion of the lemma therefore follows by application of the product rule. We mention in passing that if more is known about the zeros of $P$, then the argument that we have presented shows more. For example, if all the zeros in $Z$ are double zeros and $u > k$, then $|T(\alpha)|^u$ is $2k$ times differentiable.

Lemma 4.2. Suppose that $u \ge 2$, let $W : \mathbb{R} \to \mathbb{C}$ be $1$-periodic and twice continuously differentiable, and define
\[ b_l = \int_0^1 W(\alpha)|T(\alpha)|^u e(-\alpha l)\,d\alpha. \tag{4.8} \]
Then, for all $l \in \mathbb{Z} \setminus \{0\}$, one has
\[ |b_l| \le \frac{1}{(2\pi l)^2} \int_0^1 \Bigl| \frac{d^2}{d\alpha^2}\bigl(W(\alpha)|T(\alpha)|^u\bigr) \Bigr|\,d\alpha. \tag{4.9} \]
Moreover, for all $\alpha \in \mathbb{R}$ one has the Fourier series expansion
\[ W(\alpha)|T(\alpha)|^u = \sum_{l \in \mathbb{Z}} b_l e(\alpha l), \tag{4.10} \]
in which the right hand side converges absolutely and uniformly on $\mathbb{R}$.

Proof. By (4.5) and Lemma 4.1, the condition $u \ge 2$ ensures that $W(\alpha)|T(\alpha)|^u$ is twice continuously differentiable. Hence, the integral on the right hand side of (4.9) exists, and the upper bound (4.9) follows from (4.8) by integrating by parts two times. Furthermore, the upper bound (4.9) ensures that the series in (4.10) converges absolutely and uniformly on $\mathbb{R}$. Thus, by [18, Chapter II, Theorem 8.14], this Fourier series sums to $W(\alpha)|T(\alpha)|^u$.

In this paper Lemmata 4.1 and 4.2 will only be used with the quartic Weyl sum $f$, as defined in (1.10), in the role of $T$. The weight $W$ will be either constantly $1$ or a smooth minor arc. Let $u > 0$ and define the Fourier coefficient
\[ \psi_u(n) = \int_0^1 |f(\alpha)|^u e(-\alpha n)\,d\alpha. \tag{4.11} \]
Also, with a parameter $Y$ at our disposal within the range $1 \le Y \le \frac{1}{4}P^2$, we consider the smooth minor arcs $n(\alpha) = n_{P^4,Y}(\alpha)$ and introduce the related Fourier coefficient
\[ \phi_u(n) = \int_0^1 n(\alpha)|f(\alpha)|^u e(-\alpha n)\,d\alpha. \tag{4.12} \]

Lemma 4.3. Let $u \ge 2$. Then, for all $n \in \mathbb{Z} \setminus \{0\}$, one has $\psi_u(n) \ll P^{u+8}n^{-2}$ and $\phi_u(n) \ll P^{u+8}n^{-2}$.

Proof. We first compute the derivatives of $|f(\alpha)|^u$. Suppose temporarily that $u$ is not an even natural number. By (4.7), whenever $f(\alpha) \ne 0$, we have
\[ \frac{d}{d\alpha}|f(\alpha)|^u = \frac{u}{2}|f(\alpha)|^{u-2}\bigl(f'(\alpha)\overline{f(\alpha)} + f(\alpha)\overline{f'(\alpha)}\bigr), \]
and we may differentiate again to confirm the identity
\[ \frac{d^2}{d\alpha^2}|f(\alpha)|^u = \frac{u(u-2)}{4}|f(\alpha)|^{u-4}\bigl(f'(\alpha)\overline{f(\alpha)} + f(\alpha)\overline{f'(\alpha)}\bigr)^2 + \frac{u}{2}|f(\alpha)|^{u-2}\bigl(f''(\alpha)\overline{f(\alpha)} + 2f'(\alpha)\overline{f'(\alpha)} + f(\alpha)\overline{f''(\alpha)}\bigr). \]
These formulae hold for all $\alpha \in \mathbb{R}$ when $u$ is an even natural number, and thus
\[ \Bigl|\frac{d}{d\alpha}|f(\alpha)|^u\Bigr| \le u|f(\alpha)|^{u-1}|f'(\alpha)| \quad\text{and}\quad \Bigl|\frac{d^2}{d\alpha^2}|f(\alpha)|^u\Bigr| \le u(u-1)|f(\alpha)|^{u-2}|f'(\alpha)|^2 + u|f(\alpha)|^{u-1}|f''(\alpha)|. \]
Hence, the trivial estimates $f(\alpha) \ll P$, $f'(\alpha) \ll P^5$ and $f''(\alpha) \ll P^9$ suffice to conclude that the upper bounds
\[ \frac{d}{d\alpha}|f(\alpha)|^u \ll P^{u+4} \quad\text{and}\quad \frac{d^2}{d\alpha^2}|f(\alpha)|^u \ll P^{u+8} \tag{4.13} \]
hold for all $\alpha \in \mathbb{R}$ when either $u = 2$ or $f(\alpha) \ne 0$. However, when $u > 2$ these derivatives will be zero whenever $f(\alpha) = 0$, so the inequalities (4.13) hold uniformly in $\alpha \in \mathbb{R}$. The upper bound $\psi_u(n) \ll P^{u+8}n^{-2}$ is now immediate from Lemma 4.2. Furthermore, an application of the product rule in combination with Lemma 3.2 and (4.13) shows that
\[ \frac{d}{d\alpha}\bigl(n(\alpha)|f(\alpha)|^u\bigr) \ll P^{u+4} \quad\text{and}\quad \frac{d^2}{d\alpha^2}\bigl(n(\alpha)|f(\alpha)|^u\bigr) \ll P^{u+8}. \]
The estimate $\phi_u(n) \ll P^{u+8}n^{-2}$ therefore follows by invoking Lemma 4.2 once again, and this completes the proof of the lemma.

Cubic moments of Fourier coefficients

The principal results in this section are the upper bounds for cubic moments of $\phi_u(n)$ and $\psi_u(n)$ embodied in Theorem 5.1 below. The proof of these estimates involves a development of the ideas underpinning the main line of thought in our earlier paper [6]. For $u > 0$ it is convenient to define $\delta(u) = (25 - 3u)/6$.

Theorem 5.1. Suppose that $6 \le u \le 25/3$. Then
\[ \sum_{n \in \mathbb{Z}} |\psi_u(n)|^3 \ll P^{3u-8+\delta(u)+\varepsilon}. \tag{5.3} \]
Moreover, when $6 \le u \le 11$ and $2P^{4/15} \le Y \le P/16$, one has
\[ \sum_{n \in \mathbb{Z}} |\phi_u(n)|^3 \ll P^{3u-8+\delta(u)+\varepsilon}. \tag{5.4} \]

When $u \ge 6$, the contribution from the major arcs to the sum in (5.3) is easily seen to be of order $P^{3u-8}$. Since $\delta(u)$ is negative for $u > 25/3$, we cannot expect that the upper bound (5.3) holds for such $u$. However, as is evident from (5.4), a minor arcs version remains valid for $u \le 11$.
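The first-derivative formula (4.7), which drives the derivative estimates just used, is easy to spot-check numerically against a central difference. Here a small arbitrary trigonometric polynomial stands in for $f$, with the non-even exponent $u = 3$; all concrete values below are hypothetical choices for illustration.

```python
import cmath
from math import pi

c = [1.0, 2.0, -1.0]                      # arbitrary coefficients c_1, c_2, c_3
e = lambda z: cmath.exp(2j * pi * z)

def T(a):
    return sum(cn * e(a * (n + 1)) for n, cn in enumerate(c))

def Tp(a):  # T'(a)
    return sum(cn * 2j * pi * (n + 1) * e(a * (n + 1)) for n, cn in enumerate(c))

u, a0 = 3.0, 0.17                         # u = 3 is not an even integer
# formula (4.7): d/da |T|^u = (u/2) |T|^{u-2} (T' conj(T) + T conj(T'))
exact = (u / 2) * abs(T(a0))**(u - 2) * (Tp(a0) * T(a0).conjugate()
                                         + T(a0) * Tp(a0).conjugate()).real
h = 1e-6
numeric = (abs(T(a0 + h))**u - abs(T(a0 - h))**u) / (2 * h)
print(abs(exact - numeric) < 1e-3)  # True
```

The point $a_0$ must avoid the zeros of $T$, exactly as in the hypotheses surrounding (4.7).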
Before we embark on the proof of this theorem, we summarize some mean value estimates related to the Weyl sum (1.10). In the following two lemmata, we assume that Since [0, 1] = M ∪ m, the desired conclusion follows at once. In the special case u = 14, the first conclusion of Lemma 5.3 assumes the simple form already announced in (1.5). e(αz) 2 |f (α)| 4 dα ≪ P 3 Z + P 2+ε Z 3/2 . Proof. This is essentially contained in [12, Lemma 6.1], where these estimates are established in the case when Z is contained in [0, P 4 ]. As pointed out in [9, Lemma 2.2] this condition is not required. We now have available sufficient infrastructure to derive upper bounds for cubic moments of φ u (n) and ψ u (n). The proof of Theorem 5.1. Let ϑ u (n) denote one of ψ u (n), φ u (n). On examining the statement of the theorem, it is apparent that we may assume that in the former case we have 6 u 25/3, and in the latter case 6 u 11 and 2P 4/15 Y P/16. We begin with the observation that, by Lemma 4.3, one has ϑ u (n) ≪ P u+8 n −2 . Consequently, when u 6, one has |n|>P 7 |ϑ u (n)| 3 + |n| P 7 |ϑu(n)| 1 |ϑ u (n)| 3 ≪ P 7 + P 3u+24 |n|>P 7 n −6 ≪ P 3u−11 . It remains to consider the contribution of those integers n with |n| P 7 and |ϑ u (n)| > 1. We put Θ(α) = 1 when ϑ u = ψ u , and Θ(α) = n(α) when ϑ u = φ u . Then the definitions (4.11) and (4.12) take the common form In the missing cases where 6 u < 8 one interpolates between (5.8) and the elementary inequality 1 0 |f (α)| 4 dα ≪ P 2+ε ,(5.|ϑ u (n)| ϑ u (0) ψ u (0) ≪ P 2+ 3 4 (u−4)+ε . Fix a number τ with 0 < τ < 10 −10 and define T 0 by T 0 = P 3 4 u−1+τ , when 6 u < 8, P Then, on recalling the upper bounds for ϑ u (n) just derived, a familiar dyadic dissection argument shows that there is a number T ∈ [1, T 0 ] with the property that n∈Z |ϑ u (n)| 3 ≪ P 3u−11 + (log P ) |n| P 7 T <|ϑu(n)| 2T |ϑ u (n)| 3 ≪ P 3u−11 + P ε T 3 Z,(5.11) where Z denotes the number of elements in the set Z = {n ∈ Z : |n| P 7 and T < |ϑ u (n)| 2T }. 
For each n ∈ Z there is a complex number η n , with |η n | = 1, for which η n ϑ u (n) is a positive real number. Write |K(α) 2 f (α) 4 | dα ≪ P 3 Z + P 2+ε Z 3/2 . (5.15) Next we confirm the bound I ≪ P 5 3 u− 35 9 +ε . Indeed, in the case where Θ = 1 we have 6 u 25/3. In such circumstances 8 < 2u − 8/3 14, and so (5.6) applies and yields the claimed bound. In the case Θ = n we have u 11, and hence 2u − 8/3 < 20. Write m = m P 4 ,Y /2 . Then by Lemma 3.2, we have 0 n(α) 1 m . We therefore deduce that in this second case we have I 1 0 n(α)|f (α)| 2u− 8 3 dα m |f (α)| 2u− 8 3 dα, and (5.7) confirms our claimed bound for I. Collecting these estimates together within (5.14), we now have T Z ≪ P ε P 3 Z + P 2 Z 3/2 1/3 Z 1/6 P 5 3 u− 35 9 1/2 . On recalling (5.2), we find that this relation disentangles to yield the bound T 3 Z ≪ P 2+ 3 2 ( 5 3 u− 35 9 )+ε + T P 2+ 5 3 u− 35 9 +ε = P 3u−8+δ(u)+ε + T P It transpires that in the range T P 5 6 u− 35 18 the first term on the right hand side dominates, so that we finally reach the desired conclusion T 3 Z ≪ P 3u−8+δ(u)+ε . In view of (5.11), this is enough to complete the proof of Theorem 5.1 in the case that T is small. Our second approach is suitable for T of medium size, with We apply Schwarz's inequality to (5.13), obtaining the bound T Z 1 0 |K(α) 2 f (α) 4 | dα 1/2 1 0 Θ(α) 2 |f (α)| 2u−4 dα 1/2 . Note that when 6 u 11, one has 8 2u − 4 18, and when instead u 25/3, we have 2u − 4 < 14. Hence, as in the proof of our earlier estimate for I, it follows from Lemma 5.3 that 1 0 Θ(α) 2 |f (α)| 2u−4 dα ≪ P 5 3 u−5+ε . Applying this estimate in combination with (5.15), we conclude that T Z ≪ P ε (P 3 Z + P 2 Z 3/2 ) 1/2 (P 5 3 u−5 ) 1/2 . This bound disentangles to deliver the relation T 3 Z ≪ T P 5 3 u−2+ε + T −1 P 10 3 u−6+ε . On recalling (5.2), we find that our present assumptions (5.16) concerning the size of T deliver the estimate T 3 Z ≪ P 5 2 u− 23 6 +ε + P 5 2 u− 73 18 +ε ≪ P 3u−8+δ(u)+ε . 
The conclusion of Theorem 5.1 again follows in this case, by virtue of (5.11). It remains to analyse the large values of $T$. By hypothesis, we have $u \geqslant 6$. Also, from Lemma 3.1, we have $0 \leqslant \mathrm{N}(\alpha) \leqslant 1$, with $\mathrm{N}$ supported on $\mathfrak{M}_{P^4, P/8}$, so that (5.5) yields the bound
$$\int_0^1 \mathrm{N}(\alpha) K(\alpha) |f(\alpha)|^u \,\mathrm{d}\alpha \leqslant Z \int_{\mathfrak{M}_{P^4, P/8}} |f(\alpha)|^u \,\mathrm{d}\alpha \ll Z P^{u-4}.$$
Since $u - 4 < \tfrac56 u - \tfrac{11}{6}$, for large enough $P$ one has $Z P^{u-4} < \tfrac12 T Z$. Thus
$$T Z \ll \int_0^1 \mathfrak{n}(\alpha) K(\alpha) |f(\alpha)|^u \,\mathrm{d}\alpha.$$
A short computation confirms that
$$\tfrac56 u - \tfrac53 + \tfrac53 u - \tfrac73 = \tfrac52 u - 4 < \tfrac52 u - \tfrac{23}{6}.$$
Then in either case one finds from (5.18) via (5.2) that $T^3 Z \ll P^{3u-8+\delta(u)+2\tau}$, and the conclusion of Theorem 5.1 follows in this final case, again by (5.11), on taking $\tau$ sufficiently small.

We close this section with a related but simpler result.

Theorem 5.5. One has $\sum_{n \in \mathbb{Z}} \psi_4(n)^3 \ll P^{13/2+\varepsilon}$.

Proof. By (4.11) and orthogonality, the Fourier coefficient $\psi_4(n)$ has a Diophantine interpretation that shows on the one hand that $\psi_4(n) \in \mathbb{N}_0$, and on the other that $\psi_4(n) = 0$ for all $n \in \mathbb{Z}$ with $|n| > 2P^4$. By (4.11) and (5.10), we also have the bound $\psi_4(n) \leqslant \psi_4(0) \ll P^{2+\varepsilon}$. The argument leading to (5.11) now shows that there is a number $T$ with $1 \leqslant T \leqslant P^{2+\varepsilon}$ having the property that
$$\sum_{n \in \mathbb{Z}} \psi_4(n)^3 \ll P^{6+\varepsilon} + P^{\varepsilon} \sum_{\substack{|n| \leqslant 2P^4 \\ T < \psi_4(n) \leqslant 2T}} \psi_4(n)^3 \ll P^{6+\varepsilon} + P^{\varepsilon} T^3 Z, \tag{5.19}$$
where $Z$ denotes the number of elements in the set
$$\mathcal{Z} = \{ n \in \mathbb{Z} : |n| \leqslant 2P^4 \ \text{and}\ T < |\psi_4(n)| \leqslant 2T \}.$$
As in the corresponding analysis within the proof of Theorem 5.1, we next find that there are unimodular complex numbers $\eta_n$ ($n \in \mathcal{Z}$) having the property that, with $K(\alpha)$ defined via (5.12), one has
$$T Z < \int_0^1 K(\alpha) |f(\alpha)|^4 \,\mathrm{d}\alpha.$$
We first handle small values of $T$. Here, an application of Schwarz's inequality leads via (5.8) to the bound
$$T Z \leqslant \Bigl(\int_0^1 |f(\alpha)|^8 \,\mathrm{d}\alpha\Bigr)^{1/2} \Bigl(\int_0^1 |K(\alpha)|^2 \,\mathrm{d}\alpha\Bigr)^{1/2} \ll P^{5/2+\varepsilon} Z^{1/2}.$$
This disentangles to yield $T^3 Z \ll T P^{5+\varepsilon}$, proving the theorem for $T \leqslant P^{3/2}$. Next, when $T$ is large, we apply Hölder's inequality in a manner similar to that employed in the large values analysis of the proof of Theorem 5.1.
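The "familiar dyadic dissection argument" invoked in these proofs can be sketched concretely: for non-negative values $a_n \leqslant A$, the dyadic levels $T = 2^k$ partition $\{n : a_n > 1\}$, so the full cube sum is controlled by $O(\log A)$ times the largest single-level contribution, which in turn is at most $8T^3 Z_T$ with $Z_T$ the number of elements in the level. A minimal numerical sketch with hypothetical random data:

```python
import math
import random

random.seed(1)
A = 1000.0
a = [random.uniform(0, A) for _ in range(500)]   # hypothetical stand-ins for |psi_u(n)|

total = sum(x**3 for x in a if x > 1)

# Each a_n with a_n > 1 lies in exactly one dyadic level (T, 2T] with T = 2^k,
# so the levels partition the sum, and some single level carries at least a
# 1/(number of levels) share of it.
levels = int(math.log2(A)) + 1
level_sums = []
for k in range(levels):
    T = 2.0**k
    level_sums.append(sum(x**3 for x in a if T < x <= 2 * T))
    # within this level every a_n^3 <= 8*T^3, so the level sum is <= 8*T^3*Z_T

assert abs(sum(level_sums) - total) < 1e-6 * total   # the levels partition {a_n > 1}
assert total <= levels * max(level_sums) + 1e-6 * total   # one level dominates up to log A
```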
Thus
$$T Z \leqslant \Bigl(\int_0^1 |K(\alpha)^2 f(\alpha)^2| \,\mathrm{d}\alpha\Bigr)^{1/2} \Bigl(\int_0^1 |f(\alpha)|^4 \,\mathrm{d}\alpha\Bigr)^{1/4} \Bigl(\int_0^1 |f(\alpha)|^8 \,\mathrm{d}\alpha\Bigr)^{1/4},$$
and hence $T Z \ll P^{\varepsilon} (P Z + P^{1/2} Z^{3/2})^{1/2} P^{7/4}$. We now obtain the bound $T^3 Z \ll T P^{9/2+\varepsilon} + T^{-1} P^{8+\varepsilon}$, and in view of (5.19), this proves Theorem 5.5 in the complementary case $P^{3/2} \leqslant T \leqslant P^{2+\varepsilon}$.

6. Mean values of quartic Weyl sums

In this section we estimate certain entangled moments of quartic Weyl sums, and then apply them to obtain minor arc estimates for use within the proofs of Theorems 1.1 and 1.2. Throughout this section and the next, let the pairs of integers $c_i, d_i$ ($1 \leqslant i \leqslant 5$) satisfy the condition that the points $(c_i : d_i) \in \mathbb{P}^1(\mathbb{Q})$ are distinct. Define the linear forms $M_i = M_i(\alpha, \beta)$ ($1 \leqslant i \leqslant 5$) by $M_i(\alpha, \beta) = c_i \alpha + d_i \beta$.

Theorem 6.1. One has $I_4 \ll P^{13/2+\varepsilon}$ and $I_u \ll P^{3u-8+\delta(u)+\varepsilon}$ ($6 \leqslant u \leqslant 25/3$). Also, when $6 \leqslant u \leqslant 11$, one has $J_u \ll P^{3u-8+\delta(u)+\varepsilon}$.

Proof. It follows from Lemmata 3.2 and 4.2 that the function $\mathfrak{n}(\gamma)|f(\gamma)|^u$ has a uniformly convergent Fourier series with coefficients $\varphi_u(n)$. By orthogonality, we conclude that
$$J_u = \sum_{(n_1, n_2, n_3) \in \mathcal{N}} \varphi_u(n_1)\varphi_u(n_2)\varphi_u(n_3),$$
where $\mathcal{N}$ is the set of solutions in integers $n_1, n_2, n_3$ of the linear system
$$c_1 n_1 + c_2 n_2 + c_3 n_3 = d_1 n_1 + d_2 n_2 + d_3 n_3 = 0.$$
Since the projective points $(c_i : d_i)$ are distinct, there exist non-zero integers $l_i$, depending only on the $c_i, d_i$, having the property that the solutions of this system are precisely the triples $(n_1, n_2, n_3) = m(l_1, l_2, l_3)$ ($m \in \mathbb{Z}$). It therefore follows from (2.2) that
$$J_u \leqslant \frac13 \sum_{m \in \mathbb{Z}} \bigl(|\varphi_u(l_1 m)|^3 + |\varphi_u(l_2 m)|^3 + |\varphi_u(l_3 m)|^3\bigr) \leqslant \sum_{n \in \mathbb{Z}} |\varphi_u(n)|^3.$$
The desired bound for $J_u$ now follows from Theorem 5.1. The bounds for $I_4$ and $I_u$ follow in the same way, but the argument has to be built on the cubic moment estimates for $\psi_u(n)$ that are provided by Theorems 5.1 and 5.5. We now turn to related, less balanced mixed moments.
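The structural claim here — that the integer solutions of $c_1 n_1 + c_2 n_2 + c_3 n_3 = d_1 n_1 + d_2 n_2 + d_3 n_3 = 0$ are exactly the multiples of a single triple $(l_1, l_2, l_3)$ with every $l_i$ non-zero — can be checked directly: the cross product of the coefficient vectors generates the kernel, and distinctness of the projective points $(c_i : d_i)$ is precisely what makes each component non-zero. A sketch with hypothetical coefficients:

```python
from math import gcd

# Hypothetical coefficients with the points (c_i : d_i) pairwise distinct in P^1(Q).
c = (1, 2, 3)
d = (1, -1, 4)

# The integer kernel of the 2x3 system is the rank-1 lattice generated by the
# cross product l = c x d; l_i != 0 exactly because (c_j : d_j) != (c_k : d_k)
# for the two indices j, k other than i.
l = (c[1]*d[2] - c[2]*d[1],
     c[2]*d[0] - c[0]*d[2],
     c[0]*d[1] - c[1]*d[0])
g = gcd(gcd(abs(l[0]), abs(l[1])), abs(l[2]))
l = tuple(x // g for x in l)             # primitive generator

assert all(x != 0 for x in l)

# Brute-force check in a box: every solution is an integer multiple of l.
B = 20
sols = [(n1, n2, n3)
        for n1 in range(-B, B + 1) for n2 in range(-B, B + 1) for n3 in range(-B, B + 1)
        if c[0]*n1 + c[1]*n2 + c[2]*n3 == 0 and d[0]*n1 + d[1]*n2 + d[2]*n3 == 0]
for n in sols:
    assert any(n == tuple(m * x for x in l) for m in range(-B, B + 1))
```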
With u and Y as before, we define K u = 1 0 1 0 |f (M 1 )f (M 2 )| u |f (M 3 )| 6 dα dβ, L u = 1 0 1 0 n(M 1 )n(M 2 )|f (M 1 )f (M 2 )| u |f (M 3 )| 6 dα dβ, and put η(u) = 19 6 − u 3 . Theorem 6.2. Subject to the hypotheses of this section, one has K u ≪ P 2u−2+η(u)+ε (6 u 19/2), L u ≪ P 2u−2+η(u)+ε (6 u 11). Proof. We proceed as in the initial phase of the proof of Theorem 6.1. Using the same notation, we obtain L u = (n 1 ,n 2 ,n 3 )∈N φ u (n 1 )φ u (n 2 )ψ 6 (n 3 ). Note here that ψ 6 (m) counts solutions of a Diophantine equation, and consequently is a non-negative integer. Hence L u 1 2 (n 1 ,n 2 ,n 3 )∈N ψ 6 (n 3 ) |φ u (n 2 )| 2 + |φ u (n 1 )| 2 . By symmetry, we may therefore suppose that for appropriate non-zero integers l 2 and l 3 , depending at most on c and d, one has The estimate for L u recorded in Theorem 6.2 therefore follows on recalling the definition of η(u). The initial steps in the estimation of K u are the same, and one reaches a bound for K u identical to (6.2) except that φ u now becomes ψ u . We split into major and minor arcs by inserting the relation 1 = N(α) + n(α), with parameters X = P 4 and Y = P 1/3 , into (4.11). From (5.5) we obtain 1 0 N(α)|f (α)| u e(−αn) dα M P 4 ,P |f (α)| u dα ≪ P u−4 . Hence, we discern from (4.11) and (4.12) that |ψ u (n)| 2 ≪ |φ u (n)| 2 + P 2u−8 , and so, K u ≪ m∈Z ψ 6 (l 3 m)|φ u (l 2 m)| 2 + P 2u−8 m∈Z ψ 6 (l 3 m). Here the first sum over m is the same as that occurring in the estimation of L u in (6.2), and has already been estimated above. Thus, since n∈Z ψ 6 (n) = |f (0)| 6 ≪ P 6 , we conclude that K u ≪ P 2u−2+η(u)+ε + P 2u−8 n∈Z ψ 6 (n) ≪ P 2u−2+η(u)+ε + P 2u−2 . Provided that u 19/2, which guarantees η(u) to be non-negative, this estimate confirms the upper bound for K u claimed in the theorem. Note that the mean values I u and J u involve s = 3u Weyl sums, at least for integral values of u. By comparison, the number of Weyl sums in K u and L u is s = 2u + 6. 
A short calculation shows that when applied with the same value of s, with s 18, the exponents of P in Theorems 6.1 and 6.2 coincide. Since almost all of Theorem 6.1 may be recovered from Theorem 6.2 via Hölder's inequality, and since for fixed values of s the exponent u in Theorem 6.2 is at least as large, Theorem 6.2 is morally the stronger result. In our later application of the circle method, this allows for larger values of r j in the profiles associated to the simultaneous equations (1.1), and this is essential for our method to succeed. Another advantage is that in L u only two of the forms M i are on minor arcs, while in the mean value J u all three are constrained to minor arcs. We continue with another result in which the profile is even farther out of balance. We consider the integral M = Then, just as in the argument of the proof of Theorem 6.2 leading to (6.2), we find that for appropriate non-zero integers l 2 and l 3 , depending at most on c and d, one has M m∈Z ψ 4 (l 3 m)|φ 11 (l 2 m)| 2 . Thus, an application of Hölder's inequality in combination with Theorems 5.1 and 5.5, together with (5.2), yields the bound M n∈Z ψ 4 (n) 3 1/3 n∈Z |φ 11 (n)| 3 2/3 ≪ P ε P 13/2 1/3 P 71/3 2/3 . The desired conclusion follows a rapid computation. Finally, we transform the estimates for L u and M into proper minor arc estimates. In the interest of brevity we write M = M P 4 ,P 1/3 and put p = [0, 1] 2 \ (M × M). (6.3) Theorem 6.4. Suppose that 19/2 < u 11. Then p |f (M 1 )f (M 2 )| u |f (M 3 )| 6 dα dβ ≪ P 2u−2+η(u)+ε . (6.4) Further, one has We note at once that whenever (α, β) ∈ p, one has N(M 1 )N(M 2 ) = 0. The explanation for this observation is that whenever N(M 1 )N(M 2 ) > 0, then it follows from Lemma 3.1 that M j ∈ N P 4 ,2P 2/7 (j = 1, 2). By taking suitable linear combinations of M 1 and M 2 we find that α and β lie in N P 4 ,AP 2/7 , with some A 2 depending only on the coefficients of M 1 and M 2 . 
But (α, β) ∈ [0, 1] 2 , and so (α, β) ∈ M × M for large enough P . This is not the case when (α, β) ∈ p, as claimed. p |f (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ ≪ P 18−1/18+ε . With this observation in hand, we apply (6.6) within the integral on the left hand side of (6.5) to conclude that p |f (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ M + M Nn + M nN ,(6.7) where M Nn = note that D > 0. Consider the linear transformation from R 2 to R 2 , with (α, β) → (α ′ , β ′ ), defined by means of the relation α ′ β ′ = D −1 c 1 d 1 c 2 d 2 α β . (6.9) Then M 1 = Dα ′ , M 2 = Dβ ′ , and α and β are linear forms in α ′ and β ′ with integer coefficients. By applying the transformation formula as a change of variables, one finds that M Nn = B N(Dα ′ )n(Dβ ′ )|f (Dα ′ ) 11 f (Dβ ′ ) 11 f (Aα ′ + Bβ ′ ) 4 | dα ′ dβ ′ , wherein A, B are non-zero integers and B is the image of [0, 1] 2 under the transformation (6.9). The parallelogram B is covered by finitely many sets [0, 1] 2 + t, with t ∈ Z 2 . Since the integrand in the last expression for M Nn is Z 2 -periodic it follows that M Nn ≪ 1 0 1 0 N(Dα)n(Dβ)|f (Dα) 11 f (Dβ) 11 f (Aα + Bβ) 4 | dα dβ. Here we have removed decorations from the variables of integration for notational simplicity. We now inspect all factors of the integrand in the latter upper bound that depend on β. By Hölder's inequality, Lemma 5.3 and obvious changes of variable, one obtains the estimate ≪ P ε P 67/6 5/7 (P 10 ) 2/7 = P 65/6+ε , uniformly in α ∈ R. Consequently, applying (5.5) in combination with yet another change of variable, we finally arrive at the bound M Nn ≪ P 65/6+ε 1 0 N(Dα)|f (Dα)| 11 dα ≪ P 18−1/6+ε . We may infer thus far that M Nn + M nN ≪ P 18−1/6+ε . On substituting this estimate into (6.7), noting also the bound M ≪ P 18−1/18+ε supplied by Theorem 6.3, the conclusion (6.5) is confirmed. The proof of (6.4) is essentially the same, and we economise by making similar notational conventions. 
The exponents 11 and 4 that occur in (6.5) must now be replaced by u and 6, respectively. The initial phase of the preceding argument then remains valid, and an appeal to Theorem 6.2 delivers the bound Here, we isolate factors of the integrand that depend on β and apply Hölder's inequality. Note that since u 11 we have 7u/4 < 20. Thus, by Lemma 5.3, Applying this bound, which is uniform in α ∈ R, together with (5.5), we arrive at the estimate L Nn ≪ P When u 11, the definition of η(u) ensures that 11 6 u − 2 3 2u − 2 + η(u), and hence L Nn + L nN ≪ P 2u−2+η(u)+ε . The conclusion (6.4) now follows by substituting this estimate into (6.10). p |f (M 1 )f (M 2 )| u |f (M 3 )| 6 dα dβ ≪ L Nn + L nN + P 2u−2+η(u)+ε ,(6.1 0 n(Dβ)|f (Dβ) u f (Aα + Bβ) 6 | dβ Another mean value estimate This section is an update for quartic Weyl sums of our earlier work [3] on highly entangled mean values. We now attempt to avoid independence conditions on linear forms as far as the argument allows while incorporating the consequences of the recent bound (1.5). We emphasise that throughout this section, we continue to work subject to the overall assumptions made at the outset of the previous section. We begin by examining the mean value G 1 = 1 0 1 0 |f (M 1 ) 2 f (M 2 ) 4 f (M 3 ) 4 | dα dβ. (7.1) Lemma 7.1. One has G 1 ≪ P 5+ε . Proof. This is essentially contained in [4, Section 2], but we give a proof for completeness. Recall the definition (6.1) of the linear forms M i . By orthogonality, the integral G 1 is equal to the number of solutions of an associated pair of quartic equations. By taking suitable integral linear combinations of these two equations, we may assume that they take the shape a(x 4 1 − x 4 2 ) = b(x 4 3 + x 4 4 − x 4 5 − x 4 6 ) = c(x 4 7 + x 4 8 − x 4 9 − x 4 10 ),(7.2) for suitable natural numbers a, b, c. Thus, we see that G 1 is equal to the number of solutions of the Diophantine system (7.2) with x i P . 
For each of the $O(P)$ possible choices for $x_1$ and $x_2$ with $x_1 = x_2$, it follows via orthogonality and (5.10) that the number of solutions of this system in the remaining variables $x_3, \ldots, x_{10}$ is equal to
$$\Bigl(\int_0^1 |f(\alpha)|^4 \,\mathrm{d}\alpha\Bigr)^2 \ll P^{4+\varepsilon}.$$
Consequently, the contribution to $G_1$ from this first class of solutions is $O(P^{5+\varepsilon})$. Now consider solutions of (7.2) in which $x_1 \neq x_2$. By orthogonality, the total number of choices for $x_3, \ldots, x_{10}$ satisfying the rightmost equation in (7.2) is
$$\int_0^1 |f(b\alpha) f(c\alpha)|^4 \,\mathrm{d}\alpha.$$
Schwarz's inequality in combination with (5.8) shows this integral to be $O(P^{5+\varepsilon})$. However, for any fixed choice of $x_3, \ldots, x_{10}$ in this second class of solutions, one has $x_1 \neq x_2$, and hence the fixed integer $N = b(x_3^4 + x_4^4 - x_5^4 - x_6^4)$ is non-zero. But it follows from (7.2) that $x_1^2 - x_2^2$ and $x_1^2 + x_2^2$ are each divisors of $N$. Thus, a standard divisor function estimate shows that the number of choices for $x_1$ and $x_2$ is $O(P^{\varepsilon})$, and we conclude that the contribution to $G_1$ from this second class of solutions is $O(P^{5+\varepsilon})$. Adding these two contributions, we obtain the bound claimed in the statement of the lemma.

We next examine the mean value
$$G_2 = \int_0^1 \int_0^1 |f(M_1)^2 f(M_2)^4 f(M_3)^4 f(M_4)^4 f(M_5)^4| \,\mathrm{d}\alpha\, \mathrm{d}\beta. \tag{7.3}$$

Theorem 7.2. One has $G_2 \ll P^{11+\varepsilon}$.

Note that in this result we require the five linear forms $M_j$ to be pairwise independent. Therefore, the result will be of use only in cases where the profile of (1.1) has $r_5 \geqslant 1$. The mean value in Theorem 7.2 involves 18 Weyl sums and should therefore be compared with the bound $I_6 \ll P^{67/6+\varepsilon}$ provided by Theorem 6.1. The extra savings that we obtain here are the essential stepping stone toward Theorem 1.2.

The proof of Theorem 7.2. As in the proof of Lemma 7.1, it follows from orthogonality that the integral $G_2$ is equal to the number of solutions of an associated pair of quartic equations.
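The divisor-function step in this proof — recovering $(x_1, x_2)$ from a factorisation $(x_1^2 - x_2^2)(x_1^2 + x_2^2) = N$ — can be made concrete. The sketch below (hypothetical small ranges, and taking the coefficient $a = 1$ for simplicity) recovers every solution of $x_1^4 - x_2^4 = N$ from the divisor pairs of $N$, so the number of solutions is bounded by the divisor function:

```python
# Solutions of x1^4 - x2^4 = N with x1, x2 >= 1 satisfy
# (x1^2 - x2^2)(x1^2 + x2^2) = N, and the smaller factor is below sqrt(N),
# so each solution arises from a divisor pair N = d * e with d <= sqrt(N).
def divisor_pairs(N):
    return [(d, N // d) for d in range(1, int(N**0.5) + 1) if N % d == 0]

def solutions_via_divisors(N):
    sols = set()
    for d, e in divisor_pairs(N):        # d = x1^2 - x2^2, e = x1^2 + x2^2
        s, t = d + e, e - d              # 2*x1^2 and 2*x2^2
        if s % 2 == 0 and t % 2 == 0:
            a, b = s // 2, t // 2
            x1, x2 = round(a**0.5), round(b**0.5)
            if x1 * x1 == a and x2 * x2 == b and x2 >= 1:
                sols.add((x1, x2))
    return sols

# Cross-check against brute force over a hypothetical small range.
for N in range(1, 2000):
    brute = {(x1, x2) for x1 in range(1, 12) for x2 in range(1, 12)
             if x1**4 - x2**4 == N}
    assert brute <= solutions_via_divisors(N)
```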
Taking suitable integral linear combinations of these two equations, we reduce to the situation where c 4 = d 5 = 0, and consequently M 4 = d 4 β and M 5 = c 5 α. Motivated by this observation, we begin our deliberations by estimating the auxiliary mean value G 3 = 1 0 1 0 |f (M 1 ) 2 f (M 2 ) 4 f (M 3 ) 4 f (d 4 β) 4 | dα dβ. The Weyl differencing argument [14,Lemma 2.3] shows that there are real numbers u h with u h ≪ P ε for which |f (γ)| 4 ≪ P 3 + P 1 |h| 2P 4 u h e(γh). (7.4) We apply this relation with γ = M 4 to the mean value G 3 and infer that G 3 ≪ P 3 G 1 + P G 4 , (7.5) where G 1 is the mean value defined in (7.1), and G 4 = 1 |h| 2P 4 u h 1 0 1 0 |f (M 1 ) 2 f (M 2 ) 4 f (M 3 ) 4 |e(d 4 hβ) dα dβ. By orthogonality, the double integral on the right hand side here is equal to the number of solutions of the system of Diophantine equations c 1 (x 4 1 − y 4 1 ) + c 2 (x 4 2 + x 4 3 − y 4 2 − y 4 3 ) + c 3 (x 4 4 + x 4 5 − y 4 4 − y 4 5 ) = 0 (7.6) d 1 (x 4 1 − y 4 1 ) + d 2 (x 4 2 + x 4 3 − y 4 2 − y 4 3 ) + d 3 (x 4 4 + x 4 5 − y 4 4 − y 4 5 ) +d 4 h = 0 with x i P and y i P . We may sum over h = 0 and replace u h by its upper bound. Then we find that G 4 ≪ P ε G 5 , where G 5 is the number of solutions of the equation (7.6) with the same conditions on x i and y i . By orthogonality again, we deduce that We therefore deduce that G 4 ≪ P 7+2ε . Meanwhile, the estimate G 1 ≪ P 5+ε is available from Lemma 7.1. On substituting these bounds into (7.5), we conclude thus far that G 3 ≪ P 8+ε . G 5 = 1 0 |f (c 1 α) 2 f (c 2 α) 4 f (c 3 α) 4 | dα. We now repeat this argument with γ = M 5 in (7.4), applying the resulting inequality within the integral G 2 defined in (7.3). Thus we obtain G 2 ≪ P 3 G 3 + P 1+ε G 6 ,(7.7) where G 6 denotes the number of solutions of the Diophantine equation d 1 (x 4 1 −y 4 1 )+d 2 (x 4 2 +x 4 3 −y 4 2 −y 4 3 )+d 3 (x 4 4 +x 4 5 −y 4 4 −y 4 5 )+d 4 (x 4 6 +x 4 7 −y 4 6 −y 4 7 ) = 0, with x i P and y i P . 
By orthogonality,
$$G_6 = \int_0^1 |f(d_1\alpha)^2 f(d_2\alpha)^4 f(d_3\alpha)^4 f(d_4\alpha)^4| \,\mathrm{d}\alpha.$$
One may confirm that $d_1 d_2 d_3 d_4 \neq 0$ by arguing as above, and so an application of (2.2) in combination with (1.5) reveals that
$$G_6 \leqslant \sum_{i=1}^4 \int_0^1 |f(d_i \alpha)|^{14} \,\mathrm{d}\alpha = 4 \int_0^1 |f(\gamma)|^{14} \,\mathrm{d}\gamma \ll P^{10+\varepsilon}.$$
The conclusion of the theorem now follows on substituting this bound together with our earlier estimate for $G_3$ into (7.7).

8. The circle method

In this section we prepare the ground to advance to the proofs of Theorems 1.1 and 1.2. A preliminary manoeuvre is in order. Let $k = 0$ or $1$, and let $N_k(P) = N_k$ denote the number of solutions of the system (1.1) with $k \leqslant x_j \leqslant P$ ($1 \leqslant j \leqslant s$). Note that the equations (1.1) are invariant under the $s$ mappings $x_j \to -x_j$. This observation shows that
$$2^s N_1(P) \leqslant N(P) \leqslant 2^s N_0(P). \tag{8.1}$$
The goal is then to establish the formulae
$$\lim_{P \to \infty} 2^s P^{8-s} N_k(P) = \mathfrak{I}\mathfrak{S} \quad (k = 0, 1), \tag{8.2}$$
since then (1.4) follows immediately from (8.1) and the sandwich principle. Thus, we now launch the Hardy-Littlewood method to evaluate the counting functions $N_k(P)$. This involves the exponential sum
$$f_k(\alpha) = \sum_{k \leqslant x \leqslant P} e(\alpha x^4). \tag{8.3}$$
This sum is, of course, an instance of the sum (1.10), where we have been deliberately imprecise about the lower end of the interval of summation. The results we have formulated so far are indeed independent of the choice of $k$, and it is only now and temporarily where this detail matters. We require the linear forms $\Lambda_j = \Lambda_j(\alpha, \beta)$, defined by
$$\Lambda_j(\alpha, \beta) = a_j \alpha + b_j \beta \quad (1 \leqslant j \leqslant s),$$
that are associated with the equations (1.1). We then put
$$F_k(\alpha, \beta) = f_k(\Lambda_1) f_k(\Lambda_2) \cdots f_k(\Lambda_s), \tag{8.4}$$
and observe that, by orthogonality, one has
$$N_k(P) = \int_0^1 \int_0^1 F_k(\alpha, \beta) \,\mathrm{d}\alpha\, \mathrm{d}\beta. \tag{8.5}$$
Then, given $(\alpha, \beta) \in [0, 1]^2$, if we put $\gamma = \alpha - a/q$ and $\delta = \beta - b/q$ for some $a, b \in \mathbb{Z}$ and $q \in \mathbb{N}$, one concludes from (8.3) and [14, Theorem 4.1] that
$$f_k(\Lambda_j) = q^{-1} S\bigl(q, \Lambda_j(a, b)\bigr)\, v\bigl(\Lambda_j(\gamma, \delta)\bigr) + O\Bigl(q^{1/2+\varepsilon} \bigl(1 + P^4 |\Lambda_j(\gamma, \delta)|\bigr)^{1/2}\Bigr). \tag{8.6}$$
Note that the right hand side here is independent of $k$.
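The sandwich (8.1) rests only on the sign symmetry $x_j \to -x_j$. A toy check with a single diagonal quartic equation (a hypothetical stand-in for the pair (1.1), chosen with $s = 3$ variables so that brute force is feasible):

```python
# Toy instance of (8.1): for x1^4 + x2^4 = 2*x3^4, invariant under x_j -> -x_j,
# the counts N(P) (|x_j| <= P), N_1(P) (1 <= x_j <= P) and N_0(P) (0 <= x_j <= P)
# satisfy 2^s N_1(P) <= N(P) <= 2^s N_0(P).
P, s = 6, 3

def count(lo):
    return sum(1 for x1 in range(lo, P + 1) for x2 in range(lo, P + 1)
               for x3 in range(lo, P + 1) if x1**4 + x2**4 == 2 * x3**4)

N1, N0 = count(1), count(0)
N = sum(1 for x1 in range(-P, P + 1) for x2 in range(-P, P + 1)
        for x3 in range(-P, P + 1) if x1**4 + x2**4 == 2 * x3**4)

assert 2**s * N1 <= N <= 2**s * N0
```

The gap between the two sides comes from solutions with some $x_j = 0$, which contribute fewer than $2^s$ sign variants; both counts agree to leading order as $P \to \infty$, which is the sandwich principle invoked after (8.2).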
We multiply these approximations for 1 j s. This brings into play the expressions S (q, a, b) = q −s s j=1 S (q, Λ j (a, b)) and V (γ, δ) = s j=1 v (Λ j (γ, δ)) . If (α, β) ∈ V(q, a, b) ⊆ V then the error term in (8.6) is O(P 1/8+ε ), and we infer that F k (α, β) = S (q, a, b)V (γ, δ) + O(P s−7/8+ε ). Since V is a set of measure O(P −59/8 ), when we integrate this formula for F k (α, β) over V, we obtain the asymptotic relation V F k (α, β) dα dβ = S(P 1/8 )J * (P 1/8 ) + O(P s−33/4+ε ), where, for 1 Q P we define S(Q) = q Q q a=1 q b=1 (a,b,q)=1 S (q, a, b), J * (Q) = U(Q) V (γ, δ) dγ dδ, and U(Q) = [−QP −4 , QP −4 ] 2 . At this point, we require some more information concerning the matrix of coefficients, and we shall suppose that q 0 15. Then s 16, and we may apply [9, Lemma 3.3] to conclude that S(Q) = S + O(Q ε−1 ). Further, we have P −P e(γt 4 ) dt = 2v(γ), and thus [9, Lemma 3.1] shows that the limit (1.2) exists, and that we have 2 s J * (Q) = P s−8 J + O(P s−8 Q −1/4 ). We summarise these deliberations in the following lemma. The major arcs in Lemma 8.1 are certainly too slim for efficient use of Weyl type inequalities on the complementary set. A pruning argument allows us to enlarge the major arcs considerably. Let W denote the union of the rectangles W(q, a, b) = {(α, β) ∈ [0, 1] 2 : |qα − a| P −3 and |qβ − b| P −3 }, with 1 q P , 0 a, b q and (a, b, q) = 1. Then V ⊂ W, and we proceed to estimate the contribution from W \ V to the integral (8.5). A careful application of [14,Theorem 4.2] shows that S(q, c) ≪ q 3/4 (q, c) 1/4 . Further, if V (γ) = P (1 + P 4 |γ|) −1/4 , then by [14,Theorem 7.3], one has v(γ) ≪ V (γ). Hence, whenever (α, β) ∈ W(q, a, b) with q P , one deduces from (8.6) that f k (Λ j ) ≪ q −1/4 (q, Λ j (a, b)) 1/4 V (Λ j (α − a/q, β − b/q)) + P 1/2+ε . It is immediate that the first term on the right hand side here always dominates the second, and therefore, F k (α, β) ≪ q −s/4 s j=1 (q, Λ j (a, b)) 1/4 V (Λ j (α − a/q, β − b/q)) . 
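The bound $S(q, c) \ll q^{3/4}(q, c)^{1/4}$ quoted from [14, Theorem 4.2] in the pruning step is easy to probe numerically. The sketch below computes the complete quartic exponential sums for small moduli and records the worst normalised ratio; the threshold 4.0 is an empirical margin chosen here, not the theorem's implicit constant:

```python
import cmath
from math import gcd

def S(q, a):
    # complete quartic exponential sum S(q, a) = sum_{r=1}^{q} e(a r^4 / q)
    return sum(cmath.exp(2j * cmath.pi * a * r**4 / q) for r in range(1, q + 1))

# Record the largest value of |S(q, a)| / (q^(3/4) * gcd(q, a)^(1/4)) over small q;
# the quoted estimate asserts this stays below an absolute constant.
worst = 0.0
for q in range(1, 41):
    for a in range(1, q + 1):
        ratio = abs(S(q, a)) / (q**0.75 * gcd(q, a)**0.25)
        worst = max(worst, ratio)

assert worst < 4.0   # consistent with a modest absolute constant
```

For instance $a = q$ gives $S(q, q) = q$ and ratio exactly 1, while moduli such as $q = 16$ push the ratio close to 2 in this range.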
We integrate over W \ V. The result is a sum over q P in which we consider the portion q P 1/8 separately. This yields the bound W\V F k (α, β) dα dβ ≪ K 1 (P 1/8 ) + K 2 (P 1/8 ),(8.7) where for 1 Q P , we write K 1 (Q) = q Q q a=1 q b=1 (a,b,q)=1 q −s/4 s j=1 (q, Λ j (a, b)) 1/4 B(Q) s j=1 V (Λ j ) dα dβ, with B(Q) = [−1, 1] 2 \ U(Q), and K 2 (Q) = Q<q P q a=1 q b=1 (a,b,q)=1 q −s/4 s j=1 (q, Λ j (a, b)) 1/4 [−1,1] 2 s j=1 V (Λ j ) dα dβ. Still subject to the condition q 0 15, the proof of [9, Lemma 3.2] shows that Thus we deduce that K 1 (P 1/8 ) + K 2 (P 1/8 ) ≪ P s−8−1/32 . Substituting this estimate into (8.7), and then recalling Lemma 8.1, we see that in the latter lemma we may replace V by W. This establishes the following theorem. Let w = [0, 1] 2 \ W denote the minor arcs. Then, in view of (8.2), (8.5) and Theorem 8.2, whenever q 0 15, the asymptotic relation (1.4) is equivalent to the minor arc estimate w F k (α, β) dα dβ = o(P s−8 ), (8.8) as P → ∞, and in the next two sections we shall confirm this subject to the hypotheses imposed in Theorems 1.1 and 1.2. 9. The proof of Theorem 1.1 At the core of the proof of Theorem 1.1 we require two minor arc estimates. Lemma 9.1. Let c 1 , c 2 , d 1 , d 2 ∈ Z, and suppose that M j = c j α + d j β (j = 1, 2) are linearly independent. Then w |f (M 1 )f (M 2 )| 15 dα dβ ≪ P 22−1/6+ε . Proof. It is immediate from (6.3) that w ⊂ p. Recall the initial argument within the proof of Theorem 6.4. This shows that for (α, β) ∈ p, the forms M 1 and M 2 cannot be in N P 4 ,P 2/7 simultaneously. By symmetry we may therefore suppose that M 1 ∈ n P 4 ,P 2/7 . Now apply the transformation formula as in (6.9). One finds that for an appropriate non-zero integer D, depending at most on c and d, one has w |f (M 1 )f (M 2 )| 15 dα dβ ≪ 1 0 m |f (Dα)f (Dβ)| 15 dα dβ, where m = m P 4 ,P 2/7 . 
Thus, applying a trivial estimate for one factor f (Dβ), we deduce via Lemma 5.3 that w |f (M 1 )f (M 2 )| 15 dα dβ ≪ P ε P 65/6 P 11 ≪ P 22−1/6+ε . This completes the proof of the lemma. Proof. On recalling that w ⊂ p, the lemma is immediate from Theorem 6.4. We are now fully equipped to complete the proof of Theorem 1.1. Suppose that we are given a pair of equations (1.1) with s 26, q 0 15 and profile (r 1 , r 2 , . . . , r ν ). The parameter l = s − r 1 − r 2 determines our argument. In the notation of Section 7, we let F = F k with k = 0 or 1 be the generating function defined in (8.4). Small values of l call for special attention. Initially, we consider the situation with 0 l 3. We apply Lemma 2.3 with J 1 and J 2 the subsets of the set of indices {1, 2, . . . , s} counted by r 1 and r 2 , respectively, and with J 3 the subset consisting of the remaining indices. Then card(J 3 ) = l. We also choose In this scenario, therefore, we deduce from Lemmata 9.1 and 9.2 that w F (α, β) dα dβ ≪ P s−30+l+ε P 18−1/18 l/4 P 22−1/6 1−l/4 ≪ P s−8−1/18+ε . where again each of the M j is one of the linear forms Λ i , and any two of the M j are linearly independent. Here s − 15 11 by the hypothesis s 26, and we may estimate excessive copies of f (M 1 ) trivially and apply Lemma 9.2. This confirms that (9.1) also holds for l 4. In particular, we have (8.8) subject to the hypotheses of Theorem 1.1. This completes the proof of Theorem 1.1. The proof of theorem 1.2 We continue to use the notation introduced in § §8 and 9, but now suppose that the hypotheses of Theorem 1.2 are met. Hence s = 25 and r 1 s−q 0 9. We also assume that r 5 1. Our goal on this occasion is the estimate w F (α, β) dα dβ ≪ P 17−1/24+ε . (10.1) Once this is established, Theorem 1.2 follows in the same way as Theorem 1.1 was deduced from (9.1). We apply Lemma 2.3 with J j the subset of the set of indices {1, 2, . . . , s} counted by r j for 1 j ν. 
Also, we put m j = r j for each j and Making use of the bounds supplied by Theorem 7.2 and Theorem 6.4 with u = 32/3, we therefore infer that w F (α, β) dα dβ ≪ P ε P 11 1/4 P 19−1/18 3/4 ≪ P 17−1/24+ε . Thus the bound (10.1) is confirmed, and the proof of Theorem 1.2 is complete. Finally, we briefly comment on the prospects of reducing the number of variables further. Note that the estimates for the minor arcs and for the whole unit square in Theorem 6.1 coincide for u = 25/3. Since δ(25/3) = 0, therefore, when s = 25 our basic method narrowly fails to be applicable to the system of equations (1.1). Further, it transpires that each additional variable contributes a factor P to the major arc contribution, but only P 5/6 to the minor arc versions of Theorems 6.1 and 6.2. As indicated in §1 already, it is worth comparing the 18th moment (u = 6) in Theorem 6.1 with that in Theorem 7.2, the latter being superior by a factor P 1/6 . It transpires that even if it were possible to propagate this saving through the moment method, then we would still fail to handle cases of (1.1) with s = 24, but only by a factor P ε . However, at this stage, the only workable compromise seems to be to apply Theorem 7.2 in conjunction with Theorems 6.1 or 6.4, via Hölder's inequality. If the profile of the equations (1.1) is even more illustrious than in Theorem 1.2, then one can put more weight on the bound stemming from Theorem 7.2. For example, if we suppose that s = 24 and r 1 5, then ν 5 and r 5 4, so that in hopefully self-explanatory notation, the minor arc contribution can be reduced to something of the shape w F (α, β) dα dβ ≪ w |f (M 1 ) 5 f (M 2 ) 5 f (M 3 ) 5 f (M 4 ) 5 f (M 5 ) 4 | dα dβ. One may then introduce the identity (3.5) with α = M j for all 1 j 5 simultaneously. The most difficult term that then arises is that weighted with n(M 1 ) · · · n(M 5 ). 
A cascade of applications of Hölder's inequality together with Theorem 6.1 shows this term to be bounded by (Υ 3 ) 3/5 (J 11 ) 2/5 ≪ P 16+1/15+ε , which is quite far from saving another variable. Theorem 1 . 1 . 11For pairs of equations (1.1) with s 26 and q 0 15, one has N (P ) ∼ ISP s−8 . j | P (1 j 14), and b 15 y 4 1 + b 16 y 4 2 + . . . + b s y 4 s−14 = 0, (1.7) Theorem 1 . 2 . 12Suppose that s = 25 and that (r 1 , . . . , r ν ) is the profile of the pair of equations (1.1). If q 0 16 and ν 5, then N (P ) ∼ ISP s−8 . S ν for the group of permutations on ν elements. We refer to a function w : S ν → [0, 1] with σ∈Sν w(σ) = 1 as a weight on S ν . Lemma 2.1. Suppose that the exponents m j , M j (1 j ν) satisfy (2.3) and (2.4). Lemma 2 . 3 . 23Suppose that the exponents m j , M j (1 j ν) satisfy (2.3) and (2.4). Let s = m 1 + m 2 + . . . + m ν , and for 1 j s, let h j : R n → [0, ∞) denote a Lebesgue measurable function. Finally, suppose that J 1 , J 2 , . . . , J ν are sets with respective cardinalities m 1 , m 2 , . . . , m ν that partition {1, 2, . . . , s}. Lemma 4. 2 . 2Let W : R → R be a twice continuously differentiable function of period 1, and let u 2. For l ∈ Z let . 3 . 3Suppose that u 2 and 1 Y 1 4 P 2 . Then, for all n ∈ Z\ {0}, one has |φ u (n)| + |ψ u (n)| ≪ P u+8 n −2 . . 1 . 1Let u be a real number with 6 u 25/3. Then n∈Z |ψ u (n)| 3 ≪ P 3u−8+δ(u)+ε . (5.3) Further, when 2P 4/15 Y P/16 and 6 u 11, one has n∈Z |φ u (n)| 3 ≪ P 3u−8+δ(u)+ε . (5.4) write M = M P 4 ,Y and m = m P 4 ,Y . It is useful to note that m P 4 ,Y = m P 4 ,P/8 ∪ K, where K = M P 4 ,P/8 \ M P 4 ,Y . Then, from [13, Lemma 5.1], we have the bounds M |f (α)| 6 dα ≪ P 2 and K |f (α)| 6 dα ≪ P 2 Y ε−1/4 . (5.5) Lemma 5.2. Suppose that P 4/15 Y P/8. Then m |f (α)| 20 dα ≪ P 15+ε . (α)| 8 dα ≪ P 5+ε .(5.8) One interpolates linearly between this estimate and the bound established in Lemma 5.2 via Hölder's inequality to confirm the upper bound (5.7) for 8 u 20. 
The upper bound (5.6) then follows on noting that for 6 u 14, it follows from (5.5) that M |f (α)| u dα ≪ P Lemma 5. 4 . 4Let Z be a set of Z integers. Then 3.2, it follows that Θ(α) ∈ [0, 1]. Thus, by Lemma 5.3, one finds that |ϑ u (n)| ϑ u (0) ψ u (0) , when 8 u 11. point our argument depends on the size of T . Our first argument handles the small values T more subtle. Suppose temporarily that ϑ u = ψ u , and hence that u 25/3α)K(α)|f (α)| u dα. > 0, and recall the definition of the exponent δ(u) from (5.1). Then, with 2P 4/15 Y P/16 and n = n P 4 ,Y , we consider the mean values 1 )n(M 2 )n(M 3 )|f (M 1 )f (M 2 )f (M 3 )| u dα dβ. (n 1 1,n 2 ,n 3 )∈N ψ 6 (n 3 )|φ u (n 2 )| 2 = m∈Z ψ 6 (l 3 m)|φ u (l 2 m)| 2 . 1 )n(M 2 )|f (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ. Theorem 6 . 3 . 63Given the hypotheses of this section, one has M ≪ P 18−1/18+ε . Proof. We again traverse the initial phase of the proof of Theorem 6.1 to confirm the relation M = (n 1 ,n 2 ,n 3 )∈N φ 11 (n 1 )φ 11 (n 2 )ψ 4 (n 3 ). Let N = N P 4 ,P 2/7 and n = 1 − N. Then 1 = N(M 1 ) + n(M 1 ) N(M 2 ) + n(M 2 ) . (6.6) N (M 1 )n(M 2 )|f (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ (6.8) and M nN is the integral in (6.8) with M 1 , M 2 interchanged. By symmetry in M 1 and M 2 , it now suffices to estimate M Nn . Recalling the definition (6.1) of the linear forms M i , we put D = |c 1 d 2 − c 2 d 1 | and Dβ)|f (Dβ) 11 f (Aα + Bβ) 4 | dβ )|f (Dα)f (Dβ)| u |f (Aα + Bβ)| 6 dα dβ. linear form M i is linearly independent of M 4 = d 4 β, and thus c 1 c 2 c 3 = 0. The trivial bound |f (c 1 α)| 2 ≪ P 2 therefore combines with Schwarz's inequality and (5.8) to award us the bound γ)| 8 dγ ≪ P 7+ε . F k (α, β) dα dβ.(8.5) Subject to conditions milder than those imposed in Theorems 1.1 and 1.2 we reduce the evaluation of the integral (8.5) to the estimation of its minor arc part. 
With this end in mind we define the major arcs V as the union of the rectanglesV(q, a, b) = {(α, β) ∈ [0, 1] 2 : |α − a/q| P −31/8 and |β − b/q| P −31/8 },with 0 a, b q, (a, b, q) Lemma 8 . 1 . 81Suppose that q 0 15 and that k ∈ {0, 1}. Then V F k (α, β) dα dβ = 2 −s P s−8 SJ + O(P s−8−1/32 ). V (Λ j ) dα dβ ≪ P s−8 Q −1/4 . Theorem 8. 2 . 2Suppose that q 0 15 and that k ∈ {0, 1}. Then W F k (α, β) dα dβ = 2 −s P s−8 SI + O(P s−8−1/32 ). Lemma 9. 2 . 2Suppose that any two of the binary linear forms M 1 , M 2 , M 3 are linearly independent. Then w |f (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ ≪ P 18−1/18+ε . F M ν = . . . = M 4 = 0, M 3 = l, M 2 = 15 − l and M 1 = s − 15. The condition q 0 15 ensures that r 1 s − 15, and r 1 + r 2 = s − l = M 1 + M 2 . Also, we have M 1 = s − 15 15 − l = M 2 because r 1 r 2 15 − l and s= r 1 + r 2 + l 2r 2 + l 30 − l. (α, β) dα dβ ≪ w |f (M 1 ) s−15 f (M 2 ) 15−l f (M 3 ) l | dα dβ,where each of the M j is one of the linear forms Λ i , and any two of the M j are linearly independent. We now reduce the exponent s − 15 to 15 − l and then apply Hölder's inequality. Thusw F (α, β) dα dβ ≪ P s−30+l w |f (M 1 ) 15−l f (M 2 ) 15−l f (M 3 ) l | dα dβ (M 1 ) 11 f (M 2 ) 11 f (M 3 ) 4 | dα dβ, Υ 2 = w |f (M 1 )f (M 2 )| 15 dα dβ. now suppose that l 4. Then r 1 s − 15 and r 1 + r 2 s − 4. In Lemma 2.3 we now take J j to be the subset of the set of indices {1, 2, . . . , s} counted by r j . We also choose M ν = . . . = M 4 = 0, M 3 = 4, M 2 = 11 and M 1 = s − 15, and note that the hypothesis s 26 ensures that M 1 M 2 . The conditions required to apply Lemma 2.3 are consequently in play, and we deduce that w F (α, β) dα dβ ≪ w |f (M 1 )| s−15 |f (M 2 )| 11 |f (M 3 )| 4 dα dβ, M ν = . . . = M 6 = 0, M 5 = M 4 = 1, M 3 = 5 and M 2 = M 1 = 9.On recalling that r 1 9, it is immediate that (2.3) and (2.4) hold. Hence, Lemma 2.3 is applicable, and yields linear forms M 1 , . . . 
, M 5 that are linearly independent in pairs, where each M j is one of the Λ i , and where w F (α, β) dα dβ ≪ w |f (M 1 ) 9 f (M 2 ) 9 f (M 3 ) 5 f (M 4 )f (M 5 )| dα dβ. (M 1 )f (M 2 )f (M 4 )f (M 5 )| 4 |f (M 3 )| 2 dα dβ, Υ 4 = w |f (M 1 )f (M 2 )| 32/3 |f (M 3 )| 6 dα dβ. Proof. For Y = P/8, the desired estimate is the case k = 4, w = 20 of Wooley [16, Lemma 3.1]. For smaller values of Y , we make use of the case Y = P/8 and apply the second bound of (5.5). On combining [14, Theorem 4.1] with [14, Lemma 2.8 and Theorem 4.2], moreover, one readily confirms that the upper bound f (α) ≪ P Y −1/4 holds uniformly for α ∈ K. Consequently, one has the estimate K |f (α)| 20 dα ≪ P 16 Y ε−15/4 , and the conclusion of the lemma follows.
[1] J. Bourgain, C. Demeter and L. Guth, Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three, Ann. of Math. (2) 184 (2016), no. 2, 633-682.
[2] J. Brüdern and R. J. Cook, On simultaneous diagonal equations and inequalities, Acta Arith. 62 (1992), no. 2, 125-149.
[3] J. Brüdern and T. D. Wooley, Hua's lemma and simultaneous diagonal equations, Bull. London Math. Soc. 34 (2002), no. 3, 279-283.
[4] J. Brüdern and T. D. Wooley, The paucity problem for certain pairs of diagonal equations, Q. J. Math. 54 (2003), no. 1, 41-48.
[5] J. Brüdern and T. D. Wooley, Asymptotic formulae for pairs of diagonal equations, Math. Proc. Cambridge Philos. Soc. 137 (2004), no. 1, 227-235.
[6] J. Brüdern and T. D. Wooley, Cubic moments of Fourier coefficients and pairs of diagonal quartic forms, J. Eur. Math. Soc. (JEMS) 17 (2015), no. 11, 2887-2901.
[7] J. Brüdern and T. D. Wooley, The Hasse principle for systems of diagonal cubic forms, Math. Ann. 364 (2016), no. 3-4, 1255-1274.
[8] J. Brüdern and T. D. Wooley, Arithmetic harmonic analysis for smooth quartic Weyl sums: three additive equations, J. Eur. Math. Soc. (JEMS) 20 (2018), no. 10, 2333-2356.
[9] J. Brüdern and T. D. Wooley, Pairs of diagonal quartic forms: the non-singular Hasse principle, Q. J. Math. (in press, doi:10.1093/qmath/haac019), arXiv:2110.04349.
[10] R. J. Cook, A note on a lemma of Hua, Quart. J. Math. Oxford Ser. (2) 23 (1972), no. 3, 287-288.
[11] G. H. Hardy and J. E. Littlewood, A new solution of Waring's problem, Quart. J. Math. Oxford 48 (1920), 272-293.
[12] K. Kawada and T. D. Wooley, Relations between exceptional sets for additive problems, J. London Math. Soc. (2) 82 (2010), no. 2, 437-458.
[13] R. C. Vaughan, A new iterative method in Waring's problem, Acta Math. 162 (1989), no. 1-2, 1-71.
[14] R. C. Vaughan, The Hardy-Littlewood method, 2nd edition, Cambridge University Press, Cambridge, 1997.
[15] T. D. Wooley, The asymptotic formula in Waring's problem, Int. Math. Res. Not. IMRN 2012 (2012), no. 7, 1485-1504.
[16] T. D. Wooley, On Waring's problem for intermediate powers, Acta Arith. 176 (2016), no. 3, 241-247.
[17] T. D. Wooley, Nested efficient congruencing and relatives of Vinogradov's mean value theorem, Proc. London Math. Soc. (3) 118 (2019), no. 4, 942-1016.
[18] A. Zygmund, Trigonometric series, Vol. I and II, 3rd edition, Cambridge Univ. Press, Cambridge, 2002.
Mathematisches Institut, Bunsenstrasse 3-5, D-37073 Göttingen, Germany. Email address: [email protected]
Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907-2067, USA. Email address: [email protected]
[]
[ "On the Log-Likelihood Ratio Evaluation of CWCU Linear and Widely Linear MMSE Data Estimators", "On the Log-Likelihood Ratio Evaluation of CWCU Linear and Widely Linear MMSE Data Estimators" ]
[ "Oliver Lang \nLinz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz\n", "Mario Huemer \nLinz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz\n", "Christian Hofbauer \nLinz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz\n" ]
[ "Linz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz", "Linz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz", "Linz Center of Mechatronics GmbH\nJohannes Kepler University Institute of Signal\n4040, 4040Processing, Linz, Linz" ]
[]
In soft decoding, log-likelihood ratios (LLRs) are calculated from estimated data symbols. Data symbols from proper constellation diagrams such as QPSK are often estimated using the linear minimum mean square error (LMMSE) estimator. We prove that the recently introduced component-wise conditionally unbiased (CWCU) LMMSE estimator results in the very same LLRs as the LMMSE estimator for typical model assumptions. For improper constellation diagrams such as 8-QAM, we show that the widely linear versions of the LMMSE and the CWCU LMMSE estimator also yield identical LLRs. In that case, the CWCU estimator allows to reduce the complexity of the LLR determination.
10.1109/acssc.2016.7869120
[ "https://arxiv.org/pdf/1607.02248v2.pdf" ]
2,844,035
1607.02248
e7eb074efb61b45cd7c51e49778a24684fe80ea4
On the Log-Likelihood Ratio Evaluation of CWCU Linear and Widely Linear MMSE Data Estimators Oliver Lang Linz Center of Mechatronics GmbH Johannes Kepler University Institute of Signal 4040, 4040Processing, Linz, Linz Mario Huemer Linz Center of Mechatronics GmbH Johannes Kepler University Institute of Signal 4040, 4040Processing, Linz, Linz Christian Hofbauer Linz Center of Mechatronics GmbH Johannes Kepler University Institute of Signal 4040, 4040Processing, Linz, Linz On the Log-Likelihood Ratio Evaluation of CWCU Linear and Widely Linear MMSE Data Estimators In soft decoding, log-likelihood ratios (LLRs) are calculated from estimated data symbols. Data symbols from proper constellation diagrams such as QPSK are often estimated using the linear minimum mean square error (LMMSE) estimator. We prove that the recently introduced component-wise conditionally unbiased (CWCU) LMMSE estimator results in the very same LLRs as the LMMSE estimator for typical model assumptions. For improper constellation diagrams such as 8-QAM, we show that the widely linear versions of the LMMSE and the CWCU LMMSE estimator also yield identical LLRs. In that case, the CWCU estimator allows to reduce the complexity of the LLR determination. I. INTRODUCTION The task of estimating a parameter vector x ∈ C n×1 out of a measurement vector y ∈ C m×1 with m ≥ n can be treated in the classical sense or in the Bayesian sense. Classical and Bayesian estimation not only differ in terms of the incorporation of prior knowledge, but also in terms of the unbiased properties. While a classical estimatorx C has to fulfill E y [x C ] = x for all possible x (1) to be considered as unbiased, the frequently applied Bayesian linear minimum mean square error (LMMSE) estimator only fulfills E y,x [x L − x] = E x E y|x [x L − x|x] = 0.(2) This meansx L is only "unbiased" when averaged over the probability density function (PDF) of x, which is a much weaker constraint than (1). 
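The gap between the classical constraint (1) and the Bayesian constraint (2) is easy to see numerically. Below is a hedged NumPy sketch for a scalar toy model y = hx + n (the values h = 1, sigma^2 = 1, the test point x0, and the sample sizes are illustrative choices, not taken from the paper): the LMMSE estimate averaged over the noise for a fixed x shrinks to alpha*x with alpha < 1, yet its bias averaged over the prior of x vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, h, sigma2 = 200_000, 1.0, 1.0

# Scalar LMMSE for y = h*x + n with x ~ CN(0,1), n ~ CN(0,sigma2):
# xhat = conj(h)/(|h|^2 + sigma2) * y, so E[xhat | x] = alpha * x
alpha = abs(h) ** 2 / (abs(h) ** 2 + sigma2)      # here 0.5

def cn(n):  # proper complex Gaussian samples, unit variance
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

x0 = 1.0 + 1.0j                                   # a fixed (illustrative) transmit value
xhat_cond = np.conj(h) / (abs(h) ** 2 + sigma2) * (h * x0 + cn(N))
print(np.round(xhat_cond.mean(), 2))              # ~ alpha * x0, not x0: violates (1)

x = cn(N)
xhat = np.conj(h) / (abs(h) ** 2 + sigma2) * (h * x + cn(N))
print(np.round((xhat - x).mean(), 2))             # ~ 0: satisfies (2)
```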
However, the Bayesian approach allows the incorporation of prior knowledge. In [1]- [4], an interesting compromise between the stringent classical unbiased constraint and the weak Bayesian unbiased constraint has been investigated. There, component-wise conditionally unbiased (CWCU) Bayesian parameter estimators have been studied, which aim for achieving conditional unbiasedness for one parameter component at a time. Let x i be the i th element of x andx i be an estimator of x i , then the CWCU constraints are E y|xi [x i |x i ] = x i ,(3) for all possible x i (and all i = 1, 2, ..., n). The CWCU constraints are less stringent than the classical unbiased constraints in (1), and it turns out that in many cases a CWCU estimator allows the incorporation of prior knowledge on the statistical properties of the parameter vector [3], [4]. In the following, we denote the linear estimator fulfilling the CWCU constraints and minimizing the Bayesian mean square error (BMSE) for i = 1, 2, . . . , n as the CWCU LMMSE estimator. The CWCU LMMSE estimator is designed for proper measurement vectors. For the definition of propriety we refer to [5]. A proper measurement vector could, e.g., arise when a data vector with proper symbols, such as for quadrature phase-shift keying (QPSK), is transmitted over a dispersive linear channel and disturbed by additive white Gaussian noise (AWGN). For this case the well-known LMMSE estimator is often used to estimate the transmitted symbols, followed by an evaluation of the log-likelihood ratios (LLRs). In Sec. II of this work it will be proven that the LLRs of the CWCU LMMSE estimates and the LMMSE estimates are identical even though the CWCU LMMSE estimator performs worse in terms of the BMSE. The second part of this paper focuses on improper symbol constellations such as 8-ary quadrature amplitude modulation (8-QAM). In such a scenario the widely linear MMSE (WLMMSE) estimator is more appropriate for estimating the transmitted symbols. In Sec. III we prove that the CWCU WLMMSE estimator derived in [6] again results in the same LLRs as the WLMMSE estimator while featuring a complexity advantage in deriving the LLR values. Finally, a simulation example is given in Sec. IV which illustrates the estimators' properties. II. LLR EVALUATION OF PROPER SYMBOLS In this section, the LLRs of proper symbols evaluated from the LMMSE estimates are compared with those determined from the CWCU LMMSE estimates. Let x and y be connected via the linear model y = Hx + n, where H ∈ C m×n is a known observation matrix, x has mean E x [x] and covariance matrix C xx = E x (x − E x [x])(x − E x [x] ) H with (·) H denoting the conjugate transposition, and n ∈ C m×1 is a zero mean proper noise vector with covariance matrix C nn and independent of x. Furthermore, let h i ∈ C m×1 be the i th column of H,H i ∈ C m×(n−1) the matrix resulting from H by deleting h i , x i be the i th element of x, andx i ∈ C (n−1)×1 the
Then, (6) simplifies to E y|xi [x i |x i ] = e H i h i x i = α i x i .(7) The conditional variance of the general linear estimator is given by var(x i |x i ) = E y|xi x i − E y|xi [x i |x i ] x i − E y|xi [x i |x i ] H x i . Inserting (5) and (7) into the previous equation yields var(x i |x i ) =E y|xi (e H i (H ixi + n))(e H i (H ixi + n)) H |x i =e H i (H i Cx ixiH H i + C nn )e i .(8) Note that the conditional variance in (8) is independent of x i . For a general estimator, the LLRs of any symbol constellation with equiprobable symbols can be written as [7] Λ(b ki |x i ) = log Pr(b ki = 1|x i ) Pr(b ki = 0|x i ) = log q∈S (b ki =1) p(x i |s (q) ) q∈S (b ki =0) p(x i |s (q) ) ,(9) wherex i is the i th estimated symbol, b ki is the k th bit of the i th estimated symbol, S (b ki = 1) and S (b ki = 0) are the sets of symbol indices corresponding to b ki = 1 and b ki = 0, respectively, and s (q) is the q th symbol of such a set. In (9), p(x i |s (q) ) denotes the conditional PDF of the estimatex i given that the actual symbol was s (q) . Its Gaussian approximation is determined by the conditional mean and the conditional variance according to p(x i |s (q) ) = 1 πvar(x i |s (q) ) e − 1 var(x i |s (q) ) |xi−E[xi|s (q) ]| 2 .(10) Together with (9), the LLRs of any linear estimator can be evaluated by inserting the conditional mean and the conditional variance of the specific estimator. Such a specific estimator e.g., could be the LMMSE or the CWCU LMMSE estimators. We begin with the LMMSE estimator, which is [8] x L = C xx H H (HC xx H H + C nn ) −1 y = E L y.(11) Let e H L,i ∈ C 1×m be the i th row of E L , then the conditional mean and variance are given by (7) and (8), respectively, where e H L,i has to be inserted for e H i . A known property of the LMMSE estimator is that α L,i = e H L,i h i is real valued, and in general smaller than 1. Hence,x L,i is conditionally biased according to (7). 
We now turn to the CWCU LMMSE estimator, which is given by [3], [4] x CL = DC xx H H (HC xx H H + C nn ) −1 y = E CL y, (12) where the elements of the real diagonal matrix D are [D] i,i = 1/α L,i . The CWCU LMMSE estimator in (12) and the LMMSE estimator in (11) are connected via x CL = DE L y = Dx L . (13) Let e H CL,i ∈ C 1×m be the i th row of E CL , then it holds that e H L,i = α L,i e H CL,i , x L,i = α L,i x CL,i and var(x L,i |x i ) = α 2 L,i var(x CL,i |x i ). In contrast to the LMMSE estimator, the CWCU LMMSE estimator fulfills e H CL,i h i = 1. This property makes (7) equal to E y|xi [x i |x i ] = x i , which is the CWCU constraint in (3). Hence, x CL,i is conditionally unbiased. The conditional mean and variance of the CWCU LMMSE estimator are given by (7) and (8), respectively, where e H CL,i has to be inserted for e H i . Inserting these conditional properties into (10) yields p(x CL,i |s (q) ) = 1 πvar(x CL,i |s (q) ) e − 1 var(x CL,i |s (q) ) |x CL,i − E[x CL,i |s (q) ]| 2 = α 2 L,i πvar(x L,i |s (q) ) e − α 2 L,i var(x L,i |s (q) ) |α −1 L,i x L,i − s (q) | 2 = α 2 L,i πvar(x L,i |s (q) ) e − 1 var(x L,i |s (q) ) |x L,i − E[x L,i |s (q) ]| 2 = α 2 L,i p(x L,i |s (q) ), (14) which holds for any symbol s (q) . Hence, for a given y the probability density p(x CL,i |s (q) ) of the CWCU LMMSE estimator and p(x L,i |s (q) ) of the LMMSE estimator for any s (q) only differ by the constant scaling factor α 2 L,i . This constant scaling factor does not depend on the symbol s (q) and it appears in the numerator and the denominator of (9), thus cancelling out. Hence, the LLRs of the CWCU LMMSE estimates and the LMMSE estimates are equal for proper constellation diagrams. As a consequence, the resulting bit error ratios (BERs) of the LMMSE and the CWCU LMMSE estimators are also the same, although the BMSE of the LMMSE estimator is in general lower than that of the CWCU LMMSE estimator. III.
WIDELY LINEAR ESTIMATION OF IMPROPER DATA We now turn to improper constellation diagrams such as 8-QAM. In such scenarios it is advantageous to use widely linear estimators, which can incorporate information about the improperness of the data. A general widely linear estimator in augmented notation iŝ x = x x * = E F F * E * y y * = Ey,(15) where (·) * denotes the complex conjugate. For an introduction to the augmented form and widely linear estimation we refer to [5]. Isolating the i th element of (15) yieldsx i = e H i y, where e H i ∈ C 1×2m is the i th row of E. The augmented version is given byx i = x î x * i = e H i e H i+n y = E H i y,(16) where the rows of E H i are given by the i th and the (i + n) th row of the augmented estimator matrix E. The augmented version of (4) is y = H 0 0 H * x + n = Hx + n = H i x i +H ixi + n,(17) where H i = h i 0 0 h * i , x i = x i x * i ,H i = H i 0 0H * i ,x i = x ī x * i . With (17), (16) can be rewritten according tô x i = E H i H i x i + E H iHixi + E H i n.(18) For zero mean and statistically independent elements of x, the conditional augmented expected vector ofx i follows to E[x i |x i ] = E H i H i x i = α i x i .(19) From (18) and (19), the augmented conditional covariance matrix ofx i is Cx ixi|xi = E[(x i − E[x i |x i ])(x i − E[x i |x i ]) H |x i ] = E[E H i H ixi + n H ixi + n H E i |x i ] = E H i H i Cx ixiH H i + C nn E i .(20) Similar to the linear case in (8), (20) is independent of x i . Particular realizations for (19) and (20) can be obtained by inserting E H i of a concrete estimator. Such a particular estimator could be the WLMMSE estimator, whose augmented form is [5]x WL = C xy C −1 yy y = E WL y. Then, the augmented i th estimate is given bŷ x WL,i = C xiy C −1 yy y = E H WL,i y,(22) where the rows of E H WL,i are the i th and the (i + n) th row of E WL in (21). For the WLMMSE estimator, α WL,i = E H WL,i H i is in general not equal to the identity matrix. 
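The augmented machinery above can be sketched numerically. In the hedged NumPy toy below, the improper source is modeled as a real-valued ±1 vector (so the pseudo-covariance E[x x^T] = I is nonzero); this modeling choice and all dimensions are illustrative, not taken from the paper. Exploiting the impropriety, the widely linear estimator should not do worse than the strictly linear LMMSE estimator of eq. (11):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, sigma2, trials = 3, 6, 0.5, 4000

H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Real +/-1 symbols: C_xx = I, but pseudo-covariance E[x x^T] = I != 0 (improper)
X = rng.choice([-1.0, 1.0], size=(n, trials))
Noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((m, trials))
                               + 1j * rng.standard_normal((m, trials)))
Y = H @ X + Noise

# Strictly linear LMMSE, eq. (11), ignoring the pseudo-covariance
E_L = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(m))
mse_l = np.mean(np.abs(E_L @ Y - X) ** 2)

# Widely linear MMSE in augmented form, eq. (21): xhat_a = C_xy C_yy^{-1} y_a
H_a = np.block([[H, np.zeros((m, n))], [np.zeros((m, n)), H.conj()]])
C_xaxa = np.block([[np.eye(n), np.eye(n)], [np.eye(n), np.eye(n)]])  # x real => x = x*
C_xy = C_xaxa @ H_a.conj().T
C_yy = H_a @ C_xaxa @ H_a.conj().T + sigma2 * np.eye(2 * m)
E_WL = C_xy @ np.linalg.inv(C_yy)
X_wl = (E_WL @ np.vstack([Y, Y.conj()]))[:n]
mse_wl = np.mean(np.abs(X_wl - X) ** 2)

print(round(mse_l, 3), round(mse_wl, 3))
assert mse_wl < mse_l                      # widely linear exploits the impropriety
assert np.abs(X_wl.imag).max() < 1e-8      # the estimate of a real x comes out real
```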
Hence, according to (19),x WL,i is conditionally biased. We now turn to the CWCU WLMMSE estimator, whose augmented i th estimate is [6] x CWL,i = C xixi C xiy C −1 yy C yxi −1 C xiy C −1 yy y. For statistically independent elements of x, it holds that C yxi = H i C xixi and (23) can be reformulated aŝ x CWL,i = C xixi C xiy C −1 yy E H WL,i C yxi −1 C xiy C −1 yy E H WL,i y = E H WL,i C yxi C −1 xixi −1 E H WL,i y = E H WL,i H i α WL,i −1 E H WL,i y = α −1 WL,ixWL,i .(24) Similar to the linear case in (13), the CWCU WLMMSE estimator is determined by the WLMMSE estimator times a term that corrects for the conditional bias. It follows from (24), that the augmented conditional covariance matrix of the CWCU WLMMSE estimator can be derived from the one of the WLMMSE estimator according to Cx ixi|xi,CWL = α −1 WL,i Cx ixi|xi,WL α H WL,i −1 .(25) With these conditional properties, it is possible to evaluate the LLRs by utilizing the general complex Gaussian density function p(x i |s (q) ) = 1 π 2 det(Cx ixi|s (q) ) · e − 1 2 (x i −E[x i |s (q) ]) H C −1 x ixi |s (q) (x i −E[x i |s (q) ]) .(26) In analogy to the linear case in (14) it will now be shown that p(x WL,i |s (q) ) of the WLMMSE estimator and p(x CWL,i |s (q) ) of the CWCU WLMMSE estimator only differ by a constant factor. By utilizing (24) and (25) the exponent of (26) for the CWCU WLMMSE estimator can be rearranged to − 1 2 x CWL,i − E[x CWL,i |s (q) ] H C −1 xixi|s (q) ,CWL · x CWL,i − E[x CWL,i |s (q) ] = − 1 2 x WL,i − E[x WL,i |s (q) ] H α H WL,i −1 α H WL,i · C −1 xixi|s (q) ,WL α WL,i α −1 WL,i x WL,i − E[x WL,i |s (q) ] = − 1 2 x WL,i − E[x WL,i |s (q) ] H C −1 xixi|s (q) ,WL · x WL,i − E[x WL,i |s (q) ] .(27) This result shows that the exponent of (26) is identical for the CWCU WLMMSE estimator and the WLMMSE estimator for a given y. 
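The cancellation argument can also be verified numerically. The following hedged sketch draws an arbitrary invertible 2x2 matrix in place of alpha_{WL,i}, a Hermitian positive definite augmented covariance, a small illustrative symbol set, and a hypothetical bit mapping (bit = 1 iff Re{s} > 0); none of these values come from the paper. The LLRs computed from the WLMMSE statistics and from the rescaled CWCU WLMMSE statistics of (24)-(25) agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(3)

def clogpdf(z, mu, C):
    # log of the augmented complex Gaussian density, eq. (26)
    d = z - mu
    q = (d.conj() @ np.linalg.solve(C, d)).real
    return -np.log((np.pi ** 2) * np.linalg.det(C).real) - q / 2

def lse(a):
    a = np.asarray(a); m = a.max()
    return m + np.log(np.exp(a - m).sum())

# Illustrative 8-point symbol set and a hypothetical bit: 1 iff Re{s} > 0
syms = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 3 + 0j, -3 + 0j, 0 + 3j, 0 - 3j])
bit1 = syms.real > 0

alpha = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # invertible w.p. 1
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
C = A @ A.conj().T + np.eye(2)                                          # Hermitian PD

z = np.array([0.7 - 0.2j, 0.7 + 0.2j])                                  # augmented observation
ai = np.linalg.inv(alpha)
C_cwl = ai @ C @ ai.conj().T                                            # eq. (25)

def llr(zz, mean_of, Cov):
    lp = np.array([clogpdf(zz, mean_of(s), Cov) for s in syms])
    return lse(lp[bit1]) - lse(lp[~bit1])

llr_wl = llr(z, lambda s: alpha @ np.array([s, np.conj(s)]), C)          # WLMMSE stats
llr_cwl = llr(ai @ z, lambda s: np.array([s, np.conj(s)]), C_cwl)        # CWCU WLMMSE stats
assert np.isclose(llr_wl, llr_cwl)
print(round(llr_wl, 6))
```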
The prefactor of (26) for the CWCU WLMMSE estimator follows to 1 π 2 det(Cx ixi|s (q) ,CWL ) = 1 π 2 det α −1 WL,i Cx ixi|s (q) ,WL α H WL,i −1 = 1 π 2 det(Cx ixi|s (q) ,WL )|det(α −1 WL,i )| 2 = det(α WL,i ) π 2 det(Cx ixi|s (q) ,WL ) .(28) Like in the linear case in (14), the prefactors of the CWCU WLMMSE estimator and the WLMMSE estimator only differ by a constant real factor. This factor does not depend on the symbol s (q) and it appears in the numerator and the denominator of (9), thus cancelling out in the determination of the LLRs. This leads to the result that the LLRs derived from the CWCU WLMMSE estimates and the WLMMSE estimates are exactly the same. Although the WLMMSE estimator in general features a lower BMSE, the BER performance of the WLMMSE and the CWCU WLMMSE estimator are identical. IV. SIMULATION EXAMPLE We give a simulation example were we use the unique word orthogonal frequency division multiplexing (UW-OFDM) framework described in [9], [10]. Like classical OFDM, UW-OFDM is a block based transmission scheme where in our particular setup at the receive side a data vector d ∈ C 36×1 is estimated based on a received blockỹ ∈ C 52×1 of frequency domain samples. We choose UW-OFDM since the estimator matrices are in general full matrices instead of diagonal matrices as in classical OFDM, such that the problem can be considered a more demanding and general one compared to the data estimation problem in classical OFDM systems. Hence, this framework is well suited for studying general effects of CWCU estimators. The system model for the transmission of one data block is given bỹ y =HGd +ṽ,(29) whereH ∈ C 52×52 is the diagonal channel matrix. G ∈ C 52×36 is a so called generator matrix, for details cf. [9], [10], d is a vector of improper 8-QAM symbols andṽ is a frequency domain noise vector. Note that every assumption made in Sec. II and Sec. 
III holds in this example: The data and the measurements are connected via a linear model, and the Gaussian assumption of p(x i |s (q) ) is valid due to central limit theorem arguments (note that the data vector length is 36 in this example). In the simulation, UW-OFDM symbols are transmitted over an AWGN channelH = I and further processed by the WLMMSE estimator and the CWCU WLMMSE estimator, respectively. These estimators feature different properties of the estimated data symbols. According to [5], the estimates conditioned on a given s (q) are proper, if the off-diagonal elements of Cx ixi|s (q) are zero, which holds true for 8-QAM symbols transmitted over the AWGN channel and received by the CWCU WLMMSE estimator. The corresponding relative frequencies ofx CWL,i are shown in Fig. 1a. One can see that the estimates are centered around the true constellation points since the CWCU WLMMSE estimator fulfills the CWCU constraints in (3). Furthermore, the estimates conditioned on a specific transmit symbol are properly distributed. In Fig. 1b, the relative frequencies of the WLMMSE estimates are shown. In contrast to the CWCU WLMMSE estimates, the WLMMSE estimates conditioned on a specific transmit symbol are neither centered around the true constellation points nor are they properly distributed. However, due to the close connection between the CWCU WLMMSE estimator and the WLMMSE estimator, the resulting LLRs are identical as shown above. Moreover, since the CWCU WLMMSE estimates for a given s (q) are proper, it is sufficient to use the proper complex Gaussian PDF in (10) instead of the general Gaussian PDF in (26) as basis for the LLR determination. Hence, the LLR determination of the CWCU WLMMSE estimates is less computationally demanding than for the WLMMSE estimates without any loss in BER performance. 
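For the proper case, the per-component LLR of eqs. (9)-(10) reduces to a one-dimensional Gaussian computation, which is what makes the CWCU WLMMSE route cheaper. A hedged sketch follows; the QPSK constellation and its Gray bit mapping are hypothetical examples, and the 1/(pi*var) prefactor is dropped since it cancels in the ratio:

```python
import numpy as np

def llr(xhat, alpha, var, symbols, bit):
    # eqs. (9)-(10): Gaussian LLR of one bit from an estimate whose conditional
    # mean is alpha*s and whose conditional variance var is symbol-independent
    logp = np.array([-abs(xhat - alpha * s) ** 2 / var for s in symbols])
    lse = lambda a: a.max() + np.log(np.exp(a - a.max()).sum())
    return lse(logp[bit == 1]) - lse(logp[bit == 0])

# QPSK with a hypothetical Gray mapping: bit = 1 iff Re{s} > 0
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
bit = np.array([1, 0, 0, 1])

val = llr(0.8 * symbols[0] + 0.05, alpha=0.8, var=0.3, symbols=symbols, bit=bit)
print(round(val, 3))   # clearly positive: the bit is most likely a 1
assert val > 0
```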
In [6], we also confirmed via simulation that for frequency selective channels the CWCU WLMMSE estimates conditioned on a given transmit symbol s (q) are practically proper again for all investigated channel realizations. The off-diagonal elements of Cx ixi|s (q) are smaller than the main diagonal elements by at least a factor of 10 3 . Rounding the off-diagonal elements to zero and applying the proper complex Gaussian PDF in (10) instead of the general Gaussian PDF in (26) as basis for the LLR determination leads to a BER without any noticeable loss in performance. V. CONCLUSION In this paper, we proved that the CWCU LMMSE estimates result in the same LLRs as the LMMSE estimates for proper constellation diagrams such as QPSK or 16-QAM. As a consequence, the resulting BER performance of the CWCU LMMSE estimator and the LMMSE estimator is also the same, even though the two estimators fulfill different unbiased constraints and yield a different BMSE. For improper constellation diagrams such as 8-QAM, we showed that the same statements also hold for the relationships between the widely linear counterparts, the CWCU WLMMSE and WLMMSE estimators. A simulation example was presented, revealing different statistical properties of WLMMSE and CWCU WLMMSE data estimates. An interesting outcome is that the CWCU WLMMSE estimator offers a complexity advantage in the LLR determination over the WLMMSE estimator without a loss in BER performance. Copyright 2001 SS&C. Published in the Proceedings of the Asilomar Conference on Signals, Systems, and Computers, November 6-9th, 2016, Pacific Grove, CA, USA. This work has been supported by the Austrian Science Fund (FWF): I683-N13. Fig. 1. Relative frequencies of the CWCU WLMMSE estimates in (a), and the WLMMSE estimates in (b).
The black crosses mark the original 8-QAM constellation points. [Figure 1: scatter plots of Im{x} vs. Re{x} for the CWCU WLMMSE estimates (panel a) and the WLMMSE estimates (panel b); axis tick values omitted.]
[1] M. Triki, D.T.M. Slock, "Component-Wise Conditionally Unbiased Bayesian Parameter Estimation: General Concept and Applications to Kalman Filtering and LMMSE Channel Estimation," in Proc. 39th Asilomar Conf. Signals, Syst., Comput., pp. 670-674, Pacific Grove, USA, Nov. 2005.
[2] M. Triki, A. Salah, D.T.M. Slock, "Interference cancellation with Bayesian channel models and application to TDOA/IPDL mobile positioning," in Proc. International Symposium on Signal Processing and its Applications, pp. 299-302, Aug. 2005.
[3] M. Huemer, O. Lang, "On Component-Wise Conditionally Unbiased Linear Bayesian Estimation," in Proc. 48th Asilomar Conf. Signals, Syst., Comput., pp. 879-885, Pacific Grove, USA, Nov. 2014.
[4] O. Lang, M. Huemer, "CWCU LMMSE Estimation under Linear Model Assumptions," in Lecture Notes in Computer Science (LNCS): Computer Aided Systems Theory - EUROCAST 2015 (15th International Conference, Las Palmas de Gran Canaria, Spain, February 2015, revised selected papers), Vol. 9520, pp. 537-545, Dec. 2015.
[5] T. Adali, P. J. Schreier, L. L. Scharf, "Complex-Valued Signal Processing: The Proper Way to Deal With Impropriety," in IEEE Trans. Signal Process., Vol. 59, Issue 11, pp. 5101-5125, 2011.
[6] M. Huemer, O. Lang, C. Hofbauer, "Component-Wise Conditionally Unbiased Widely Linear MMSE Estimation," accepted for publication in Elsevier Signal Processing (DOI: 10.1016/j.sigpro.2016.10.018).
[7] S. Allpress, C. Luschi, S. Felix, "Exact and approximated expressions of the log-likelihood ratio for 16-QAM signals," in Proc. 38th Asilomar Conf. Signals, Syst., Comput., pp. 794-798, Pacific Grove, USA, Nov. 2004.
[8] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice-Hall PTR, 1st edition, 1993.
[9] M. Huemer, C. Hofbauer, J. B. Huber, "Non-Systematic Complex Number RS Coded OFDM by Unique Word Prefix," in IEEE Trans. Signal Process., Vol. 60, No. 1, pp. 285-299, Jan. 2012.
[10] M. Huemer, C. Hofbauer, A. Onic, J. B. Huber, "Design and analysis of UW-OFDM signals," in AEU - International Journal of Electronics and Communications, Vol. 68, Issue 10, pp. 958-968, Oct. 2014.
[]
[ "PROTES: Probabilistic Optimization with Tensor Sampling", "PROTES: Probabilistic Optimization with Tensor Sampling" ]
[ "Anastasia Batsheva [email protected] \nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia\n", "Andrei Chertkov [email protected] \nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia\n", "Gleb Ryzhakov [email protected] \nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia\n", "Ivan Oseledets [email protected] \nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia\n" ]
[ "Skolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia", "Skolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia", "Skolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia", "Skolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology Moscow\nSkolkovo Institute of Science and Technology and AIRI\nSkolkovo Institute of Science and Technology Moscow\nMoscowRussia, Russia, Russia, Russia" ]
[]
We developed a new method PROTES for black-box optimization, which is based on probabilistic sampling from a probability density function given in the low-parametric tensor train format. We tested it on complex multidimensional arrays and discretized multivariable functions taken, among others, from real-world applications, including unconstrained binary optimization and optimal control problems, for which the possible number of elements is up to 2^100. In numerical experiments, both on analytic model functions and on complex problems, PROTES outperforms existing popular discrete optimization methods (Particle Swarm Optimization, Covariance Matrix Adaptation, Differential Evolution, and others). Preprint. Under review.
10.48550/arxiv.2301.12162
[ "https://export.arxiv.org/pdf/2301.12162v2.pdf" ]
256,389,390
2301.12162
c399f4724bde401589e206064e93e9663d93c675
PROTES: Probabilistic Optimization with Tensor Sampling
Anastasia Batsheva, Andrei Chertkov, Gleb Ryzhakov, Ivan Oseledets
Skolkovo Institute of Science and Technology, Moscow, Russia
Introduction

The multidimensional optimization problem is one of the most common in machine learning. It includes the important case of gradient-free discrete optimization [30, 20, 21, 41]:

x_min = arg min_x f(x), s.t. x = [n_1, n_2, ..., n_d], n_i ∈ {1, 2, ..., N_i},   (1)

where d is the dimensionality of the problem, and N_1, N_2, ..., N_d are the numbers of items for each dimension. Such settings arise when searching for the minimum or maximum element in an implicitly given multidimensional array (tensor^1), including when considering the discretization of functions of a continuous argument. Multidimensional discrete optimization problems remain computationally difficult in the case of complex target functions or large dimensions, and efficient direct gradient-free optimization procedures are highly needed.

The development of methods for low-rank tensor approximations has made it possible to introduce fundamentally new approaches for the approximation, storage, and operation with multidimensional tensors [13, 6, 7, 40]. One of the common methods for low-rank approximation is the tensor train (TT) decomposition [26], which allows bypassing the curse of dimensionality. Many useful algorithms (e.g., element-wise addition, multiplication, solution of linear systems, convolution, integration, etc.) have effective implementations for tensors given in the TT-format. The complexity of these algorithms turns out to be polynomial in the dimension d and the mode size N. This makes the TT-decomposition extremely popular in a wide range of applications, including computational mathematics [29, 2] and machine learning [31, 19]. In the last few years, new discrete optimization algorithms based on the TT-format have been proposed: TTOpt [37], Optima-TT [5], and several modifications [34, 35, 24, 36].

(^1 A tensor is just a multidimensional array with several dimensions d (d ≥ 1). A two-dimensional tensor (d = 2) is a matrix, and when d = 1 it is a vector.)
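Problem (1) can be made concrete with a tiny hypothetical example (the function and grid below are illustrative only): discretize a 3-variable function on a uniform grid and find the minimizing multi-index by exhaustive search. This is feasible only for very small d, since the search space grows as N^d, which is exactly the curse of dimensionality that motivates TT-based methods.

```python
import numpy as np

def f(x):
    # x is a vector of grid nodes in [-1, 1]; a hypothetical target function
    return float(np.sum(x**2) + 0.5 * np.sin(5 * x[0]))

d, N = 3, 11
grid = np.linspace(-1.0, 1.0, N)

best_idx, best_val = None, np.inf
for idx in np.ndindex(*(N,) * d):          # all N**d multi-indices
    val = f(grid[list(idx)])
    if val < best_val:
        best_idx, best_val = idx, val

print(best_idx, round(best_val, 4))        # multi-index of the grid minimizer
```

For d = 100 binary variables (as in the QUBO experiments below), this loop would require 2^100 evaluations, which is why sampling-based approaches are needed.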
However, the development of new, more accurate, and robust TT-based methods for multidimensional discrete optimization is possible. In this work, we extend recent approaches for working with probability distributions and sampling in the TT-format [10, 25] to the optimization area. That is, we specify a multidimensional discrete probability distribution in the TT-format, followed by efficient sampling from it and updating its parameters to approximate the optimum in a better way. This makes it possible to build an effective optimization method, and the contributions of our work are as follows:

• We develop a new method PROTES for optimization (finding the minimum or maximum value) of multidimensional data arrays and discretized multivariable functions based on a sampling method from a probability distribution in the TT-format;

• We apply PROTES to various analytic model functions and to several multidimensional QUBO and optimal control problems to demonstrate its performance and compare it with popular discrete optimization algorithms (Particle Swarm Optimization, Covariance Matrix Adaptation, Differential Evolution, and NoisyBandit) as well as TT-based methods (TTOpt and Optima-TT). We used the same set of hyperparameters of our algorithm for all experiments and obtained the best result for 19 of the 20 problems considered.

Optimization with probabilistic sampling

Our problem is to minimize the given multivariable discrete black-box function f from (1). It can be formulated in terms of the multi-index search in an implicitly given d-dimensional tensor Y ∈ R^{N_1×N_2×...×N_d}, with Y[n_1, n_2, ..., n_d] = f(x), x = [n_1, n_2, ..., n_d], for all n_i = 1, 2, ..., N_i (i = 1, 2, ..., d). The essence of our idea is to use a probabilistic method to find the minimum x_min. We propose to establish a discrete distribution p(x) from which the minimum could be sampled with high probability.
This distribution can be specified as a tensor P_θ ∈ R^{N_1×N_2×...×N_d} in some low-parametric representation with a set of parameters θ, having the same shape as the target tensor Y. We start from a random non-negative tensor P_θ and iteratively perform the following steps until the budget is exhausted or until convergence (see the graphic illustration in Figure 1):

1. Sample K candidates for x_min from the current distribution P_θ: X_K = {x_1, x_2, ..., x_K};
2. Compute the corresponding function values: y_1 = f(x_1), y_2 = f(x_2), ..., y_K = f(x_K);
3. Select the k best candidates with indices S = {s_1, s_2, ..., s_k} from X_K with the minimal objective values, i.e., y_j ≤ y_J for all j ∈ S and J ∈ {1, 2, ..., K} \ S;
4. Update the probability distribution P_θ (θ → θ^(new)) to increase the likelihood of the selected candidates S. We make several (k_gd) gradient ascent steps with the learning rate λ for the tensor P_θ, using the following loss function:

L_θ({x_{s_1}, x_{s_2}, ..., x_{s_k}}) = Σ_{i=1}^{k} log(P_θ[x_{s_i}]).   (2)

After a sufficient number of iterations, we expect the tensor P_θ to represent an almost Kronecker delta-function with a pronounced peak at the minimum of the target function (or several peaks if the minimum is not unique). Therefore, this value will be sampled during the steps of our algorithm, since the probability of sampling other values will be sufficiently small. From a low-parameter representation P_θ we expect an efficient sampling algorithm and an efficient calculation procedure for the logarithms in (2) with automatic differentiation capability to enable gradient ascent methods. As will be shown below, the TT-representation of tensors satisfies these requirements. Further, we will omit the index θ, assuming that the parameterized representation of the tensor P corresponds to the TT-format. Note that the values K, k, k_gd, λ and the number of parameters in θ (i.
e., the rank of the TT-decomposition) are the hyperparameters of our algorithm.

Basic properties of the tensor train format

Let us dwell on the concept of the TT-format. A d-dimensional tensor P ∈ R^{N_1×N_2×...×N_d} is said to be in the TT-format [26] if its elements are represented by the following formula:

P[n_1, n_2, ..., n_d] = Σ_{r_1=1}^{R_1} Σ_{r_2=1}^{R_2} ... Σ_{r_{d-1}=1}^{R_{d-1}} G_1[1, n_1, r_1] G_2[r_1, n_2, r_2] ... G_d[r_{d-1}, n_d, 1],   (3)

where (n_1, n_2, ..., n_d) is a multi-index (n_i = 1, 2, ..., N_i for i = 1, 2, ..., d), the integers R_0, R_1, ..., R_d (with the convention R_0 = R_d = 1) are named TT-ranks, and the three-dimensional tensors G_i ∈ R^{R_{i-1}×N_i×R_i} (i = 1, 2, ..., d) are named TT-cores. The TT-decomposition (3) (see also the illustration in Figure 2) allows representing a tensor or a discretized multivariable function in a compact and descriptive low-parameter form, which is linear in the dimension d, i.e., it has fewer than d · max_{i=1,...,d}(N_i R_i^2) ~ d · N · R^2 parameters, where N and R are the effective ("average") mode size and TT-rank, respectively. Linear algebra operations (e.g., element-wise addition, solution of linear systems, convolution, integration, etc.) on tensors in the TT-format also have complexity linear in the dimension if the TT-ranks are bounded. The TT-approximation for a tensor or discretized multivariable function may be built by efficient numerical methods, e.g., TT-SVD [27], TT-ALS [4], and TT-cross [28]. A detailed description of the TT-format and related algorithms is given in the works [26, 6, 7]. Below we discuss only three operations in the TT-format, which will be used later in the work.

Construction of a random TT-tensor. To build a random non-negative TT-tensor of a given size (N_1, N_2, ..., N_d) with a constant TT-rank R, it is enough to generate d TT-cores G_1, G_2, ..., G_d (3-dimensional tensors) with random elements from the uniform distribution on the interval (0, 1).
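Formula (3) can be checked numerically on a tiny example (a sketch, not the authors' implementation): generate random TT-cores as a tt_random-style routine would, evaluate a single element by a chain of vector-matrix products, and compare with the fully reconstructed tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, R = 4, 5, 3
# Random non-negative TT-cores with boundary ranks R_0 = R_d = 1.
cores = [rng.random((1 if i == 0 else R, N, 1 if i == d - 1 else R))
         for i in range(d)]

def tt_element(cores, x):
    # Chain of vector-matrix products per formula (3): O(d * R^2) work.
    v = cores[0][:, x[0], :]               # shape (1, R)
    for G, n in zip(cores[1:], x[1:]):
        v = v @ G[:, n, :]
    return float(v[0, 0])

# Full reconstruction is only possible for tiny examples like this one.
full = np.einsum('anb,bmc,cod,dpe->nmop', *cores)
x = (1, 4, 0, 2)
assert abs(tt_element(cores, x) - full[x]) < 1e-12
print("element matches full tensor")
```

Taking the logarithm of `tt_element` gives exactly the tt_log operation described next, with the same O(d · R^2) cost per multi-index.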
We will refer to this method as tt_random(R, [N_1, N_2, ..., N_d]).

Computation of the log-likelihood in the TT-format. To calculate the logarithm log P[x] at a given multi-index x = (n_1, n_2, ..., n_d), we can use the basic formula (3) and then take the logarithm of the result. It can be shown that this operation has complexity O(d · R^2), because, roughly speaking, we (d − 1) times multiply a vector of length R by a matrix of size R × R to get the result. The corresponding method will be called below tt_log(P, x).

Sampling from the tensor in the TT-format. To generate a multi-index x with a probability proportional to the corresponding value p = P[x] of the TT-tensor P, we use the approach proposed in the work [10]. The method is based on the sequential calculation of univariate conditional densities with efficient integration in the TT-format, and the estimate for its complexity turns out to be O(K · d · (N + R) · R + K · d · α(N)), where K is the number of requested samples, and α(n) is the complexity of sampling from a generalized Bernoulli distribution with n outcomes. Note that the algorithm allows sampling in the case of an initially non-normalized tensor, so we don't have to calculate the normalization factor. We will refer to this method as tt_sample(P, K).

Optimization method PROTES

With the formal scheme of the proposed approach given in Section 2 and the description of the operations tt_random, tt_log and tt_sample given in Section 3, we can formulate our method PROTES for gradient-free discrete optimization in the TT-format, as presented in Algorithm 1. We denote by adam a function that performs k_gd steps of gradient ascent for the TT-tensor P at multi-indices X by the well-known Adam method [18]. In this case, the learning rate is λ, the loss function is given in (2), and tt_log with automatic differentiation support is used for the log-likelihood computation.

Computational complexity of the method.
Let us estimate the complexity of the proposed algorithm, assuming that the number of requests to the target function (black-box) M is fixed. With the known estimate for the complexity of the tt_sample function, we can obtain the complexity of the sampling operations: O((M/K) · K · d · ((N + R) · R + α(N))). Assuming that the complexity of one gradient step coincides with the complexity of calculating the differentiated function, and using the estimate for the tt_log function, we can estimate the total complexity of the tensor updates as O((M/K) · k · k_gd · d · R^2). Combining the two estimates above, we obtain the complexity of the method:

O(M · d · ((k/K) · k_gd · R^2 + (N + R) · R + α(N))).   (4)

Algorithm 1: Method PROTES in the TT-format for multidimensional discrete black-box optimization.

Data: the function f(x), which computes the value of the target tensor Y ∈ R^{N_1×N_2×...×N_d} at the multi-index x = [n_1, n_2, ..., n_d]; the maximum number of requests M; the number of generated samples per iteration K; the number of selected candidates per iteration k; the number of gradient ascent steps k_gd; the gradient ascent learning rate λ; the TT-rank of the probability tensor R.
Result: the d-dimensional multi-index x_min, which relates to the minimum value of the tensor Y.
1. Initialize the target multi-index and tensor value: x_min = None, y_min = ∞
2. Generate a random non-negative rank-R TT-tensor: P = tt_random(R, [N_1, N_2, ..., N_d])
3. for iter = 1 to M/K do
4.   Generate K samples from P: x_1, x_2, ..., x_K = tt_sample(P, K)
5.   Compute the related tensor values: y_1 = f(x_1), y_2 = f(x_2), ..., y_K = f(x_K)
6.   Find the indices S = {s_1, s_2, ..., s_k} of the top-k minimum items in the list [y_1, y_2, ..., y_K]
7.   Collect the top-k candidates: X = {x_{s_1}, x_{s_2}, ..., x_{s_k}}, Y = {y_{s_1}, y_{s_2}, ..., y_{s_k}}
8.   If Y contains a value less than y_min, then update x_min and y_min
9.   Update the TT-tensor: P ← adam(P, L, X, k_gd, λ)   // with the loss function (2) and the method tt_log
10. return x_min
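The structure of Algorithm 1 can be illustrated with a toy stand-in in which the TT-distribution is replaced by an independent (rank-1) categorical distribution per mode, and the update is a plain gradient-ascent step on the top-k log-likelihood (2) instead of Adam. This is a simplified sketch with hypothetical names and parameters, not the authors' code; the real method parameterizes the joint distribution by TT-cores, which can capture correlations between modes that this factorized version cannot.

```python
import numpy as np

rng = np.random.default_rng(42)

def protes_like(f, d, N, M=5000, K=100, k=10, lam=0.5):
    theta = np.zeros((d, N))                  # logits of independent p(n_i)
    x_min, y_min = None, np.inf
    for _ in range(M // K):
        p = np.exp(theta)
        p /= p.sum(axis=1, keepdims=True)
        # Step 1: sample K candidate multi-indices.
        X = np.stack([rng.choice(N, size=K, p=p[i]) for i in range(d)], axis=1)
        # Step 2: evaluate the black box.
        y = np.array([f(x) for x in X])
        # Step 3: keep the k best candidates.
        top = np.argsort(y)[:k]
        if y[top[0]] < y_min:
            x_min, y_min = X[top[0]].copy(), float(y[top[0]])
        # Step 4: gradient of sum_s log p(x_s) w.r.t. logits = counts - k*p.
        for i in range(d):
            counts = np.bincount(X[top, i], minlength=N)
            theta[i] += lam * (counts - k * p[i])
    return x_min, y_min

# Separable toy target with minimum 0 at the multi-index (3, 3, ..., 3).
f = lambda x: float(np.sum((np.asarray(x) - 3) ** 2))
x_best, y_best = protes_like(f, d=6, N=8)
print(x_best, y_best)
```

The four commented steps correspond one-to-one to lines 4-9 of Algorithm 1; the per-iteration cost here is O(K · d · N) rather than the TT-based estimate (4).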
However, it is important to note that this estimate does not take into account the complexity of calculating the objective function f itself M times, which in practical applications can be significant and many times greater than the estimate (4).

The intuition behind the method. The proposed method PROTES, like most gradient-free optimization approaches, is empirical; however, we can establish its connection with the well-known REINFORCE trick [43]. Let us make a monotonic transformation F[f](x) of the target function f to be minimized that transforms the minimum into a maximum. A reasonable choice for F[·] is the Fermi-Dirac function

F[f](x) = 1 / (exp((f(x) − y_min − E)/T) + 1),

where y_min is an exact or approximate minimum of f, T > 0 is a parameter, and E > 0 is some small threshold. With the function F[f] we can find a maximum of the expectation max_θ E_{ξ_θ} F[f](ξ_θ), where the family of random variables ξ_θ has a parameterized distribution function p_θ(x). Using the REINFORCE trick, we can estimate the gradient of the expectation by the following Monte-Carlo-like expression:

∇_θ E_{ξ_θ} F[f](ξ_θ) ≈ (1/M) Σ_{i=1}^{M} F[f](x_i) ∇_θ log p_θ(x_i),   (5)

where {x_i}_{i=1}^{M} are independent realizations of the random variable ξ_θ. If we find the optimal values of θ, then we expect the optimal distribution p_θ to have a peak at the point of maximum of the function F[f]. Thus we can obtain the argument of its maximum by sampling from this distribution. For very small values of T, only a few terms contribute to the sum (5), namely those x_i for which f(x_i) − y_min < E holds. For these values of x, F[f] is close to 1, while for the other samples its value is 0. Hence, we can discard all other samples and keep the few samples with the best values. So, we arrive at the loss function (2), where instead of the parameter E we use a fixed number k of the best samples, i.e., the samples for which the value of the target function f is the smallest.

Application of the method to constrained optimization.
A very nice property of the proposed method is that it can be adapted to efficiently handle constraints such as a specified set of admissible multi-indices. One option is simply to remove invalid samples from the top-k values, but in some cases the probability of sampling admissible multi-indices is very low, so this approach will not work. Instead, if the constraint permits, we use the algorithm from the work [32] for the constructive building of tensors in the TT-format from a known analytic function, which defines the constraints. Once the indicator tensor (1 if the index is admissible and 0 if it is not) is built in the TT-format, we can initialize the starting distribution P with it, and it is then guaranteed that the samples almost always belong to the admissible set.

Related work

Below we give a brief analysis of classical approaches for discrete optimization and then discuss methods based on low-rank tensor approximations, which have become popular in recent years.

Classical methods for gradient-free optimization. In many situations, the problem-specific target function is not differentiable, too complex, or its gradients are not helpful due to the non-convex nature of the problem [1], and standard well-known gradient-based methods cannot be applied directly. Examples include hyper-parameter selection, training neural networks with discrete weights, and policy optimization in reinforcement learning. In all these contexts, efficient direct gradient-free optimization procedures are highly needed. In the case of high-dimensional black-box optimization, evolutionary strategies (ES) [9] are among the most advanced methods. This approach aims to optimize the parameters of the search distribution, typically a multidimensional Gaussian, to maximize the objective function. Finite difference schemes are commonly used to approximate gradients of the search distribution.
Numerous works have proposed techniques to improve the convergence of ES [23]; for example, the second-order natural gradient [42] or the history of recent updates (Covariance Matrix Adaptation Evolution Strategy; CMA-ES) [15] may be used to generate updates. There is also a large variety of other heuristic methods for finding the global extremum. In particular, we note such popular approaches as NoisyBandit [33], Particle Swarm Optimization (PSO) [17], Simultaneous Perturbation Stochastic Approximation (SPSA) [22], Differential Evolution (DE) [38] and scrambled-Hammersley (scr-Hammersley) [14].

Tensor-based methods for gradient-free optimization. Recently, the TT-decomposition has been actively used for multidimensional optimization. An iterative method TTOpt based on the maximum-volume approach is proposed in the work [37]. TTOpt utilizes the theorem that the maximum-modulus element of a submatrix with the maximum modulus of the determinant is sufficiently close to the maximum-modulus element of the whole tensor. Based on this observation, tensor elements are sampled from specially selected successive unfoldings of the tensor. To be able to find the minimum element, a dynamic mapping of the tensor elements is carried out, which converts the minimum values into maximum ones. The authors applied this approach to the problem of optimizing the weights of neural networks in the framework of reinforcement learning problems in [37] and to the QUBO problem in [24]. A similar optimization approach was also considered in [34] and [35]. One more promising algorithm, named Optima-TT, was proposed in the recent work [5]. This approach is based on probabilistic sampling from the TT-tensor and makes it possible to obtain a very accurate approximation of the optimum of a given TT-tensor. However, this method is intended for directly optimizing TT-tensors, which means that its success strongly depends on the quality of the TT-approximation of the original multidimensional data array.
Therefore, one of the related methods in the TT-format (TT-SVD, TT-ALS, TT-cross, etc.) should additionally be used for approximation.

Numerical experiments

To evaluate the effectiveness of the proposed method, we carried out a series of 20 numerical experiments for various formulations of model problems. The results are presented in Table 1, where we report the approximation to the minimum value for each model problem (P-1, P-2, ..., P-20) and all considered optimization methods (PROTES, BS-1, BS-2, ..., BS-7). Taking into account the analysis of discrete optimization methods in the previous section, as baselines we consider two tensor-based optimization methods, TTOpt (BS-1) and Optima-TT (BS-2), and five popular gradient-free optimization algorithms from the nevergrad framework [3]. For all the considered optimization problems, we used the default set of parameters for the baselines, and for PROTES we fixed the parameters as K = 100, k = 10, k_gd = 1, λ = 0.05, R = 5 (the description of these parameters was presented in Algorithm 1). For all methods, the limit on the number of requests to the objective function was fixed at the value M = 10^4. As can be seen from Table 1, PROTES, in contrast to the alternative approaches, gives a consistently top result for almost all model problems (the best result for 19 of the 20 problems considered). To demonstrate the convergence behavior of the methods, we also present the corresponding plot in Figure 3.

Multivariable analytic functions

First, we consider the optimization task for various tensors arising from the discretization of multivariable analytic functions. We select 10 popular benchmarks: Ackley (P-01), Alpine (P-02), Exponential (P-03), Griewank (P-04), Michalewicz (P-05), Piston (P-06), Qing (P-07), Rastrigin (P-08), Schaffer (P-09) and Schwefel (P-10).
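As an illustration of this benchmark class, a standard definition of the first listed function (Ackley) is sketched below; the constants a, b, c are its conventional defaults, and the 7-dimensional call mirrors the experimental setup described next. The function has its global minimum f(0) = 0 surrounded by many local minima, which is what makes it a common stress test for optimizers.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    # Standard d-dimensional Ackley benchmark function.
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

print(round(ackley(np.zeros(7)), 10))   # global minimum: 0.0
print(ackley(np.full(7, 0.5)))          # any other point gives a larger value
```

Discretizing such a function on a uniform grid with 16 nodes per dimension, as in the experiments, turns it into a 16^7-element tensor Y of the form used in problem (1).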
These functions have a complex landscape and are often used for evaluating the effectiveness of optimization algorithms [8, 16], including tensor-based optimizers [4, 39]. We consider the 7-dimensional case (since this is the dimension of the Piston function) and a discretization on a uniform grid with 16 nodes. As follows from Table 1 (benchmarks P-1, P-2, ..., P-10), our method, like the other two tensor approaches (BS-1 and BS-2), gave the most accurate solution for all model problems. The most sophisticated approach from the nevergrad package (BS-7) turned out to be the next in accuracy (the method failed to converge in only two cases out of ten).

Quadratic unconstrained binary optimization

QUBO is a widely known NP-hard problem [12] which unifies a rich variety of combinatorial optimization problems, from finance and economics applications to machine learning and quantum computing. The QUBO formulation in a very natural manner utilizes penalty functions, yielding exact model representations in contrast to the approximate representations produced by customary uses of penalty functions. The standard QUBO problem can be formulated as follows:

f(x) = x^T Q x → min_x, s.t. x ∈ {0, 1}^d,

where x is a vector of binary decision variables of length d and Q ∈ R^{d×d} is a square matrix of constants. In all our experiments we fixed the number of dimensions as d = 50.
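A hypothetical tiny instance (not one of the paper's d = 50 benchmarks) shows how a combinatorial problem becomes a QUBO matrix: the max-cut of a 5-node cycle graph, encoded so that f(x) = x^T Q x equals minus the cut size, and solved here by brute force since 2^5 is small.

```python
import itertools

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5
Q = np.zeros((n, n))
for i, j in edges:
    # cut(x) = sum over edges of (x_i + x_j - 2 x_i x_j); we minimise -cut(x),
    # using x_i^2 = x_i for binary variables to place linear terms on the diagonal.
    Q[i, i] -= 1.0
    Q[j, j] -= 1.0
    Q[i, j] += 1.0
    Q[j, i] += 1.0

vals = [float(np.array(b) @ Q @ np.array(b))
        for b in itertools.product([0, 1], repeat=n)]
print(min(vals))   # -4.0: an odd cycle of length 5 has maximum cut 4
```

For d = 50 the same tensor has 2^50 elements, so exhaustive enumeration is impossible and sampling-based optimization over the implicit tensor Y is required.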
We consider the following QUBO problems from the qubogen package: the Max-Cut Problem (P-11; finding a partition of an undirected graph into two sets such that the number of edges between the two sets is as large as possible), the Minimum Vertex Cover Problem (P-12; finding a subset of the graph vertices with a minimum number of vertices such that each edge in the graph is incident to it) and the Quadratic Knapsack Problem (P-13; finding a subset of maximum profit that satisfies the budget limitations from a set of potential projects with specified interactions between pairs of projects). We also consider one more benchmark (P-14) from the work [11] (problem k_3; d = 50), where the angle-modulated bat algorithm (AMBA) was proposed for high-dimensional QUBO problems with an engineering application to antenna topology optimization. This is the ordinary binary knapsack problem with fixed weights w_i ∈ [5, 20], profits p_i ∈ [50, 100] (i = 1, 2, ..., d) and the maximum capacity C = 1000. In the experiments, we used the same values of the weights and profits as in [11]. For all four considered problems (P-11, P-12, P-13, P-14) the proposed method PROTES gives the best result, as can be seen from Table 1, and the baseline BS-7 again turned out to be the next in accuracy. We also note that several optimization methods were compared in [11] for the P-14 problem, and the solution obtained using the PROTES method (the result −3079) turns out to be more accurate.

Optimal control

Suppose we have a state variable z ∈ R controlled by a binary variable x called the control (i.e., just a switch with modes "off" = 0 and "on" = 1) over some discrete interval of time [0, T]. The state z(t + 1) at time t + 1 depends on the control x(t) at time t and is obtained from the solution of the following differential equation:

ż(τ) = g(z(τ), x(t)),  t ≤ τ < t + 1,

where the function g is called the equation function.
The optimal control problem is to find a sequence of controls x* = [x*(0), x*(1), ..., x*(T)] (the optimal solution) over the given time interval [0, T] that minimizes a given objective function G. Formulating the problem mathematically, we need to find a solution of

G(z, x) → min_{z,x}, s.t. z(0) = z_0, ż(τ) = g(z(τ), x(t)), t ≤ τ < t + 1, x(t) ∈ {0, 1}.

The initial and the reference state are fixed at the values z_0 = 0.8, z_ref = 0.7. For a fixed initial value z_0 and a fixed equation function g, the objective function G can be represented as a binary multidimensional tensor whose elements are calculated using the function f(x) = G(z(x), x); hence we can apply discrete optimization methods to find x_min, which approximates the optimal solution x*. We considered several values of the variable T: 25, 50 and 100 (benchmarks P-15, P-16 and P-17, respectively). As follows from the results presented in Table 1, PROTES gives the most accurate solution in all three cases. Note that a result comparable in accuracy with our method is obtained by the baselines in only one case (P-16, BS-7).

Optimal control with constraints

In practical applications, some conditions or constraints may be imposed on the solution of the optimal control problem. We consider the following control constraint P in this work: the control variable x ∈ {0, 1}^N can take the value "1" only in runs of no fewer than 3 consecutive time steps. Formally, this can be written as follows:

P = { x : x[t] ≥ x[t − 1] − x[t − 2] and x[t] ≥ x[t − 1] − x[t − 3], ∀t : 1 ≤ t ≤ N + 2 },

where we let x[t] := 0 for t < 1 and t > N. To account for this condition in the PROTES algorithm, we constructively build the initial distribution in the form of an indicator tensor, as was described in Section 4 in the constrained-optimization subsection. The details of this construction are presented in the Appendix.
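The optimal-control setup can be illustrated with a toy discretization. The equation function g and the quadratic tracking objective G below are assumptions for illustration (the paper does not reproduce its exact g and G here); z_0 and z_ref match the values given in the text, and a tiny horizon T = 8 allows exhaustive search over all 2^T control sequences, i.e., over the implicit tensor that PROTES would optimize without enumerating it.

```python
import itertools

z0, z_ref, T = 0.8, 0.7, 8

def g(z, x):
    # Hypothetical equation function: "on" damps the state, "off" amplifies it.
    return (0.5 - x) * z

def simulate(controls, steps=10):
    # Euler integration of z' = g(z, x) over each unit time interval.
    z, traj = z0, []
    for x in controls:
        for _ in range(steps):
            z += (1.0 / steps) * g(z, x)
        traj.append(z)
    return traj

def G(controls):
    # Quadratic tracking of the reference state z_ref.
    return sum((z - z_ref) ** 2 for z in simulate(controls))

best = min(itertools.product([0, 1], repeat=T), key=G)
print(best, round(G(best), 4))
```

For T = 100, the same tensor has 2^100 elements, which is the regime quoted in the abstract where only sampling-based methods remain applicable.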
The numerical results for T = 25 (P-18), T = 50 (P-19) and T = 100 (P-20) are reported in Table 1. In two cases out of three (P-19, P-20), our method showed the best result, and in one case (P-18) it was slightly behind the TTOpt method (BS-1), which, however, gave a significantly worse result in the two other cases.

Robustness and performance of PROTES

The results in Table 1 relate to an "intuitive" selection of the hyperparameters for the PROTES method (as mentioned above, we used the values K = 100, k = 10, k_gd = 1, λ = 0.05 and R = 5). In Figure 4, we present an analysis of the dependence of the optimization result for the benchmark P-14 on the choice of the hyperparameters K, k and R, with fixed k_gd = 1 and λ = 0.05. We report the relative error of the result for all combinations K = 50, 100, 150, 200, 250; k = 5, 10, 15, 20, 25; R = 3, 5, 7. As we can see from the plots, the hyperparameters used in the main calculations are not optimal for this particular problem, that is, additional fine-tuning of the method for specific problems or classes of problems is possible. At the same time, according to the results in Figure 4, the method remains stable over a wide range of hyperparameter values. We also note that all computations were carried out on a regular laptop, while the operating times of the considered optimizers were commensurate; for example, for the benchmark P-17, the measured operating times (in seconds) turned out to be as follows: PROTES (641), BS-1 (607), BS-2 (4245), BS-7 (780). A more detailed analysis of the PROTES performance and of the dependence of optimization results on the values of the hyperparameters is given in the Appendix.

Conclusions

In this work, we presented an optimization algorithm PROTES based on sampling from a probability density defined in the tensor train format. For all considered numerical experiments, we used the same set of hyperparameters, so our algorithm is rather universal.
To take into account constraints, as in the problem of optimal control with constraints, we only considered them in the form of a specially selected initial approximation (a special form of an indicator tensor in the tensor train format); further on, the algorithm did not consider the constraints explicitly. This approach allows us to extend the capabilities of the algorithm by using the properties of the tensor train representation. Numerical experiments show that we outperform many popular optimization methods. The main direction of our future work is scaling the method to large dimensions. For d ≥ 1000 we have encountered numerous technical difficulties, which can be alleviated by other tensor formats (such as hierarchical Tucker, which can be parallelized over d) and by more efficient implementations of the optimization method (for now, we used standard automatic differentiation without special tensor optimization methods such as Riemannian optimization).

Supplementary Material

Stability of PROTES

To check the stability of the optimization result, a series of 10 calculations was performed for each method (PROTES, BS-1 -- BS-7) with random initializations. We consider the binary knapsack problem from the work [11] (benchmark P-14, described in detail in the main text) with the known exact minimum −3103. For all methods, the limit on the number of requests was fixed at the value M = 10^5. All other parameters were the same as in the computations from the main text, i.e., the PROTES parameters are K = 100, k = 10, k_gd = 1, λ = 0.05, R = 5. In Table 2 we present the average and best results over the 10 runs for each optimization method. As follows from the reported results, only PROTES and the Portfolio method (BS-7) managed to find the exact optimum, while the average result for PROTES is significantly better than that of all the baselines.
Choice of hyperparameters for PROTES

We conducted additional experiments varying the values of the hyperparameters of the PROTES method for the benchmarks P-01 (Figure 5 and Figure 6; we report the optimization result) and P-14 (Figure 4 in the main text and Figure 7 below; we report the relative error of the optimization result). In the first series of experiments (Figure 5 below and Figure 4 in the main text), we fixed k_gd = 1 and λ = 0.05. As follows from the presented results, for problem P-01 the dependence on the choice of hyperparameters turns out to be extremely weak, except for outliers at the small values of the learning rate λ = 0.005 and λ = 0.01. For the more complex problem P-14, the dependence of the result on the choice of hyperparameters is more intricate, but it can be seen that the method remains stable over a wide range of hyperparameter values.

Performance comparison of the optimization methods

All the results described in the main text and presented there in Table 1 were obtained on a regular laptop. In Table 3 we report the related computation time for each method (PROTES, BS-1 -- BS-7) and each model problem (P-01 -- P-20). As follows from the results, PROTES works much faster than the classical optimization methods (BS-3 -- BS-7), and also faster than the tensor-based methods (BS-1 and BS-2) for most optimal control problems (P-15 -- P-20). However, the TTOpt (BS-1) and Optima-TT (BS-2) methods are faster for the simpler analytic (P-01 -- P-10) and QUBO (P-11 -- P-14) problems. This is due to the fact that, within the framework of the dynamic TT-rank refinement procedure in these methods, the TT-rank, and hence the computation time, turn out to be significantly higher for the optimal control problems. We note that the very short running time of the TTOpt method for P-20 is because most of the requests of the method did not satisfy the imposed constraints, and in this case the differential equation was not solved.
Derivative functions for the indicator tensor in the optimal control problem

We use the following derivative functions for the constructive building of the tensor described in [32]:

f^k_0(x) = 0 if x = 0 or x = l, and None otherwise,
f^k_1(x) = min(l, x + 1),

for all TT-cores except the last one (k = 1, 2, ..., d − 1), and for the last TT-core

f^d_0(x) = 1 if x = 0 or x = l, and 0 otherwise,
f^d_1(x) = 1 if x ≥ l − 1, and 0 otherwise.

A tensor in the TT-format built on these derivative functions equals 0 if its binary argument vector contains a run of ones of length less than l, and equals 1 in all other cases. Let us briefly explain why these derivative functions give the tensor of the restriction condition (i.e., the constraint in the considered optimal control problem). Recall that the upper index in the derivative-function notation corresponds to the index number of the tensor argument, and the lower index corresponds to the value of that argument index. In our scheme, the argument of the derivative functions has the meaning of the number of ones that already stand to the left of the given index. First, let us focus on the functions for all indices except the rightmost one (k = 1, 2, ..., d − 1). If the current index is one, we simply increase the value of the argument by 1 and pass it on to the input of the next derivative function. This is what the function f^k_1 does (we cap the value at l, so once the number of ones has reached l we no longer distinguish whether it is greater than or equal to l; this capping reduces the TT-rank but does not affect the result). For a zero value of the current index, if the argument is zero (which means that the previous index is also zero), the function f^k_0 also returns zero, as nothing has changed.
If the argument of this function is l, there are l or more ones in a row to the left of the considered index, so the given condition is not violated, and the function simply returns zero, resetting the count of ones. If the argument is greater than zero but less than l, the condition is violated, because a preceding run of ones of length less than l is cut off at the current index. In this case, the function f^k_0 returns None, which means that the value of the tensor will be 0 regardless of the subsequent indices. The functions that correspond to the last (k = d) index behave similarly. Namely, if the last index is zero, the function f^d_0 returns 1 if its argument is 0 or l, since this does not violate the condition, as just explained; otherwise it returns 0. If the last index is 1, the function f^d_1 returns 1 only if its argument equals l − 1 or l, since this means that the current one is an element of a run of ones of length at least l; otherwise the function f^d_1 returns 0.

Notation. We consider the discrete minimization problem min_x f(x), s.t. x = [n_1, n_2, ..., n_d], n_i ∈ {1, 2, ..., N_i}. We denote vectors with bold letters (a, b, c, ...), we use upper case letters (A, B, C, ...) for matrices, and calligraphic upper case letters for tensors with d > 2. The (n_1, n_2, ..., n_d)-th entry of a d-dimensional tensor Y ∈ R^{N_1 × N_2 × ... × N_d} is denoted by y = Y[n_1, n_2, ..., n_d], where n_i = 1, 2, ..., N_i (i = 1, 2, ..., d), and N_i is the size of the i-th mode. The mode-i slice of such a tensor is denoted by Y[n_1, ..., n_{i−1}, :, n_{i+1}, ..., n_d].

Figure 1: Schematic representation of the proposed optimization method PROTES.

Figure 2: Schematic representation of the TT-decomposition. The top picture demonstrates the calculation of a specific tensor element x = [n_1, n_2, ..., n_d] from its TT-representation, and the bottom picture presents the related tensor network diagram.
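The recursion defined by these derivative functions can be checked directly in scalar form. The sketch below (the function name is ours) evaluates the resulting indicator tensor at a single binary multi-index by threading the "number of ones to the left in the current run" through the cores, mirroring f^k_0, f^k_1 and the last-core functions f^d_0, f^d_1; a None from f^k_0 short-circuits to a tensor value of 0.

```python
def run_constraint_indicator(bits, l):
    """Return 1 if every maximal run of ones in `bits` has length >= l,
    else 0, following the derivative-function recursion of [32]."""
    d = len(bits)
    x = 0  # ones seen so far in the current run, capped at l
    for k, b in enumerate(bits):
        if k < d - 1:                       # all cores except the last
            if b == 1:
                x = min(l, x + 1)           # f^k_1: extend the run, cap at l
            elif x == 0 or x == l:
                x = 0                       # f^k_0: run (if any) was long enough
            else:
                return 0                    # f^k_0 -> None: run cut off too short
        else:                               # last core (k = d)
            if b == 0:
                return 1 if (x == 0 or x == l) else 0   # f^d_0
            return 1 if x >= l - 1 else 0               # f^d_1
    return 1  # only reached for empty input
```

A brute-force comparison against a direct run-length check confirms the construction on all binary vectors of a small dimension.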
As baselines we use OnePlusOne (BS-3), PSO (BS-4), NoisyBandit (BS-5), SPSA (BS-6), and the Portfolio approach (BS-7), which is based on a combination of the CMA-ES, DE, and scr-Hammersley methods. The model problems and the obtained results are discussed in detail below in this section.

Figure 3: The dependence of the found optimum value f(x_min), in log scale, on the number of requests to the objective function for all considered optimization methods, for benchmarks P-02 (left plot), P-14 (middle plot; for clarity of demonstration, the negative values for this benchmark are inverted, i.e., we plot −f(x_min)) and P-16 (right plot).

Other methods reported for the P-14 problem: BPSO (result −2854), BBA (result −2976), AMBA (result −2956), A-AMBA (result −2961), and P-AMBA (result −2989).

The control variable is binary, x(t) ∈ {0, 1}, t = 0, 1, ..., T, where z = [z(0), z(1), ..., z(T)] is the state variable path. In the numerical experiments we consider the nonlinear equation function g(z, x) = z^3 − x, and since it is nonlinear, finding an optimal solution raises a lot of difficulties. The objective function G we take in the form G(z, x) = ...

Figure 4: Relative error of the optimization result with the PROTES method for the P-14 model problem with a known exact minimum, for different values of the hyperparameters K, k and R.

Figure 5: Optimization result (approximation of the minimum value) with the PROTES method for P-01, for different values of the hyperparameters K, k and R.

Figure 6: Optimization result (approximation of the minimum value) with the PROTES method for the P-01 model problem, for different values of the hyperparameters: learning rate (LR), k_gd, R.

Figure 7: Relative error of the optimization result with the PROTES method for the P-14 model problem with a known exact minimum, for different values of the hyperparameters: learning rate (LR), k_gd, R.
In the first series of experiments, we fixed k_gd = 1 and λ = 0.05 and tried all combinations of the remaining hyperparameters: K = 50, 100, 150, 200, 250; k = 5, 10, 15, 20, 25; R = 3, 5, 7. In the second series of experiments (Figures 6 and 7), we fixed K = 100 and k = 10 and tried all combinations: k_gd = 1, 10, 25, 50, 100; λ = 0.005, 0.01, 0.05, 0.1, 0.5; R = 3, 5, 7.

Table 1: Minimization results for all selected benchmarks (P-01 through P-20). The values obtained by the proposed method PROTES and by all considered baselines (BS-1 through BS-7) are reported.

        PROTES    BS-1      BS-2      BS-3      BS-4      BS-5      BS-6      BS-7
ANALYTIC FUNCTIONS
P-01    1.3E+01   1.3E+01   1.3E+01   1.3E+01   1.3E+01   2.1E+01   1.3E+01   1.3E+01
P-02    6.5E+00   6.5E+00   6.5E+00   6.9E+00   6.8E+00   1.5E+01   7.5E+00   6.8E+00
P-03   -9.4E-01  -9.4E-01  -9.4E-01  -9.4E-01  -9.4E-01  -3.5E-01  -9.4E-01  -9.4E-01
P-04    1.3E+00   1.3E+00   1.3E+00   1.3E+00   1.3E+00   6.3E+00   1.3E+00   1.3E+00
P-05   -3.7E+00  -3.7E+00  -3.7E+00  -2.6E+00  -3.0E+00  -1.8E+00  -1.2E+00  -3.7E+00
P-06    1.2E-01   1.2E-01   1.2E-01   1.2E-01   1.2E-01   1.3E-01   4.2E-01   1.2E-01
P-07    6.2E+06   6.2E+06   6.2E+06   6.3E+06   1.7E+07   2.2E+10   3.1E+08   6.2E+06
P-08    6.0E+01   6.0E+01   6.0E+01   6.0E+01   6.0E+01   1.2E+02   1.0E+02   6.0E+01
P-09    2.7E+00   2.7E+00   2.7E+00   3.0E+00   2.7E+00   2.9E+00   3.4E+00   2.7E+00
P-10   -8.7E+02  -8.7E+02  -8.7E+02  -6.1E+02  -6.9E+02   7.0E+02   2.6E+03  -8.5E+02
QUBO
P-11   -3.6E+02  -3.5E+02  -3.4E+02  -3.2E+02  -3.4E+02  -3.2E+02  -3.3E+02  -3.6E+02
P-12   -5.9E+03  -5.9E+03  -5.9E+03  -5.6E+03  -5.9E+03  -5.3E+03  -5.9E+03  -5.9E+03
P-13   -3.1E+00  -3.0E+00  -2.8E+00   0.0E+00   1.5E+01   2.8E+02  -2.9E+00  -3.0E+00
P-14   -3.1E+03  -2.8E+03  -3.0E+03  -2.6E+03  -3.0E+03  -2.7E+03  -3.0E+03  -3.0E+03
CONTROL
P-15    6.7E-03   7.4E-03   2.3E-02   8.4E-03   8.9E-03   3.1E-02   8.7E-02   7.3E-03
P-16    1.4E-02   2.6E-02   3.5E-02   1.7E-02   1.7E-02   5.3E-02   5.2E-02   1.4E-02
P-17    3.0E-02   5.7E-01   1.5E-01   4.8E-02   3.6E-02   7.7E-02   5.3E-02   3.7E-02
CONTROL + CONSTR.
P-18    1.4E-02   1.1E-02   1.4E-02   3.4E-02   6.2E-02   2.8E-01   6.4E-02   2.1E-02
P-19    6.4E-02   5.7E-01   6.7E-02   FAIL      FAIL      FAIL      FAIL      FAIL
P-20    1.5E-01   FAIL      2.0E-01   FAIL      FAIL      FAIL      FAIL      FAIL

Table 2: Average and best result over 10 independent runs for the P-14 benchmark.

        PROTES   BS-1    BS-2    BS-3    BS-4    BS-5    BS-6    BS-7
MEAN    -3095    -2992   -3048   -2650   -2937   -2701   -3064   -3075
BEST    -3103    -3074   -3084   -2825   -2996   -2752   -3094   -3103

Table 3: Computation time in seconds for all selected benchmarks (P-01 through P-20) and for all used optimization methods (PROTES, BS-1 through BS-7).

        PROTES   BS-1     BS-2     BS-3    BS-4    BS-5    BS-6    BS-7
ANALYTIC FUNCTIONS
P-01    3.28     0.06     0.11     23.45   25.31   22.78   22.03   67.23
P-02    2.25     0.05     0.06     22.42   28.68   20.6    19.78   77.01
P-03    2.36     0.05     0.03     26.11   23.62   20.55   19.71   65.62
P-04    2.32     0.05     0.07     22.04   24.04   20.67   19.9    65.51
P-05    2.33     0.05     0.06     26.47   26.56   20.78   20.1    78.42
P-06    2.34     0.05     0.1      23.51   28.32   22.42   21.58   84.1
P-07    2.25     0.06     0.03     26.72   34.95   21.66   21.26   83.54
P-08    2.25     0.05     0.07     22.7    24.03   21.29   20.28   65.47
P-09    2.31     0.06     0.11     22.98   24.11   21.14   20.56   65.55
P-10    2.35     0.05     0.03     23.56   33.66   21.25   22.61   81.94
QUBO
P-11    2.7      0.06     0.44     16.32   22.54   17.65   17.98   67.61
P-12    2.16     0.05     0.36     17.06   21.95   17.28   16.77   79.74
P-13    2.29     0.05     0.34     18.78   21.09   20.51   17.97   77.22
P-14    2.33     0.13     0.4      16.41   27.42   17.61   17.59   74.77
CONTROL
P-15    513.6    1256.0   1839.0   550.0   545.0   620.3   530.4   707.5
P-16    542.4    969.4    3007.0   578.0   595.0   570.7   573.7   697.5
P-17    640.7    607.3    4245.0   673.6   687.6   661.0   687.1   779.5
CONTROL + CONSTR.
P-18    328.1    92.9     588.1    319.1   516.2   202.9   533.9   474.5
P-19    8.74     69.49    912.0    17.13   17.45   16.16   16.84   47.43
P-20    9.23     0.53     931.8    21.16   21.42   20.28   20.84   62.26

Footnotes:
- Further, for concreteness, we will consider the minimization problem in this paper, while the proposed method can be applied to the discrete maximization problem without any significant modifications.
- The program code with the proposed method and numerical examples is available in the public repository https://github.com/anabatsh/PROTES.
- We used the implementation of the method from https://github.com/AndreiChertkov/ttopt.
- We used the implementation from https://github.com/AndreiChertkov/teneva. The TT-tensor for optimization was generated by the TT-cross method.
- See https://github.com/facebookresearch/nevergrad.
- This function corresponds to the problem of modeling the time it takes a piston to complete one cycle within a cylinder; the description of its parameters can be found in [44, 4].
- See https://github.com/tamuhey/qubogen.

Acknowledgments and Disclosure of Funding

References

[1] Stephane Alarie et al. "Two decades of blackbox optimization applications". EURO Journal on Computational Optimization 9 (2021), p. 100011.
[2] Boian Alexandrov et al. "Challenging the curse of dimensionality in multidimensional numerical integration by using a low-rank tensor-train format". Mathematics 11.3 (2023), p. 534.
[3] Pauline Bennet et al. "Nevergrad: black-box optimization platform". SIGEVOlution 14.1 (2021), pp. 8-15.
[4] Andrei Chertkov, Gleb Ryzhakov, and Ivan Oseledets. "Black box approximation in the tensor train format initialized by ANOVA decomposition". arXiv preprint arXiv:2208.03380 (2022).
[5] Andrei Chertkov et al. "Optimization of functions given in the tensor train format". arXiv preprint arXiv:2209.14808 (2022).
[6] Andrzej Cichocki et al. "Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions". Foundations and Trends in Machine Learning 9.4-5 (2016), pp. 249-429.
[7] Andrzej Cichocki et al. "Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives". Foundations and Trends in Machine Learning 9.6 (2017), pp. 431-673.
[8] Johannes Dieterich and Bernd Hartke. "Empirical review of standard benchmark functions using evolutionary global optimization". Applied Mathematics 3.10 (2012), pp. 1552-1564.
[9] Benjamin Doerr and Frank Neumann. "A survey on recent progress in the theory of evolutionary algorithms for discrete optimization". ACM Transactions on Evolutionary Learning and Optimization 1.4 (2021), pp. 1-43.
[10] Sergey Dolgov et al. "Approximation and sampling of multivariate probability distributions in the tensor train decomposition". Statistics and Computing 30 (2020), pp. 603-625.
[11] Jian Dong, Zhiyu Wang, and Jinjun Mo. "A phase angle-modulated bat algorithm with application to antenna topology optimization". Applied Sciences 11.5 (2021), p. 2243.
[12] Fred Glover et al. "Quantum bridge analytics I: a tutorial on formulating and using QUBO models". Annals of Operations Research (2022), pp. 1-43.
[13] Lars Grasedyck, Daniel Kressner, and Christine Tobler. "A literature survey of low-rank tensor approximation techniques". GAMM-Mitteilungen 36.1 (2013), pp. 53-78.
[14] John Hammersley. "Monte Carlo methods for solving multivariable problems". Annals of the New York Academy of Sciences 86.3 (1960), pp. 844-874.
[15] Nikolaus Hansen. "The CMA evolution strategy: a comparing review". In: Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms. Springer Berlin Heidelberg, 2006, pp. 75-102.
[16] Momin Jamil and Xin-She Yang. "A literature survey of benchmark functions for global optimization problems". Journal of Mathematical Modelling and Numerical Optimisation 4.2 (2013), pp. 150-194.
[17] James Kennedy and Russell Eberhart. "Particle swarm optimization". In: Proceedings of ICNN'95 - International Conference on Neural Networks. Vol. 4. IEEE, 1995, pp. 1942-1948.
[18] Diederik P. Kingma and Jimmy Ba. "Adam: a method for stochastic optimization". arXiv preprint arXiv:1412.6980 (2014).
[19] Kirandeep Kour et al. "Efficient structure-preserving support tensor train machine". Journal of Machine Learning Research 24.4 (2023), pp. 1-22.
[20] Caio Kalil Lauand and Sean P. Meyn. "Approaching quartic convergence rates for quasi-stochastic approximation with application to gradient-free optimization". In: Advances in Neural Information Processing Systems. 2022.
[21] Tianyi Lin, Zeyu Zheng, and Michael Jordan. "Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization". Advances in Neural Information Processing Systems 35 (2022), pp. 26160-26175.
[22] John Maryak and Daniel Chin. "Global random optimization by simultaneous perturbation stochastic approximation". In: Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148). Vol. 2. IEEE, 2001, pp. 756-762.
[23] Yurii Nesterov and Vladimir Spokoiny. "Random gradient-free minimization of convex functions". Foundations of Computational Mathematics 17.2 (2017), pp. 527-566.
[24] Artyom Nikitin et al. "Are quantum computers practical yet? A case for feature selection in recommender systems using tensor networks". arXiv preprint arXiv:2205.04490 (2022).
[25] Georgii Novikov, Maxim Panov, and Ivan Oseledets. "Tensor-train density estimation". In: Uncertainty in Artificial Intelligence. PMLR, 2021, pp. 1321-1331.
[26] Ivan Oseledets. "Tensor-train decomposition". SIAM Journal on Scientific Computing 33.5 (2011), pp. 2295-2317.
[27] Ivan Oseledets and Eugene Tyrtyshnikov. "Breaking the curse of dimensionality, or how to use SVD in many dimensions". SIAM Journal on Scientific Computing 31.5 (2009), pp. 3744-3759.
[28] Ivan Oseledets and Eugene Tyrtyshnikov. "TT-cross approximation for multidimensional arrays". Linear Algebra and its Applications 432.1 (2010), pp. 70-88.
[29] Olivier Parcollet et al. "Learning Feynman diagrams with tensor trains". Bulletin of the American Physical Society (2023).
[30] Gary Parker and Ronald Rardin. Discrete Optimization. Elsevier, 2014.
[31] Jun Qi et al. "Exploiting low-rank tensor-train deep neural networks based on Riemannian gradient descent with illustrations of speech processing". IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023), pp. 633-642.
[32] Gleb Ryzhakov and Ivan Oseledets. "Constructive TT-representation of the tensors given as index interaction functions with applications". In: 11th International Conference on Learning Representations (ICLR). 2023.
[33] Jonathan Scarlett, Ilija Bogunovic, and Volkan Cevher. "Lower bounds on regret for noisy Gaussian process bandit optimization". In: Conference on Learning Theory. PMLR, 2017, pp. 1723-1742.
[34] Cheryl Selvanayagam et al. "Global optimization of surface warpage for inverse design of ultra-thin electronic packages using tensor train decomposition". IEEE Access 10 (2022), pp. 48589-48602.
[35] Suhan Shetty et al. "Tensor train for global optimization problems in robotics". arXiv preprint arXiv:2206.05077 (2022).
[36] Micheline Soley et al. "Global optimization with the iterative power algorithm via quantum computing and quantics tensor trains". Bulletin of the American Physical Society (2023).
[37] Konstantin Sozykin et al. "TTOpt: a maximum volume quantized tensor train-based optimization and its application to reinforcement learning". In: Advances in Neural Information Processing Systems. 2022.
[38] Rainer Storn and Kenneth Price. "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces". Journal of Global Optimization 11.4 (1997), pp. 341-359.
[39] Christoph Strössner, Bonan Sun, and Daniel Kressner. "Approximation in the extended functional tensor train format". arXiv preprint arXiv:2211.11338 (2022).
[40] Minghua Wang et al. "Tensor decompositions for hyperspectral data processing in remote sensing: a comprehensive review". IEEE Geoscience and Remote Sensing Magazine (2023).
[41] Xiaoxing Wang et al. "ZARTS: on zero-order optimization for neural architecture search". Advances in Neural Information Processing Systems 35 (2022), pp. 12868-12880.
[42] Daan Wierstra et al. "Natural evolution strategies". Journal of Machine Learning Research 15.27 (2014), pp. 949-980.
[43] Ronald Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". Machine Learning 8.3-4 (1992), pp. 229-256.
[44] Vitaly Zankin, Gleb Ryzhakov, and Ivan Oseledets. "Gradient descent-based D-optimal design for the least-squares polynomial approximation". arXiv preprint arXiv:1806.06631 (2018).
Phase Space Reconstruction from Accelerator Beam Measurements Using Neural Networks and Differentiable Simulations

R. Roussel, A. Edelen, C. Mayes, D. Ratner, J. P. Gonzalez-Aguilera, S. Kim, E. Wisniewski, J. Power

SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
University of Chicago, Chicago, Illinois 60637, USA
Argonne National Laboratory, Argonne, Illinois 60439, USA

DOI: 10.1103/PhysRevLett.130.145001; arXiv:2209.04505v2 [physics.acc-ph]

Characterizing the phase space distribution of particle beams in accelerators is a central part of accelerator understanding and performance optimization. However, conventional reconstruction-based techniques either use simplifying assumptions or require specialized diagnostics to infer high-dimensional (> 2D) beam properties. In this Letter, we introduce a general-purpose algorithm that combines neural networks with differentiable particle tracking to efficiently reconstruct high-dimensional phase space distributions without using specialized beam diagnostics or beam manipulations. We demonstrate that our algorithm accurately reconstructs detailed 4D phase space distributions with corresponding confidence intervals in both simulation and experiment using a single focusing quadrupole and diagnostic screen. This technique allows for the measurement of multiple correlated phase spaces simultaneously, which will enable simplified 6D phase space distribution reconstructions in the future.
(Dated: January 30, 2023)

Increasingly precise control of the distribution of particles in position-momentum phase space is needed for emerging applications of accelerators [1]. This includes, for example, new experiments at free electron lasers [2-6] and novel acceleration schemes that promise higher-energy beams in compact spaces [7].
Numerous techniques have been developed for precision shaping of beam distributions [8]; however, the effectiveness of these techniques relies on accurate measurements of the 6D phase space distribution, which is a challenging task unto itself. Tomographic measurement techniques are used in accelerators to determine the density distribution of beam particles in phase space ρ(x, p_x, y, p_y, z, p_z) from limited measurements [9-14]. The simplest form of this uses scalar metrics, such as second-order moments, to describe observations of the transverse beam distribution when projected onto a scintillating screen [15-17]. This process, however, discards significant amounts of information about the beam distribution captured by high-resolution diagnostic screens and only predicts scalar quantities of the beam distribution. In contrast, methods using projections of the beam image, including filtered backprojection [12,18], algebraic reconstruction [19-21], and maximum entropy tomography (MENT) [13,22], produce more accurate reconstructions. The MENT algorithm is particularly well suited to reconstructing beams from limited and/or partial information sources about the beam distribution, as is the case in most experimental accelerator measurements. MENT solves for a phase space distribution that maximizes entropy (and, as a result, likelihood), subject to the constraint that the distribution accurately reproduces experimental measurements. While these techniques have been shown to effectively reconstruct 2D phase spaces from image projections using algebraic methods, application to higher-dimensional spaces requires independence assumptions between the phase spaces of principal coordinate axes (x, y, z), complicated phase space rotation procedures [20,23], or simultaneous measurement of multiple 2D sub-spaces with specialized diagnostic hardware [24].
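As a concrete illustration of the algebraic-reconstruction family mentioned above, the Kaczmarz method recovers an unknown vector (for instance, a discretized phase space density) from linear projection measurements by cyclically projecting the estimate onto each measurement hyperplane. The sketch below is a generic linear-system version (function name and dimensions are illustrative), not a tomography-specific implementation:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50):
    """Algebraic reconstruction via Kaczmarz row-action iterations:
    repeatedly project the current estimate onto each measurement
    hyperplane A[i] @ x = b[i]. Converges for consistent systems."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection onto the i-th hyperplane.
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

For a consistent, well-conditioned system the residual shrinks geometrically with the number of sweeps.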
Numerical optimization methods can also be used to infer beam distributions from experimental data. For example, arbitrary beam distributions can be parameterized by a set of principal components [25] whose relative weights can be optimized to produce a beam distribution that, when tracked through a simulation, reproduces experimental measurements. Alternatively, heuristics can be used to delete or generate particles in a distribution until particle tracking results match experiments [26, 27]. Unfortunately, these methods suffer from increasing computational cost when extended to reconstructing high-dimensional phase space distributions, primarily due to the cost associated with optimizing the large number of free parameters needed to represent detailed beam characteristics in high-dimensional phase spaces. In this Letter we describe a new method that provides detailed reconstructions of the beam phase space using simple and widely available accelerator elements and diagnostics. To achieve this, we take advantage of recent developments in machine learning to introduce two new concepts (shown in Fig. 1): a method for parameterizing arbitrary beam distributions in 6D phase space, and a differentiable particle tracking simulation that allows us to learn the beam distribution from arbitrary downstream accelerator measurements. We examine how this method extracts detailed 4-dimensional phase space distributions from measurements in simulation and experiment, using a simple diagnostic beamline containing a single quadrupole, a drift and a diagnostic screen to image the transverse (x, y) beam distribution. Finally, we discuss current limitations of this method as well as future directions for the design of novel accelerator diagnostics using this technique.

FIG. 1. Description of our approach for reconstructing phase space beam distributions. First, a 6D base distribution is transformed via a neural network, parameterized by θ_t, into a proposed initial distribution. This distribution is then transported through a differentiable accelerator simulation of the tomographic beamline. The quadrupole is scanned to produce a series of images on the screen, both in simulation and on the operating accelerator. The images produced by the simulation, Q_n^{(i,j)}, and by the accelerator, R_n^{(i,j)}, are then compared with a custom loss function, which attempts to maximize the entropy of the proposal distribution, constrained on accurately reproducing experimental measurements. This loss function is then used to update the neural network parameters θ_t → θ_{t+1} via gradient descent. The neural network transformation that minimizes the loss function generates the beam distribution that has the highest likelihood of matching the real initial beam distribution.

We first demonstrate our algorithm using a synthetic example, where we attempt to determine the distribution of a 10-MeV beam given a predefined structure in 6D phase space. The propagation of a synthetic beam distribution through a simple diagnostic beamline, containing a 10 cm long quadrupole followed by a 1.0 m drift, is simulated using a custom implementation of Bmad [28], referred to here as Bmad-X. To illustrate the capabilities of our technique, the synthetic beam contains multiple higher-order moments between each pair of phase space coordinates (see the Supplemental Materials for details). To simulate an experimental measurement, we simulate particles traveling through the diagnostic beamline while the quadrupole strength k is scanned over N points. The final transverse distribution of the beam is measured at each quadrupole strength using a simulated 200 × 200 pixel screen with a pixel resolution of 300 µm (image data can be viewed in the Supplemental Materials).
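In the thin-lens approximation, a quadrupole of strength k followed by a drift acts on each particle's (x, x') through a 2×2 transfer matrix, so scanning k shears the phase space projected onto the screen. The sketch below illustrates this forward model (it is not the Bmad-X implementation used in the paper; function and parameter names are ours):

```python
import numpy as np

def track_quad_drift(x, xp, k, l_quad=0.1, l_drift=1.0):
    """Thin-lens quadrupole (integrated strength k*l_quad) followed by a drift."""
    quad = np.array([[1.0, 0.0], [-k * l_quad, 1.0]])  # thin-lens angular kick
    drift = np.array([[1.0, l_drift], [0.0, 1.0]])     # free propagation
    m = drift @ quad                                   # combined transfer matrix
    coords = m @ np.vstack([x, xp])
    return coords[0], coords[1]                        # (x, x') at the screen

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1e-3, 10_000)   # 1 mm beam size
xp = rng.normal(0.0, 1e-4, 10_000)  # 0.1 mrad divergence
for k in np.linspace(-10.0, 10.0, 5):  # quadrupole scan
    xs, _ = track_quad_drift(x, xp, k)
    print(f"k={k:+.1f} 1/m^2  rms x at screen = {xs.std():.2e} m")
```

Each scan point samples the initial phase space under a different linear map, which is what makes the set of screen images informative about the upstream distribution.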
The set of images, where the intensity of pixel (i, j) on the n-th image is denoted R_n^{(i,j)}, is then collected together with the corresponding quadrupole strengths to create the data set, which is then split into training and testing subsets by selecting every other sample as a test sample, resulting in 10 samples in each subset. The reconstruction algorithm begins with the generation of arbitrary initial beam distributions (referred to here as proposal distributions) through the use of a neural network transformation. A neural network, consisting of only 2 fully-connected layers of 20 neurons each, is used to transform samples drawn from a 6D normal distribution centered at the origin into macro-particle coordinates in real 6D phase space (where positional coordinates are given in meters and momentum coordinates in radians for transverse momenta). As a result, the coordinates of particles in the proposal distribution are fully parameterized by the neural network parameter set θ_t. Fitting the neural network parameters to experimental measurements is done by minimizing a loss function to determine the most likely initial beam distribution, subject to the constraint that it reproduces experimental measurements; this is similar to the MENT algorithm [22]. The likelihood of an initial beam distribution in phase space is maximized by maximizing the distribution entropy, which is proportional to the log of the 6D beam emittance ε_6D [29]. Thus, we specify a loss function that minimizes the negative entropy of the proposal beam distribution, penalized by the degree to which the proposal distribution reproduces measurements of the transverse beam distribution at the screen location. To evaluate the penalty for a given proposal distribution, we track the proposal distribution through a batch of accelerator simulations that mimic experimental conditions to generate a set of simulated images Q_n^{(i,j)} to compare with experimental measurements.
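The neural-network parameterization described above can be sketched in a few lines of PyTorch. The two 20-neuron layers match the text; the activation function and the output scaling are illustrative assumptions, since the paper does not specify them here:

```python
import torch

# Map 6D standard-normal samples to macro-particle phase space
# coordinates (x, p_x, y, p_y, z, p_z); the network weights fully
# parameterize the proposal distribution.
torch.manual_seed(0)
transform = torch.nn.Sequential(
    torch.nn.Linear(6, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 6),
)

base = torch.randn(100_000, 6)    # fixed 6D base distribution
beam = 1e-3 * transform(base)     # proposal distribution (scale is illustrative)
print(beam.shape)                 # torch.Size([100000, 6])
```

Because `beam` is produced by differentiable operations, gradients of any downstream loss flow back to the network weights, which is what makes the fit tractable despite the thousands of free parameters.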
The total loss function is given by

l = -\log\!\left[(2\pi e)^{3}\,\varepsilon_{6D}\right] + \lambda\,\frac{1}{NIJ}\sum_{n,i,j}^{N,I,J}\left|R_{n}^{(i,j)} - Q_{n}^{(i,j)}\right|    (1)

where λ scales the distribution loss penalty function relative to the entropy term and is chosen empirically based on the resolution of the images. However, the large (> 10^3) number of free parameters contained in the neural network transformation used to generate proposal distributions necessitates the use of gradient-based optimization algorithms such as Adam [30] to minimize the loss function. Thus, we need to implement the computation of the loss function such that it supports backward differentiation [31] (referred to here as differentiable computations), allowing us to cheaply compute loss function derivatives with respect to every neural network parameter. This requires that every step involved in calculating the loss function is also differentiable, including computing the beam emittance and tracking particles through the accelerator. Unfortunately, to the best of our knowledge, no particle tracking codes currently support backward differentiation. To satisfy this requirement, we implement particle tracking in Bmad-X using the machine learning library PyTorch [32]. We estimate screen pixel intensities from a discrete particle distribution with a differentiable implementation of kernel density estimation [33]. Results from our reconstruction of the initial beam phase space using synthetic images are shown in Fig. 2. We characterize the uncertainty of our reconstruction using snapshot ensembling [34]. During model training, we cycle the learning rate of gradient descent in a periodic fashion, which encourages the optimizer to explore multiple possible solutions (if they exist). After several of these cycles (known as a "burn-in" period), we save the model parameters at each minimum of the learning rate cycle, as shown in Fig. 3(a). We then weight predictions from each model equally, using them to predict a mean initial beam density distribution Fig.
2(a-e) with associated confidence intervals [Fig. 2(f-j)]. Performing this analysis by tracking 10^5 particles for each image took less than 30 seconds per ensemble sample using a professional-grade GPU (< 60 ms per iteration, 500 steps per ensemble sample). We see excellent agreement between the average reconstructed and synthetic projections in both transverse correlated and uncorrelated phase spaces. Furthermore, the prediction uncertainty from ensembling is on the order of a few percent relative to the predicted mean, providing confidence that the overall solution found during optimization is unique. As shown in Table I, reconstruction of the beam distribution from image data predicts transverse phase space emittances that are closer to the ground truth values than those predicted from second-order moment measurements of the transverse beam distribution. This results from non-linearities and cross-correlations present in the 4D transverse phase space distribution. It is instructive to examine the evolution of the proposal distribution during model training. In Fig. 3(b) we examine second-order scalar metrics of the proposal distribution after each training iteration for each phase space coordinate. The entropy term in Eq. (1) causes the distribution to expand in 6D phase space until constrained by experimental evidence. Phase space components that have the strongest impact on beam transport through the beamline as a function of quadrupole strength converge quickly to their true values, whereas those that have little to no impact (e.g. the longitudinal distribution characteristics) continue to grow. In other cases, there is weak coupling between the experimental measurements and beam properties; for example, chromatic focusing effects due to the energy spread σ_δ of the beam only weakly affect the measured images.
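The loss of Eq. (1), together with the differentiable kernel-density screen images it depends on, can be sketched as follows. This is a simplified illustration rather than the authors' Bmad-X code; the λ value and the kernel bandwidth are illustrative assumptions:

```python
import math
import torch

def kde_image(x, y, centers, bw):
    """Differentiable 'screen image': Gaussian kernel density estimate of
    the transverse particle positions, evaluated at pixel centers."""
    wx = torch.exp(-0.5 * ((x[:, None] - centers[None, :]) / bw) ** 2)
    wy = torch.exp(-0.5 * ((y[:, None] - centers[None, :]) / bw) ** 2)
    img = wx.T @ wy                      # (n_pixels, n_pixels)
    return img / img.sum()               # normalized like a measured image

def neg_entropy(beam):
    """-log[(2*pi*e)^3 * eps_6D], with eps_6D = sqrt(det of the 6x6 covariance)."""
    eps_6d = torch.sqrt(torch.linalg.det(torch.cov(beam.T)))
    return -(3.0 * math.log(2.0 * math.pi * math.e) + torch.log(eps_6d))

def loss_fn(beam, sim_imgs, meas_imgs, lam=1e3):   # lam is illustrative
    penalty = torch.mean(torch.abs(torch.stack(sim_imgs) - torch.stack(meas_imgs)))
    return neg_entropy(beam) + lam * penalty

beam = torch.randn(50_000, 6)            # unit Gaussian: eps_6D ≈ 1
print(float(neg_entropy(beam)))           # ≈ -3*log(2*pi*e) ≈ -8.51
```

Replacing a hard histogram with the Gaussian KDE is what keeps the pixel intensities differentiable with respect to particle coordinates, so gradients reach the network weights.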
Here, the reconstruction can only provide an upper-bound estimate of the energy spread, since small changes in transverse beam propagation due to chromatic aberrations are overshadowed by statistically dominated particle motion. Convergence of the proposal distribution thus provides a useful indicator of which phase space components can be reliably reconstructed from arbitrary sets of measurements. We now describe a demonstration of our method on an experimental example at the Argonne Wakefield Accelerator (AWA) [35] facility at Argonne National Laboratory. Our objective is to identify the phase space distribution of 65-MeV electron beams at the end of the primary accelerator beamline. The focusing strength of a quadrupole, with an effective length of 12 cm, is scanned while imaging the beam on a transverse scintillating screen located 3.38 m downstream. Charge windowing, image filtering, thresholding and downsampling were used to generate a set of 3 images for each quadrupole setting (see the Supplemental Materials for additional details). We developed a differentiable simulation of the experimental beamline in Bmad-X, including details of the diagnostics used, such as the location and properties of beamline elements and the per-pixel resolution of the imaging screen. With this simulation, we used our method to reconstruct the beam distribution from experimentally measured transverse beam images. The results, as shown in Figure 4 and Table II, demonstrate good agreement between experimental measurements of the beam distribution and predictions from our reconstruction. Scalar predictions of the beam emittances from the image-based reconstruction are consistent with those calculated from RMS measurements. Additionally, our reconstruction method accurately reproduces fine features of the transverse beam distribution that were not present in the training data set.
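The cyclic learning-rate schedule with snapshot saving used for uncertainty estimation earlier (Fig. 3(a)) can be sketched as below. The cosine shape, cycle length and burn-in count are illustrative; the gradient step itself is elided:

```python
import copy
import math
import torch

model = torch.nn.Linear(6, 6)                       # stand-in for the transform network
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Cosine-annealed cyclic schedule: the learning rate restarts every `cycle`
# steps; a model snapshot is saved at each minimum after the burn-in period.
cycle, burn_in, snapshots = 100, 2, []
for step in range(500):
    lr = 0.5 * 1e-2 * (1.0 + math.cos(math.pi * (step % cycle) / cycle))
    for group in opt.param_groups:
        group["lr"] = lr
    # ... one Adam step on the tomography loss would go here ...
    if step % cycle == cycle - 1 and step // cycle >= burn_in:
        snapshots.append(copy.deepcopy(model.state_dict()))
print(len(snapshots))  # 3 snapshots collected after the burn-in cycles
```

Averaging the predictions of the saved snapshots gives the ensemble mean and confidence intervals reported in Fig. 2.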
In this work, we have demonstrated how differentiable particle tracking simulations, combined with neural network based representations of beam distributions, can be used to interpret common image-based diagnostic measurements. Our method produces detailed reconstructions of 4-dimensional transverse phase space distributions from limited data sets, without the use of complex phase space manipulations or specialized diagnostics. Additionally, our reconstruction identifies limitations in resolving certain aspects of the beam distribution based on available measurements. This analysis is enabled by inexpensive gradient calculations provided by backwards differentiable physics simulations. As a result, we are able to determine thousands of free parameters used to describe complex beam distributions on a time scale similar to the time it takes to perform the physical tomographic measurements themselves. Thus, our reconstruction technique is suitable for inferring detailed beam distributions in an online fashion, i.e. during accelerator operations. As with any new algorithmic technique, there are areas for future improvement. Uncertainty estimates provided by the reconstruction algorithm only capture systematic uncertainties from optimizing the loss function, Eq. 1; thus it ignores systematic uncertainties of the physical measurement and stochastic noise inherent in real accelerators. Future work will incorporate Bayesian analysis techniques into the reconstruction to provide calibrated uncertainty estimates to experimental measurements. Also, while our method significantly increases the speed of high-dimensional phase space reconstructions, achieving this requires substantial amounts of memory to store the derivative information of each macro-particle at every tracking step (∼ 4 GB for each snapshot in the analysis performed here). 
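One standard way to trade compute for the activation memory discussed above is gradient checkpointing, available in PyTorch as `torch.utils.checkpoint`. The segment function below is a stand-in for tracking through part of the beamline, not Bmad-X code:

```python
import torch
from torch.utils.checkpoint import checkpoint

def tracking_segment(coords):
    """Stand-in for differentiable tracking through one beamline segment."""
    return torch.sin(coords) + 0.1 * coords

coords = torch.randn(10_000, 6, requires_grad=True)
out = coords
for _ in range(4):  # four segments of the beamline
    # Activations inside each segment are discarded after the forward pass
    # and recomputed during backward, cutting peak memory at the cost of
    # one extra forward evaluation per segment.
    out = checkpoint(tracking_segment, out, use_reentrant=False)
out.sum().backward()
print(coords.grad.shape)  # torch.Size([10000, 6])
```

Gradients with respect to the initial coordinates (and hence the network weights upstream) are unchanged; only the memory/compute trade-off differs.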
Peak memory consumption can be reduced through the use of checkpointing [36] or by pre-computing the derivatives associated with tracking particles through the entire beamline. Finally, this method is limited by the availability of accurate, computationally efficient, backward differentiable particle tracking simulations. In order to expand the range of diagnostic measurements that can be analyzed with this technique, further investment in differentiable implementations of particle tracking simulations is needed. This new reconstruction approach opens the door to efficient, detailed characterization of 6-dimensional phase space distributions and to new types of compound diagnostic measurements. By adding longitudinal beam manipulations, such as transverse deflecting cavities paired with dipole spectrometers, to the beamline used here, full phase space distributions can be characterized through a series of quadrupole strength and deflecting cavity phase scans.

The authors would like to thank Lukas Heinrich and Michael Kagan for useful discussion during the early conceptual development of this work. This work was supported by the U.S. Department of Energy, under DOE Contract No. DE-AC02-76SF00515, the Office of Science, Office of Basic Energy Sciences and the Center for Bright Beams, NSF award PHY-1549132. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award ERCAP0020725.

Author Contributions: R.R. and A.E. conceived of the idea to combine differentiable simulations with machine learning for phase space tomography. R.R. led the studies and performed the work for phase space reconstruction. A.E. and D.R. provided technical guidance and feedback. J.P.G. developed the differentiable simulation with guidance from R.R. and C.M. S.K., E.W. and J.P. assisted with experimental studies at AWA. R.R. and A.E. wrote the manuscript. J.P.G. and C.M. provided substantial edits to the manuscript. All authors provided feedback on the manuscript.

FIG. 2. Comparisons between the synthetic and reconstructed beam probability distributions using our method. (a-e) Plots of the mean predicted phase space density projections in 4D transverse phase space. Contours denote the 50th (black) and 95th (white) percentiles of the synthetic ground truth (dashed) and reconstructed (solid) distributions. (f-j) Plots of the predicted phase space density uncertainty.

FIG. 3. Evolution of the proposal distribution during training on synthetic data. (a) Learning rate schedule for snapshot ensembling. (b) Second-order moments of the beam reconstruction during training for each phase space coordinate. Dashed lines denote ground truth values. Vertical lines denote snapshot locations after the burn-in period.

FIG. 4. Reconstruction results from experimental measurements at AWA. Comparison between measured and predicted beam centroids (a) and second-order beam moments (b) on the diagnostic screen as a function of geometric quadrupole focusing strength (k). Points denote training samples and crosses denote test samples. The dashed line shows a second-order polynomial fit of the training data and the solid line shows predictions from the image-based phase space reconstruction. We also compare (c-h) screen images and reconstructed predictions for a subset of quadrupole strengths. Contours denote the 50th (black) and 95th (white) percentiles of the measured (dashed) and predicted (solid) screen distributions. Orange borders denote test samples.

TABLE I. Predicted emittances compared to true values.
Parameter | Ground truth | RMS prediction | Reconstruction | Unit
ε_x   | 2.00  | 2.47   | 2.00 ± 0.01  | mm-mrad
ε_y   | 11.45 | 14.10  | 10.84 ± 0.04 | mm-mrad
ε_4D  | 18.51 | 34.83* | 17.34 ± 0.08 | mm²-mrad²
* Assumes x-y phase space independence.

TABLE II.
Predicted emittances from experimental data.
Parameter | RMS prediction | Reconstruction | Unit
ε_x,n | 4.18 ± 0.71 | 4.23 ± 0.02 | mm-mrad
ε_y,n | 3.65 ± 0.36 | 3.42 ± 0.02 | mm-mrad

P. Emma et al., "First lasing and operation of an angstrom-wavelength free-electron laser," Nature Photonics 4, 641 (2010).
H. Li, Y. Sun, J. Vila-Comamala, T. Sato, S. Song, P. Sun, M. H. Seaberg, N. Wang, J. B. Hastings, M. Dunne, P. Fuoss, C. David, M. Sutton, and D. Zhu, "Generation of highly mutually coherent hard-x-ray pulse pairs with an amplitude-splitting delay line," Phys. Rev. Research 3, 043050 (2021).
Y. Sun, M. Dunne, P. Fuoss, A. Robert, D. Zhu, T. Osaka, M. Yabashi, and M. Sutton, "Realizing split-pulse x-ray photon correlation spectroscopy to measure ultrafast dynamics in complex matter," Phys. Rev. Research 2, 023099 (2020).
A. Marinelli et al., "High-intensity double-pulse x-ray free-electron laser," Nature Communications 6 (2015), doi:10.1038/ncomms7369.
F.-J. Decker, K. L. Bane, W. Colocho, S. Gilevich, A. Marinelli, J. C. Sheppard, J. L. Turner, J. J. Turner, S. L. Vetter, A. Halavanau, C. Pellegrini, and A. A. Lutman, "Tunable x-ray free electron laser multipulses with nanosecond separation," Scientific Reports 12 (2022), doi:10.1038/s41598-022-06754-y.
Advanced Accelerator Development Strategy Report: DOE Advanced Accelerator Concepts Research Roadmap Workshop, Tech. Rep. (USDOE Office of Science, Washington, DC, 2016), doi:10.2172/1358081.
G. Ha, K.-J. Kim, J. Power, Y. Sun, P. Piot, et al., "Bunch shaping in electron linear accelerators," Reviews of Modern Physics 94, 025006 (2022).
C. McKee, P. O'Shea, and J. Madey, "Phase space tomography of relativistic electron beams," Nucl. Instrum. Methods Phys. Res. A 358, 264 (1995).
S. Hancock, M. Lindroos, E. McIntosh, and M. Metcalf, "Tomographic measurements of longitudinal phase space density," Computer Physics Communications 118, 61 (1999).
D. Stratakis, R. A. Kishek, I. Haber, M. Walter, R. B. Fiorito, S. Bernal, J. Thangaraj, K. Tian, C. Papadopoulos, M. Reiser, and P. G. O'Shea, "Phase space tomography of beams with extreme space charge," in 2007 IEEE Particle Accelerator Conference (PAC) (2007), pp. 2025-2029.
V. Yakimenko, M. Babzien, I. Ben-Zvi, R. Malone, and X.-J. Wang, "Electron beam phase-space measurement using a high-precision tomography technique," Phys. Rev. ST Accel. Beams 6, 122801 (2003).
M. Röhrs, C. Gerth, H. Schlarb, B. Schmidt, and P. Schmüser, "Time-resolved electron beam phase space tomography at a soft x-ray free-electron laser," Phys. Rev. ST Accel. Beams 12, 050704 (2009).
M. Gordon, W. Li, M. Andorf, A. Bartnik, C. Duncan, M. Kaemingk, C. Pennington, I. Bazarov, Y.-K. Kim, and J. Maxson, "Four-dimensional emittance measurements of ultrafast electron diffraction optics corrected up to sextupole order," Phys. Rev. Accel. Beams 25, 084001 (2022).
A. Green and Y.-M. Shin, "Implementation of quadrupole-scan emittance measurement at Fermilab's Advanced Superconducting Test Accelerator (ASTA)," in Proc. 6th International Particle Accelerator Conference (2015), p. MOPMA052.
E. Prat and M. Aiba, "Four-dimensional transverse beam matrix measurement using the multiple-quadrupole scan technique," Phys. Rev. ST Accel. Beams 17, 052801 (2014).
A. Mostacci, M. Bellaveglia, E. Chiadroni, A. Cianchi, M. Ferrario, D. Filippetto, G. Gatti, and C. Ronsivalle, "Chromatic effects in quadrupole scan emittance measurements," Phys. Rev. ST Accel. Beams 15, 082802 (2012).
S. Webb, The Physics of Medical Imaging (CRC Press, Boca Raton, 1987).
A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (SIAM, 2001).
A. Wolski, D. Christie, B. Militsyn, D. Scott, and H. Kockelbergh, "Transverse phase space characterization in an accelerator test facility," Phys. Rev. Accel. Beams 23, 032804 (2020).
A. Wolski, M. A. Johnson, M. King, B. L. Militsyn, and P. H. Williams, "Transverse phase space tomography in the CLARA accelerator test facility using image compression and machine learning," arXiv:2209.00814 (2022).
K. M. Hock and M. G. Ibison, "A study of the maximum entropy technique for phase space tomography," Journal of Instrumentation 8, P02003 (2013).
K. Hock and A. Wolski, "Tomographic reconstruction of the full 4D transverse phase space," Nucl. Instrum. Methods Phys. Res. A 726, 8 (2013).
J. C. Wong, A. Shishlo, A. Aleksandrov, Y. Liu, and C. Long, "4D transverse phase space tomography of an operational hydrogen ion beam via noninvasive 2D measurements using laser wires," Phys. Rev. Accel. Beams 25, 042801 (2022).
A. Scheinker, "Adaptive machine learning for robust diagnostics and control of time-varying particle accelerator components and beams," Information 12, 161 (2021).
M. Wang, Z. Wang, D. Wang, W. Liu, B. Wang, M. Wang, M. Qiu, X. Guan, X. Wang, W. Huang, and S. Zheng, "Four-dimensional phase space measurement using multiple two-dimensional profiles," Nucl. Instrum. Methods Phys. Res. A 943, 162438 (2019).
B. Hermann, V. A. Guzenko, O. R. Hürzeler, A. Kirchner, G. L. Orlandi, E. Prat, and R. Ischebeck, "Electron beam transverse phase space tomography using nanofabricated wire scanners with submicrometer resolution," Phys. Rev. Accel. Beams 24, 022802 (2021).
D. Sagan, "Bmad: A relativistic charged particle simulation library," Nucl. Instrum. Methods Phys. Res. A 558, 356 (2006).
J. Lawson, R. Gluckstern, and P. M. Lapostolle, "Emittance, entropy and information," Part. Accel. 5, 61 (1973).
D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980 (2017).
Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, "Efficient BackProp," in Neural Networks: Tricks of the Trade, 2nd ed., edited by G. Montavon, G. B. Orr, and K.-R. Müller (Springer, Berlin, Heidelberg, 2012), pp. 9-48.
A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems 32 (Curran Associates, Inc., 2019), pp. 8024-8035.
M. Rosenblatt, "Remarks on some nonparametric estimates of a density function," The Annals of Mathematical Statistics 27, 832 (1956).
G. Huang, Y. Li, G. Pleiss, Z. Liu, J. E. Hopcroft, and K. Q. Weinberger, "Snapshot ensembles: Train 1, get M for free," arXiv:1704.00109 (2017).
M. Conde et al., "Research program and recent results at the Argonne Wakefield Accelerator facility (AWA)," in Proc. IPAC'17 (2017), p. 2885.
B. Dauvergne and L. Hascoët, "The data-flow equations of checkpointing in reverse automatic differentiation," in International Conference on Computational Science (Springer, 2006), pp. 566-573.
Background

Perception is a critical component of AICA and one of the few that cannot be omitted. Perception provides information about the environment, communicates the results of the agent's actions, and shapes and influences the agent's reasoning. While it may be possible to consider only the raw data gathered from sensors as the perception, this narrow view does not appreciate the complexity involved and only defers the issues of percept processing to other parts of AICA, such as the decision-making engine.

Perception in AICA is as multifaceted a concept as it is in biological systems. Even though artificial systems have the benefit of not being required to copy nature, many of the constraints and drivers are universal. The raw percepts or stimuli go through a lot of preprocessing and transformations before they can be subjected to reason. Consider the optical illusion in Figure 1. Our brain is hardwired to identify real-world objects, so we get thrown off because they are not there.
Moreover, it takes actual willpower to treat this image as just an image. The perception mechanisms shape how we think about our environment, and the same goes for AICA.

There are multiple ways to conceptualize perception in AICA. One possible way is in the context of the DIKW pyramid, which conceptualizes the relation between data, information, knowledge, and wisdom (Ackoff, 1989). This is depicted in Figure 2, where perception occupies the two lower tiers of the pyramid (data and information) but can sometimes venture up to the knowledge tier due to its close relation with AICA's world model. Another way, which we will adopt in this chapter, is a pipeline, as shown in Figure 3, consisting of four main parts: physical sensors, logical sensors, transformers, and the world representation.

Physical sensors: are primarily out of the scope of AICA. Physical sensors process non-virtual stimuli reaching the agent from the environment. Each of these sensors has specific operation capabilities, requirements, and physical domain, but they all share the need for power. Therefore, AICA using physical sensors must very carefully manage its power envelope.

Physical sensor examples: Temperature or pressure sensors and noise detectors could be employed by AICA tasked with maintaining physical security inside a building. Perimeter sensors could be used in outside deployment. Gyroscopes and lidars may be used within the context of unmanned vehicles.

Logical sensors: in the context of this chapter, they are understood as a counterpart to the physical ones. That is, any source of data that rests within the software. A vast range of data can be fed to AICA in this way, ranging from its internal state measurements, host diagnostics, and network measurements to open-source intelligence readings and even news feeds. The only common attribute of this data is that there is nothing in common.
The data provided by logical sensors is heterogeneous, has many dimensions, and can potentially require a large bandwidth to process. These attributes go counter to current reinforcement learning algorithms, so there is a need for data reduction.

Logical sensor examples: Reading of running processes to gather information about the state of AICA and the infrastructure it operates in. A network probe to gather information about traffic within a guarded infrastructure. A periodic download of the CVE (MITRE) database to provide updates to AICA's knowledge base.

Transformers: provide means to reduce data complexity, dimensionality, and size. They ensure the move from the data tier of the DIKW pyramid up to the information tier. They can provide additional semantics to the data and serve as a heuristic that offloads a part of the logic that we do not want the ML algorithms to discover. There are many different types of transformers, arguably more than types of data. The selection of transformers ultimately dictates how an agent perceives the environment and how it can reason about it.

Transformer examples: Statistical aggregation and transformation of observed network traffic (from packet traces to flows). Anomaly detection (from flows to events). Application of ML-driven tools (from events to patterns).

World representation: is AICA's representation of itself and of the environment it operates in. A model of the world as it is being perceived. It is the foundation on which the agent chooses its actions and against which their impact is evaluated. Currently, there exist no firm guidelines for the design of state representation. If anything, it is considered an art by some, because the representation influences which algorithms can be used, how demanding the agent's training will be, and ultimately, what the agent can achieve.

Even though a pipeline is a fitting and easy-to-grasp concept, it gives an illusion of serial data processing.
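As a toy illustration, the four stages above can be sketched as a serial loop. All sensor and transformer names here are illustrative, not part of CYST or any real API:

```python
# A toy sketch of the serial pipeline described above: logical sensors
# produce raw percepts, transformers turn data into information, and the
# world representation stores the result. All names are illustrative.

def proc_list_sensor():                          # logical sensor (pull model)
    return {"sensor": "proc-list", "raw": ["sshd", "nginx", "cyst-agent"]}

def to_service_count(percept):                   # transformer: data -> information
    return {"sensor": percept["sensor"], "service_count": len(percept["raw"])}

world = {}                                       # world representation

def perceive(sensors, transformers):
    for sensor in sensors:
        percept = sensor()
        for transform in transformers:
            percept = transform(percept)
        world[percept["sensor"]] = percept

perceive([proc_list_sensor], [to_service_count])
print(world)  # {'proc-list': {'sensor': 'proc-list', 'service_count': 3}}
```

The strictly serial loop is exactly the simplification the text warns about next: real sensors fire independently and in parallel.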
However, the sensors are usually independent, and the same mostly holds for transformers. As the data is being processed in parallel, delays, time skews, and interval differences are bound to happen, as illustrated in Figure 4. The impact of these irregularities strongly depends on the agent's mode of operation and choice of algorithms. Passively observing agents are largely unaffected because they can evaluate snapshots of the world state as the data comes in. However, for active agents, this de-serialization can impact AICA's efficiency by providing only partial observations over a longer time, thus impacting both learning and acting.

This chapter addresses the complexity surrounding perception and provides readers with guidelines and state-of-the-art examples. It does not present definite solutions, as many of these problems are still open and subject to research, but all the presented approaches have either been peer-reviewed or tested in attempts to develop a functional AICA.

From percepts to the world representation

AICAs will always operate in a partially observable environment. In fact, the observations provided by sensors will usually cover only a sliver of the environment. AICA will not observe, among other things, the triggers that make other actors behave the way they do. Therefore, to enable rational and sensible actions, AICA must construct its belief state to be as close to the objective world state as possible. Through a sensible world representation, perception can go a long way to enable AICA to do just that. Conversely, choosing a suboptimal world representation will widen the gap between the belief and the objective state.

The world representation, as constructed from observations, can be split into three categories depending on their complexity and expressive power: atomic, factored, and structured (Russell & Norvig, 2020). With the atomic representation, states are indivisible and without an internal structure.
This is the equivalent of perception playing no role in shaping the agent's understanding and providing only raw inputs to the decision-making engine. With the factored representation, incoming percepts are processed and represented as collections of attributes. These attributes may be primary, where parts of raw inputs are given their semantics, or secondary, where raw inputs are transformed into higher-level representations encoding some knowledge. With structured representation, attributes also encode their relation to other attributes.

Going from atomic, over factored, to structured representation leads to a sharp increase in expressiveness, where the world representation can concisely describe a complex environment and its interactions. However, this increase in expressiveness causes an inevitable increase in complexity, impacting reasoning and learning. Real-world AICA thus may be forced to combine representations of all three categories, carefully balancing the upsides and downsides.

Orthogonal to complexity, but with a considerable impact on creating a sensible world representation, is the matter of how perception deals with time. While analog sensors may measure continuously and provide an uninterrupted stream of stimuli, perception in AICA is inevitably discrete, with processing being done in independent time slices (Russell & Norvig, 2020). Meanwhile, sensors are unlikely to be synchronized, and their readings (or transformations) arrive at various intervals. It is then bound to happen that percepts related to one event will be split between two or more time slices. This, in turn, can impact decision-making because the responses to AICA's actions may be incomplete. Three strategies can be used to counter this effect: slice extension, multi-slice perception, and contextual perception.

Slice extension, as the name suggests, extends the time frame in which percepts are collected.
The problem persists, but the frequency of occurrence decreases, and at some point the impact could be considered acceptable. The downside is that an acceptable interval may be long enough to hamper AICA's speed of reaction to the point of jeopardizing its mission. With multi-slice perception, the percepts are sampled in parallel with different interval lengths. Perception then produces multiple state updates, and AICA needs to have a strategy to cope with that, either at the perception level or at the decision-making level. With contextual perception, the percepts are still sampled; however, 1 to N neighboring samples are inspected, and the completeness of percepts is evaluated in relation to AICA's actions. This approach is the most complex one, as it requires the perception to have a clear model of which percepts occur and when. As with the complexity issue above, real-world AICA will likely have to combine all three approaches, carefully balancing the trade-offs.

The last consideration when designing the perception mechanism of AICA is the distinction between active and passive sensors, which can also be viewed as a distinction between the pull and the push model. Active sensors (pull) gather percepts as a result of their interaction with the environment. Passive sensors (push) receive stimuli from the environment and do not exert control over when and how it happens. As such, active sensors can be set up in such a way as to diminish the impact of the aforementioned sampling issue. However, this usually comes with considerably increased power or bandwidth requirements, as discussed in the following text.

Power and bandwidth constraints

Naïve wisdom would suggest that the more sensors and the more sensory inputs, the better. After all, every new sensor can shorten the gap between the world representation and the world's objective state.
New sensors can provide new auxiliary readings, additional details to already present sensors, and even wholly new percepts enabling the AICA to understand the world around itself. However, with each added sensor, there is a trade-off (Theron, et al., 2020). On the pure hardware level, each enabled sensor equates to energy expenditure. Whether this is a problem is a matter of AICA's deployment. Large stationary installations would probably be unaffected; however, AICAs on mobile platforms, personnel, or autonomous devices will have a strict power envelope, and the decision of which sensors to use and when will rest on many factors, which are also shared with sensors on the software level. These usually do not have such stringent power limitations; rather, their issue is bandwidth. With pull-based sensors, too short a sampling interval or too broad a data collection can easily overwhelm the ability of AICA to process and reflect on the data. When designing an AICA, one has to balance several sensor properties and prioritize sensors providing maximum utility for AICA's operation:

• Sensors (or their percepts) should be ordered by their importance for the decision-making process. This entails understanding how the percepts are transformed into the world representation and how the representation influences the decision-making. This can be achieved either through methods of explainable AI or by extensive testing evaluating the importance of each sensor.

• Once the hierarchy is established, a base set of sensors should be selected, and AICA should activate the rest on demand.

• Especially for power-constrained environments, there should be a strategy to limit sensor function with the smallest possible impact on decision-making, e.g., turning off sensors, prolonging sampling intervals, switching from pull to push mode, etc.

• AICA's decision-making should also be fortified against sensor impairment or partial sensor subversion.
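A minimal sketch of such a prioritization strategy, greedily keeping the highest-utility sensors within a power budget. The sensor names, utility scores, and power figures are hypothetical, and the greedy rule is one possible degradation strategy, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    utility: float   # importance for decision-making (assumed to be known)
    power: float     # consumption when active
    active: bool = False

def fit_power_budget(sensors, budget):
    """Greedily activate sensors in decreasing order of utility until
    the power budget is exhausted."""
    remaining = budget
    for s in sorted(sensors, key=lambda s: s.utility, reverse=True):
        s.active = s.power <= remaining
        if s.active:
            remaining -= s.power
    return [s.name for s in sensors if s.active]

sensors = [
    Sensor("process-list", utility=0.9, power=2.0),
    Sensor("network-probe", utility=0.8, power=5.0),
    Sensor("cve-feed", utility=0.3, power=4.0),
]
print(fit_power_budget(sensors, budget=7.0))  # ['process-list', 'network-probe']
```

Shrinking the budget deactivates the least important sensors first, which mirrors the "base set plus on-demand activation" idea above.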
Trusting the perception

One common theme in the literature is that sensors provide objective input to agents' systems. Whether they are physical or logical sensors, it is taken for granted that the percepts they produce form a world representation that is a clear reflection of the objective world state. This, however, need not hold in deployment settings. In fact, unintentional or deliberate faults of sensors can widen the gap between AICA's belief and the objective state so much that the actions of an agent will go contrary to its goals.

Attacks against physical sensors have been studied in the literature (Nasralla, García-Magariño, & Lloret, 2020) (Man, Li, & Gerdes, 2020). Recently, the interest has been in the area of autonomous cars (Yan, Xu, & Liu, 2016) (Liu & Park, 2021); however, for any potential AICA deployment where physical sensors may play a role, the same concepts apply. While faults cannot be eliminated, there are ways to build fault tolerance into the system, namely into data acquisition and data processing. For data acquisition, that can take the form of active probing of the environment against a known baseline (Shoukry, Martin, Yona, Diggavi, & Srivastava, 2015). For data processing, simple sensor redundancy, fault-tolerant approaches researched in the area of sensor networks, and others can be used (Modarez, Kiumarsi, Lewis, Frank, & Davoudi, 2020). Regardless of the chosen approach, there will be incurred costs stemming from the need to increase the number of physical sensors, both as a procurement cost and an increased power envelope.

For attacks against logical sensors, mostly the same holds as for the physical ones. Active probing and sensor redundancy can be employed with the same expected results; however, some measures may be unattainable. Consider the example of an AICA measuring the state of the machine it is on.
If the adversary managed to hide itself by hijacking certain syscalls, no amount of sensor redundancy would help because, ultimately, every probe or every query would end up calling said syscalls, and the adversary would remain hidden. In such a case, only indirect information may hint at the presence of an adversary. One may argue that the time when an adversary hijacks syscalls is the time when the machine is effectively lost, but the same principle applies in different scenarios, where there is only one ultimate source of information for logical sensors, which is susceptible to subversion.

Considering the previous paragraphs, the perception cannot and should not be fully trusted, and the possibility of its subversion should be taken into account, especially when AICA is being built as a resilient solution operating in an adversarial environment. However, the price to maximize the trust in perception may be too high, and alternative solutions may have to be employed. Aside from fault-tolerant decision-making, which is explored later in this book, multi-agent setups of AICA allow for perception sharing. In such a case, the setup can be considered a sensor network, and all the associated approaches, issues, and limitations apply.

Finally, the perception may not only be a victim of an external adversary but also of wrong expectations. The purpose of perception is mapping sensory inputs to possible real-world states, with the key word being "possible". Most sensors come with expectations about the domain of possible values or their combinations. However, the vast history of program faults caused by unexpected inputs should be treated as a cautionary tale. With physical sensors, the domain of percepts is bound by physical laws, but with logical sensors, all bets are off. That is why we have a lot of provably secure bridges and not many provably secure programs.
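The simplest of the fault-tolerance measures mentioned above, sensor redundancy, can be sketched as a majority vote over redundant readings. The quorum threshold and the idea of returning "no trusted value" are assumptions for illustration:

```python
from collections import Counter

def fused_reading(readings, quorum=0.5):
    """Majority-vote fusion over redundant sensor readings: return the
    agreed value, or None when no value exceeds the quorum, signalling
    that the percept should not be trusted."""
    if not readings:
        return None
    value, count = Counter(readings).most_common(1)[0]
    return value if count / len(readings) > quorum else None

print(fused_reading(["service-up", "service-up", "service-down"]))  # service-up
```

Note that, as the syscall-hijacking example shows, this only helps when the redundant sensors have independent failure modes; three probes through one subverted source still agree on the wrong value.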
Developing a perception model for AICA

Perception models, i.e., world representations and associated transformations, are being extensively researched in the areas of autonomous cars, planes, and robots, where correct processing of environmental stimuli is of paramount importance. A similar situation exists in natural language processing, where approaches to word embedding for encoding semantic similarity between words can also be considered a perception model for natural languages (Mikolov, Corrado, Chen, & Dean, 2013). However, perception models for fully virtual entities like AICA are not extensively researched. As mentioned earlier, sensory inputs are treated as objective, and the creator of such a virtual entity is left to develop the perception model on their own, despite there being no solid guidance in the literature. Even the seminal work of Russell and Norvig, which provides probably the most complete exploration of the field of artificial intelligence, only skirts over this topic and rather focuses on the transformations of visual and physical stimuli. However, the book at least presents three essential properties which a good world representation should have (Russell & Norvig, 2020):

• it contains enough information for the agent to make good decisions,
• it is structured so that it can be updated efficiently,
• it is natural in the sense that it corresponds to the real world.

These are essential properties but not so easy to use as a starting point. Modeling the cybersecurity domain, i.e., creating a perception model that is a good representation of the environment and satisfies the three properties above, is not an easy task. Unlike the scenarios that are used across the literature, the cybersecurity domain in its entirety is highly dynamic, ever-expanding, and complex. The model has to reflect this to provide actionable information to the agent.
Nevertheless, with such complexity, one can easily run into the so-called curse of dimensionality, when the total number of states that an agent can encounter is only a tiny fraction of the states that exist in a world representation, and the agent would be wasting scarce resources trying to work with it. At the same time, it is not possible to simply resort to methods reducing the dimensionality of the representation, such as low-dimensional embedding via unsupervised learning (Saul & Roweis, 2003) or principal component analysis. While these methods are perfectly applicable in a technical sense, lowering the dimension count risks going counter to one of the aforementioned properties: that the representation is natural. If AICAs are ever to be used as a replacement for human cybersecurity experts or trusted with control over infrastructure, a key requirement will be full auditability in the form of explainable AI. However, if sensor inputs are non-linearly transformed into a compact representation, AICAs and humans lose a shared vocabulary for explanation.

Currently, the only way to create satisfactory perception models is to handcraft them together with the required heuristics (transformations) and painstakingly evaluate their efficiency. The author is aware of research in the area of unsupervised dimensionality reduction which preserves explainability; however, that research is still in too early a phase to be useful to the reader.

As there do not seem to be guidelines for creating perception models in an area as complex as cybersecurity, the following text will present a couple of use cases, which should help readers gain insights useful for building their own models. These use cases were taken from real-life attempts to create autonomous attackers driven by reinforcement learning algorithms.
Each of these use cases was realized within the CYST cybersecurity simulation engine, which is, to our knowledge, currently the most complex cybersecurity simulator that is freely available (Drašar, Moskal, Yang, & Zaťko, 2020). CYST is a multi-agent discrete-event simulator based on message passing and tailored for cybersecurity applications. Given its complexity, only the parts relevant to the topic of perception are introduced in the following text. However, this chapter is accompanied by a code repository where the presented use cases are implemented, and readers are welcome to try and tinker with the ideas presented here.

Environment - the objective reality

The environment observed by the AICA is the environment simulated by CYST and defined by its simulation model. To minimize the cognitive load on the reader, this text uses only the bare minimum needed to execute and understand the presented use cases. However, if the reader is so inclined, they can further explore the simulation model in the relevant paper or CYST's documentation (Drašar, CYST, 2022).

The infrastructure where AICA resides consists of simulated machines on which services are running. These machines are connected via a simulated network that replicates an Ethernet network without networking details. The network is partitioned by means of active network devices called routers. AICA is just one of the services running on one or more simulated machines.

AICA communicates with or influences the environment through messages. These messages are also the only mechanism through which AICA can observe the environment. The messages used in CYST come in two types: requests and responses. One request-response pair represents an entire exchange related to one of AICA's actions. The fragmentation related to, e.g., packets or even TCP sessions, is treated as an implementation detail; thus, the perception is fully realized through observing one response to each request.
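These message structures, whose observable attributes are listed next, can be sketched as plain dataclasses. This is an illustrative sketch only, not the actual CYST API; the enum names and the example detail string are assumptions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class StatusOrigin(Enum):        # where the effect of a request manifested
    NETWORK = "network"
    NODE = "node"
    SERVICE = "service"
    SYSTEM = "system"

class StatusValue(Enum):         # outcome of the request
    SUCCESS = "success"
    FAILURE = "failure"
    ERROR = "error"

@dataclass
class Session:
    start: Tuple[str, str]       # (originating IP address, service)
    end: Tuple[str, str]         # (destination IP address, service)

@dataclass
class Request:
    action: str                  # the effect AICA wants to achieve
    session: Optional[Session] = None

@dataclass
class Status:
    origin: StatusOrigin
    value: StatusValue
    detail: str                  # enumeration of possible values in CYST

@dataclass
class Response:
    status: Status
    content: str                 # currently unstructured data
    session: Optional[Session] = None

response = Response(
    status=Status(StatusOrigin.SERVICE, StatusValue.SUCCESS, "service_found"),
    content="banner: OpenSSH 8.2",
    session=Session(start=("10.0.0.1", "aica"), end=("192.168.0.7", "ssh")),
)
```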
Request (in addition to all Message attributes):
• action: an effect that AICA wants to achieve (for the purpose of this text, a string from a finite domain; otherwise, a much more complicated structure)

Response (in addition to all Message attributes):
• status: structured description of the effect of the request. Contains origin (network, node, service, system), value (success, failure, error), and detail (an enumeration of possible values).
• content: currently, unstructured data sent in the response.

Session:
• start: a tuple containing an originating IP address and a service of the session
• end: a tuple containing a destination IP address and a service of the session

These attributes are the variables that AICA can observe for the purpose of this text. The number of variables is higher within the CYST simulation, but these were omitted for clarity, as the added complexity does not affect the proposed approaches. Also, despite the previously expressed concern about trust in perception, the presented use cases treat all these observed attributes as trustworthy and reflecting the objective state, because CYST does not currently support the fabrication of wrong percepts.

The following text presents several potential approximations of the objective state, which is perceived from the attributes of incoming responses. These approximations are largely independent, and their ordering reflects a thought process when developing the perception model rather than some kind of hierarchy.

First approximation - taking inputs verbatim

The first and probably the most straightforward way to represent the objective state is based on responses being the only percept that the AICA has. The world representation is constructed as the set of all possible response values.

Size: The size is 2^n, where n is the number of bits in each response.
If we take a compact representation of the response structure above (and give ourselves a bit of leeway in limiting the infinite-domain attributes and capping the strings at 256 bits), n will reach over 1500.

Pros: This representation is very easy to make. Just take the incoming response and pass it to the decision-making engine to process.

Cons: This is a clear case of the curse of dimensionality. States that are to be encountered during an AICA's run will represent only a minuscule portion of the entire state space, and the burden of filtering the data and turning it into a reasonable belief state will be left to the decision-making engine, which will have to expend a disproportionate amount of energy.

Second approximation - elimination of (semi)static observations

As mentioned before, the number of states that could effectively be encountered is disproportionate to the size of the world representation. One of the reasons is that many observations are static or semi-static within the context of AICA's operation. Consider the type of a message. It can be either a request or a response; however, the perception only processes the responses. This attribute is static and can be freely omitted without any loss of precision. The same goes for the source IP addresses and services of both the message and its session, as these are fixed for the AICA. Destination IP addresses can be considered semi-static if all of AICA's activities happen within specific subnets. In such a case, it is not necessary to process the entire range of IP addresses, and only a subset can be a basis for the world representation.

Size: The size is still 2^n, where n is the number of non-static bits in each response.

Pros: This representation is still as easy to make as the first approximation and requires only limited analysis.

Cons: The actual reduction in world representation size depends on the nature of the observations, and there is no easy way to specify a fixed upper bound.
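A minimal sketch of this elimination step; the attribute names, the subnet prefix, and the flat-dictionary encoding of a response are illustrative assumptions:

```python
# Sketch of the second approximation: strip attributes that are static
# (or semi-static) for this AICA before the observation is encoded.

STATIC_ATTRIBUTES = {"type", "src_ip", "src_service"}   # fixed for this AICA

def reduce_observation(response, known_subnet="192.168.0."):
    reduced = {k: v for k, v in response.items() if k not in STATIC_ATTRIBUTES}
    # Semi-static destination: within a fixed subnet, only the host part varies.
    dst = reduced.get("dst_ip", "")
    if dst.startswith(known_subnet):
        reduced["dst_ip"] = dst[len(known_subnet):]
    return reduced

observation = {"type": "response", "src_ip": "10.0.0.1", "src_service": "aica",
               "dst_ip": "192.168.0.7", "status": "success"}
print(reduce_observation(observation))  # {'dst_ip': '7', 'status': 'success'}
```

The actual saving depends entirely on how many attributes turn out to be static in a given deployment, which is exactly the Cons point above.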
Approximation detour - the interplay between requests and responses

The selection of actions is not a responsibility of the perception, as it belongs to the decision-making engine. However, unlike many scenarios that can be seen in the literature, in cybersecurity, an action may have a similar complexity to a response. That is, not some tightly packed domain or one or more real numbers, but a complex structure dependent on the observed percepts. AICA thus must be able to use the data from the observation and must be able to use it accordingly. Approximations of the objective state may reduce precision, especially the ones in the following text. Yet, AICA's actions may require precise attributes for their correct execution. Therefore, any lossy approximation or transformation must be accompanied by supplementary data to enable the reconstruction of the attributes within the decision-making engine.

Third approximation - indexing of large domains

Among the attributes in the responses, some expand the world representation unreasonably, at least considering the total number of states the AICA can encounter. For such large domains, it is better to keep a dictionary of encountered values and map them to an index that is used in the world representation.

Size: This approximation enables an almost arbitrary size reduction of the world representation by specifying a fixed index size.

Pros: The size reduction does not come with a loss of information and benefits larger domains more. The approximation is still comparatively easy to implement. Using a fixed index with a reasonable eviction strategy can enable AICA to forget superfluous observations.

Cons: Using the index can hamper the transferability of the algorithms because the mappings of attribute values may not be static. This non-static property can also harm the learning algorithms, where a change in mapping between runs may lead to wrong transition function inference.
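The mechanism behind this approximation, a fixed-size dictionary with an eviction strategy, can be sketched as follows. The capacity and the least-recently-used eviction are assumptions for illustration, not a prescription:

```python
from collections import OrderedDict

class DomainIndex:
    """Fixed-size index mapping large-domain values (e.g. response content
    strings) to small integers, with least-recently-used eviction."""

    def __init__(self, capacity=256):
        self._map = OrderedDict()           # value -> index, in recency order
        self._free = list(range(capacity))  # indices available for new values

    def index_of(self, value):
        if value in self._map:
            self._map.move_to_end(value)    # refresh recency
            return self._map[value]
        if not self._free:                  # full: forget the stalest value
            _, freed = self._map.popitem(last=False)
            self._free.append(freed)
        self._map[value] = self._free.pop(0)
        return self._map[value]

idx = DomainIndex(capacity=2)
a = idx.index_of("banner: OpenSSH 8.2")     # -> 0
b = idx.index_of("banner: nginx 1.18")      # -> 1
idx.index_of("banner: OpenSSH 8.2")         # refreshed, keeps its index
c = idx.index_of("banner: vsftpd 3.0")      # evicts the nginx entry, reuses 1
```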
Fixed index sizes risk unintended consequences in case of overflow. This is further exacerbated if AICA is trained in diverse and fluctuating environments, where the diversity of percepts will fuel the index overflow.

Fourth approximation - state restructuring

There is a distinct difference between the objective state as described earlier and the contents of responses. AICA's version of this objective state, its belief state, is pieced together from small probes of request-response pairs. However, there is no reason why the perception should not be modeled closer to the objective state as it is being understood. This is one possible version of such a world representation:

Machine 1: IP | Services | Sessions
Machine 2: IP | Services | Sessions
…

The perception is centered around the information about possible targets. For each target, an IP address, the running services, and active sessions are retained. All this information can be index-mapped, especially the services, as their domain is finite and many services are likely to be shared among different machines. There would probably be a limit on the number of machines the AICA keeps in its operating memory.

Size: likely to be similar to the third approximation. In this approximation, information is only restructured and not necessarily changed.

Pros: this heuristic approach removes the burden of understanding the world from AICA's decision-making, which can then focus more on higher-level strategic decisions. It also provides a more natural representation that AICA's operator can understand.

Cons: depending on the restructuring, this approximation can help or hinder the decision-making process. It is thus very dependent on the capabilities of the person doing the restructuring.

Fifth approximation - explicit activity history

Operations in the cybersecurity domain naturally have complex dependencies on past events.
The decision-making process thus has to keep track of what was done by AICA, how the counterparty reacted, how the infrastructure evolved, and so on. While these considerations can be technically modeled as a k-order Markov process, the k would be very large. Current decision-making algorithms tackle these dependencies, e.g., through the use of LSTM neural networks, Gated Recurrent Units, and similar. However, training and imprinting these memories to be correctly used over disjoint response-request pairs can be resource-consuming or currently infeasible. The alternative is for perception to act as an explicit memory that is (partially) taking the role of decision-making processes. In the presented use case, this could mean adding new attributes to the world representation by means of also observing the requests. The potential representation for a service can then look like this: Pros: this approach, which is the first strong application of transformers into the perception pipeline, provides several guarantees that the dependency on LSTM and such do not. The memory over which the decision is being made is explicit, precise, and does not rely on gradual imprinting into a neural network. This explicitness also supports better explainability. Cons: Some important information may be hidden from the decision-making process if the attributes are not chosen carefully. Sixth approximation -additional transformations within the perception This final approximation is an umbrella one for every other conceivable transformation that can be added to the perception pipeline. In principle, each new transformation moves the logic away from the decision-making engine through heuristics application. The goal is to let the decisionmaking engine concentrate on high-level decisions while automating the things that are possible to be automated. Today, this approach seems the most viable one to achieve notable results. 
Summary and conclusions

Perception is a key component of AICA, strongly shaping and influencing decision-making. This chapter introduced perception as a pipeline that acquires, transforms, and stores raw percepts into a form that benefits the decision-making engine the most. The extent of this benefit depends on several important decisions taken when developing a perception model of AICA:

• What are the intended complexity and expressive power of the world representation?
• How should the perception deal with time?
• Should it be actively polling the percepts or waiting for their arrival?
• What power or bandwidth constraints are there for percepts' processing, and what is the importance of specific sensors?
• How can perception be trusted in the adversarial environment?

This chapter discussed these questions and presented the trade-offs associated with various decisions. It then delved deeper into developing an actual perception model for AICA. Because the cybersecurity domain where AICA operates is much more complex than the traditional environments used in the literature, it introduced CYST, a cybersecurity simulation engine whose simulation model was used as an objective reality on which the world-representation building approaches were demonstrated. In total, six approaches to approximating the objective state were presented, and their properties were explored:

• Passing the raw percepts to the decision-making engine.
• Eliminating (semi)static observations.
• Using indexing to eliminate the impact of percepts with large domains.
• Restructuring the world state to a form useful for the decision-making engine.
• Keeping an explicit activity history.
• Including additional transformations within perceptions.

Because these approaches were developed in the context of CYST, which is freely available, users are welcome to try implementing them and to experiment with their implementation.

Figure 1: An optical illusion.
Figure 2: DIKW pyramid (Baldasarre, 2017).

Figure 3: A simple perception pipeline.

Figure 4: A complex perception pipeline. In this case, name and version are taken from responses, vulnerable is evaluated by consulting the list of vulnerable services (CVE or such), and exploitation attempts and time are taken from requests.

Size: each new attribute expands the world representation; however, this expansion can be limited by carefully choosing an appropriate domain.

Service representation attributes: Name | Version | Vulnerable | Exploitation attempts | Time since the last exploitation

References

Ackoff, R. (1989). From data to wisdom. Journal of Applied Systems Analysis, pp. 3-9.

Baldasarre, M. (2017). Think big: learning contexts, algorithms and data science. Research on Education and Media, pp. 69-83.

Drašar, M. (2022). CYST. Retrieved from https://muni.cz/go/cyst/

Drašar, M., Moskal, S., Yang, S., & Zaťko, P. (2020). Session-level Adversary Intent-Driven Cyberattack Simulator. 2020 IEEE/ACM 24th International Symposium on Distributed Simulation and Real Time Applications (DS-RT).

Liu, J., & Park, J.-M. (2021). "Seeing is Not Always Believing": Detecting Perception Error Attacks Against Autonomous Vehicles. IEEE Transactions on Dependable and Secure Computing, pp. 2209-2223.

Man, Y., Li, M., & Gerdes, R. (2020). GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems. RAID 2020.

Mikolov, T., Corrado, G. S., Chen, K., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations.

MITRE. (n.d.). CVE. Retrieved from https://cve.mitre.org/

Modarez, H., Kiumarsi, B., Lewis, F. L., Frank, F., & Davoudi, A. (2020). Resilient and Robust Synchronization of Multiagent Systems Under Attacks on Sensors and Actuators. IEEE Transactions on Cybernetics, pp. 1240-1250.

Nasralla, M. M., García-Magariño, I., & Lloret, J. (2020). Defenses Against Perception-Layer Attacks on IoT Smart Furniture for Impaired People. IEEE Access, pp. 119795-119805.

Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach, 4th edition. Pearson.

Saul, L. K., & Roweis, S. T. (2003). Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds. Journal of Machine Learning Research, pp. 119-155.

Shoukry, Y., Martin, P., Yona, Y., Diggavi, S., & Srivastava, M. (2015). PyCRA: Physical Challenge-Response Authentication For Active Sensors Under Spoofing Attacks. CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1004-1015.

Theron, P., Kott, A., Drašar, M., Rzadca, K., LeBlanc, B., Pihelgas, M., ... de Gaspari, F. (2020). Reference architecture of an autonomous agent for cyber defense of complex military systems. In Adaptive Autonomous Secure Cyber Systems (pp. 1-21).

Yan, C., Xu, W., & Liu, J. (2016). Can You Trust Autonomous Vehicles: Contactless Attacks. DEF CON 24.
Learning and Refining of Privileged Information-based RNNs for Action Recognition from Depth Sequences

Zhiyuan Shi [email protected]
Tae-Kyun Kim [email protected]
Department of Electrical and Electronic Engineering, Imperial College London

DOI: 10.1109/cvpr.2017.498. arXiv: 1703.09625. PDF: https://arxiv.org/pdf/1703.09625v4.pdf

Abstract: Existing RNN-based approaches for action recognition from depth sequences require either skeleton joints or hand-crafted depth features as inputs. An end-to-end manner, mapping from raw depth maps to action classes, is non-trivial to design due to the fact that: 1) a single-channel map lacks texture and thus weakens the discriminative power; 2) there is a relatively small set of depth training data. To address these challenges, we propose to learn an RNN driven by privileged information (PI) in three steps: an encoder is pre-trained to learn a joint embedding of depth appearance and PI (i.e. skeleton joints). The learned embedding layers are then tuned in the learning step, aiming to optimize the network by exploiting PI in the form of a multi-task loss. However, exploiting PI as a secondary task provides little help in improving the performance of the primary task (i.e. classification) due to the gap between them. Finally, a bridging matrix is defined to connect the two tasks by discovering latent PI in the refining step. Our PI-based classification loss maintains a consistency between the latent PI and the predicted distribution. The latent PI and the network are iteratively estimated and updated in an expectation-maximization procedure. The proposed learning process provides greater discriminative power to model subtle depth differences, while helping avoid overfitting the scarcer training data. Our experiments show significant performance gains over state-of-the-art methods on three public benchmark datasets and our newly collected Blanket dataset.
Introduction

Action recognition from depth sequences [57,34,29,44,49] has attracted significant interest recently due to the emergence of low-cost depth sensors. Human action refers to a temporal sequence of primitive movements carried out by a person [55]. A recurrent neural network (RNN) [17] is naturally suited for modeling the temporal dynamics of human actions, as it can be used to model a joint probability distribution over sequences, especially in the case of long short-term memory (LSTM) [18], which is capable of modeling long-term contextual information of complex sequential data. RNN-based approaches have recently become the dominant solution [61,42,9,27] for action recognition from depth sequences. However, these approaches require either skeleton joints [61,9,22] or hand-crafted depth features [42] as inputs in both training and testing. Skeleton-based action recognition assumes that a robust tracker can estimate body joints accurately at the testing stage. This often does not hold in practice, especially when a human body is partly in view or the person is not in an upright position. Hand-crafted features with heuristic parameters are designed for task-specific data. This often requires multi-stage processing phases, each of which needs to be carefully designed and tuned. An end-to-end trainable model operating on raw video frames [8] is desired to extract spatio-temporal features and model complex sequences in a unified framework. This learning pipeline typically combines a deep convolutional neural network (CNN) [25] as a visual feature extractor and an RNN [17] to model and recognize the temporal dynamics of sequential data. Unfortunately, these conventional end-to-end pipelines (CNN+RNN) are difficult to apply to action recognition from depth sequences due to the fact that: 1) color and texture are absent from depth maps, which weakens the discriminative power of the representation captured by the CNN model.
2) Existing depth data of human actions constitute a small-scale dataset compared to publicly available RGB image datasets. These conventional pipelines are purely data-driven and learn their representation directly from the pixels. Such a model is at risk of overfitting when the network is optimized on limited training data.

To address the above-mentioned issues, we propose a privileged information-based recurrent neural network (PRNN) that exploits additional knowledge to obtain a better estimate of the network parameters. This additional knowledge, also referred to as privileged information (PI) [41], hidden information [50] or side information [54,19], is only available during training but not available during testing. Our model aims to encode PI into the structure or parameters of networks automatically and effectively during the training stage. In this work, we consider skeleton joints as the PI in the proposed three-step training process (see Fig. 1). A pre-training stage is introduced that takes both depth sequences and skeleton joints as input. The learned embedding layers construct intermediate distributions over the appearance of depth sequences and skeleton joints.

Figure 1: The proposed framework of PI-based RNNs. Our approach consists of three steps: 1) The pre-training step takes both depth maps and skeletons as input; an embedded encoder is trained in a standard CNN-RNN pipeline. 2) The trained encoder is used to initialize the learning step; a multi-task loss is applied to exploit the PI in the regression term as a secondary task. 3) Finally, the refining step aims to discover the latent PI by defining a bridging matrix, in order to maximize the effectiveness of the PI. The latent PI is utilized to close the gap between different information. The latent PI, bridging matrix and the network are optimized iteratively in an EM procedure.
As our method aims to utilize only depth sequences as input at the testing stage, we then optimize our model by formulating the PI into a multi-task loss in the learning step: a standard softmax classification loss as our primary task, and a regression loss as our secondary task, which learns the mapping parameters to predict the skeleton joints from the depth appearance. However, we observe empirically that exploiting PI as a secondary task provides little help in improving the performance of the primary task, due to the gap between them. Finally, a bridging matrix is defined to connect the two tasks by discovering latent PI in the refining step. We present a PI-based classification loss serving as a connector to maintain a consistency between the latent PI and the primary output distribution by penalizing violations of the loss inequality. We enforce dependencies across regression and classification targets by seeking shared information. The bridging matrix, latent PI and network parameters are iteratively estimated and updated in an expectation-maximization (EM) procedure. This proposed learning process can provide greater discriminative power to model subtle depth differences, while helping avoid overfitting the scarcer training data. As we encode skeleton joints as PI, our model does not require a skeleton tracker at the testing stage, showing its better generalizability in more challenging scenarios, such as when a human body is partly in view or the person is not in an upright position. We evaluate the proposed PRNN against the state of the art on the task of action recognition from depth sequences. We demonstrate that our approach can achieve higher accuracy on three public benchmark datasets: MSR Action3D [26], the SBU Interaction dataset [59] and Cornell Activity [39]. A larger performance gain is obtained on our newly collected Blanket dataset, where actions are captured from a challenging camera viewpoint and some actions are partially occluded by a blanket.
We also compare with several variants of our model and show that each component consistently contributes to the overall performance.

Related Work

Action recognition from depth sequences. Human action recognition using depth maps can be classified into local or global methods. Elaborately designed features [26,47,34] are typically extracted from spatio-temporal interest points to describe the local appearance in 3D volumes or the area around human joints [16]. On the other hand, high-level representations [56] aim to globally model the postures and capture the temporal evolution of actions. To model sequential state transitions in a principled way, the hidden Markov model (HMM) has attracted a lot of interest [14] for capturing the temporal structure of human action dynamics. These HMM-based methods require that video sequences are precisely cropped and aligned with the actions of interest, which is itself a difficult task for real-world videos. RNNs are able to handle both variable-length input and output and have recently become the dominant model [42,9,61], achieving superior performance over previous approaches. HBRNN [9] divides the human skeleton into five corresponding parts and feeds them into five bidirectionally recurrently connected subnets. [61] improve the model of [9] by automatically discovering the inherent correlations among skeleton joints. Instead of assuming skeleton joints are always reliable at the testing stage, [42] model the dynamic evolution of actions by measuring the salient motions from the input depth appearance; the depth features are still extracted based on hand-crafted heuristics. In this paper, we provide an end-to-end solution to action recognition from raw depth sequences.

Learning with PI. Data-driven approaches leverage large amounts of training data to determine the optimal model parameters in a bottom-up fashion.
Purely data-driven methods are often brittle and prone to fail when learning with limited training data, due to overfitting or the optimization obstacles involved. Learning with additional knowledge is a natural solution to alleviate this issue. This knowledge, also referred to as PI [41], hidden information [50] or side information [54], can help to provide more explanations in training but will not be available at testing. Learning with PI has been investigated in many existing algorithms. [11] incorporate PI into the objective function of a structural SVM to improve object localization performance. [7] show that the incorporation of additional information can enhance the dependency between output variables and latent variables in a random forest framework. Additional knowledge has also been considered in neural networks. [5] explore the architecture by providing intermediate targets. [10] demonstrate the effectiveness of a prior distribution for adjusting the model parameters to improve generalization. More recently, [30] present regularized RNNs with additional information for RGB video sequences. However, PI is either pre-trained or fixed in previous methods. In this work, we propose to optimize our end-to-end trainable model by iteratively estimating and updating latent PI for depth-based action recognition.

Spatio-Temporal Modeling

We illustrate an overall view of our model in Figure 1. The architecture mainly consists of an encoder, recurrent layers and PI-based learning. The encoder consists of several layers of convolutions and takes as input a collection of videos $V$, where each video $V_j$ is a sequence of frames $V_j = \{v_t : t = 1, ..., T_j\}$. The encoder produces vector-space representations $X_j = \{x_t : t = 1, ..., T_j\}$ for all frames of $V_j$. The recurrent network is built for integrating over time all the available information from $X_j$.
Finally, PI is incorporated to jointly optimize all the layer parameters in the proposed three-step learning process.

Convolutional Neural Network

The spatial appearance of the action and contextual scene in an individual frame is captured by our encoder. The architecture of our encoder is illustrated in Figure 2. It is inspired by VGG-VeryDeep [37], slightly modified from the 11-weight-layer version to account for depth maps and smaller training data. The network comprises five convolutional layers and five max-pooling layers. The rectified linear unit [25] is adopted as the activation function. Compared to the widely used CNN encoders for RGB data [30,37], our encoder is more compact and effective for depth sequences. It is used to extract a feature vector from an input frame. Given an input depth frame $v_t \in \mathbb{R}^{224 \times 224}$, an activation map $f_t^6 \in \mathbb{R}^{7 \times 7 \times 512}$ can be obtained from the "outMap6" layer. We apply a linear transformation between the activation map and feature vectors by $x_t = \tanh(W_6 f_t^6 + b_6)$. This "map to sequences" operation generates an input vector $x_t \in \mathbb{R}^{1 \times 1000}$ for the recurrent layers in the refining step.

Recurrent Neural Network

RNNs are neural networks with feedback loops that produce the recurrent connection in the unfolded network [6,33,28]. Given an input sequence $X_n$ from the above encoder, the hidden states of a recurrent layer $h_j = (h_t : t = 1, ..., T_j)$ are defined as $h_t = \tanh(W_h x_t + U_h h_{t-1} + b_h)$. Here $W_h, U_h$ are parameters of an affine transformation which update the connection weights between the input layer and the hidden layer. RNNs suffer from the vanishing and the exploding gradient problem [4]. We adopt the LSTM [18] to address the problem of learning long-range dependencies, where a memory cell vector $c_t$ is maintained at each time step $t$. The LSTM contains one self-connected memory cell $c$ and three multiplicative units, i.e.
the input gate $i$, the forget gate $f$ and the output gate $o$, which can store and access the long-range contextual information of a temporal sequence. Please refer to [18] for the precise form of the update.

Pre-training with PI

A pre-training strategy is proposed to learn a joint embedding by taking both depth sequences $V_j$ and skeleton joint annotations $E = \{e_1, ..., e_S\}$ as input. Each $e_s \in \mathbb{R}^3$ has 3 coordinates. In this stage, $x_t$ is not directly applied to the RNNs. Instead, an additional layer transforms $x_t$ together with $E$ to derive an embedding space:

$\tilde{x}_t = \tanh(W_7 x_t + W_e E + b_7) \quad (1)$

where $W_e$ is the weight matrix connecting the skeleton joints. The resulting $\tilde{x}_t$ has the same dimensionality (1000) as $x_t$. This is followed by the RNNs to model the dynamics of the sequential data. Finally, as in most RNNs for classification tasks, a softmax layer is adopted to transform the hidden state vector into a probability distribution over action classes. The key insight of the pre-training stage is to learn a depth encoder that optimizes the embedding over both depth appearance and skeleton joints. The learned encoder serves as an initialization in the next learning stage. This pre-training stage leads to a significant improvement in both efficiency and effectiveness.

Learning with PI

Multi-task loss. To obtain the class predictions for an input sequence $X_j$, the hidden state can be mapped to an output vector $y_j = (y_t : t = 1, ..., T_j)$. During training, we measure the deviation between the groundtruth and the last memory cell at frame $T$ for the classification loss, since LSTMs have the ability to memorize the content of an entire sequence. For the regression loss, we accumulate the loss of each frame $t$ across the $T$-frame sequence. The final objective function in the learning step is to minimize the cumulative maximum-likelihood loss over all training sequences:

$L_L(\Omega) = \sum_{j=1}^{J} L_c(T, j) + \lambda \sum_{j=1}^{J} \sum_{t=1}^{T} L_r(t, j) \quad (2)$

There are $J$ sequences in the training set $\Omega$.
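A toy numpy sketch of the joint-embedding layer of Eq. 1 above, with random placeholder weights (the shapes follow the paper; the values do not come from any trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

D, S = 1000, 20                      # descriptor size; number of skeleton joints
x_t = rng.standard_normal(D)         # depth descriptor for one frame
E = rng.standard_normal(3 * S)       # skeleton joints, 3 coordinates each, flattened

# Joint embedding of depth appearance and skeleton PI (Eq. 1):
#   x~_t = tanh(W7 x_t + We E + b7)
W7 = 0.01 * rng.standard_normal((D, D))
We = 0.01 * rng.standard_normal((D, 3 * S))
b7 = np.zeros(D)
x_tilde = np.tanh(W7 @ x_t + We @ E + b7)
```

The embedding keeps the 1000-dimensional size of $x_t$, so the recurrent layers behave the same whether they receive the pre-training embedding or the plain depth descriptor in later steps.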
The hyperparameter $\lambda$ in Eqn. 2 controls the balance between the two losses. The classification loss and regression loss are defined as follows:

Classification loss. $y_t \in \mathbb{R}^K$ represents a 1-of-$K$ encoding of the confidence scores over the $K$ classes of actions, which can be derived as $y_t = \tanh(W_y h_t + b_y)$. This output vector can be transformed into a vector of probabilities $p(y_{tk})$ for each class $k$ by the softmax function $p(y_{tk}) = e^{y_{tk}} / \sum_{l=1}^{K} e^{y_{tl}}$. To learn the model parameters, the cross-entropy loss between the predicted distribution $p(y_t)$ and the target class $g_t$ is defined as $L_c(t, j) = -\sum_{k=1}^{K} \delta(k - g_t) \log p(y_{jtk})$ for sample $t$ of the $j$-th video, where $\delta(\cdot)$ is the Dirac delta function and $g_t$ denotes the groundtruth label of sample $t$.

Regression loss. Besides the classification output, our model has a sibling output layer acting as a regression term. We define skeleton regression targets for the groundtruth keypoints $\hat{E}_t = \{\hat{e}_{t1}, ..., \hat{e}_{tS}\}$ and predicted locations $B_t = \{b_{t1}, ..., b_{tS}\}$ at each time step $t$. We select $\hat{E}$ as a subset of the skeleton annotations $E$, because this is a secondary target and an accurate estimation of all skeleton joints is not needed in testing. Each instance is accompanied by a set of keypoint locations $\{\hat{e}^x_{ts}, \hat{e}^y_{ts}\}_{s=1}^{S}$, which are normalized with respect to the center and the width and height of the input region. The loss associated with the task of measuring the skeleton estimation can be expressed as

$L_r(t, j) = \frac{1}{S} \sum_{s=1}^{S} \left( (\hat{e}^x_{jts} - b^x_{jts})^2 + (\hat{e}^y_{jts} - b^y_{jts})^2 \right)$

where we use the $L_2$ distance between the normalized keypoint locations to quantify the dissimilarity. This loss function and regression layer only appear in the training stage for optimizing the neural network with additional information.
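The two loss terms above can be sketched in numpy as follows (toy values for illustration, not the paper's implementation):

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

def classification_loss(y_t, g_t):
    """Cross-entropy between the softmax of the scores y_t and class g_t."""
    return -np.log(softmax(y_t)[g_t])

def regression_loss(e_hat, b):
    """Mean squared L2 distance between normalized keypoints, shape (S, 2)."""
    return np.mean(np.sum((e_hat - b) ** 2, axis=1))

# One frame of the multi-task objective (Eq. 2), lambda balancing the terms:
lam = 0.5
y_t = np.array([2.0, 0.5, -1.0])     # scores over K = 3 classes
e_hat = np.zeros((5, 2))             # groundtruth keypoints (S = 5)
b = np.full((5, 2), 0.1)             # predicted keypoints
loss = classification_loss(y_t, 0) + lam * regression_loss(e_hat, b)
```

During testing only `classification_loss` matters; the regression branch exists solely to inject the skeleton PI while training.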
This extension, known as multi-task learning [32], utilizes the task relationships to learn all individual tasks simultaneously, such that information can be shared in the common structure of the model to benefit all tasks. Similar to [12], it helps the classification prediction by considering the regression aspects. During testing, the regression component is disabled.

Refining with PI

However, the conventional multi-task loss in the last step does not consider any relationship between the two tasks. We observe empirically that purely exploiting PI as a secondary task provides little help in improving the performance of the primary task, due to the gap between them. To maximize the effectiveness of PI in helping the primary task, we propose to discover latent PI from the secondary task in this refining step. The latent PI is utilized in the primary task to optimize the network. The updated network is further used to refine the latent PI iteratively in an EM procedure.

Latent PI modeling. We define latent PI as an informative distribution which is jointly modeled by the secondary task and a bridging matrix. The bridging matrix $M$ aims to capture the underlying dependencies between the primary and secondary tasks. The log-likelihood of the defined model can be expressed as:

$Q(\Theta, M) = \sum_{j=1}^{J} \log \left( \sum_{k=1}^{K} p(y'_k | X_j; \Theta)\, p(g_j | y'_k; M) \right) \quad (3)$

where $\Theta$ is the set of parameters of the network in the refining step. Given $\Theta$, which is initialized by the model from the learning step, we can predict the skeleton joints $B_t$ of a depth frame. We concatenate the predicted skeletons of every frame into a single vector $B = \{B_1, ..., B_{T_n}\}$. $y'$ is then calculated as a fully connected layer: $y' = W_{y'} B + b_{y'}$. $W_{y'}$ and $b_{y'}$ are part of $\Theta$, but they are trained from scratch. During the training of the refining step, our model aims to maximize the likelihood function by optimizing both the bridging matrix and the network parameters iteratively in an EM procedure.
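A toy numpy sketch of this EM alternation (hypothetical data; cf. Eqs. 4 and 6 in the following subsections): the E-step computes the class posterior from the secondary-task probabilities and the bridging matrix, and the M-step re-estimates the bridging matrix in closed form. The network update of the M-step is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
J, K = 6, 3                               # training sequences, action classes

# Secondary-task class probabilities p(y'_k | B_j) for each sequence.
p_sec = rng.dirichlet(np.ones(K), size=J)
g = rng.integers(0, K, size=J)            # groundtruth labels
M = np.full((K, K), 1.0 / K)              # bridging matrix, uniform init

for _ in range(10):
    # E-step: latent PI  u_jk ∝ M[k, g_j] * p(y'_k | B_j)  (cf. Eq. 4).
    u = M[:, g].T * p_sec
    u /= u.sum(axis=1, keepdims=True)

    # M-step: M_kl = sum_j u_jk * [l == g_j] / sum_j u_jk  (cf. Eq. 6).
    onehot = np.eye(K)[g]                 # row j is the one-hot of g_j
    M = (u.T @ onehot) / u.sum(axis=0, keepdims=True).T
```

Each row of the resulting bridging matrix is a distribution over groundtruth classes given a predicted class, which is what lets it transform the secondary-task output into a usable latent distribution.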
Estimating latent PI. The explicit expression of the latent PI is as follows:

$u_k = p(y'_k | B, g; W_{y'}, M) = \frac{p(g | y'_k; M)\, p(y'_k | B; W_{y'})}{\sum_{l=1}^{K} p(g | y'_l; M)\, p(y'_l | B; W_{y'})} = \frac{M_{kg} \exp(W_{y'_k} B + b_{y'})}{\sum_{l=1}^{K} M_{lg} \exp(W_{y'_l} B + b_{y'})} \quad (4)$

$p(y'_k | B; W_{y'})$ is the predicted probability of class $k$ obtained by observing the predicted skeleton joints $B$ of an input depth sequence. The bridging matrix $M$ aims to transform the predicted distribution into a latent distribution that can be effectively used in optimizing the network.

Updating the model with latent PI. The distribution of latent PI $p(\hat{u}_j)$ of an input sequence $X_j$ is defined by $p(\hat{u}_j) = u_j z_t$, where $z_t \in \mathbb{R}^K$ is randomly generated for each frame $t$ from a Multinoulli distribution $\{\hat{g} \sim P(\alpha),\ z_{\hat{g}} = 1,\ z_l = 0\ \forall l \neq \hat{g}\}$, where $P(\alpha)$ is defined as $p_g = 1 - \frac{K-1}{K}\alpha$ and $p_l = \frac{1}{K}\alpha$, with $\alpha$ controlling how strongly the prior distribution is pushed into the classification loss, and $g$ the groundtruth label. We replace the groundtruth label by the probabilities of the latent PI to formulate the PI-based classification loss in the refining step:

$L_R = -\sum_{j=1}^{J} \sum_{k=1}^{K} p(\hat{u}_{jk}) \log p(y_{jTk}) - \beta \sum_{j=1}^{J} \sum_{k=1}^{K} \delta(k - g_j) \log p(y'_{jk}) \quad (5)$

A standard softmax loss is also included in $L_R$ to update the parameters (e.g. $W_{y'}$, $b_{y'}$) of the secondary-task branch. Apart from optimizing the network parameters, the bridging matrix modeling the latent PI can be updated iteratively by a closed-form solution in the M-step of the EM procedure [31,36,3]:

$M_{kl}(\Omega) = \frac{\sum_{j=1}^{J} u_{jk}\, \delta(l - g_j)}{\sum_{j=1}^{J} u_{jk}}, \quad k, l \in \{1, ..., K\} \quad (6)$

Algorithm 1: PI-based RNNs
Input: a collection of videos $V$, skeleton joint annotations $E$, a subset of skeleton joints $\hat{E}$, groundtruth class labels $g$.
Output: network parameters, bridging matrix $M$.
Pre-training:
  Taking both $x$ of the depth sequences $V$ and the skeleton joints $E$ (Eq. 1), an encoder is trained by minimizing the standard softmax loss.
end
Learning:
  Taking the subset of skeleton joints $\hat{E}$ in the regression term.
  The parameters of the network are optimized by minimizing the multi-task loss (Eq. 2).
end
Refining:
  while not converged do
    E-step: estimate and update the latent PI by Eq. 4.
    M-step: optimize the parameters of the network with the PI-based classification loss (Eq. 5); update the bridging matrix M with Eq. 6.
  end
end

Discussion on latent PI

Latent PI can be treated as sufficient information to act as a teacher network [24, 40]. However, our latent PI is obtained within the same framework rather than trained as a separate model, and our model further refines the latent PI according to the feedback of the network in each iteration. This updating process gives us two benefits: (1) The formulation strikes a good balance between the class distributions learned from depth appearance and from skeleton information. This is similar in spirit to [35], where a weight distribution is utilized to improve the learning process of a random forest. Sun et al. [38] also incorporate prior information (e.g. human height) to enhance the dependency between output variables and latent variables, where the prior helps to split the data effectively. The skeleton and the raw depth sequence should share relevant and complementary information; here, we measure the loss by partially considering the posterior obtained from the skeleton joints. We show that this learning process improves the discriminative power of the network. (2) Apart from learning a better depth representation, our PI-based classification loss provides an effective way to prevent overfitting. Since the prior label is not perfectly trained, noise is introduced when we switch to the prior label according to α. This term can be treated as a regularizer similar to [53], where incorrect training labels are intentionally generated at the loss layer. Our loss function also seeks to minimize the confusion between the two distributions.

Figure 2: The architecture of the encoder (feature-map sizes: input 224×224×1; COV1–COV5 outputs 224×224×64, 112×112×128, 56×56×256, 28×28×512, 14×14×512; outMap6 7×7×512; map-to-sequence; 1×1000). The convolutional layers (COV1 to COV5) have kernel size 3×3 and a stride of 1. The padding implements 'same' convolution (and pooling), so the input and output maps have the same spatial extent. Max-pooling is performed from COV1 to COV5 over 2×2 spatial windows with stride 2.

Model Training

We summarize the whole training process of the proposed PI-based RNNs in Algorithm 1. Note that the learning and refining steps can potentially be performed alternately to further improve the effectiveness of the trained model. In our experiments, we show that one round of learning and refining already achieves significant improvements; small additional gains can be obtained with more rounds, as verified on the SBU dataset, so we fix one round of learning and refining for all experiments as a good trade-off between accuracy and efficiency. In the refining step, the EM procedure is still run iteratively until convergence. In all three steps, the error differentials measured at the last layer of the recurrent neural network are backpropagated to the feature sequences and fed back to the convolutional layers across every frame of the videos. Our approach is an end-to-end trainable network that jointly learns the parameters of the CNN and the RNN. We train each model with stochastic gradient descent on the negative log-likelihood using the Adam optimizer, with a learning rate of 0.001 for MSR Action3D and 0.0001 for the rest. A minibatch size of 10 is applied to all datasets. We use early stopping when the validation error starts to increase.

Experiments

We compare the performance of our model with state-of-the-art methods and baselines on four datasets: the MSR Action3D Dataset [26] (Action3D), the SBU Interaction dataset [59] (SBU), the Cornell Activity Dataset [39] (CAD60), and the proposed Blanket dataset (Blanket).
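Before moving to the experiments, the latent-PI refinement described above — the E-step of Eq. (4), the α-controlled prior sampling, and the closed-form M-step of Eq. (6) — can be sketched in NumPy as follows. This is an illustrative sketch only; the function names and array layout are mine, not from the paper.

```python
import numpy as np

def estimate_latent_pi(probs, M, g):
    """E-step (Eq. 4): map the predicted class probabilities `probs`
    (length K, from the skeleton branch) of a sequence with ground-truth
    label g to latent-PI probabilities u via the bridging matrix M (K x K)."""
    w = M[:, g] * probs                      # M_{kg} * p(y_k | B)
    return w / w.sum()                       # normalize over classes k

def sample_prior_mask(g, K, alpha, rng):
    """Draw the one-hot vector z from the Multinoulli prior P(alpha):
    the ground-truth class g gets probability 1 - (K-1)/K * alpha,
    every other class gets probability alpha / K."""
    p = np.full(K, alpha / K)
    p[g] = 1.0 - (K - 1) / K * alpha
    z = np.zeros(K)
    z[rng.choice(K, p=p)] = 1.0
    return z

def update_bridging_matrix(U, labels, K):
    """M-step (Eq. 6): closed-form update M_{kl} = sum_{j: g_j = l} u_{jk}
    / sum_j u_{jk}, given latent PI U (J x K) and labels g (length J)."""
    M = np.zeros((K, K))
    for l in range(K):
        M[:, l] = U[labels == l].sum(axis=0)
    return M / U.sum(axis=0, keepdims=True).T
```

With M initialized to the identity, the latent PI collapses onto the ground-truth class; a uniform M leaves the predicted distribution unchanged, and Eq. (6) then re-estimates M from the accumulated responsibilities.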
We also analyze each component of our model and its computational efficiency.

Datasets: Action3D is an action dataset of depth sequences captured by a depth camera. It consists of 20 actions performed by 10 subjects, with every action performed three times by each subject. All sequences are captured at 15 FPS, and each frame contains 20 skeleton joints. Altogether, the dataset has 557 valid action sequences with 23,797 frames of depth maps. SBU consists of 282 pre-segmented sequences covering 8 classes of two-person interactions; each action is performed by 21 pairs of subjects. CAD60 consists of 68 video clips captured by a Microsoft Kinect device, each about 45 s long; four different subjects performed 14 different activities in five locations: office, kitchen, bedroom, bathroom and living room. Blanket contains 120 depth video clips of 12 different action classes performed by 10 subjects. Our dataset contains more static actions (e.g. lying and sitting) and is very challenging, as some actions are partially occluded by a blanket; for example, one actor sits on the bed while covered by a blanket (please refer to our supplementary video for all actions).

Implementation details: We implemented the network using TensorFlow [1]. The architecture of the convolutional layers (see Fig. 2) is slightly modified from VGG-VeryDeep [37] (with 11 weight layers) for depth maps. We initialize the weights without pre-training, using the normalized initialization procedure [13]. Unlike images, which can be rescaled and randomly cropped to a fixed size, spatio-temporal consistency has to be maintained for video sequences. Each input video frame is scaled to 227×227 from the whole frame; we do not perform random cropping or flipping, so that the PI can be utilized easily. The depth values are normalized to [−1, 1]. Our model has a stack of 2 LSTMs with 1000 hidden units each.
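The feature-map sizes of the encoder in Fig. 2 follow directly from 'same' 3×3 convolutions and 2×2 stride-2 max-pooling; a small helper (the function name is mine, for illustration) traces them:

```python
def encoder_shapes(size=224, channels=(64, 128, 256, 512, 512)):
    """Trace the feature-map sizes of the Fig. 2 encoder: each block is a
    3x3 'same' convolution (spatial size unchanged) followed by a 2x2
    max-pool with stride 2 (spatial size halved)."""
    shapes = []
    for c in channels:
        shapes.append((size, size, c))   # conv output (COV1..COV5)
        size //= 2                       # 2x2 max-pool, stride 2
    shapes.append((size, size, channels[-1]))  # pooled outMap6
    return shapes
```

`encoder_shapes()` reproduces the sizes printed in Fig. 2, ending with the 7×7×512 map (outMap6) that is converted by map-to-sequence into the per-frame features fed to the LSTM stack.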
To reduce the computation cost, we sample each video of CAD60 to a maximum length of 200 frames; we do not sample frames from the MSR Action3D, SBU and Blanket datasets. During training we unroll the LSTM to a maximum of 200 time steps for CAD60, 300 time steps for Blanket and 100 time steps for the rest, which is a good trade-off between accuracy and complexity.

We mainly consider the skeleton joints as our PI. The prior class distribution is obtained by training DURNN-L [9] with all available skeletons. In our regression loss, we use only six joints (head, left hand, right hand, left foot, right foot, hip center), since this secondary target is formulated to help classification accuracy. For our Blanket dataset, we annotate the six joints for both the pre-training and refining stages because of the special camera viewpoint. We normalize the 3D joint coordinates from the world coordinate system to a unified coordinate system by placing the hip center at the origin [43]. Similar to [9], we apply a simple Savitzky-Golay smoothing filter to smooth the skeleton annotations.

Figure 3: Examples of depth maps on the four datasets (Action3D, CAD60, Blanket, SBU).

Comparison to the State-of-the-art

The experimental results are shown in Table 1. Existing state-of-the-art methods can be partitioned into two groups, using either 1) only the depth sequences or 2) at least skeleton information at the testing stage.

Results on Action3D: We follow an evaluation protocol similar to [45, 46]. In this setting, the dataset is divided into two sets, where half of the subjects are used for training and the other half for testing. Compared to the protocol of [9], which splits the classes into three subsets, this setting is more challenging, as all actions are evaluated together. The average accuracy corresponds to the mean of the confusion-matrix diagonal over all classes. Note that 10 skeleton sequences were not used [47] because of missing data. We compare the proposed model PRNN with Xia et al.
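The joint preprocessing described above (hip-centering and temporal smoothing) can be sketched as follows. This is an illustrative sketch with my own function names, and a plain moving average stands in for the Savitzky-Golay filter the paper uses (e.g. `scipy.signal.savgol_filter`), to keep the sketch dependency-free.

```python
import numpy as np

def center_on_hip(joints, hip_idx=0):
    """Place the hip-center joint at the origin in every frame.
    joints: (T, J, 3) array of 3D joint coordinates; hip_idx is assumed."""
    return joints - joints[:, hip_idx:hip_idx + 1, :]

def smooth_joints(joints, k=5):
    """Temporal smoothing of the joint trajectories: moving average of odd
    width k along the time axis, with edge padding to preserve the length."""
    pad = k // 2
    padded = np.pad(joints, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode='valid'), 0, padded)
```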
[52], Oreifej et al. [34], and Yang et al. [56]. All these methods require only depth maps as input during testing. We can see that our proposed PRNN achieves the best average accuracy (94.9%) compared with them. For a complete comparison, we also list the skeleton-based approaches in the lower part of Table 1. Skeleton-based approaches demonstrate slightly better performance by assuming a robust skeleton tracker is available at testing time. Our method aims to provide a more general framework that learns the model directly from raw observations of depth videos, rather than explicitly modeling skeletal joints [9] or local appearance [42]. Many of these methods either focus on modeling spatio-temporal structure under a certain assumption [34], or exploit the trajectories of human joints [42, 60] at the testing stage, which relies on accurate skeleton-joint detection.

SBU: We follow the experimental setting of [59, 61] and use five-fold cross-validation. All action categories are composed of interactions between actors, involving human acting and reacting. This dataset is very challenging, especially in our setting where skeleton information is not available at testing time. We summarize the results in Table 1.

Table 1 (skeleton-based methods, continued; columns: Action3D / SBU / CAD60 / Blanket):
Du et al. [9]: - / 80.35 / - / -
Wang et al. [48]: 96.9 / - / - / -
Wang et al. [45]: 91.40 / - / - / -
Zhu et al. [61]: - / 90.41 / - / -
Gori et al. [15]: 95.38 / 93.08 / - / -
Wang et al. [47]: 88.2 / - / 74.7 / -

Table 1: Comparison with state-of-the-art methods on four datasets for action recognition. '-' indicates that no result was reported and no code is available for implementation.

We can see that our method achieves performance superior to the depth-based approaches and close to the skeleton-based approaches.

CAD60: We follow the same experimental setting as [47, 20] by adopting leave-one-person-out cross-validation, i.e. the model is trained on three of the four people and tested on the fourth. Table 1 compares the results on CAD60.
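The average accuracy reported throughout Table 1 — the mean of the confusion-matrix diagonal over all classes — can be computed as in this sketch (the function name is mine; it assumes every class appears at least once in the ground truth):

```python
import numpy as np

def average_accuracy(y_true, y_pred, K):
    """Mean of the diagonal of the row-normalized K x K confusion matrix,
    i.e. per-class accuracy averaged over all K classes."""
    C = np.zeros((K, K))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1.0
    C /= C.sum(axis=1, keepdims=True)   # normalize each true-class row
    return float(C.diagonal().mean())
```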
We can see that the proposed PRNN achieves 87.6% accuracy while seeing only the depth maps, compared against previous works that utilize multiple cues (i.e. RGB frames, depth maps and tracked skeleton-joint positions) at testing time. Some human actions in CAD60 share similar body motions, such as "chopping" and "stirring"; our model takes advantage of the PI-based learning process, which allows it to distinguish such subtle motions from depth maps [21].

Blanket: Similar to CAD60, we follow the protocol of [47] and perform cross-validation on our proposed dataset. We compare our model with three baseline methods: Xia et al. [52], Oreifej et al. [34], and Yang et al. [57]. We use their publicly available code and train their models while varying their parameters, so as to report their best results for a fair comparison. The experimental results are shown in Table 1. The proposed PRNN obtains the state-of-the-art accuracy of 53.5%. Our collected data is more difficult to learn than the existing datasets: although each basic action is simple, like "sitting" and "lying down", the actor (i.e. a patient) is either partially occluded by a blanket or in a suffering state when performing these actions, which introduces severe noise (e.g. shaking the body, trembling) into the basic actions. Moreover, the special camera viewpoint (see Figure 3) and the occlusion by a blanket cause difficulties for skeletal estimation. As expected, a larger performance gap is seen between our model and the other approaches. This demonstrates the potential of our model in representing and modeling the dynamics of actions directly from depth maps.

Table 2: Contribution of each model component.

In brief, we show the competitive performance of the proposed PRNN on four human action datasets. Our model provides an effective end-to-end solution for modeling temporal dynamics in action sequences by exploiting the PI at training time.
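The leave-one-person-out protocol used above for CAD60 and Blanket (train on all subjects but one, test on the held-out subject) can be sketched as follows (function and variable names are mine):

```python
def leave_one_person_out(samples):
    """Yield (held_out_subject, train, test) splits from a list of
    (subject_id, sequence) pairs: the model is trained on all subjects
    except one and tested on the held-out subject."""
    subjects = sorted({s for s, _ in samples})
    for held_out in subjects:
        train = [x for s, x in samples if s != held_out]
        test = [x for s, x in samples if s == held_out]
        yield held_out, train, test
```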
Unlike most previous works, which rely on assumptions about the structure of the depth maps or the availability of a robust skeleton tracker, our model automatically learns features from raw depth maps without any assumptions [58, 51] on the structure of the video sequences.

Model Analysis

Evaluation of individual components: To verify the effect of the individual components in our framework and demonstrate that each of them contributes to the performance boost, we evaluate three variants of our approach. (1) PRNN-NoPreTrain discards the pre-training strategy described in Sec. 4.1; instead, the CNN encoder is trained from scratch in the learning stage. (2) PRNN-NoRefine ignores the final refining step described in Sec. 4.3; the final model is trained by the pre-training and learning steps only. Note that the learning step of Sec. 4.2 cannot be removed individually, because the latent PI is obtained from the regression term of the learning step. (3) We report the performance of a vanilla CNN-RNN pipeline. This is similar to our model in the pre-training step, except that the skeleton is not part of the input during training. Note that our pre-training stage (taking both depth and skeleton as input) is specifically designed for our learning stage (with classification and regression losses); we tried initializing the vanilla CNN-RNN (depth input with classification loss) with our pre-trained model, and it performs much worse than learning from scratch. We show the average accuracy of all stripped-down versions of our model in Table 2. Overall, our method consistently achieves better performance as each individual component is integrated, suggesting that each of them contributes to the final performance. Without exploiting PI in the pre-training step, our model performs poorly due to ineffective initialization. The vanilla CNN-RNN also suffers from the relatively small amount of training data, and thus cannot take full advantage of the end-to-end manner.
By considering the latent PI in the refining step, this overfitting problem is greatly alleviated relative to CNN-RNN and PRNN-NoRefine. It is clear that the performance is substantially improved (PRNN) when these steps are combined.

Qualitative analysis: We compare our approach with the three variants in Figure 4, which illustrates the real-time prediction of an example sequence at every 15 time steps. The ground-truth action label is "falling from the bed". All methods give low confidence to the correct action class at the beginning; as time evolves, our approach is the first to correctly predict the action label. We attribute this faster learning ability to the mechanism of encoding PI [2], which allows us to distinguish subtle depth differences across successive frames.

Figure 4: Qualitative comparison of real-time prediction as time evolves for the action "falling from the bed".

Computational efficiency: We take Action3D as an example to discuss the efficiency of our approach. With a Python and C++ implementation on an NVIDIA Titan X GPU, our three-step learning process takes about 11 hours to converge, after continuously decreasing over 200k SGD iterations; gradients are averaged over each minibatch in every training iteration. During testing, it achieves real-time performance (≈ 38 FPS). Compared with multi-stage models, the efficiency of our approach is mainly attributable to its end-to-end property, with no preprocessing step. Please refer to our supplementary video for real-time testing performance.

Conclusion and Future Work

In this paper, we propose to learn a recurrent neural network with PI. The presented learning process provides threefold benefits: 1) the pre-training stage provides mid-level embeddings which can be effectively tuned in the subsequent stages; 2) in the learning stage, a multi-task loss is formulated to exploit PI as a secondary task.
3) The learned information is further modeled as a latent PI, which is defined to close the gap between the two tasks. The latent PI is used to enhance the discriminative power of the learned representation by bringing the two distributions closer, and it is updated iteratively in an EM fashion. In addition, the randomly sampled classification loss operates as a regularizer that reduces the tendency to overfit. We apply our model to the problem of action recognition from depth sequences, and achieve better performance on three publicly available datasets and our newly collected dataset. In the future, we will investigate more types of PI and seek to model this information at intermediate levels of the neural network [5].

Table 1 (columns: Action3D / SBU / CAD60 / Blanket):
depth-based:
Xia et al. [52]: 89.3 / 43.69 / - / 40.6
Oreifej et al. [34]: 88.9 / 77.0 / 72.7 / 42.8
Yang et al. [57]: 93.45 / - / - / 41.2
PRNN: 94.9 / 89.2 / 87.6 / 53.5
skeleton-based:
Vemulapalli et al. [43]: 89.48 / - / - / -
Veeriah et al. [42]: 92.03 / - / - / -
Hu et al. [20]: - / - / 84.1 / -
Koppula et al. [23]: - / - / 71.4 / -

PI-Based RNNs

Standard recurrent neural networks do not provide a mechanism to exploit PI when it is available at training time. We first present a pre-training strategy. The learned encoder is then applied in the learning step and tuned together with the RNNs by formulating the PI into a multi-task loss. In the final refining step, latent PI is discovered and iteratively updated with the network parameters.

Acknowledgement: This work was supported by the Omron Corporation.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] S. Baek, K. I. Kim, and T.-K. Kim. Real-time online action detection forests using spatio-temporal contexts. In WACV, 2016.
[3] A. J. Bekker and J. Goldberger. Training deep neural-networks based on unreliable labels. In ICASSP, 2016.
[4] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. TNN, 1994.
[5] Ç. Gülçehre and Y. Bengio. Knowledge matters: Importance of prior information for optimization. JMLR, 2016.
[6] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In NIPS, 2015.
[7] M. Dantone, J. Gall, G. Fanelli, and L. V. Gool. Real-time facial feature detection using conditional regression forests. In CVPR, 2012.
[8] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[9] Y. Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In CVPR, 2015.
[10] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
[11] J. Feyereisl, S. Kwak, J. Son, and B. Han. Object localization based on structural SVM using privileged information. In NIPS, 2014.
[12] R. Girshick. Fast R-CNN. In ICCV, 2015.
[13] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[14] D. Gong, G. Medioni, and X. Zhao. Structured time series analysis for human action segmentation and recognition. TPAMI, 2014.
[15] I. Gori, J. K. Aggarwal, L. Matthies, and M. S. Ryoo. Multitype activity recognition in robot-centric scenarios. IEEE Robotics and Automation Letters, 2016.
[16] M. A. Gowayyed, M. Torki, M. E. Hussein, and M. El-Saban. Histogram of oriented displacements (HOD): Describing trajectories of human joints for action recognition. In IJCAI, 2013.
[17] A. Graves. Supervised sequence labelling with recurrent neural networks. Springer, 2012.
[18] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[19] J. Hoffman, S. Gupta, and T. Darrell. Learning with side information through modality hallucination. In CVPR, 2016.
[20] J.-F. Hu, W.-S. Zheng, J. Lai, and J. Zhang. Jointly learning heterogeneous features for RGB-D activity recognition. In CVPR, 2015.
[21] C. Jia, G. Zhong, and Y. Fu. Low-rank tensor learning with discriminant analysis for action classification and image recovery. 2014.
[22] P. Koniusz, A. Cherian, and F. Porikli. Tensor representations via kernel linearization for action recognition from 3D skeletons. In ECCV, 2016.
[23] H. S. Koppula, R. Gupta, and A. Saxena. Learning human activities and object affordances from RGB-D videos. IJRR, 2013.
[24] A. Korattikara Balan, V. Rathod, K. P. Murphy, and M. Welling. Bayesian dark knowledge. In NIPS, 2015.
[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[26] W. Li, Z. Zhang, and Z. Liu. Action recognition based on a bag of 3D points. In CVPRW, 2010.
[27] J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition. In ECCV, 2016.
[28] Q. Liu, S. Wu, L. Wang, and T. Tan. Predicting the next location: A recurrent model with spatial and temporal contexts. 2016.
[29] J. Luo, W. Wang, and H. Qi. Group sparsity and geometry constrained dictionary learning for action recognition from depth maps. In ICCV, 2013.
[30] B. Mahasseni and S. Todorovic. Regularizing long short term memory with 3D human-skeleton sequences for action recognition. In CVPR, June 2016.
[31] G. McLachlan and T. Krishnan. The EM algorithm and extensions, volume 382. John Wiley & Sons, 2007.
[32] T. M. Mitchell. Machine Learning. McGraw-Hill, New York, NY, USA, 1st edition, 1997.
[33] J. Mueller and A. Thyagarajan. Siamese recurrent architectures for learning sentence similarity. In AAAI, 2016.
[34] O. Oreifej and Z. Liu. HON4D: Histogram of oriented 4D normals for activity recognition from depth sequences. In CVPR, 2013.
[35] S. Schulter, P. Wohlhart, C. Leistner, A. Saffari, P. M. Roth, and H. Bischof. Alternating decision forests. In CVPR, 2013.
[36] Z. Shi, T. M. Hospedales, and T. Xiang. Bayesian joint modelling for object localisation in weakly labelled images. TPAMI, 2015.
[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[38] M. Sun, P. Kohli, and J. Shotton. Conditional regression forests for human pose estimation. In CVPR, 2012.
[39] J. Sung, C. Ponce, B. Selman, and A. Saxena. Unstructured human activity detection from RGBD images. In ICRA, 2012.
[40] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. ArXiv e-prints, 2015.
[41] V. Vapnik and A. Vashist. A new learning paradigm: Learning using privileged information. NN, 2009.
[42] V. Veeriah, N. Zhuang, and G.-J. Qi. Differential recurrent neural networks for action recognition. In ICCV, 2015.
[43] R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3D skeletons as points in a Lie group. In CVPR, 2014.
[44] A. W. Vieira, E. R. Nascimento, G. L. Oliveira, Z. Liu, and M. F. Campos. STOP: Space-time occupancy patterns for 3D action recognition from depth map sequences. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 2012.
[45] C. Wang, J. Flynn, Y. Wang, and A. Yuille. Recognizing actions in 3D using action-snippets and activated simplices. In AAAI, 2016.
[46] C. Wang, Y. Wang, and A. L. Yuille. Mining 3D key-pose-motifs for action recognition. In CVPR, 2016.
[47] J. Wang, Z. Liu, Y. Wu, and J. Yuan. Learning actionlet ensemble for 3D human action recognition. TPAMI, 2014.
[48] L. Wang, J. Zhang, L. Zhou, C. Tang, and W. Li. Beyond covariance: Feature representation with nonlinear kernel matrices. In ICCV, 2015.
[49] P. Wang, W. Li, Z. Gao, J. Zhang, C. Tang, and P. Ogunbona. Action recognition from depth maps using deep convolutional neural networks. In IEEE Transactions on Human Machine Systems, 2015.
[50] Z. Wang and Q. Ji. Classifier learning with hidden information. In CVPR, June 2015.
[51] S. F. Wong, T.-K. Kim, and R. Cipolla. Learning motion categories using both semantic and structural information. In CVPR, 2007.
[52] L. Xia and J. K. Aggarwal. Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In CVPR, 2013.
[53] L. Xie, J. Wang, Z. Wei, M. Wang, and Q. Tian. DisturbLabel: Regularizing CNN on the loss layer. In CVPR, 2016.
[54] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In NIPS, 2013.
[55] J. Yamato, J. Ohya, and K. Ishii. Recognizing human action in time-sequential images using hidden Markov model. In CVPR, 1992.
[56] X. Yang and Y. Tian. Super normal vector for activity recognition using depth sequences. In CVPR, 2014.
[57] X. Yang and Y. Tian. Super normal vector for human activity recognition with depth cameras. TPAMI, 2016.
[58] T.-H. Yu, T.-K. Kim, and R. Cipolla. Real-time action recognition by spatiotemporal semantic and structural forest. In BMVC, 2010.
[59] K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras. Two-person interaction detection using body-pose features and multiple instance learning. In CVPRW, 2012.
[60] M. Zanfir, M. Leordeanu, and C. Sminchisescu. The moving pose: An efficient 3D kinematics descriptor for low-latency action recognition and detection. In ICCV, 2013.
[61] W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen, and X. Xie. Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks. In AAAI, 2016.
[]
[ "Estimation of Camera Locations in Highly Corrupted Scenarios: All About that Base, No Shape Trouble", "Estimation of Camera Locations in Highly Corrupted Scenarios: All About that Base, No Shape Trouble" ]
[ "Yunpeng Shi \nUniversity of Minnesota\nUniversity of Minnesota\n\n", "Gilad Lerman [email protected] \nUniversity of Minnesota\nUniversity of Minnesota\n\n" ]
[ "University of Minnesota\nUniversity of Minnesota\n", "University of Minnesota\nUniversity of Minnesota\n" ]
[]
We propose a strategy for improving camera location estimation in structure from motion. Our setting assumes highly corrupted pairwise directions (i.e., normalized relative location vectors), so there is a clear room for improving current state-of-the-art solutions for this problem. Our strategy identifies severely corrupted pairwise directions by using a geometric consistency condition. It then selects a cleaner set of pairwise directions as a preprocessing step for common solvers. We theoretically guarantee the successful performance of a basic version of our strategy under a synthetic corruption model. Numerical results on artificial and real data demonstrate the significant improvement obtained by our strategy.
10.1109/cvpr.2018.00303
[ "https://arxiv.org/pdf/1804.02591v1.pdf" ]
4,712,888
1804.02591
dfbb4ec2b5aecbe61aec5d192adc7860ee781ee1
Estimation of Camera Locations in Highly Corrupted Scenarios: All About that Base, No Shape Trouble

7 Apr 2018

Yunpeng Shi, University of Minnesota
Gilad Lerman ([email protected]), University of Minnesota

We propose a strategy for improving camera location estimation in structure from motion. Our setting assumes highly corrupted pairwise directions (i.e., normalized relative location vectors), so there is clear room for improving current state-of-the-art solutions for this problem. Our strategy identifies severely corrupted pairwise directions by using a geometric consistency condition. It then selects a cleaner set of pairwise directions as a preprocessing step for common solvers. We theoretically guarantee the successful performance of a basic version of our strategy under a synthetic corruption model. Numerical results on artificial and real data demonstrate the significant improvement obtained by our strategy.

Introduction

The problem of Structure from Motion (SfM), that is, reconstructing 3D structure from 2D images, is critical in computer vision. The common pipeline for 3D reconstruction consists of the following steps:
1. Matching keypoints among images using SIFT [12];
2. Computing the essential matrices from the matched image pairs and extracting relative camera rotations [8];
3. Finding global camera orientations via rotation synchronization and estimating relative camera translations [1,3,6,9,13,17];
4. Estimating camera locations from the estimated pairwise directions [1,2,4,5,6,7,15,16,17,21,22], where a pairwise direction between two cameras is the normalized relative location vector between them;
5. Recovering the 3D structure using bundle adjustment [20].

The key to successful 3D recovery is the accurate estimation of the camera parameters, including the camera locations and orientations.
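The pairwise direction in step 4 — the normalized relative location vector between two cameras — is simple to compute; a minimal sketch (the function name is mine):

```python
import numpy as np

def pairwise_direction(t_i, t_j):
    """Unit vector in S^2 pointing from camera location t_i toward t_j,
    i.e. the normalized relative location (t_j - t_i) / ||t_j - t_i||."""
    d = np.asarray(t_j, dtype=float) - np.asarray(t_i, dtype=float)
    return d / np.linalg.norm(d)
```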
These parameters can be misestimated due to erroneous keypoint matching, which results in inaccurate estimates of the essential matrices [18]. This paper develops a robust and theoretically-guaranteed strategy for improving camera location estimation from corrupted pairwise directions.

Previous Works

A variety of camera location solvers have been proposed in the past two decades [18]. The least squares methods [1,2,5] are among the earliest solvers. However, these methods are not robust to outliers (namely, maliciously corrupted pairwise directions), and furthermore they typically produce collapsed location estimates. That is, the estimated camera locations are usually clustered around a few points. The constrained least squares (CLS) method [21,22] introduced an anti-collapse constraint, which makes it more stable to noise but not to outliers. The semidefinite relaxation (SDR) solver [17] converts the least squares problem into an SDP formulation with a nonconvex anti-collapse constraint. However, it is not robust to outliers, and its computation is challenging even after convex relaxation. Other solvers include the ℓ∞ method [15] and the Lie-algebraic averaging method [6], but the ℓ∞ norm is sensitive to outliers and [6] suffers from convergence to local minima and from sensitivity to outliers. Recent outlier-robust methods have been proposed for camera location estimation. One class of solvers uses outlier detection algorithms as a preprocessing step to improve the subsequent estimator. For the different problem of camera rotation estimation, cycle-consistency constraints were proposed in [15,24] to remove outlying relative orientation measurements. For camera location recovery, the 1DSfM algorithm was proposed in [23] for removing outlying pairwise directions. It projects the 3D direction vectors onto 1D, reformulates the cycle-consistency constraints as an ordering problem and solves it using a heuristic combinatorial method.
However, its convergence to the global minimum is not guaranteed. Another class of methods directly solves robust convex optimization problems and includes the least unsquared deviations (LUD) algorithm [16] and the ShapeFit algorithm [7]. Exact recovery guarantees under a certain corruption model were established for ShapeFit and LUD in [7] and [11] respectively. An ADMM-accelerated version of ShapeFit, called ShapeKick, was proposed in [4]. However, it sacrifices accuracy for speed. A robust formulation for estimating the fundamental matrices was presented in [19]. However, it may suffer from convergence to local minima and requires good initialization.

Contribution of This Work

We propose a novel algorithm for detecting and removing highly corrupted pairwise directions. We use it as a preprocessing step for existing location recovery algorithms. Our method forms a statistic for any pairwise direction between two given cameras. This statistic estimates the average inconsistency of this pairwise direction with any two pairwise directions associated with an additional camera. This inconsistency is based on the shortest path in S² between a direction vector and the base of a spherical triangle. We thus refer to this inconsistency and statistic as All-About-that-Base (AAB). After computing a fast version of the AAB statistic, we remove edges with large statistics and apply a preferable solver. This method is fast and easy to implement, and it can be used as a preprocessing step for any camera location solver. Most importantly, we are able to theoretically guarantee its successful classification of corrupted and uncorrupted edges. We are not aware of any other theoretically-guaranteed algorithm for removing corrupted pairwise direction measurements. We also present an iterative procedure for improving the AAB statistic, so that outliers can be identified more accurately.
Experiments on synthetic and real data demonstrate significant improvement of camera location accuracy by our proposed method.

Setting for Camera Location Estimation

A mathematical setting for camera location estimation assumes n unknown camera locations {t*_i}_{i∈[n]} ⊆ R³, where [n] = {1, 2, ..., n}. The ground-truth pairwise direction γ*_ij between cameras i, j ∈ [n] is defined by

  γ*_ij = (t*_i − t*_j) / ‖t*_i − t*_j‖,   (1)

where ‖·‖ denotes the Euclidean norm. In practice, one often measures a corrupted pairwise direction γ_ij between cameras i and j. The mathematical problem assumes possibly corrupted pairwise measurements γ_ij, ij ∈ E, for some E ⊆ [n] × [n] and asks to estimate the camera locations {t*_i}_{i∈[n]} up to an ambiguous translation and scale. Note that E may not include all pairs of indices, so some measurements can be missing.

In order to establish theoretical guarantees and conduct synthetic-data experiments for the AAB procedure, we assume that the true camera locations and corrupted pairwise directions are generated by the following slight modification of the Uniform Corruption Model UC(n, p, q, σ) [16]: Let V = {t*_i}_{i∈[n]} be generated i.i.d. from N(0, I₃) and let G(V, E) be a graph generated by the Erdős–Rényi model G(n, p), where p denotes the connection probability among edges. For any ij ∈ E, a corrupted pairwise direction γ_ij is generated by

  γ_ij = v_ij with probability q;
  γ_ij = (γ*_ij + σε_ij) / ‖γ*_ij + σε_ij‖ with probability 1 − q,   (2)

where 0 < q < 1 is the probability of corruption, σ ≥ 0 is the noise level and v_ij, ε_ij are independently drawn from the uniform distribution on S². The UC model of [16] assumes instead that the ε_ij are i.i.d. N(0, I₃). We have noticed similar numerical results for data generated from both models; however, our theory described below is easier to state and verify under the uniform assumption.

Statistics for Corruption Reduction

We describe a statistic that may distinguish corrupted edges.
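The uniform corruption model UC(n, p, q, σ) above can be mirrored by a small synthetic-data generator. The sketch below is ours (names and structure are not from the paper) and only reproduces the sampling described in (2):

```python
import numpy as np

def generate_uc(n, p, q, sigma, seed=None):
    """Sample locations t*_i ~ N(0, I_3) and corrupted pairwise directions
    on an Erdos-Renyi graph G(n, p), following the uniform corruption model."""
    rng = np.random.default_rng(seed)

    def unit(v):
        return v / np.linalg.norm(v)

    t = rng.standard_normal((n, 3))
    gamma = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() >= p:          # edge ij absent from E
                continue
            if rng.random() < q:           # corrupted: uniform direction on S^2
                gamma[(i, j)] = unit(rng.standard_normal(3))
            else:                          # noisy version of gamma*_ij, as in (2)
                eps = unit(rng.standard_normal(3))
                gamma[(i, j)] = unit(unit(t[i] - t[j]) + sigma * eps)
    return t, gamma
```

With q = σ = 0 this reduces to the exact pairwise directions of (1).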
It uses the geometric notion of cycle-consistency of uncorrupted edges. Cycle-consistency measures were used in [15,23,24] as criteria for outlier removal. For location recovery, the cycle-consistency of 3 vectors γ₁, γ₂, γ₃ ∈ S² refers to the existence of λ₁, λ₂, λ₃ > 0 such that

  λ₁γ₁ + λ₂γ₂ + λ₃γ₃ = 0.   (3)

One may easily observe that the pairwise directions γ*_ij, γ*_jk, γ*_ki are cycle-consistent by substituting in (3) λ_ij = ‖t*_i − t*_j‖, λ_jk = ‖t*_j − t*_k‖ and λ_ki = ‖t*_k − t*_i‖. However, if any of the three vectors is randomly corrupted, the consistency constraint is most probably violated. Thus, we may define a certain cycle-inconsistency measure that indicates the underlying corruption level.

Section 3.1 describes a basic measure of inconsistency of a given pairwise direction with respect to 2 other pairwise directions, where the 3 directions result from 3 unknown locations. It is referred to as the AAB inconsistency. A formula for efficiently computing it is proposed at the end of this section. Section 3.2 uses these inconsistencies to define the naive AAB statistic of a given pairwise direction that is used to remove corrupted edges. Section 3.3 discusses the iteratively reweighted AAB (IR-AAB) statistic, which aims to further improve the accuracy of naive AAB in removing corrupted edges. At last, Section 3.4 discusses some issues regarding the practical implementation of naive AAB and IR-AAB.

AAB Inconsistency and Formula

We define the cycle-consistency region of γ₁, γ₂ ∈ S² as

  Ω(γ₁, γ₂) = {γ ∈ S² : γ₁, γ₂, γ are cycle-consistent}.

We denote by d_g the great-circle distance, i.e., the length of the shortest path on S². The AAB inconsistency of γ₃ ∈ S² with respect to γ₁ and γ₂ is defined by

  I_AAB(γ₃; γ₁, γ₂) = d_g(γ₃, Ω(γ₁, γ₂)) = min_{γ∈Ω(γ₁,γ₂)} d_g(γ₃, γ).   (4)

Figure 1 shows that I_AAB(γ₃; γ₁, γ₂) is the smallest angle needed to rotate γ₃ so that γ₁, γ₂, γ₃ are cycle-consistent.

Figure 1. Clarification of the AAB inconsistency. The red arc is the cycle-consistency region Ω(γ₁, γ₂). Indeed, it follows from (3) that the points in Ω(γ₁, γ₂) are the linear combinations in S² with positive coefficients of −γ₁ and −γ₂. The AAB inconsistency I_AAB(γ₃; γ₁, γ₂) is the distance in S² of γ₃ from Ω(γ₁, γ₂) and is the length of the blue arc. Similarly, I_AAB(γ₄; γ₁, γ₂) is the length of the green arc.

The following formula for computing the AAB inconsistency is crucial for efficient implementation of the algorithms described below. Its proof appears in Appendix A.1. For γ₁, γ₂, γ₃ ∈ S², let x = γ₁ᵀγ₃, y = γ₂ᵀγ₃, z = γ₁ᵀγ₂ and a = I(x < yz)·I(y < xz), where I is the indicator function; then

  I_AAB(γ₃; γ₁, γ₂) = cos⁻¹( a·√((x² + y² − 2xyz)/(1 − z²)) + (a − 1)·min(x, y) ).   (5)

The Naive AAB Statistic

We initially define the naive AAB statistic of an edge ij ∈ E as the average of the AAB inconsistencies I_AAB(γ_ij; γ_jk, γ_ki) over the set C_ij = {k ∈ [n] : ik ∈ E and jk ∈ E}. That is,

  S_AAB^initial(ij) = (1/|C_ij|) Σ_{k∈C_ij} I_AAB(γ_ij; γ_jk, γ_ki).   (6)

We use it as an indication of the corruption level of γ_ij and thus remove the edges with the largest AAB statistics. Note that the AAB formula in (5) enables computation of the naive AAB statistic through vectorization instead of a loop, and thus allows efficient coding in programming languages with an effective linear algebra toolbox. However, the average over C_ij can be costly, and we thus advocate using a small random sample from C_ij of size s, where the default value of s is 50. We summarize this basic procedure of computing the AAB statistic, S_AAB, in Algorithm 1, whose output is the set of statistics {S_AAB(ij)}_{ij∈E}.

Iteratively Reweighted AAB

The naive AAB statistic may suffer from unreliable AAB inconsistencies when the corruption level q is high.
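Returning to the closed-form expression (5): it can be sanity-checked against a brute-force minimization of the great-circle distance over a dense sampling of Ω(γ₁, γ₂), i.e., of the geodesic arc from −γ₁ to −γ₂. The following sketch is ours, not the authors' implementation:

```python
import numpy as np

def aab_formula(g1, g2, g3):
    """AAB inconsistency of g3 w.r.t. g1, g2 via formula (5)."""
    x, y, z = g1 @ g3, g2 @ g3, g1 @ g2
    if x < y * z and y < x * z:    # a = 1: nearest point lies inside the arc
        val = np.sqrt(max((x * x + y * y - 2 * x * y * z) / (1 - z * z), 0.0))
    else:                          # a = 0: nearest point is an endpoint -g1 or -g2
        val = -min(x, y)
    return np.arccos(np.clip(val, -1.0, 1.0))

def aab_brute(g1, g2, g3, samples=4000):
    """Minimize d_g(g3, .) over a dense sampling of Omega(g1, g2)."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    arc = -(1.0 - t) * g1 - t * g2          # positive combinations of -g1, -g2
    arc /= np.linalg.norm(arc, axis=1, keepdims=True)
    return np.arccos(np.clip(arc @ g3, -1.0, 1.0)).min()
```

On random unit vectors the two agree up to the discretization error of the sampled arc.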
Specifically, for an uncorrupted direction γ_ij, its AAB inconsistency with respect to γ_jk and γ_ki can be unreasonably high if either γ_jk or γ_ki is severely corrupted. Moreover, if many adjacent edges of ij are corrupted, then the naive AAB statistic of this edge may not accurately measure its corruption level. The main issue is not the misleading effect of neighboring edges, but the fact that only such edges are considered and relevant information from other edges is not incorporated. To overcome this issue, the iteratively reweighted AAB (IR-AAB) statistic computes a weighted mean of AAB inconsistencies and iteratively updates these weights. This results in propagation of global information from other, non-neighboring edges to edge ij. Initially, the IR-AAB procedure computes the naive AAB statistic. The reweighting strategy of IR-AAB tries to reduce the weights of I_AAB(γ_ij; γ_jk, γ_ki) when either ki or kj is highly corrupted. In order to do this, at each iteration the AAB inconsistencies I_AAB(γ_ij; γ_jk, γ_ki) involving suspicious edges are penalized by the reweighting function exp(−τ(t)·x), where the number x is the maximal value of the reweighted AAB statistics computed in the previous iteration for the edges ik and kj. The parameter τ(t) increases iteratively and depends on the initial maximal and minimal values of the inconsistencies, denoted by M and m. The small value of τ(t) in the first iterations ensures that only the most unreliable AAB inconsistencies are ignored. As the data is iteratively purified, the AAB inconsistencies involving "good" edges are weighted more and more. We remark that increasing τ(t) corresponds to focusing more on "good" edges and ignoring more "suspicious" edges. The details of computing the IR-AAB statistic are described in Algorithm 2.
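The reweighting loop just described can be sketched compactly as follows. This is our own illustrative Python, not the authors' Matlab code; `gamma` maps each edge (i, j) with i < j to its unit direction, and `triangles` maps each edge to its (possibly sampled) third vertices:

```python
import numpy as np

def aab(g1, g2, g3):
    """Closed-form AAB inconsistency of g3 w.r.t. g1, g2 (formula (5))."""
    x, y, z = g1 @ g3, g2 @ g3, g1 @ g2
    if x < y * z and y < x * z:
        val = np.sqrt(max((x * x + y * y - 2 * x * y * z) / (1 - z * z), 0.0))
    else:
        val = -min(x, y)
    return np.arccos(np.clip(val, -1.0, 1.0))

def ir_aab(gamma, triangles, T=10):
    """IR-AAB statistics per edge (a sketch of the reweighting strategy)."""
    def d(a, b):      # oriented direction gamma_ab from the stored i < j keys
        return gamma[(a, b)] if a < b else -gamma[(b, a)]
    def key(a, b):
        return (a, b) if a < b else (b, a)
    # AAB inconsistencies I_AAB(gamma_ij; gamma_jk, gamma_ki) per edge ij
    inc = {ij: np.array([aab(d(ij[1], k), d(k, ij[0]), d(*ij)) for k in ks])
           for ij, ks in triangles.items()}
    S = {ij: v.mean() for ij, v in inc.items()}      # naive AAB statistic S^(0)
    M = max(v.max() for v in inc.values())
    m = min(v.min() for v in inc.values())
    L = (M - m) / T
    for _ in range(T):                               # reweighting iterations
        tau = np.pi / M
        M -= L
        S_new = {}
        for ij, ks in triangles.items():
            w = np.array([np.exp(-tau * max(S[key(ij[0], k)], S[key(k, ij[1])]))
                          for k in ks])
            S_new[ij] = (w / w.sum()) @ inc[ij]      # reweighted statistic S^(t)
        S = S_new
    return S
```

In a toy example with one corrupted edge, the reweighting drives the statistics of clean edges toward zero while the corrupted edge keeps the largest statistic.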
Algorithm 2 Computation of the IR-AAB statistic
Input: {γ_ij}_{ij∈E}: pairwise directions; s: number of samples; T: number of iterations
  Compute S_ij and S^(0)_AAB(ij) for all ij ∈ E by Algorithm 1
  M = max_{ij∈E, k∈S_ij} I_AAB(γ_ij; γ_jk, γ_ki)
  m = min_{ij∈E, k∈S_ij} I_AAB(γ_ij; γ_jk, γ_ki)
  L = (M − m)/T
  for t = 1 : T do
    τ(t) = π/M
    M = M − L
    for ij ∈ E and k ∈ S_ij do
      w̃^(t)_ij,k = exp( −τ(t)·max( S^(t−1)_AAB(ki), S^(t−1)_AAB(jk) ) )
    end for
    for ij ∈ E do
      w^(t)_ij,k = w̃^(t)_ij,k / Σ_{k∈S_ij} w̃^(t)_ij,k
      S^(t)_AAB(ij) = Σ_{k∈S_ij} w^(t)_ij,k · I_AAB(γ_ij; γ_jk, γ_ki)
    end for
  end for
Output: IR-AAB statistic: {S^(T)_AAB(ij)}_{ij∈E}

Note that IR-AAB alternately updates the weights using the AAB statistics and then updates the AAB statistics using the new weights. This way, better weights can reduce the effect of highly corrupted edges, so that the updated AAB statistics measure more accurately the corruption level of the edges. Similarly, better estimates of the corruption level by the AAB statistics provide more accurate weights, which emphasize the more relevant edges. In the special practical case of repetitive patterns (e.g., due to identical windows), this procedure can help in identifying corrupted edges that are self-consistent with each other. At last, we comment that the failure mode for any AAB procedure is when there are no outliers, so that the task of identifying corruptions is ill-posed. This can also happen when the noise magnitude is enormous and outliers are not distinguishable.

Numerical Considerations

As mentioned earlier, implementations of naive AAB and IR-AAB may avoid loops and instead use vectorization, due to the AAB formula. An efficient Matlab code will be provided in the future supplemental webpage. For naive AAB and IR-AAB we recommend using s = 50 as the default, and we applied this value in all of our experiments. For IR-AAB we recommend and implement the default value T = 10. We note that the computational complexity of naive AAB is O(s · |E|), where |E| is the number of edges.
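To illustrate both the vectorized use of the AAB formula and the O(s·|E|) cost, here is a sketch (ours, not the authors' Matlab code) of the naive AAB statistic with at most s sampled third vertices per edge; formula (5) is evaluated on whole arrays at once:

```python
import numpy as np

def naive_aab(gamma, triangles, s=50, seed=None):
    """Naive AAB statistic per edge, averaging over at most s sampled triangles.
    gamma: {(i, j): unit direction, i < j}; triangles: {(i, j): candidate vertices k}."""
    rng = np.random.default_rng(seed)

    def d(a, b):  # oriented direction gamma_ab
        return gamma[(a, b)] if a < b else -gamma[(b, a)]

    S = {}
    for (i, j), ks in triangles.items():
        ks = list(ks)
        if len(ks) > s:
            ks = rng.choice(ks, size=s, replace=False)   # sample s third vertices
        G1 = np.array([d(j, k) for k in ks])             # gamma_jk, stacked
        G2 = np.array([d(k, i) for k in ks])             # gamma_ki, stacked
        g3 = d(i, j)                                     # gamma_ij
        x, y = G1 @ g3, G2 @ g3
        z = np.einsum('nd,nd->n', G1, G2)
        a = (x < y * z) & (y < x * z)
        inside = np.sqrt(np.clip((x**2 + y**2 - 2*x*y*z) / (1 - z**2), 0.0, None))
        val = np.where(a, inside, -np.minimum(x, y))     # vectorized formula (5)
        S[(i, j)] = np.arccos(np.clip(val, -1.0, 1.0)).mean()
    return S
```

On exact (cycle-consistent) directions every statistic is numerically zero.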
In general, for dense graphs the complexity is O(s·n²), but for sparser graphs the complexity decreases; e.g., for sparse Erdős–Rényi graphs with p ≪ 1, the complexity is O_P(s·p·n²), since E[|E|] = n(n − 1)p/2. The computational complexity of IR-AAB is also O(s·|E|). While IR-AAB is iterated T = 10 times, its main computation is due to the initial application of naive AAB, which requires the computation of the AAB inconsistencies. On the other hand, the weight computations in the subsequent iterations are much cheaper. Therefore, in practice, the computational complexities of naive AAB and IR-AAB are truly comparable.

For synthetic data, we demonstrate in Section 5 that a threshold on the naive AAB and IR-AAB statistics can be chosen from their corresponding histograms. We also demonstrate performance with differently chosen thresholds via ROC curves. The histograms of real data are not so simple, and thus in this case we keep the half of the edges with the lowest values of the corresponding statistic. We have noticed that the fewer edges we keep, the higher the accuracy we obtain for location estimation. However, an extremely low threshold results in a limited number of camera locations. Demonstrations of other thresholds appear in Appendix A.3.

Theoretical Guarantees for Outliers Removal

We show that the naive AAB statistic can be used for near-perfect separation of corrupted and uncorrupted edges. Given pairwise directions generated on an edge set E by the uniform corruption model, we denote by E_g the uncorrupted edges, namely, all edges ij ∈ E such that γ_ij = γ*_ij. We denote the rest of the edges in E by E_b. The theorem below states that under the uniform corruption model with sufficiently small corruption probability and noise level, the naive AAB statistic is able to perfectly separate E_g as well as a large portion of E_b.

Theorem 4.1.
There exist absolute positive constants C₀, C such that for any ε ∈ [0, 1] and for pairwise directions randomly generated by the uniform corruption model UC(n, p, q, σ) with n = Ω(1/(pqε)), np²(1 − q)² ≥ C₀ log n and q + σ < Cε/√(log n), there exists a set E′ ⊆ E_b such that |E′| ≥ (1 − ε)|E_b| and, with probability 1 − O(n⁻⁵),

  min_{ij∈E′} E[S^(0)_AAB(ij)] > max_{ij∈E_g} E[S^(0)_AAB(ij)].   (7)

The theorem can be extended to other synthetic models. For instance, the assumption in the UC model that the locations are sampled from a Gaussian distribution can be generalized to any distribution that generates "c-well-distributed locations", which are explained in Section 4.1.1. One can show that a compactly supported distribution with continuous and positive density satisfies this criterion with an absolute constant c (unlike the Gaussian case), and consequently the theorem may have the weaker assumption q + σ < Cε. The uniform noise assumption in the UC model of this paper can be directly extended to any compactly supported distribution. For Gaussian noise, one needs to slightly modify the theorem so that the RHS of (7) is maximized over a sufficiently large subset of E_g (similarly to the LHS w.r.t. E_b).

Proof of Theorem 4.1

After reviewing preliminary results and notation in Section 4.1.1, Section 4.1.2 describes the main part of the proof. It starts by stating two essential bounds: an upper bound on the expectation of S^(0)_AAB(ij) when ij ∈ E_g, and a lower probabilistic bound on the expectation of S^(0)_AAB(ij) when ij ∈ E_b. The upper bound is stated in (9) and later proved in Section 4.1.3. The lower bound is stated in (10) and later proved in Section 4.1.4. While the upper bound is uniform over ij ∈ E_g, the lower bound depends on the corruption level of each edge ij ∈ E_b. However, there is an absolute bound which holds within a large subset of E_b.
We show that the uniform upper bound is lower than the absolute lower bound and thus conclude that, with high probability, the expected values of S^(0)_AAB(ij) for ij ∈ E′ exceed those for ij ∈ E_g, as stated in (7).

Preliminaries

We first summarize some properties of the AAB inconsistency:
(i) I_AAB(γ₃; γ₁, γ₂) ∈ [0, π] for all γ₁, γ₂, γ₃ ∈ S².
(ii) I_AAB(γ₃; γ₁, γ₂) = 0 iff γ₁, γ₂, γ₃ are cycle-consistent.
(iii) The AAB inconsistency is rotation-invariant; that is, for any rotation R: I_AAB(γ₃; γ₁, γ₂) = I_AAB(Rγ₃; Rγ₁, Rγ₂).

We denote by U(S²) the uniform distribution on S² and define Z := I_AAB(z; x, y), where x, y, z i.i.d. ~ U(S²). For x ∈ [0, π], let f(x) := E[I_AAB(v₂(x); v₁, v) | v ~ U(S²)], where v₁ = (−1, 0, 0)ᵀ and v₂(x) = (cos x, sin x, 0)ᵀ. The following property is proved in Appendix A.2.

Lemma 4.1. If x ∈ [0, π], then f(x) = (x + sin x)/2.

We will use the following definition and lemma of [7].

Definition 4.1 (Definition 2 of [7]). Let G = G(V, E) be a graph with vertices V = {t_i}ⁿ_{i=1} ⊆ R³. For x, y ∈ R³, c > 0 and A ⊆ V, we say that A is c-well-distributed with respect to (x, y) if the following holds for any h ∈ R³:

  (1/|A|) Σ_{t∈A} ‖P_{Span{t−x, t−y}⊥}(h)‖ ≥ c‖P_{(x−y)⊥}(h)‖.   (8)

We say that V is c-well-distributed along G if for all distinct 1 ≤ i, j ≤ n, the set S_ij = {t_k ∈ V : ik, jk ∈ E(G)} is c-well-distributed with respect to (t_i, t_j).

Lemma 4.2 (Lemma 18 of [7]). Assume that V = {t_i}ⁿ_{i=1} is i.i.d. generated by N(0, I₃) and the graph G(V, E) is generated by the Erdős–Rényi model G(n, p). There exist absolute positive constants C₀, C₁ such that if np² ≥ C₀ log n, then with probability 1 − n⁻⁵ the set V is C₁/√(log n)-well-distributed along G.

The Main Part of the Proof

Let e_ij := ∡(γ_ij, γ*_ij) denote the corruption level of edge ij ∈ E. We later prove in Sections 4.1.3 and 4.1.4 respectively the following two inequalities.
The first one holds for any fixed ij ∈ E:

  E[S^(0)_AAB(ij) | ij ∈ E_g] ≤ πσ(1−q)² + πq(1−q) + q²E[Z].   (9)

The second one holds with probability 1 − n⁻⁵ for all ij ∈ E:

  E[S^(0)_AAB(ij) | ij ∈ E_b] ≥ (1−q)²( (C′/√(log n))·min(e_ij, π − e_ij) − (π/2)σ ) + q²E[Z].   (10)

We conclude the proof by assuming these inequalities. Recall that there exists an absolute constant C such that

  q + σ < Cε/√(log n).   (11)

Multiplying both sides of (11) by 3π(1−q)²/2, noting that for n sufficiently large 1 − q ≥ 1 − Cε/√(log n) > 2/3 and thus 3q(1−q)²/2 > q(1−q), and setting C′ = 6C yield

  π( q(1−q) + (3/2)σ(1−q)² ) < (1−q)²·(C′/√(log n))·(πε/4).   (12)

Clearly, (12) can be rewritten as

  πσ(1−q)² + πq(1−q) < (1−q)²( (C′/√(log n))·(πε/4) − (π/2)σ ).   (13)

Combining (9), (10) and (13) yields that, for any ij ∈ E_b with min(e_ij, π − e_ij) > πε/4, the lower bound in (10) exceeds the upper bound in (9). Let E′ = {ij ∈ E_b : min(e_ij, π − e_ij) > πε/4}. Since the e_ij are i.i.d. ~ U[0, π], X_ij := I(ij ∉ E′) is a Bernoulli random variable with mean μ = ε/2. Applying the Chernoff bound [14] yields

  Pr( Σ_{ij∈E_b} X_ij > 2|E_b|μ ) < exp(−Ω(|E_b|μ)).   (15)

That is, with probability 1 − exp(−Ω(n²pqε)), |E′| > (1 − ε)|E_b|. Since n = Ω(1/(pqε)), this probability is sufficiently high. Thus, Theorem 4.1 is concluded if (9) and (10) are correct.

Proof of Inequality (9)

We investigate the distribution of I_AAB(γ_ij; γ_jk, γ_ki) for fixed ij ∈ E_g and k ∈ C_ij in the following 3 complementary cases:

Case 1. jk, ki ∈ E_g. In this case, γ_ij = γ*_ij + v_ij, γ_jk = γ*_jk + v_jk and γ_ki = γ*_ki + v_ki, where v_ij = (γ*_ij + σε_ij)/‖γ*_ij + σε_ij‖ − γ*_ij, ε_ij ~ U(S²), and v_jk and v_ki are defined in the same way. We note that if σ = 0, then the AAB inconsistency is 0 in the current case.
If σ > 0, then since ‖ε_ij‖ = 1 the AAB inconsistency is bounded as follows:

  X^g_ij(k) := I_AAB(γ_ij; γ_jk, γ_ki)
    = d_g( γ*_ij + v_ij, Ω(γ*_jk + v_jk, γ*_ki + v_ki) )
    ≤ d_g( γ*_ij + v_ij, γ*_ij ) + d_g( γ*_ij, Ω(γ*_jk + v_jk, γ*_ki + v_ki) )
    ≤ d_g( γ*_ij + v_ij, γ*_ij ) + d_g( γ*_ij, Ω(γ*_jk, γ*_ki) ) + max( d_g(γ*_jk, γ*_jk + v_jk), d_g(γ*_ki, γ*_ki + v_ki) )
    ≤ (π/2)σ + 0 + (π/2)σ = πσ.   (16)

Case 2. Either jk ∈ E_g or ki ∈ E_g, but not both in E_g. We assume WLOG that jk ∈ E_g and ki ∈ E_b. According to the uniform corruption model, γ_ki ~ U(S²), γ_jk = γ*_jk + v_jk and γ_ij = γ*_ij + v_ij. For any indices i, j, k, let θ_ijk denote the angle between γ_ij and γ_kj. By choosing an appropriate rotation matrix R,

  Y^g_ij(k) := I_AAB(γ_ij; γ_jk, γ_ki) = I_AAB(Rγ_ij; Rγ_jk, Rγ_ki) = I_AAB(v₂(θ_ijk); v₁, v),   (17)

where v₁ and v₂(θ_ijk) were defined in Section 4.1.1 and v ~ U(S²). Lemma 4.1 and the fact that f(x) ∈ [0, π/2] for x ∈ [0, π] imply the inequality

  E[Y^g_ij(k)] = E_{θ_ijk}[f(θ_ijk)] ≤ π/2.   (18)

Case 3. jk, ki ∈ E_b. Let Z^g_ij(k) := I_AAB(γ_ij; γ_jk, γ_ki); here γ_jk, γ_ki are i.i.d. ~ U(S²), so for an arbitrary rotation R and x, y ~ U(S²), Z^g_ij(k) has the same distribution as I_AAB(Rγ_ij; x, y). Since R is arbitrary, Z^g_ij(k) is independent of γ_ij, and for z ~ U(S²)

  E[Z^g_ij(k)] = E[I_AAB(z; x, y)] = E[Z].   (20)

At last, combining (16), (18) and (20) with probabilities (1−q)², 2q(1−q) and q² for each case respectively yields (9).

Proof of Inequality (10)

We investigate the distribution of I_AAB(γ_ij; γ_jk, γ_ki) for fixed ij ∈ E_b and k ∈ C_ij in the following 3 complementary cases:

Case 1. jk, ki ∈ E_g.
Observe that

  I_AAB(γ_ij; γ*_jk, γ*_ki) = min_{v∈Ω(γ*_jk, γ*_ki)} d_g(γ_ij, v)
    ≥ min_{v∈Span{γ*_jk, γ*_ki}, ‖v‖=1} d_g(γ_ij, v)
    ≥ min_{v∈Span{γ*_jk, γ*_ki}} ‖γ_ij − v‖
    = ‖P_{Span{t*_k − t*_i, t*_k − t*_j}⊥}(γ_ij)‖   (21)

and

  X^b_ij(k) := I_AAB(γ_ij; γ_jk, γ_ki)
    = d_g( γ_ij, Ω(γ*_jk + v_jk, γ*_ki + v_ki) )
    ≥ d_g( γ_ij, Ω(γ*_jk, γ*_ki) ) − max( d_g(γ*_jk, γ*_jk + v_jk), d_g(γ*_ki, γ*_ki + v_ki) )
    = I_AAB(γ_ij; γ*_jk, γ*_ki) − max( d_g(γ*_jk, γ*_jk + v_jk), d_g(γ*_ki, γ*_ki + v_ki) )
    ≥ ‖P_{Span{t*_k − t*_i, t*_k − t*_j}⊥}(γ_ij)‖ − (π/2)σ.   (22)

Denote C^g_ij := {k ∈ C_ij : ki ∈ E_g, jk ∈ E_g}, so that k ∈ C^g_ij. Note that the underlying corruption model implies that G(V, E_g) is an Erdős–Rényi graph G(n, p(1−q)). By combining the assumption np²(1−q)² > C₀ log n and Lemma 4.2, we obtain that with high probability the set of vertices V is C₁/√(log n)-well-distributed along G(V, E_g) for some absolute constant C₁. This fact and (22) imply that with probability 1 − n⁻⁵,

  (1/|C^g_ij|) Σ_{k∈C^g_ij} I_AAB(γ_ij; γ_jk, γ_ki)
    ≥ (1/|C^g_ij|) Σ_{k∈C^g_ij} ‖P_{Span{t*_k − t*_i, t*_k − t*_j}⊥}(γ_ij)‖ − (π/2)σ
    ≥ (C₁/√(log n))‖P_{(t*_i − t*_j)⊥}(γ_ij)‖ − (π/2)σ
    = (C₁/√(log n))‖P_{γ*_ij⊥}(γ_ij)‖ − (π/2)σ
    = (C₁/√(log n))·sin(e_ij) − (π/2)σ
    ≥ (2C₁/(π√(log n)))·min(e_ij, π − e_ij) − (π/2)σ.   (23)

Case 2. Either jk ∈ E_g or ki ∈ E_g, but not both in E_g. Let Y^b_ij(k) := I_AAB(γ_ij; γ_jk, γ_ki). The arguments used for the estimates of case 2 of Section 4.1.3 and the fact that f(x) ≥ 0 imply that E[Y^b_ij(k)] ≥ 0.

Case 3. jk, ki ∈ E_b. This case is exactly the same as case 3 of Section 4.1.3 and we thus use (20) for z ~ U(S²).

At last, combining the estimates of the 3 cases with respective probabilities (1−q)², 2q(1−q) and q² yields (10).

Experiments on Synthetic Data

We first illustrate the ability of the statistics obtained by naive AAB, IR-AAB and 1DSfM [23] to separate corrupted and uncorrupted edges for a special synthetic dataset.
The dataset was randomly generated by the uniform corruption model with n = 200, p = 0.5, q = 0.2 and σ = 0. Figure 3 first shows the three statistics' values on edges as a function of their corruption levels. These corruption levels are measured by the angles of the corresponding pairwise directions with the uncorrupted pairwise directions. We first note that 1DSfM may assign zero values to corrupted edges, unlike naive AAB and IR-AAB, and has the largest variance per corruption level. We also note that IR-AAB assigns negligible values to uncorrupted points, unlike naive AAB and 1DSfM, and has the lowest variance at low corruption levels. The figure also shows the histograms of the statistics for both corrupted and uncorrupted points. Since the 1DSfM statistic (which is referred to in [23] as inconsistency) obtains zero values for both corrupted and uncorrupted edges, it is hard to separate its whole histogram into two modes. On the other hand, the histograms of naive AAB and IR-AAB can be nicely separated into two modes for this and other synthetic examples. For IR-AAB, but not naive AAB, this separation exactly recovers the uncorrupted edges in this particular example.

Next, we use ROC curves to diagnose the ability of naive AAB, IR-AAB and 1DSfM to detect corrupted edges in similar synthetic data with varying percentages of corrupted edges and noise levels. The datasets were randomly generated by the uniform corruption model with n = 200, p = 0.5, q = 0.2, 0.4, 0.6 and σ = 0, 0.05, 0.1 and 0.2. For each statistic and choice of parameters, we assign 1000 equidistant thresholds between the largest and smallest values of this statistic, compute the true and false positive rates for flagging as corrupted the edges whose values of this statistic lie above each threshold, and plot the corresponding ROC curve. We remark that the edges ij ∈ E are regarded as corrupted when ∡(γ_ij, γ*_ij) > sin⁻¹(σ).
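The thresholded ROC computation just described can be sketched as follows (our own illustration, consistent with the sweep of 1000 equidistant thresholds; positives are corrupted edges, flagged when the statistic reaches a threshold):

```python
import numpy as np

def roc_curve(stat, is_corrupted, n_thresholds=1000):
    """TPR/FPR for flagging edges whose statistic reaches each threshold.
    Thresholds sweep from the largest to the smallest statistic value."""
    stat = np.asarray(stat, dtype=float)
    pos = np.asarray(is_corrupted, dtype=bool)
    thr = np.linspace(stat.max(), stat.min(), n_thresholds)
    flagged = stat[None, :] >= thr[:, None]          # one row per threshold
    tpr = (flagged & pos).sum(axis=1) / pos.sum()
    fpr = (flagged & ~pos).sum(axis=1) / (~pos).sum()
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve (trapezoidal rule; fpr is non-decreasing here)."""
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    return float(0.5 * ((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1])).sum())
```

A statistic that perfectly separates corrupted from uncorrupted edges yields an area under the curve of 1.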
The resulting ROC curves are shown in Figure 4, where a larger area under the ROC curve corresponds to better classification performance. We note that classification based on IR-AAB consistently outperforms that of naive AAB and 1DSfM. Moreover, IR-AAB has a clear advantage over naive AAB and 1DSfM at low and moderate noise levels (σ = 0, 0.05, 0.1) among all levels of tested corruption. However, IR-AAB requires a certain portion of the pairwise directions to be accurately estimated, and it thus does not significantly improve over the other two methods at high noise levels (σ = 0.2). Naive AAB works well when the corruption and noise levels are relatively low. However, due to the misleading effect of corrupted neighboring edges, it may misclassify uncorrupted edges when the overall corruption or noise level is high (q = 0.6 or σ = 0.2). The performance of 1DSfM is not competitive. Indeed, it may frequently misclassify edges even at low corruption and noise levels, since it may converge to local extrema and since the 1D projection loses information.

Experiments on Real Data

We consider real datasets and compare the improvement obtained by preprocessing current camera location solvers with naive AAB, IR-AAB and 1DSfM. We use the 14 datasets from [23]. For each dataset, we exactly follow the pipeline suggested by [16] for estimating camera orientations and pairwise directions. Given the estimated pairwise directions from [16], naive AAB, IR-AAB and 1DSfM are applied separately to delete the 50% of the edges with the highest corresponding statistics. The different choices of 10% and 90% deleted edges are demonstrated in Appendix A.3. Since the graph may not be parallel rigid after deleting edges, we extract its maximal parallel rigid component using a procedure suggested in [10].
We then apply to this component the following three different camera location solvers: LUD [16] with an IRLS implementation, CLS [21,22] with an interior point method and ShapeFit [7] with an ADMM implementation [4]. We remark that although 50% of the edges are removed, the number of locations in the maximal parallel rigid graph is still close to that of the original graph. For a faster implementation of LUD, only a subset of the Piccadilly dataset with 500 locations is used. For each dataset, each of the 3 statistics, and each of the 3 camera location solvers, we compute the average and median distances (in meters) of the estimated camera locations to the ground truth locations; the latter are provided by [23]. The experimental results are recorded in Table 1. These results show significant improvement by IR-AAB for all three camera location solvers. In particular, IR-AAB works best with LUD and CLS. For example, IR-AAB with LUD outperforms naive AAB and 1DSfM with LUD on 10 out of the 14 datasets in terms of both mean and median errors. For 2 additional datasets, IR-AAB with LUD still improves over LUD. For the two remaining datasets, Ellis Island and Gendarmenmarkt, which contain highly inaccurate pairwise directions, none of the three statistics significantly improves any of the solvers. We also note that while LUD is superior to CLS, after applying IR-AAB both algorithms are comparable. Furthermore, CLS with IR-AAB outperforms plain LUD. We observe that 1DSfM outperforms IR-AAB on a few datasets when using ShapeFit. However, 1DSfM with ShapeFit is worse than plain ShapeFit on 6 other datasets. The inconsistent results of ShapeFit are due to its instability. Indeed, its formulation has a very weak constraint that cannot avoid collapsed solutions in the presence of highly corrupted pairwise directions, in particular when some locations have low degrees. On the other hand, both LUD and CLS have a very strong constraint, which avoids collapsed solutions.
Note that the preprocessing step results in a large component of the original graph with a possibly different topology than the original graph, and thus ShapeFit may be more sensitive to the resulting subgraph, especially if it has some vertices with low degrees not present in the original graph. Due to this sensitivity, none of the preprocessing methods consistently outperforms the others when using ShapeFit. We remark that the instability of ShapeFit can be observed from the large variation of its estimation error across different outlier-removing methods and different datasets.

Figure 5 illustrates the improvement of the three preprocessing algorithms (naive AAB, IR-AAB and 1DSfM) over the three solvers (LUD, CLS and ShapeFit). The improvement is measured by the following formula:

  Improvement = (e_before − e_after)/e_before · 100%,   (24)

where e_before is the mean/median error of the estimated camera locations on the whole graph by the given solver without preprocessing, and e_after is the mean/median error of the same solver after removing 50% of the edges by the given preprocessing algorithm. The two datasets with highly inaccurate pairwise directions, Ellis Island and Gendarmenmarkt, were removed. The first three subfigures indicate the mean-error results for each solver separately, and the last subfigure demonstrates the averaged mean and median errors among the 12 remaining datasets. It is evident that IR-AAB has the best overall performance in improving the three solvers. On the other hand, 1DSfM has the worst performance. For example, IR-AAB succeeds in improving LUD's performance with an average mean-error improvement rate of 38%, and it consistently reduces the estimation error of LUD on all of these datasets. On the other hand, 1DSfM has an average mean-error improvement rate for LUD of 5.7%, whereas on five datasets preprocessing by 1DSfM increases the estimation error of LUD.
For comparison, naive AAB has an average mean-error improvement rate for LUD of 26.5%, whereas on two datasets preprocessing by naive AAB increases the estimation error of LUD. For all of the 3 statistics, the overall improvement of preprocessing with CLS is more significant than preprocessing with LUD and ShapeFit. Indeed, the averaged mean and median improvement rates of CLS when preprocessing by IR-AAB are more than 50%. This is not surprising, as CLS is not robust to corruption. For ShapeFit, the average improvements over the mean errors when preprocessing with naive AAB, IR-AAB and 1DSfM are 42.4%, 50.4% and 2.9% respectively. However, when considering each individual dataset, IR-AAB and naive AAB may not consistently outperform 1DSfM due to the instability of ShapeFit discussed above.

Table 1. Comparison of naive AAB, IR-AAB and 1DSfM for improving 3 location solvers (LUD [16], CLS [21,22], ShapeFit) using 14 datasets from [23]. The median and mean distances from the estimated camera locations to the ground truth (provided in [23]) are denoted by ẽ and ê respectively. A negative improvement rate corresponds to an increase of the estimation error after removing edges.

Figure 5 (last subfigure). The averaged mean and median improvement of the 3 statistics over the first 12 datasets when preprocessing the three solvers by the three statistics.

We report the computational speed of the algorithms on the largest dataset, Roman Forum, which has 967 locations. While Piccadilly has 2226 locations, it was run with 500 locations to ease the computational time for LUD and for extracting the maximal parallel rigid component. The computations were performed on a machine with 2.5GHz Intel i5 quad-core processors and 8GB memory. The total time needed to compute 1DSfM, naive AAB and IR-AAB is 2, 5 and 8 seconds respectively. For comparison, the total time to run CLS, ShapeFit and LUD is 8, 8 and 160 seconds respectively.
We expect the runtime of ADMM for LUD to be comparable to that of ShapeFit. The slowest component was finding the maximal parallel rigid component: for Roman Forum it took 550 seconds, while it took less than 200 seconds for each of the other datasets.

Conclusion

We proposed the AAB statistic for estimating the underlying corruption level of camera pairwise directions, and we improved this estimate by incorporating a careful reweighting strategy. We further established a theoretical guarantee on the accuracy of the non-reweighted statistic, i.e., naive AAB, for detecting corrupted edges when the corruption and noise levels are sufficiently low. The experiments on both synthetic and real data show the significant advantage of applying the reweighting strategy with the AAB statistic. Applying our method as a preprocessing step significantly improves the performance of current camera location solvers.

This work suggests several interesting future projects. First of all, we believe that a similar strategy can be developed for improving camera orientation estimation. Second of all, we are interested in theoretically guaranteeing the reweighting strategy for segmenting corrupted and uncorrupted edges. Third of all, an interesting direction for future work is to study and provide guarantees for synthetic models that more realistically mirror real scenarios. At last, we find it important to develop a faster method for extracting the maximal parallel rigid graph so that the total runtime can be significantly reduced.

A. Appendix

A.1. Proof of the AAB Formula (5)

As shown in Figure 1, Ω(γ1, γ2) is exactly the shortest path on the manifold S² between −γ1 and −γ2.
Since I_AAB(γ3; γ1, γ2) is the length of the shortest path between γ3 and Ω(γ1, γ2), it can be computed via the following procedure. Let γp be the orthogonal projection of γ3 onto Span{γ1, γ2}; then

I_AAB(γ3; γ1, γ2) = ∡(γp, γ3), if γp/‖γp‖ ∈ Ω(γ1, γ2);
I_AAB(γ3; γ1, γ2) = min(∡(γ1, γ3), ∡(γ2, γ3)), otherwise.   (25)

By the definition of γp it can be expressed as λ1γ1 + λ2γ2, where (γ3 − λ1γ1 − λ2γ2) ⊥ Span{γ1, γ2}. That is,

⟨γ3 − λ1γ1 − λ2γ2, γ1⟩ = ⟨γ3 − λ1γ1 − λ2γ2, γ2⟩ = 0.

Thus, we obtain the following system of equations for λ1 and λ2:

λ1 + zλ2 = x,   (26)
zλ1 + λ2 = y,   (27)

where we recall that x = γ1ᵀγ3, y = γ2ᵀγ3 and z = γ1ᵀγ2. The solution of (26)-(27) is given by λ1 = (x − yz)/(1 − z²), λ2 = (y − xz)/(1 − z²). Note that γp/‖γp‖ ∈ Ω(γ1, γ2) if and only if λ1 < 0 and λ2 < 0, that is, when y < xz and x < yz. In this case, using γpᵀγ3 = γpᵀγp = ‖γp‖²,

I_AAB(γ3; γ1, γ2) = cos⁻¹(γpᵀγ3/‖γp‖) = cos⁻¹(‖γp‖) = cos⁻¹ √((x² + y² − 2xyz)/(1 − z²)).

Otherwise,

I_AAB(γ3; γ1, γ2) = min(∡(γ1, γ3), ∡(γ2, γ3)) = cos⁻¹(max(γ1ᵀγ3, γ2ᵀγ3)) = cos⁻¹(max(x, y)).

Tables 2 and 3 are analogous to Table 1 of Section 6; however, while in Table 1 50% of the edges were removed, in the new tables 10% and 90% of the edges are removed, respectively.

Table 3. Comparison of naive AAB, IR-AAB and 1DSfM for improving 3 location solvers (LUD, CLS, ShapeFit) using 14 datasets from [23]. Using any of the three statistics, 90% of the edges are removed. The median and mean distances from the estimated camera locations to the ground truth (provided in [23]) are denoted by ẽ and ê respectively. Even after removing 90% of the edges, in most of the cases the maximal parallel rigid subgraph still contains > 50% of the camera locations.
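The closed form above depends only on the three inner products x, y, z, so it can be evaluated directly without any explicit projection; a sketch (our own, assuming unit-norm inputs with γ1 ≠ ±γ2):

```python
import math

def i_aab(g3, g1, g2):
    """Evaluate I_AAB(γ3; γ1, γ2) from the inner products
    x = γ1·γ3, y = γ2·γ3, z = γ1·γ2 (all vectors unit norm, γ1 != ±γ2)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x, y, z = dot(g1, g3), dot(g2, g3), dot(g1, g2)
    if x < y * z and y < x * z:
        # λ1 < 0 and λ2 < 0: spherical distance to the projected point.
        return math.acos(math.sqrt((x * x + y * y - 2 * x * y * z) / (1 - z * z)))
    # Otherwise, per (25): the smaller of the angles to γ1 and γ2.
    return math.acos(max(x, y))
```

For instance, with γ1 = e1, γ2 = e2 and γ3 = (−1, −1, √2)/2, the projection falls on Ω(γ1, γ2) and the returned distance is π/4.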
"NA" means that the resulting maximal parallel rigid component had only 16 or less locations, whereas in the rest of cases the maximal parallel rigid component had at least 100 locations. A.3. Additional Real Data Experiments Figure 2 . 2Figure 2 illustrates the reweighting functions with M = 1, m = 0 and 10 iterations. The use of slowly-Demonstration of the reweighting function exp(−τ (t) x) used in IR-AAB. Here, t ∈ [10] and the rate of decrease is τ (t) = π/(1.1 − 0.1t), which increases with t. The labels on the x-axis are of the points xt = 1.1−0.1t, 1 ≤ t ≤ 10. At each iteration t, exp(−τ (t) x) < e −π ≈ 0.04 for x > xt. The red line separates for each curve the values in [0,xt] and [xt,1]. Therefore, exp(−τ (t) x) gives little weight to points in [xt,1]. ( 0 ) 0AAB (ij) when ij ∈ E g are separated from the expected values of S (0) AAB (ij) when ij is in a large subset of E b . Z g ij (k) := I AAB (γ ij ;γ jk ,γ ki ) d = I AAB (Rγ ij ;Rγ jk ,Rγ ki ) d = I AAB (Rγ ij ;x,y). (19) Z g ij (k) := I AAB (γ ij ;γ jk ,γ ki ) d = I AAB (z;x,y) = Z.(20) Figure 3 . 3Demonstration of corruption identification for a synthetic dataset by naive AAB, IR-AAB and 1DSfM. The dataset was generated by the uniform corruption model UC(200, 0.5, 0.2, 0). The 3 columns of subfigures correspond to naive AAB, IR-AAB and 1DSfM respectively. The subfigures in the first row show the correlation of the computed statistics (on y-axis) with the corruption level (on x-axis). Edges with no corruption are blue and the rest are red. The subfigures in the second row are the histograms of computed statistics for both corrupted and uncorrupted edges. Figure 4 . 4ROC curves for corruption detection of naive AAB, IR-AAB and 1DSfM with varying corruption and noise levels. Figure 5 . 5Percentage of improvement of location estimation by preprocessing 3 solvers with the 3 statistics. The first 3 subfigures illustrate the mean error improvement for LUD, CLS and ShapeFit respectively. 
Numbers 1-12 on the horizontal axis are the indices of the first 12 datasets.

A.2. Proof of Lemma 4.1

Proof. Let l(x1, x2) denote the shortest path on S² connecting the points x1 and x2. Let u1 = −v1 and u = −v. Note that x = ∡(v2(x), u1) by the definition of v2(x) and u1.

[Table rows omitted: per-dataset median (ẽ) and mean (ê) location errors for LUD, CLS and ShapeFit, with no preprocessing and with preprocessing by naive AAB, IR-AAB and 1DSfM.]
[Table rows omitted: remaining per-dataset results, including Yorkminster and Ellis Island.]

For each solver, the unknown scale and shift are estimated by least-squares minimization with respect to the ground truth data.

Acknowledgement

This work was supported by NSF award DMS-14-18386. We thank Soumyadip Sengupta, Thomas Goldstein, Tal Amir and Paul Hand for providing codes and real data. We also thank the anonymous reviewers and Tyler Maunu for helpful comments on an earlier version of this manuscript.

References

[1] M. Arie-Nachimson, S. Z. Kovalsky, I. Kemelmacher-Shlizerman, A. Singer, and R. Basri. Global motion estimation from point matches. In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, October 13-15, 2012, pages 81-88, 2012.
[2] M. Brand, M. E. Antone, and S. J. Teller. Spectral solution of large-scale extrinsic camera calibration as a graph embedding problem. In Computer Vision - ECCV 2004, 8th European Conference on Computer Vision, Prague, Czech Republic, May 11-14, 2004, Proceedings, Part II, pages 262-273, 2004.
[3] A. Chatterjee and V. M. Govindu. Efficient and robust large-scale rotation averaging. In IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pages 521-528, 2013.
[4] T. Goldstein, P. Hand, C. Lee, V. Voroninski, and S. Soatto. Shapefit and shapekick for robust, scalable structure from motion. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VII, pages 289-304, 2016.
[5] V. M. Govindu. Combining two-view constraints for motion estimation. In 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 8-14 December 2001, Kauai, HI, USA, pages 218-225, 2001.
[6] V. M. Govindu. Lie-algebraic averaging for globally consistent motion estimation. In 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 27 June - 2 July 2004, Washington, DC, USA, pages 684-691, 2004.
[7] P. Hand, C. Lee, and V. Voroninski. Shapefit: Exact location recovery from corrupted pairwise directions. Communications on Pure and Applied Mathematics, 71(1):3-50, 2018.
[8] A. Harltey and A. Zisserman. Multiple view geometry in computer vision (2. ed.). Cambridge University Press, 2006.
[9] R. I. Hartley, K. Aftab, and J. Trumpf. L1 rotation averaging using the Weiszfeld algorithm. In The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011, pages 3041-3048, 2011.
[10] R. Kennedy, K. Daniilidis, O. Naroditsky, and C. J. Taylor. Identifying maximal rigid components in bearing-based localization. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012, pages 194-201, 2012.
[11] G. Lerman, Y. Shi, and T. Zhang. Exact camera location recovery by least unsquared deviations. CoRR, abs/1709.09683, 2017.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[13] D. Martinec and T. Pajdla. Robust rotation and translation estimation in multiview reconstruction. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA, 2007.
[14] M. Mitzenmacher and E. Upfal. Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge University Press, 2005.
[15] P. Moulon, P. Monasse, and R. Marlet. Global fusion of relative motions for robust, accurate and scalable structure from motion. In IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pages 3248-3255, 2013.
[16] O. Özyesil and A. Singer. Robust camera location estimation by convex programming. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 2674-2683, 2015.
[17] O. Özyesil, A. Singer, and R. Basri. Stable camera motion estimation using convex programming. SIAM Journal on Imaging Sciences, 8(2):1220-1262, 2015.
[18] O. Özyesil, V. Voroninski, R. Basri, and A. Singer. A survey of structure from motion. Acta Numerica, 26:305-364, 2017.
[19] S. Sengupta, T. Amir, M. Galun, T. Goldstein, D. W. Jacobs, A. Singer, and R. Basri. A new rank constraint on multi-view fundamental matrices, and its application to camera location recovery. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, Hawaii, USA, June 22-25, 2017, pages 4798-4806, 2017.
[20] B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon. Bundle adjustment - a modern synthesis. In Vision Algorithms: Theory and Practice, International Workshop on Vision Algorithms, held during ICCV '99, Corfu, Greece, September 21-22, 1999, Proceedings, pages 298-372, 1999.
[21] R. Tron and R. Vidal. Distributed image-based 3-d localization of camera sensor networks. In Proceedings of the 48th IEEE Conference on Decision and Control, CDC 2009, December 16-18, 2009, Shanghai, China, pages 901-908, 2009.
[22] R. Tron and R. Vidal. Distributed 3-d localization of camera sensor networks from 2-d image measurements. IEEE Trans. Automat. Contr., 59(12):3325-3340, 2014.
[23] K. Wilson and N. Snavely. Robust global translations with 1DSfM. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III, pages 61-75, 2014.
[24] C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, pages 1426-1433, 2010.

Table 2. Comparison of naive AAB, IR-AAB and 1DSfM for improving 3 location solvers (LUD, CLS, ShapeFit) using 14 datasets from [23]. Using any of the three statistics, 10% of the edges are removed. The median and mean distances from the estimated camera locations to the ground truth (provided in [23]) are denoted by ẽ and ê respectively.

[Table rows omitted: per-dataset results for Tables 2 and 3.]
[]
[ "LOGSPACE AND COMPRESSED-WORD COMPUTATIONS IN NILPOTENT GROUPS", "LOGSPACE AND COMPRESSED-WORD COMPUTATIONS IN NILPOTENT GROUPS" ]
[ "Jeremy Macdonald \nAND SVETLA VASSILEVA\n\n", "Alexei Myasnikov \nAND SVETLA VASSILEVA\n\n", "Andrey Nikolaev \nAND SVETLA VASSILEVA\n\n" ]
[ "AND SVETLA VASSILEVA\n", "AND SVETLA VASSILEVA\n", "AND SVETLA VASSILEVA\n" ]
[]
For finitely generated nilpotent groups, we employ Mal'cev coordinates to solve several classical algorithmic problems efficiently. Computation of normal forms, the membership problem, the conjugacy problem, and computation of presentations for subgroups are solved using only logarithmic space and quasilinear time. Logarithmic space presentation-uniform versions of these algorithms are provided. Compressed-word versions of the same problems, in which each input word is provided as a straight-line program, are solved in polynomial time.
10.1090/tran/8623
[ "https://arxiv.org/pdf/1503.03888v3.pdf" ]
721,447
1503.03888
d90759f79c73ad541c63115b9d6284e560a0049e
LOGSPACE AND COMPRESSED-WORD COMPUTATIONS IN NILPOTENT GROUPS

Jeremy Macdonald, Alexei Myasnikov, Andrey Nikolaev, and Svetla Vassileva

19 Dec 2021

Some of the earlier results in this area were obtained before complexity issues became a concern in algebra, while others were mostly focused on practical aspects of computing. Lately, ideas from space complexity are exerting a strong influence on computations in algebra, and in particular there is active research interest in logarithmic-space computations. Another influence of computer science on modern algebraic computing concerns data representation in compressed form and its use in developing algorithms with lower computational complexity. Polynomial-time, logarithmic-space, and compressed-word computations are now main players in modern algorithmic group theory; we elaborate their roles in §1.2-§1.4 below.

In this paper we study the computational complexity of several fundamental algorithmic problems in finitely generated nilpotent groups. These problems include computing normal forms (Mal'cev coordinates) of group elements (hence deciding the word problem), deciding the conjugacy and membership problems, computing kernels of homomorphisms, and finding presentations for finitely generated subgroups.
We prove that in a fixed group all these problems are computable in space logarithmic in the size of the input and, simultaneously, in quasilinear time. For the decision problems above (i.e., the word, conjugacy, and membership problems), our algorithms also solve the 'search' version of the problem, meaning that if the algorithm answers "Yes" it also provides a witness to its answer, in logarithmic space and quasilinear time. For the word problem, this means writing a trivial element as a product of conjugates of relators; for the conjugacy problem, finding a conjugator; for the membership problem, expressing the given element as a product of subgroup generators. Note that, generally speaking, the search version of the word problem can be arbitrarily more difficult than the decision problem; see, for example, [24, Section 4].

We also consider compressed-word versions of these problems, in which each input word is provided as a straight-line program (compressed word) producing a word over the generating set. We solve all compressed versions in time quartic in the input size. All our algorithms can be executed uniformly, meaning the nilpotent group may be given by an arbitrary presentation as part of the input, but in general this invalidates the complexity assessment, as the nilpotency class and number of generators play a role in the complexity bound. However, if both the number of generators and the nilpotency class are bounded, the algorithms run in logarithmic space and polynomial time, with the degree of the polynomial depending on the bound.

1.1. Known approaches and summary of new results.

An early example of the study of algorithmic problems in nilpotent groups is the work of Mostowski [28], who provided solutions to several algorithmic problems (the word, membership, and finiteness problems) and further expressed hope that the algorithms are practical for carrying out on a digital computer.
As another example, while the word problem in nilpotent groups has long been known to be decidable, it was only established comparatively recently [13] that it is in fact decidable in real (therefore, linear) time.

In 1958, Mal'cev [26] investigated finite approximations of finitely generated nilpotent groups, which allowed him to prove that they have decidable membership problem. In 1965, Blackburn [2], using the same method, showed decidability of the conjugacy problem for the same class of groups. Such separability arguments, while sufficient to show that the corresponding decision problems are solvable, offer, by themselves, no reasonable estimates on the time or space complexity of the algorithms involved. Only very recently were certain bounds for the so-called full residual finiteness growth of nilpotent groups established [3].

We would like to point out two more established approaches to algorithmic questions in nilpotent groups. One is based on the fact that every torsion-free nilpotent group embeds into a linear group UT_d(Z) (see for example [10] or [15]). This easily solves the word problem, and was used by Grunewald and Segal in 1980 [9] to solve the last (at the time) standing major algorithmic problem in nilpotent groups, the isomorphism problem. It is worth mentioning that such an embedding shows that the word problem for torsion-free finitely generated nilpotent groups is indeed in LSPACE [19], but does not seem to provide any concrete time or space complexity estimates in the case of the conjugacy or membership problems.

Another fruitful approach is due to the fact that finitely generated nilpotent groups admit a so-called Mal'cev (Hall-Mal'cev) basis (see, for example, [10] and [15]), which allows one to carry out group operations by evaluating polynomials (see Lemma 2.2). This approach was systematically used in [16], which gave solutions to a number of algorithmic problems in a class of groups that includes finitely generated nilpotent groups.
It was also used in the above-mentioned work of Mostowski [28]. Mal'cev basis techniques can be viewed as part of a more general picture. Indeed, every finitely generated nilpotent group G is polycyclic. To a polycyclic series one may associate a polycyclic presentation, which in the case of nilpotent groups is closely connected to establishing a Mal'cev basis. Many algorithmic problems may be solved using such a presentation. This approach is described in detail in [35] and further studied in [18, 31], and in particular may be used to solve the membership and conjugacy problems in polycyclic groups. To our knowledge, no robust complexity estimates for such methods have been established in the case of nilpotent groups, with the one exception of the recent papers [29, 30], which find polynomial bounds for the equalizer and membership problems.

We follow the Mal'cev basis approach. Let G be a fixed finitely generated nilpotent group. We describe algorithms to solve each of the following problems.

(I) Given g ∈ G, compute the (Mal'cev) normal form of g.
(II) Given g, h1, ..., hn ∈ G, decide whether g ∈ ⟨h1, ..., hn⟩ and if so express g as a product of subgroup generators from a standardized set.
(III) Fix another finitely generated nilpotent group, H. Given K = ⟨g1, ..., gn⟩, a homomorphism φ : K → H specified by φ(gi) = hi, and h ∈ Im(φ), compute a generating set for ker(φ) and find g ∈ G such that φ(g) = h.
(IV) Given K = ⟨g1, ..., gn⟩ ≤ G, compute a presentation for K.
(V) Given g ∈ G, compute a generating set for the centralizer of g.
(VI) Given g, h ∈ G, decide whether or not there exists u ∈ G such that u⁻¹gu = h and if so find such an element u.

In every case, the algorithm runs in space O(log L) and, simultaneously, in time O(L log³ L), where L is the size of the input of the given problem. Problems (I), (V), and (VI) in fact run in time O(L log² L).
To every subgroup one may associate a standardized or full-form generating set, and it is this set that is used in problem (II) to express g as a product of subgroup generators; we may in addition express g in terms of the given generating set, but the algorithm then runs in polynomial time (and not logspace).

A Mal'cev basis consisting of m elements establishes a coordinatization of G, whereby each element is identified with an m-tuple of integers. The coordinates of a product gh are given by polynomial functions of the coordinates of g and h, and the coordinates of a power g^l are given by polynomial functions of l and the coordinates of g. Our algorithms work directly with the coordinate tuples and these multiplication/exponentiation polynomials. The key to obtaining logarithmic space bounds, and polynomial time bounds for compressed-word problems, is the fact that the coordinates of an n-fold product g1 g2 ... gn are bounded in magnitude by a polynomial function whose degree depends only on the nilpotency class c of G (a constant) and not, as might be inferred by composing the polynomials n times, on the length of the product (Theorem 2.3).

The class LSPACE of problems decidable in logarithmic space is defined via machines called logspace transducers, and we recall the relevant definitions in §1.2. Logarithmic space computations in groups have been studied primarily in relation to the word and normal form problems. In free groups, the word problem was solved in logarithmic space by [19]. Normal forms were computed in logarithmic space for graph groups and Coxeter groups in [5], and the class of groups with logspace-computable normal forms was shown to be closed under several important constructions in [6]. The conjugacy problem was also solved in logspace for free solvable groups, the Grigorchuk group, and certain wreath products in [36].

We also consider compressed-word versions of problems (I)-(VI).
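As a concrete illustration of such a coordinatization (our own standard example, not taken from the paper), consider the discrete Heisenberg group, nilpotent of class 2, realized as the upper unitriangular integer matrices [[1, x, z], [0, 1, y], [0, 0, 1]] with Mal'cev-style coordinates (x, y, z). Both the multiplication and the exponentiation polynomials are quadratic, and the exponentiation polynomial is valid for every integer exponent:

```python
def heis_mult(g, h):
    """Multiply two Heisenberg group elements given by the coordinates
    (x, y, z) of the matrix [[1, x, z], [0, 1, y], [0, 0, 1]]."""
    x1, y1, z1 = g
    x2, y2, z2 = h
    return (x1 + x2, y1 + y2, z1 + z2 + x1 * y2)

def heis_pow(g, n):
    """g^n via the exponentiation polynomial: each coordinate of g^n is a
    polynomial in n and the coordinates of g (valid for all integers n)."""
    x, y, z = g
    return (n * x, n * y, n * z + (n * (n - 1) // 2) * x * y)
```

For instance, heis_mult((1, 0, 0), (0, 1, 0)) = (1, 1, 1), reflecting the nontrivial commutator, and heis_pow with n = −1 returns the inverse.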
In this formulation, every input word is given in the form of a straight-line program (see §1.3) and the input size L is the sum of the sizes of all input programs. A program of size L may encode a word of length up to 2^(L−1), and so efficient (i.e., polynomial-time) algorithms must work directly on the straight-line program without producing the encoded word itself. We solve all of the problems (I)-(VI), with compressed-word inputs, in time O(L⁴), with (I), (V), and (VI) being solved in time O(L³). The approach is to first solve (I) to compute the Mal'cev coordinates of each input element, write each coordinate as a binary number, and then apply the previous algorithms to the coordinate tuples. This also shows that a second 'compressed' formulation, in which every input word is given by its (binary) Mal'cev coordinates, can be solved in the same time complexity for each problem.

The compressed version of the word problem is known to be polynomial-time decidable in several classes of groups, including free groups [21], partially commutative groups [20], limit groups [23], and nilpotent groups [11]; further, polynomial-time decidability is preserved under many important group-theoretic constructions, such as graph products (see [22] for a summary). One motivation for obtaining a polynomial-time solution to the compressed word problem in a group G is that such a solution gives rise to a polynomial-time solution to the (non-compressed) word problem in any finitely generated subgroup of Aut(G) and in semi-direct products involving G [34]. Less is known about the compressed conjugacy problem. It is polynomial-time decidable in free groups [34] and more generally in partially commutative groups [11], these results being part of a polynomial-time solution to the word problem in the outer automorphism group of these groups. Further, the compressed conjugacy problem was recently shown to be polynomial-time decidable in hyperbolic groups [12].
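To illustrate working directly on a straight-line program: in a free abelian group the compressed word problem amounts to computing exponent sums, which can be read off an SLP in time linear in its size by memoizing one vector per nonterminal, even though the produced word may have length 2^(L−1). The encoding below is our own toy convention, not the one used in the paper:

```python
def exponent_sums(rules, root, gens):
    """Exponent-sum vector (the image in Z^k under abelianization) of the
    word produced by a straight-line program.  Terminals are (generator,
    ±1) pairs; a rule maps a nonterminal to a list of symbols."""
    memo = {}

    def ev(sym):
        if isinstance(sym, tuple):            # terminal letter g^(±1)
            vec = [0] * len(gens)
            vec[gens.index(sym[0])] = sym[1]
            return vec
        if sym not in memo:                   # each nonterminal evaluated once
            memo[sym] = [sum(col) for col in
                         zip(*(ev(t) for t in rules[sym]))]
        return memo[sym]

    return ev(root)
```

For the word (ab)^(2^10), given by a doubling program with 11 rules, this returns [1024, 1024] without ever producing the 2048-letter word itself.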
For the compressed membership problem, we are not aware of any previous results for interesting classes of groups. Even in free groups no polynomialtime algorithm is known, though recent results of Jez [14] on DFAs with compressed labels are closely related. Compressed membership in abelian groups is easily solved in polynomial time by converting the straight-line programs to O(n)-bit integer vectors and applying linear algebra techniques, and our proof for nilpotent groups uses a similar approach. 1.2. Logspace. We define logarithmic space computation via a machine called a logspace transducer. Briefly, this is a deterministic Turing machine with three tapes: a read-only input tape, a read-write work tape with number of cells logarithmic in the size of the input tape, and a write-only output tape. We provide the details below. Let Σ be a finite alphabet containing a symbol ǫ called the blank symbol. A tape is an infinite sequence X = {x n } n with x n ∈ Σ and all but finitely many x n being the blank symbol. The subsequence consisting of all symbols up to and including the last non-blank symbol is called the content of the tape and the length of this sequence is the size of the tape. Intuitively, we think of it as a one-ended infinite array of cells, each cell holding an element of this sequence. To every tape X we associate a positive integer h X called the head position. Let S be a finite set, called the set of states. A configuration is a tuple C = (s, I, h I , W, h W , O, h O ) consisting of a state s ∈ S and three tapes I, W, O called the input tape, work tape, and output tape (respectively) together with the head positions for each tape. 
A transducer is a function which assigns to every possible configuration C a successor configuration C ′ = (s ′ , I ′ , h I ′ , W ′ , h W ′ , O ′ , h O ′ ) with the following properties:
• C ′ depends only on s and the symbols at h I on I and at h W on W ;
• I = I ′ and h I ′ differs from h I by at most 1;
• W and W ′ are identical except possibly at position h W , and h W ′ differs from h W by at most 1;
• either h O = h O ′ and O = O ′ , or h O ′ = h O + 1 and O ′ differs from O only in position h O .
A run of the transducer is a finite sequence of configurations C 1 , C 2 , . . . , C k where C i+1 = C ′ i for i = 1, . . . , k − 1, C ′ k = C k (i.e., there is no further computation to perform), and the work and output tapes of C 1 contain only the blank symbol ǫ. The content of the input tape of C 1 is called the input and the content of the output tape of C k is the output. Let c > 0 be any integer. A transducer is called a c-logspace transducer if for every possible run the size of the work tape in every configuration is bounded by c log n where n is the size of the input tape (the base of the logarithm is not generally relevant, though using |Σ| or |Σ| − 1 is natural). Provided such a constant c exists for a given transducer, it will be called simply a logspace transducer. Let Σ ′ be another alphabet. We say that a function f : Σ * → (Σ ′ ) * is logspace computable if there is a logspace transducer which for every input w ∈ Σ * produces f (w) on the output tape. A decision problem, which we define as a subset of Σ * , is logspace decidable if its characteristic function is logspace computable. The complexity class LSPACE consists of all decision problems that are logspace decidable. Note that in order to discuss multi-variable functions, we simply add a new symbol α to the alphabet and separate the input words by this symbol.
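To make the definition concrete, the following toy sketch (ours, not from the text) mimics a logspace computation: it outputs, in binary, the number of occurrences of a fixed symbol on the input tape. Beyond the read-only input, its only working storage is a counter and the scan position, both of which fit in O(log n) bits for an input of size n.

```python
# Toy illustration (ours) of a logspace-style computation. The input
# string plays the role of the read-only input tape; `count` and the
# implicit scan position are the entire work-tape content, and both are
# O(log n)-bit quantities for an input of length n.

def count_symbol_binary(tape: str, symbol: str) -> str:
    count = 0                      # fits in O(log n) bits
    for cell in tape:              # read-only, left-to-right scan
        if cell == symbol:
            count += 1
    return bin(count)[2:]          # content of the write-only output tape

print(count_symbol_binary("abbabba", "b"))  # prints 100
```

Python of course does not enforce the space bound; the point is only that the algorithm's state, apart from the read-only input, is logarithmic in the input size.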
Any function that is logspace computable is also computable in polynomial time (meaning the length of every run is bounded by a polynomial function of n). Indeed, in any run the sequence of configurations that follow a given configuration C i are determined by C i only. Hence no run may contain the same configuration twice since runs have finite length by definition. Thus, the length of any given run (and hence the time complexity of the machine) is bounded by the number of possible configurations |S| · n · |Σ| c log n · c log n ∈ O(n c+2 ), where n is the length of the input. Since the degree c + 2 of this polynomial can be expected to be quite high, it is usually advantageous to analyze the time complexity of logspace transducers directly and obtain, if possible, a low-degree polynomial time bound. The type of computations that can be done in logspace are quite limited. For example, to store an integer M requires log M bits. Hence we can store and count up to n c , but not higher. We may also store and manipulate pointers to different locations in the input, as each such pointer is an integer of size at most n. Basic arithmetic operations are also computable in logspace, and it is possible to compose logspace transducers (i.e., the class of logspace computable functions is closed under composition). The above is a formal description of logspace computability via transducers, but in practice we simply work with informal algorithm descriptions and ensure that our algorithms require no more than logarithmic space. Each of our algorithms may be formally encoded as a logspace transducer. 1.3. Compressed words. Let Σ be a set of symbols containing a special symbol ǫ used to denote the empty word. A straight-line grammar A over Σ in Chomsky normal form consists of an ordered finite set A, called the set of non-terminal symbols, together with for each A ∈ A exactly one production rule either of the form A → BC where B, C ∈ A and B, C < A or of the form A → x where x ∈ Σ. 
The greatest non-terminal is called the root, elements of Σ are called terminal symbols, and the size of A is the number of non-terminal symbols and is denoted |A|. For shortness, with a slight abuse of terminology, in this paper we refer to straight-line grammars in Chomsky normal form as straight-line programs or compressed words. Note that any program A may, by encoding the non-terminal symbols as integers, be written down using O(|A| · log |A|) bits. The output or evaluation of A is the word in Σ * obtained by starting with the root non-terminal and successively replacing every non-terminal symbol with the right-hand side of its production rule. It is denoted eval(A) and we similarly denote by eval(A) the word obtained starting with the non-terminal A ∈ A. For example, the program B over {x} with production rules B_n → B_{n−1} B_{n−1} , B_{n−1} → B_{n−2} B_{n−2} , . . . , B_1 → x has eval(B) = x^{2^{n−1}} and eval(B_2) = x^2 . As this example illustrates, the length of eval(A) may be exponential in |A| (this program in fact achieves the maximum-length output). Thus to have efficient algorithms dealing with compressed words, one must avoid computing eval(A) and instead work directly with the production rules. A fundamental result of Plandowski [32] states that two straight-line programs may be checked for character-for-character equality of their outputs in time polynomial in the sum of the program lengths. Let us note here that for any fixed word w = x_1 x_2 · · · x_m over Σ, the word w^n may be encoded with a program of size O(log n). Indeed, we may first encode w using binary subdivision, i.e. with the scheme W_i → x_i for i = 1, . . . , m, W_{m+1} → W_1 W_2 , W_{m+2} → W_3 W_4 and so on, obtaining eval(W_k) = w where k is some integer bounded by |w| + log_2 |w|. A program similar to B above, with W_k in place of x and suitable modifications when n is not a power of 2, will encode w^n .
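The definitions above can be sketched in code (an illustrative representation of ours, not the paper's): production rules in Chomsky normal form, the evaluation map, and a computation of |eval(A)| that works directly on the rules rather than on the possibly exponentially long output.

```python
# Minimal straight-line program sketch (representation ours). Non-terminals
# are numbered 1..N in increasing order; rules[i-1] is either ("pair", B, C)
# with B, C < i, or ("term", x) with x a terminal symbol; the root is N.

def eval_slp(rules):
    """Expand the program (the output may be exponentially long)."""
    out = {}
    for i, rule in enumerate(rules, start=1):
        out[i] = rule[1] if rule[0] == "term" else out[rule[1]] + out[rule[2]]
    return out[len(rules)]

def output_length(rules):
    """|eval(A)|, computed on the rules without expanding the word."""
    length = {}
    for i, rule in enumerate(rules, start=1):
        length[i] = 1 if rule[0] == "term" else length[rule[1]] + length[rule[2]]
    return length[len(rules)]

# The program B_1 -> x, B_j -> B_{j-1} B_{j-1} has size n and output x^(2^(n-1)).
n = 6
B = [("term", "x")] + [("pair", j, j) for j in range(1, n)]
assert eval_slp(B) == "x" * 2 ** (n - 1)
assert output_length(B) == 2 ** (n - 1)
```

The second function illustrates the general strategy used throughout: a quantity attached to eval(A) (here its length; in §2.4, its Mal'cev coordinates) is computed bottom-up over the production rules in time polynomial in |A|.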
A straight-line program over a group G is a straight-line program over a given symmetrized generating set of G. Any algorithmic problem for G which takes words as input may be considered in 'compressed-word form' where all input words are provided as straight-line programs. For example, the compressed conjugacy problem asks, given two straight-line programs A and B over G, whether or not eval(A) and eval(B) represent conjugate group elements. Computing Mal'cev normal forms To produce efficient algorithms, our nilpotent group will need to be given by a particular type of presentation, known as a consistent nilpotent presentation. Such a presentation can be computed from an arbitrary presentation (Prop. 2.1). During computation, we represent group elements in their Mal'cev normal form, which we define below. Critically, converting a general group word to Mal'cev form involves at most a polynomial expansion in word length (Theorem 2.3), and may be performed in logarithmic space (Theorem 2.7). Recall that a group G is nilpotent if it possesses a central series (1) G = G 1 ✄ G 2 ✄ . . . ✄ G s ✄ G s+1 = 1 such that G i /G i+1 ≤ Z(G/G i+1 ) for all i = 1, . . . , s where Z denotes the center. Equivalently, G possesses a normal series in which [G, G i ] ≤ G i+1 for all i = 1, . . . , s. If G is finitely generated, so are the abelian quotients G i /G i+1 , 1 ≤ i ≤ s. Let a i1 , . . . , a imi be a standard basis of G i /G i+1 , i.e. a generating set in which G i /G i+1 has presentation a i1 , . . . , a imi | a eij ij , j ∈ T i in the class of abelian groups, where T i ⊆ {1, . . . , m i } and e ij ∈ Z >0 . Formally put e ij = ∞ for j / ∈ T i . Note that A = {a 11 , a 12 , . . . , a sms } is a polycyclic generating set for G, and we call A a Mal'cev basis associated to the central series (1). For convenience, we will also use a simplified notation, in which the generators a ij and exponents e ij are renumbered by replacing each subscript ij with m 1 + · · · + m i−1 + j, so the generating set A can be written as A = {a 1 , . . . , a m }.
We allow the expression ij to stand for m 1 + · · · + m i−1 + j in other notations as well. We also denote T = {i | e i < ∞}. By the choice of the set {a 1 , . . . , a m } every element g ∈ G may be written uniquely as a group word of the form g = a α1 1 . . . a αm m , where α i ∈ Z and 0 ≤ α i < e i whenever i ∈ T . The m-tuple (α 1 , . . . , α m ) is called the coordinate tuple of g and is denoted Coord(g), and the expression a α1 1 . . . a αm m is called the (Mal'cev) normal form of g. We also denote α i = Coord i (g). When we need to make a distinction between integers and their binary notation for algorithmic purposes, we refer to the tuple of binary notations for (α 1 , . . . , α m ) as the binary coordinate tuple of g. To a Mal'cev basis A we associate a presentation of G as follows. For each 1 ≤ i ≤ m, let n i be such that a i ∈ G ni \ G ni+1 . If i ∈ T , then a ei i ∈ G ni+1 , hence a relation (2) a ei i = a µ iℓ ℓ · · · a µim m holds in G for some µ ij ∈ Z and ℓ > i such that a ℓ , . . . , a m ∈ G ni+1 . Let 1 ≤ i < j ≤ m. Since the series (1) is central, relations of the form a j a i = a i a j a α ijℓ ℓ · · · a αijm m (3) a −1 j a i = a i a −1 j a β ijℓ ℓ · · · a βijm m (4) hold in G for some α ijk , β ijk ∈ Z and ℓ > j such that a ℓ , . . . , a m ∈ G nj +1 . The set of (abstract) symbols {a 1 , . . . , a m } together with relators (2)-(4) present a group G ′ that is isomorphic to G under the natural isomorphism (relator (4) may be omitted when j ∈ T ). Indeed, any presentation on symbols {a 1 , . . . , a m } with relators of the form (2) for any choice of T and relators of the form (3) and (4) for all 1 ≤ i < j ≤ m defines a nilpotent group G ′′ with cyclic central series having terms a i , . . . , a m for i = 1, . . . , m. Such a presentation is called a consistent nilpotent presentation for G ′′ if the order of a i modulo a i+1 , . . . , a m is precisely e i .
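For concreteness, here is a standard example written in our own notation (not taken from the text): the discrete Heisenberg group, with Mal'cev basis a_1, a_2, a_3, where a_3 = [a_1, a_2] is central. Its lower central series is G = G_1 ✄ G_2 ✄ 1 with G_2 = ⟨a_3⟩, so m_1 = 2, m_2 = 1, and T = ∅, and the presentation has relators of types (3)-(4) only:

```latex
% Consistent nilpotent presentation of the discrete Heisenberg group
% (standard example; notation ours). Since T is empty, there are no
% relators of type (2), and e_1 = e_2 = e_3 = \infty.
G \;=\; \left\langle\, a_1, a_2, a_3 \;\middle|\;
    \begin{array}{l}
      a_2 a_1 = a_1 a_2 a_3^{-1}, \qquad a_2^{-1} a_1 = a_1 a_2^{-1} a_3, \\[2pt]
      a_3^{\pm 1} a_j = a_j a_3^{\pm 1} \quad (j = 1, 2)
    \end{array}
  \,\right\rangle,
\qquad g = a_1^{\alpha_1} a_2^{\alpha_2} a_3^{\alpha_3}\ \text{(normal form)}.
```

Here relation (3) with i = 1, j = 2 reads a_2 a_1 = a_1 a_2 a_3^{−1} (so the exponent α_{123} is −1), and the presentation is consistent because each a_i has infinite order modulo ⟨a_{i+1}, . . . , a_m⟩, matching e_i = ∞.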
While presentations of this form need not, in general, be consistent, those derived from a central series of a group G as above must be consistent (if in G ′ the order of a i is e ′ i < e i , then this fact follows from (2)-(4) and hence a relation for a e ′ i i would have been written in (2)). Consistency of the presentation implies that G ′ ≃ G. Proposition 2.1. There is an algorithm that, given a finite presentation of a nilpotent group G, finds a consistent nilpotent presentation of G and an explicit isomorphism. The presentation may be chosen to be the presentation derived from a Mal'cev basis associated with the lower or upper central series of G. Proof. Prop. 3.2 of [1] proves that a given nilpotent presentation may be checked for consistency, so to produce a consistent nilpotent presentation it suffices to enumerate presentations obtained from the given presentation of G by finite sequences of Tietze transformations until a consistent nilpotent presentation is found (cf. [1] Thm. 3.3). Further, we may compute both the lower and upper central series from a presentation of G. Indeed, each term Γ i = [G, Γ i−1 ] of the lower central series is precisely the normal closure of the set {[g, γ]} g,γ where g and γ run over generating sets of G and Γ i−1 , respectively, and as such a generating set may be computed by [1] Lem. 2.5. The upper central series is computed in Cor. 5.3 of the same work. From either series we may find a Mal'cev basis and then write down, via exhaustive search, the required relators (2)-(4). In Proposition 5.1 we employ techniques described in Section 3.1 to give a polynomial-time version of Proposition 2.1. An essential feature of the coordinate tuples for nilpotent groups is that the coordinates of a product (a α1 1 · · · a αm m )(a β1 1 · · · a βm m ) may be computed as a "nice" (polynomial if T = ∅) function of the integers α 1 , . . . , α m , β 1 , . . . 
, β m , and the coordinates of a power (a α1 1 · · · a αm m ) l may similarly be computed as a 'nice' function of α 1 , . . . , α m and l. The existence of such polynomial functions for torsion-free nilpotent groups is proved in [10] and [15], and an explicit algorithm to construct them from a nilpotent presentation of G is given in [18]. If any of the factors G i /G i+1 are finite (which must occur when G has torsion), the coordinate functions also involve the extraction of quotients and remainders modulo e i . For each i ∈ T , we define functions r i : Z → {0, 1, . . . , e i − 1} and s i : Z → Z by the decomposition t = r i (t) + s i (t)e i , where t ∈ Z. Let F Q (n) denote the set of all functions f : Z n → Z formed as a finite composition of the functions from the set {·, +, r i , s i | i ∈ T } using constants from Q, where · and + denote multiplication and addition in Q. Lemma 2.2. Let G be a nilpotent group with Mal'cev basis a 1 , . . . , a m . There exist p 1 , . . . , p m ∈ F Q (2m) and q 1 , . . . , q m ∈ F Q (m + 1) satisfying the following properties. For every g, h ∈ G and l ∈ Z, writing Coord(g) = (γ 1 , . . . , γ m ) and Coord(h) = (δ 1 , . . . , δ m ), (i) Coord i (gh) = p i (γ 1 , . . . , γ m , δ 1 , . . . , δ m ), (ii) Coord i (g l ) = q i (γ 1 , . . . , γ m , l), (iii) if i ∈ T then p i = r i • p ′ i and q i = r i • q ′ i for some p ′ i ∈ F Q (2m) and q ′ i ∈ F Q (m + 1), and (iv) if, for some index 1 ≤ k < m, γ i = 0 for all i ≤ k, or δ i = 0 for all i ≤ k, then for all i ≤ k + 1 (a) Coord i (gh) = γ i + δ i and Coord i (g l ) = lγ i if i / ∈ T , and (b) Coord i (gh) = r i (γ i + δ i ) and Coord i (g l ) = r i (lγ i ) if i ∈ T . Proof. This lemma is a special case of Theorem 6.7 of [27]. We note that the existence of such functions extends to nilpotent groups admitting exponents in a binomial principal ideal domain, as described in Theorem 5.7 of [27]. Computation of the functions p 1 , . . . , p m , q 1 , . . .
, q m , in the case when all factors G i /G i+1 are infinite, can be done via the "Deep Thought" algorithm [18]. If some factors are finite, one may introduce the functions r i and s i one factor at a time using a procedure similar to that used to compute normal forms given in §7.2 of [18]. It is worth observing that in this construction the values of the numbers α ijl , β ijl , µ il , and e i are not essential: one may compute functions in which all of these values appear as variables. For §3 and §4, and the remainder of this section, we fix G to be a finitely presented nilpotent group. Set c to be the nilpotency class of G and fix a Mal'cev basis A = {a 1 , . . . , a m } associated with the lower central series of G, with m the size of this basis. Algorithmic results in these sections do not take G as part of the input and so we may, in light of Proposition 2.1 and Lemma 2.2, assume that the presentation of G is precisely the consistent nilpotent presentation corresponding to A and that the functions p i , q i are known (computed in advance). However, we state algorithmic results without restriction on the presentation of G with the understanding that such algorithms will, in pre-computation, translate to such a presentation if needed. In §5 we provide uniform algorithms, in which G is included in the input. 2.2. Polynomial bound on the length of normal forms. Suppose w = x 1 x 2 · · · x n is a word over A ± . In order to compute the coordinate tuple of w, one may use the polynomials p i to compute the coordinates of x 1 x 2 , then use this result to find the coordinates of (x 1 x 2 )x 3 and so on. However, the resulting computation is an n-fold composition of the polynomials and thus we may a priori expect a bound of order k^n on the magnitude of the coordinates, with k being the maximum degree of the polynomials. This presents an obstacle to logspace computation, as the binary representation of integers of that size requires linear space.
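A concrete instance (our illustration) makes both points tangible. In the discrete Heisenberg group, with normal form a^x b^y c^z and c = [a, b] central, the functions p_i of Lemma 2.2 are genuine polynomials of degree at most 2, and iterating them letter-by-letter over a word exhibits quadratic, not exponential, coordinate growth.

```python
# Illustrative sketch (ours): coordinate computations in the discrete
# Heisenberg group G = <a, b | [a, b] = c central>, with Mal'cev normal
# form a^x b^y c^z. Collecting b^{y1} past a^{x2} via b a = a b c^{-1}
# gives the product rule below, so p_1, p_2, p_3 of Lemma 2.2 are
# polynomials of degree at most 2 here (T is empty).

def mul(g, h):
    x1, y1, z1 = g
    x2, y2, z2 = h
    return (x1 + x2, y1 + y2, z1 + z2 - x2 * y1)

# Letter-by-letter computation of Coord(w), as described above, for a
# word over a, b and their inverses (written A, B).
GEN = {"a": (1, 0, 0), "A": (-1, 0, 0), "b": (0, 1, 0), "B": (0, -1, 0)}

def coords(word):
    g = (0, 0, 0)
    for letter in word:
        g = mul(g, GEN[letter])
    return g

# Sanity check against 3x3 unitriangular matrices, where a^x b^y c^z
# corresponds to [[1, x, x*y + z], [0, 1, y], [0, 0, 1]].
def to_matrix(g):
    x, y, z = g
    return ((1, x, x * y + z), (0, 1, y), (0, 0, 1))

def mat_mul(m, n):
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

g, h = (2, -1, 3), (5, 4, -7)
assert to_matrix(mul(g, h)) == mat_mul(to_matrix(g), to_matrix(h))

# The commutator word [a, b] = A B a b evaluates to c, and the word
# A^n B^n a^n b^n (length 4n) has coordinates (0, 0, n^2): the central
# coordinate grows quadratically in the word length, in line with the
# degree-c bound (here c = 2) established next.
assert coords("ABab") == (0, 0, 1)
n = 25
assert coords("A" * n + "B" * n + "a" * n + "b" * n) == (0, 0, n * n)
```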
We show that the coordinates are in fact of order n c , where c is the nilpotency class of G. The following result is, to our knowledge, folklore (our proof is adapted from unpublished lecture notes by C. Druţu and M. Kapovich). Theorem 2.3. Let G be a nilpotent group with a lower central series G = Γ 1 ✄ . . . ✄ Γ c ✄ Γ c+1 = {1} with an associated Mal'cev basis A = {a 11 , . . . , a cmc }. There is a constant κ, depending only on the presentation of G, such that for every word w over A ± , (5) |Coord ij (w)| ≤ κ · |w| i for all i = 1, . . . , c, 1 ≤ j ≤ m i . Proof. We must show that if a γ11 11 · · · a γij ij · · · a γcm c cmc , 1 ≤ i ≤ c, 1 ≤ j ≤ m i , is the normal form of w, then |γ ij | ≤ κ|w| i . Note that since [Γ i , Γ j ] ≤ Γ i+j , we have for each a = a ±1 ik and a ′ = a ±1 jℓ , 1 ≤ i, j ≤ c, 1 ≤ k ≤ m i , 1 ≤ ℓ ≤ m j , a commutation relation of the form (6) [a, a ′ ] = a αt1 t1 a αt2 t2 · · · a αcm c cmc , where t ≥ i + j. Similarly, for each a = a ±1 ik with ik ∈ T , we have a relation (7) a e ik = a µt1 t1 a µt2 t2 · · · a µcm c cmc , where t > i. Put E = max{e ik |ik ∈ T } (or E = 1 if T = ∅). Let C 0 ∈ Z be greater than the word length of the right-hand sides of all equalities (6), (7). Note that C 0 only depends on the presentation of G. For any word v over A ± and integer 1 ≤ n ≤ c, denote by |v| n the number of occurrences of letters a 11 , . . . , a 1m1 , . . . , a n1 , . . . , a nmn and their inverses in v. We also formally put |v| n = 0 for n ≤ 0 and |v| n = |v| c for n > c. Claim. For every 1 ≤ k ≤ c there is a constant C k+1 that depends only on k and the presentation of G such that for every word w over A ± the corresponding group element can be represented in the form w = G a γ11 11 · · · a γij ij · · · a γ km k km k · w k+1 , where, for 1 ≤ i ≤ k and 1 ≤ j ≤ m i , (A) |γ ij | ≤ C k+1 |w| i , (B) 0 ≤ γ ij < e ij if ij ∈ T , and (C) w k+1 ∈ Γ k+1 is a word in a (k+1)1 , . . .
, a cmc with |w k+1 | k+ℓ ≤ C k+1 |w| k+ℓ for all k + 1 ≤ k + ℓ ≤ c. We prove this claim by induction on k. We allow k = 0 as the base case, which holds with w 1 = w and C 1 = 1. Suppose the claim holds for some k − 1 ≥ 0. Denote w (0) k+1 = w k and push an occurrence of a ±1 k1 in this word to the left, using the commutation relations (6). This gives the expression w (0) k+1 = G a ±1 k1 · w (1) k+1 . Notice that for a = a ±1 k1 and a ′ = a ±1 ij , the right-hand side R of (6) satisfies |R| k+ℓ = 0 for all i > ℓ and |R| k+ℓ ≤ C 0 otherwise. Therefore swapping a ±1 k1 with a ±1 ij increases the word length, by at most C 0 , only for i = 1, . . . , ℓ. So |w (1) k+1 | k+ℓ ≤ |w (0) k+1 | k+ℓ + C 0 |w (0) k+1 | ℓ . Notice that this inequality holds in fact for all ℓ ∈ Z. We proceed in the same fashion to move left all occurrences of a ±1 k1 , followed by all occurrences of a ±1 k2 and so on. At step j + 1 we represent w (j) k+1 = G a ±1 kij · w (j+1) k+1 , where {i j } is a non-decreasing sequence. We similarly get (8) |w (j+1) k+1 | k+ℓ ≤ |w (j) k+1 | k+ℓ + C 0 |w (j) k+1 | ℓ , for all ℓ ∈ Z. All letters a ±1 ki are collected on the left in at most N ≤ |w k | k ≤ C k |w| k steps, which gives (9) w k = G a γ k1 k1 · · · a γ km k km k w (N ) k+1 with w (N ) k+1 ∈ Γ k+1 . We immediately see by the induction hypothesis that |γ ki | ≤ |w k | k ≤ C k |w| k , which delivers (A) provided C k+1 is chosen to be at least C k . We also find a bound on |w (N ) k+1 | k+ℓ , for all k + 1 ≤ k + ℓ ≤ c, which will be used to prove (C), by applying (8) repeatedly. Namely, when we apply (8) j times, 1 ≤ j ≤ N , we obtain |w (N ) k+1 | k+ℓ ≤ |w (N −1) k+1 | k+ℓ + C 0 |w (N −1) k+1 | ℓ ≤ |w (N −2) k+1 | k+ℓ + C 0 |w (N −2) k+1 | ℓ + C 0 |w (N −2) k+1 | ℓ + C 2 0 |w (N −2) k+1 | ℓ−k = |w (N −2) k+1 | k+ℓ + 2C 0 |w (N −2) k+1 | ℓ + C 2 0 |w (N −2) k+1 | ℓ−k ≤ . . . ≤ j 0 |w (N −j) k+1 | k+ℓ + . . . + j j C j 0 |w (N −j) k+1 | k+ℓ−jk = ι≤ ℓ k j ι C ι 0 |w (N −j) k+1 | k+ℓ−ιk .
For j = N , this yields |w (N ) k+1 | k+ℓ ≤ C c/k 0 ι≤ ℓ k N ι |w (0) k+1 | k+ℓ−ιk = C c/k 0 ι≤ ℓ k N ι |w k | k+ℓ−ιk ≤ C c/k 0 ι≤ ℓ k N ι C k |w| k+ℓ−ιk . Now we recall that N ≤ C k |w| k , which gives |w (N ) k+1 | k+ℓ ≤ C c/k 0 ι≤ ℓ k C ι k |w| ιk C k |w| k+ℓ−ιk ≤ C k+1 |w| k+ℓ . Before obtaining (C), we must first reduce certain exponents to obtain (B), then repeat the collection process described above. Consider the word a γ k1 k1 · · · a γ km k km k and, for each ki ∈ T , rewrite using (7) a γ ki ki = a δ ki ki · (a e ki ki ) s i = G a δ ki ki · v i , where 0 ≤ δ ki < e ki and v i consists of s i copies of the right-hand side of (7). Note that |v i | ≤ C 0 · |s i | ≤ C 0 · |γ ki | ≤ C 0 C k · |w| k . For ki / ∈ T , put δ ki = γ ki and v i = 1. Thus the resulting word w k = a δ k1 k1 v 1 · · · a δ km k km k v m k has length at most N + m k C 0 C k · |w| k ≤ C k · |w| k + m k C 0 C k · |w| k = C k · |w| k . Repeating the collection procedure for the word w k , as above, we obtain that w k = G a δ k1 k1 · · · a δ km k km k u k+1 , where, using (8) repeatedly as before, |u k+1 | k+ℓ ≤ C c/k 0 ι≤ ℓ k N ι |w k | k+ℓ−ιk ≤ C c/k 0 ι≤ ℓ k (C k |w| k ) ℓ k |w k | ≤ C c/k 0 ι≤ ℓ k C ℓ k k |w| ℓ C k |w| k ≤ C ′ k+1 |w| k+ℓ , for all k + 1 ≤ k + ℓ ≤ c. Combining this with (9), we get w k = G a δ k1 k1 · · · a δ km k km k u k+1 w (N ) k+1 . Thus setting w k+1 = u k+1 w (N ) k+1 and choosing C k+1 to dominate both of the bounds above delivers (C), completing the inductive step and the proof of the claim. To prove the theorem it is only left to notice that the claim with k = c suffices, which gives κ = C c+1 . Remark 2.4. Observe that the torsion part of the above argument (obtaining (B)) can be adjusted to allow group relations a e ik = a µ i,k+1 i,k+1 a µ i,k+2 i,k+2 · · · a µcm c cmc for a = a ±1 ik in place of (7), by processing a k1 , a k2 , . . . , a km k in succession (in that order).
Further, note that though we use the lower central series in Theorem 2.3, we only need the property (10) [Γ i , Γ j ] ≤ Γ i+j . Therefore, any central series having this property will suffice, with c being replaced by the length of the series. In light of the importance of Theorem 2.3 to our work, we will refer to any Mal'cev basis associated with the lower central series of G as a lower central Mal'cev basis. Our algorithmic results usually assume that a lower central Mal'cev basis of G is given, though by Remark 2.4 one may substitute a central series satisfying (10). If one instead uses a polycyclic basis derived from a cyclic central series (called simply a Mal'cev basis in the literature) a similar polynomial bound, albeit of a higher degree, holds (recall that the functions p i and q i are described in Lemma 2.2, and m is the polycyclic length of G). Lemma 2.5. Let G be a finitely generated nilpotent group with Mal'cev basis A = {a 1 , . . . , a m }. There are constants κ ′ and δ depending on p i , q i , and m such that for every word w over A ± , |Coord i (w)| ≤ κ ′ · |w| δ for all i = 1, . . . , m. Proof. Assume the lemma holds for all nilpotent groups having a cyclic series of length m − 1, in particular for G 2 = a 2 , . . . , a m . Write w in the form (11) w = w 1 a η1 1 w 2 a η2 1 · · · w r a ηr 1 w r+1 , where each w j is a word (possibly empty) in letters a ±1 2 , . . . , a ±1 m and η j ∈ Z \ {0} for all j. The proof then proceeds via a right-to-left collection process utilizing commutation relations to obtain a word of the form w = a η 1 w ′ 1 w ′ 2 · · · w ′ r w r+1 , where η = η 1 + · · · + η r and w ′ 1 w ′ 2 . . . w ′ r w r+1 is an element of G 2 . We use Theorem 2.3 rather than the above statement in subsequent arguments, so we leave the details of the proof to the reader. We would like to mention that our methods, generally speaking, do not extend to polycyclic groups.
Indeed, while every polycyclic group has a set of polycyclic generators, below we show that no polynomial bound for coordinates similar to (5) can be met unless the group is virtually nilpotent. Proposition 2.6. Let H be a polycyclic group with polycyclic generators A = {a 1 , . . . , a m }. Suppose there is a polynomial P (n) such that if w is a word over A ±1 of length n then |Coord i (w)| ≤ P (n) for all i = 1, 2, . . . , m. Then H is virtually nilpotent. Proof. Without loss of generality, assume that P is monotone. Let B n be the ball of radius n centered at 1 in the Cayley graph of H relative to A. Then for every w ∈ B n , |Coord i (w)| is bounded by P (n), so |B n | ≤ (2P (n) + 1) m , i.e., H has polynomial growth. By a result of Gromov [8], it follows that H is virtually nilpotent. 2.3. Computation of normal forms and the word problem. Since our algorithms will accept words of A ± as input, but perform subsequent computations using Mal'cev coordinates, a necessary first step is to compute these coordinates. The following statement improves on a result in [6], which proved that UT n (Z) has normal forms computable in logspace. Theorem 2.7. Let G be a finitely generated nilpotent group with Mal'cev basis A. There is an algorithm that, given a word w of length L over A ± , outputs the binary coordinate tuple of w in space O(log L) and, simultaneously, time O(L log 2 L). Proof. Denote w = x 1 x 2 · · · x L . We hold in memory an array γ = (γ 1 , . . . , γ m ), initialized to (0, . . . , 0), which at the end of the algorithm will hold the Mal'cev coordinates of w. First, we use the functions {p i } i to compute the coordinates of the product x 1 x 2 , storing the result in γ. We then compute, again using the {p i } i , the coordinates of the product (x 1 x 2 ) · x 3 , using the saved coordinates of x 1 x 2 . We continue in this way, performing m(L − 1) total evaluations of functions from {p i } i , and obtain the coordinates of w. 
Each subword x 1 · · · x j has length bounded by L, so by Theorem 2.3 each of its coordinates may be stored as an O(log L)-bit number and hence γ may be stored in logarithmic space. Each evaluation of a function p i involves addition, multiplication, and division (needed to evaluate the functions r k and s k ) with O(log L)-bit inputs. The time complexity of these operations is (sub)quadratic in the number of bits, hence the overall time complexity is O(L log 2 L). The standard 'schoolbook' arithmetic operations use space linear in the number of bits, hence space O(log L) in this case. Theorem 2.7 of course implies that the word problem in G is decidable in logspace. This was previously known via embedding into UT n (Z), since [19] proved that linear groups have LSPACE word problem. Corollary 2.9. Every finitely generated nilpotent group has word problem decidable in LSPACE. Note that Theorem 2.7 does not give a solution to the search version of the word problem, that is, the problem of writing a trivial element w as a product of conjugates of relators. We give a polynomial-time solution to this problem in Theorem 4.3. While the above 'letter-by-letter' application of the functions p i is efficient for the initial conversion of input words into coordinates, subsequent coordinate computations generally involve a product of a constant number of factors, the coordinates of each of which are known; hence it is more efficient to apply the polynomials directly to the coordinates of the factors. Since the number of factors is fixed, this amounts to a bounded number of arithmetic operations on binary coordinates, and so can be carried out in quadratic time and linear space. 2.4. Compressed-word problems and binary Mal'cev coordinates. Theorem 2.3 implies that there is a constant b, depending on G, such that every coordinate of a word of length L may be expressed as a b log(L)-bit number.
Therefore every element g ∈ G which may be, on the one hand, represented by a word of length L, may be more efficiently represented as an m-tuple of b log(L)-bit numbers. In this sense, the specification of a Mal'cev basis provides a natural compression scheme on G. In formulating algorithmic problems over G it is therefore natural to expect that input group elements are encoded in this scheme, similar to the expectation that elements of Z are encoded as binary numbers rather than in 'unary encoding' as words. On the other hand, straight-line programs provide a general method to formulate algorithmic problems over groups with compressed input, and in the case of nilpotent groups the use of straight-line programs eliminates the need to specify a particular Mal'cev basis. The two schemes are in fact equivalent for nilpotent groups: a coordinate tuple (α 1 , . . . , α m ) encoded using L bits is easily converted to a straight-line program of size O(L) producing the normal form a α1 1 · · · a αm m , and we show below that conversion is also efficient in the opposite direction. We can now approach compressed-word versions of various algorithmic problems in G by converting straight-line programs to Mal'cev coordinates then applying algorithms which work with coordinates. The first of these is the compressed word problem. A polynomial-time solution to this problem was also observed in [11], via reduction to UT n (Z), and a new result in [17] shows, using the same reduction, that the compressed word problem in any non-trivial finitely generated torsion-free nilpotent group is complete for the logspace counting class C = L. Corollary 2.12. The compressed word problem in every finitely generated nilpotent group is decidable in (sub)cubic time. Throughout this paper we describe several algorithms which take as input one or more words over a finitely presented nilpotent group G. Each such algorithm also comes in two 'compressed' versions. 
In the 'compressed-word' version, all inputs and outputs are straight-line programs and the size L of the input is the sum of the sizes of all input programs. In the 'binary Mal'cev coordinate' version, G is provided with a fixed Mal'cev basis A and all input and output words are coordinate tuples (relative to A) with each entry written as a binary number. The size L of the input is the total number of bits in the input coordinates. In all cases, the compressed-word version works by first computing the Mal'cev coordinates of each input straight-line program using Theorem 2.11 and then invoking the binary Mal'cev coordinate version. Matrix reduction, membership problem, and subgroup presentations Several algorithmic problems, including the membership problem, may be solved by constructing an integer matrix from coordinate tuples corresponding to the generating set of a subgroup and performing a process similar to Gaussian elimination over Z to reduce the matrix to a unique 'standard form'. This approach was detailed in [35], but without a computational complexity analysis. We review this reduction process and analyze its complexity in §3.1, apply it to solve the membership problem in §3.2, and use it to compute presentations for subgroups in §3.3. It is also essential for computing kernels of homomorphisms and thereby solving the conjugacy problem in §4. Matrix reduction. Let h 1 , . . . , h n be elements of G given in normal form by h i = a αi1 1 · · · a αim m , for i = 1, . . . , n, and let H = h 1 , . . . , h n . To the tuple (h 1 , . . . , h n ) we associate the matrix of coordinates (13) A =    α 11 · · · α 1m . . . . . . . . . α n1 · · · α nm    , and conversely, to any n × m integer matrix, we associate an n-tuple of elements of G, whose Mal'cev coordinates are given as the rows of the matrix, and the subgroup H generated by the tuple. For each i = 1, . . . 
, n where row i is non-zero, let π_i be the column of the first non-zero entry ('pivot') in row i. The sequence (h_1, …, h_n) is said to be in standard form if the matrix of coordinates A is in row-echelon form with no zero rows and its pivot columns are maximally reduced, i.e. if A satisfies the following properties:

(i) all rows of A are non-zero (i.e. no h_i is trivial),
(ii) π_1 < π_2 < … < π_s (where s is the number of pivots),
(iii) α_{iπ_i} > 0, for all i = 1, …, n,
(iv) 0 ≤ α_{kπ_i} < α_{iπ_i}, for all 1 ≤ k < i ≤ s,
(v) if π_i ∈ T, then α_{iπ_i} divides e_{π_i}, for i = 1, …, s.

The sequence is called full if in addition

(vi) H ∩ ⟨a_i, a_{i+1}, …, a_m⟩ is generated by {h_j | π_j ≥ i}, for all 1 ≤ i ≤ m.

In (vi), note that {h_j | π_j ≥ i} consists of those elements having 0 in their first i − 1 coordinates. Let us remark here that (vi) holds for a given i if and only if the following two properties hold.

(vi.i) For all 1 ≤ k < j ≤ s with π_k < i, h_j^{h_k} and h_j^{h_k^{-1}} are elements of ⟨h_l | l > k⟩.
(vi.ii) For all 1 ≤ k ≤ s with π_k < i and π_k ∈ T, h_k^{e_{π_k}/α_{kπ_k}} ∈ ⟨h_l | l > k⟩.

Indeed, Lemma 2.2(iv) implies that h_j^{h_k}, h_j^{h_k^{-1}}, and h_j^{e_{π_j}/α_{jπ_j}} have coordinates 1 through π_j equal to 0, so the forward implication is clear. Conversely, given an element of H ∩ ⟨a_i, a_{i+1}, …, a_m⟩, written as a product of generators of H, one may first reduce exponents of h_1 using (vi.ii) (if π_1 ∈ T), and then, observing that the exponent sum of h_1 must be 0, eliminate all occurrences of h_1^{±1} in conjugate pairs using (vi.i). Repeating with h_2, h_3, …, h_k, where π_k < i ≤ π_{k+1}, we obtain a word in the generators {h_j | π_j ≥ i}. The importance of full sequences is described in the lemma below, which can be found in [35], Propositions 9.5.2 and 9.5.3. a_{i+1}^{α_{i+1}} · · · a_m^{α_m}. Clearly, all three of these operations preserve H.
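Conditions (i)-(v) depend only on the coordinate matrix, so they can be checked mechanically. The following small sketch is our own code, not part of [35]: it takes the matrix as a list of integer rows, `T` as the set of torsion column indices, and `e[j]` as the corresponding orders.

```python
# Sketch (our own) of a checker for the standard-form conditions (i)-(v).
# A is a list of integer rows; T is the set of torsion indices; e maps each
# index in T to its order e_j.

def pivots(A):
    """Column of the first non-zero entry of each row (None for a zero row)."""
    return [next((j for j, x in enumerate(row) if x != 0), None) for row in A]

def is_standard_form(A, T=frozenset(), e=None):
    piv = pivots(A)
    if any(p is None for p in piv):                  # (i) no zero rows
        return False
    if any(piv[i] >= piv[i + 1] for i in range(len(piv) - 1)):
        return False                                 # (ii) strictly increasing pivots
    for i, p in enumerate(piv):
        if A[i][p] <= 0:                             # (iii) positive pivot entry
            return False
        for k in range(i):                           # (iv) reduced above the pivot
            if not (0 <= A[k][p] < A[i][p]):
                return False
        if p in T and e[p] % A[i][p] != 0:           # (v) pivot divides e_p
            return False
    return True

A = [[2, 2, 3],
     [0, 3, 1],
     [0, 0, 4]]
assert is_standard_form(A)
assert not is_standard_form([[0, 1, 0], [1, 0, 0]])  # pivots not increasing
```

Condition (vi) is a genuinely group-theoretic closure property and is not checkable from the matrix alone; it is what Steps 4 of the reduction below enforces.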
By combining these operations, we may also (4) replace h_i with h_i^{-1}, and (5) append to the tuple an arbitrary product h_{i_1} · · · h_{i_k} of elements in the tuple. We also note that these operations, with the exception of (3′), preserve the property that all rows are Mal'cev coordinate tuples. Further, operation (3′) is applied only in Step 3 (see below on p. 19), by the completion of which that property is regained. Using the row operations defined above, we show how to reduce any coordinate matrix to its unique full form, thus producing the unique full generating sequence for the corresponding subgroup H. While it is not difficult to see that such a reduction is possible, the details of the procedure are essential for our complexity estimates. We make use of the following algorithmic fact regarding greatest common divisors.

Lemma 3.2. There is an algorithm that, given integers a_1, …, a_n as binary numbers, computes in time O(L^3) an expression x_1 a_1 + … + x_n a_n = d = gcd(a_1, …, a_n) with |x_i| ≤ (1/2) max{|a_1|, …, |a_n|}, where L is the total number of bits in the input. If n is fixed, the algorithm may be run in space O(L).

Proof. We compute the expression using the binary tree method described in [25], Thm. 9. This computation proceeds in two phases. In the first or 'bottom-top' phase, we place the integers a_1, …, a_n at the leaves of a binary tree, and we compute GCDs of the pairs (a_1, a_2), (a_3, a_4), …, (a_{n−1}, a_n), recording each GCD together with its expression as a linear combination of a_i and a_{i+1} in the parent node. We then continue up the tree computing the GCDs of the pairs of parents in the same fashion, obtaining d at the root. This involves invoking the extended Euclidean algorithm (or a more efficient algorithm) at most n − 1 times, each time with inputs bounded by M = max{|a_1|, …, |a_n|}. Each invocation runs in time O(log^2 M), hence the entire phase runs in time O(L^3).
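The tree method can be sketched as follows; this is our own illustration, and its naive coefficient propagation does not enforce the |x_i| ≤ (1/2)max{|a_j|} bound of the lemma, only the Bezout identity itself.

```python
# Our own sketch of the binary-tree gcd method: recursively halve the list
# (the 'bottom-top' phase), combine the two halves with one extended
# Euclidean step at the parent, and push the parent's coefficients back
# down to the leaves (the 'top-bottom' phase).

from math import gcd

def ext_gcd(a, b):
    """Return (d, x, y) with x*a + y*b = d = gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    d, x, y = ext_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

def gcd_expression(nums):
    """Return (d, coeffs) with sum(c * a for c, a) == d == gcd of all nums."""
    if len(nums) == 1:
        return abs(nums[0]), [1 if nums[0] >= 0 else -1]
    mid = len(nums) // 2
    dl, cl = gcd_expression(nums[:mid])      # gcd of the left half
    dr, cr = gcd_expression(nums[mid:])      # gcd of the right half
    d, x, y = ext_gcd(dl, dr)                # combine at the parent node
    # propagate the parent's coefficients down to the leaves
    return d, [x * c for c in cl] + [y * c for c in cr]

nums = [12, 18, 30, 7]
d, coeffs = gcd_expression(nums)
assert d == gcd(*[abs(n) for n in nums]) == 1
assert sum(c * n for c, n in zip(coeffs, nums)) == d
```

In the lemma, the top-bottom phase additionally rebalances coefficients at each node to keep them small; that bookkeeping is omitted here.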
In the second or 'top-bottom' phase, we compute the coefficients x_1, …, x_n (satisfying the given bound) from the top of the tree downward, using 'small' coefficients at each step. Each computation uses a fixed number of arithmetic operations, hence this phase also runs in time O(L^3). For the space complexity, simply observe that when n is fixed the tree has constant size and we use logspace arithmetic operations, which run in time polynomial in log L.

Let A_0 be a matrix of coordinates, as in (13) above. We produce matrices A_1, …, A_s, with s the number of pivots in the full form of A_0, such that for every k = 1, …, s the first π_k columns of A_k form a matrix satisfying (ii)-(v), the condition (vi) is satisfied for all i < π_{k+1}, and A_s is the full form of A_0. Here we formally set π_{s+1} = m + 1. Set π_0 = 0 and assume that A_{k−1} has been constructed for some k ≥ 1. In the steps below we construct A_k. We let n and m denote the number of rows and columns, respectively, of A_{k−1}. At all times during the computation, h_i denotes the group element corresponding to row i of A_k and α_{ij} denotes the (i, j)-entry of A_k, which is Coord_j(h_i). These may change after every operation.

Step 1: Locate the column π_k of the next pivot, which is the minimum integer π_{k−1} < π_k ≤ m such that α_{iπ_k} ≠ 0 for at least one k ≤ i ≤ n. If no such integer exists, then k − 1 = s and A_s is already constructed. Otherwise, set A_k to be a copy of A_{k−1} and denote π = π_k. Compute a linear expression of d = gcd(α_{kπ}, …, α_{nπ}), d = l_k α_{kπ} + · · · + l_n α_{nπ}. The coefficients l_k, …, l_n must be chosen so that |l_i| ≤ M for all i, where M = max{|α_{kπ}|, …, |α_{nπ}|}. Let h_{n+1} = h_k^{l_k} · · · h_n^{l_n} and note that h_{n+1} has coordinates of the form Coord(h_{n+1}) = (0, …, 0, d, …) with d occurring in position π. Perform operation (5) to append h_{n+1} as row n + 1 of A_k.

Step 2: For each i = k, . . .
, n, perform row operation (2) to replace row i by Coord(h_i · h_{n+1}^{−α_{iπ}/d}). For each i = 1, …, k − 1, use (2) to replace row i by Coord(h_i · h_{n+1}^{−⌊α_{iπ}/d⌋}). Using (1), swap row k with row n + 1. At this point, properties (ii)-(iv) hold on the first k columns of A_k.

Step 3: If π ∈ T, we additionally ensure condition (v) as follows. Perform row operation (3′), with respect to π, to append a trivial element h_{n+2} as row (0, …, 0, e_π, …) to A_k. Let δ = gcd(d, e_π) and compute the linear expression δ = n_1 d + n_2 e_π, with |n_1|, |n_2| ≤ max{d, e_π}. Let h_{n+3} = h_k^{n_1} h_{n+2}^{n_2} and append this row to A_k, as row n + 3. Note that Coord(h_{n+3}) = (0, …, 0, δ, …), with δ in position π. Replace row k by Coord(h_k · h_{n+3}^{−d/δ}) and row n + 2 by Coord(h_{n+2} · h_{n+3}^{−e_π/δ}), producing zeros in column π in these rows. Swap row k with row n + 3. At this point, (ii), (iii), and (v) hold (for the first π_k columns) but (iv) need not, since the pivot entry is now δ instead of d. For each j = 1, …, k − 1, replace row j by Coord(h_j · h_k^{−⌊α_{jπ}/δ⌋}), ensuring (iv).

Step 4: Identify the next pivot π_{k+1}, setting π_{k+1} = m + 1 if π_k is the last pivot. We now ensure condition (vi) for i < π_{k+1}. Observe that Steps 1-3 preserve ⟨h_j | π_j ≥ i⟩ for all i < π_k. Hence (vi) holds in A_k for i < π_k since it holds in A_{k−1} for the same range. Now consider i in the range π_k ≤ i < π_{k+1}. It suffices to prove (vi.i) for all j > k and (vi.ii) for π_k only. To obtain (vi.i), we notice that h_k^{−1} h_j h_k, h_k h_j h_k^{−1} ∈ ⟨h_ℓ | ℓ > k⟩ if and only if [h_j, h_k^{±1}] ∈ ⟨h_ℓ | ℓ > k⟩. Further, note that the subgroup generated by the set S_j = {1, h_j, [h_j, h_k], …, [h_j, h_k, …, h_k]}, where h_k appears m − π_k times in the last commutator, is closed under commutation with h_k, since if h_k appears more than m − π_k times then the commutator is trivial.
An inductive argument shows that the subgroup ⟨S_j⟩ coincides with ⟨h_j^{h_k^ℓ} | 0 ≤ ℓ ≤ m − π_k⟩. Similar observations can be made for conjugation by h_k^{−1}. Therefore, appending via operation (5) the rows Coord(h_j^{h_k^ℓ}) for all 1 ≤ |ℓ| ≤ m − π_k and all k < j ≤ n + 3 delivers (vi.i) for all j > k. Note that (vi.i) remains true for i < π_k. To obtain (vi.ii), in the case π_k ∈ T, we add the row Coord(h_k^{e_k/α_{kπ_k}}). Note that this element commutes with h_k and therefore (vi.i) is preserved.

Step 5: Using (3), eliminate all zero rows. The matrix A_k is now constructed.

In applying row operation (2) or (5), the magnitude of the largest entry in the matrix may increase. It is essential to observe that during the matrix reduction algorithm the growth of this value is bounded by a polynomial of fixed degree (depending on G).

Lemma 3.3. Let g_1, …, g_t ∈ G and let R be the full form of the associated matrix of coordinates. Then every entry α_{ij} of R is bounded by |α_{ij}| ≤ C · L^K, where L = |g_1| + · · · + |g_t| is the total length of the given elements, and K = m(8c^2)^m and C are constants depending on G.

Proof. Denote by A_0 the t × m matrix of coordinates associated with (g_1, …, g_t). Following the matrix reduction algorithm described above, we will bound the entries of A_k in terms of the entries of A_{k−1} and by induction obtain a bound on the entries of R = A_s in terms of the entries of A_0. For a given A_{k−1}, denote by n the number of rows of A_{k−1} and by N the magnitude of the largest entry, i.e., N = max{|α_{ij}| : 1 ≤ i ≤ n, 1 ≤ j ≤ m}. Observe that for each 1 ≤ i ≤ n, the element h_i corresponding to row i of A_{k−1} has length |h_i| = |a_1^{α_{i1}} · · · a_m^{α_{im}}| ≤ mN. Now in Step 1 we append the row h_{n+1}, which satisfies

(14)  |h_{n+1}| = |h_k^{l_k} · · · h_n^{l_n}| ≤ |l_k||h_k| + · · · + |l_n||h_n| ≤ N(|h_k| + · · · + |h_n|) ≤ mnN^2.

Denote by α′_{ij} the (i, j)-entry at the end of Step 2.
Since this number is Coord_j(h_i h_{n+1}^{−⌊α_{iπ}/d⌋}), except for i = k, we have, using Theorem 2.3,

(15)  |α′_{ij}| ≤ κ |h_i h_{n+1}^{−⌊α_{iπ}/d⌋}|^c = κ (|h_i| + (|α_{iπ}|/d)|h_{n+1}|)^c ≤ κ (mN + N · mnN^2)^c ≤ κ (2mnN^3)^c = κ(2m)^c · n^c N^{3c}.

For i = k, the tighter bound (14) holds. Proceeding to Step 3, denote E = max{e_i | i ∈ T}. The new rows h_{n+2} and h_{n+3} satisfy

(16)  |h_{n+2}| ≤ 2κE^c,  |h_{n+3}| = |h_k^{n_1} h_{n+2}^{n_2}| ≤ |n_1||h_k| + |n_2||h_{n+2}| ≤ (EN)(mnN^2) + (EN)(2κE^c) ≤ 2mnκE^{c+1}N^3.

Let α″_{ij} denote the (i, j)-entry of A_{k−1} at the end of Step 4, and recall that Step 4 only appends rows to the bottom of the matrix. For row n + 3 (row k before swapping) we have, for all j,

(17)  |α″_{(n+3)j}| ≤ κ |h_k h_{n+3}^{−d/δ}|^c ≤ κ (|h_k| + N|h_{n+3}|)^c ≤ κ (mnN^2 + 2mnκE^{c+1}N^4)^c ≤ κ^{c+1}(3mn)^c E^{c^2+c} N^{4c}.

In row n + 2 we have

(18)  |α″_{(n+2)j}| ≤ κ |h_{n+2} h_{n+3}^{−e_π/δ}|^c ≤ κ (|h_{n+2}| + E|h_{n+3}|)^c ≤ κ (2κE^c + 2mnκE^{c+2}N^3)^c ≤ κ^{c+1}(4mn)^c E^{c^2+2c} N^{3c}.

Finally, for rows 1 through k − 1, notice that each of h_1, …, h_{k−1} has length bounded by m times the bound (15), hence

(19)  |α″_{jl}| ≤ κ |h_j h_k^{−⌊α′_{jπ}/δ⌋}|^c ≤ κ (|h_j| + |α′_{jπ}||h_k|)^c ≤ κ (m(2m)^c κ n^c N^{3c} + κ(2m)^c n^c N^{3c} · 2mnκE^{c+1}N^3)^c ≤ (6κmnEN)^{4c^2+3c}.

Note that at the conclusion of Step 3, bound (19) applies to rows 1 through k − 1, bound (16) applies to the element h_k (formerly h_{n+3}) in row k, and the maximum of (17) and (18) applies to all rows after k. In Step 4 we append all rows of the type h_j^{h_k^l} for 1 ≤ |l| ≤ m − π_k and k < j ≤ n + 3. The entries of such a row are bounded by

(20)  |α″_{pq}| ≤ κ |h_j^{h_k^l}|^c ≤ κ (|h_j| + 2|l||h_k|)^c ≤ κ (mκ^{c+1}(4mn)^c E^{c^2+2c} N^{4c} + 2m · m · 2mnκE^{c+1}N^3)^c ≤ C′ · (nN)^{4c^2},

where C′ = (4κmE)^{c^3+2c^2+3c+1}. If π_k ∈ T, we also append the row h_k^{e_k/α_{kπ_k}}, and so the entries in the final row r of the matrix satisfy

(21)  |α″_{rq}| ≤ κ |h_k^{e_k/α_{kπ_k}}|^c ≤ κ (Em · 2mnκE^{c+1}N^3)^c ≤ C′ · (nN)^{4c^2}.

Thus the magnitude of each entry of A_k is bounded by C′ · (nN)^{4c^2}, where n is the number of rows of A_{k−1} and N bounds the magnitude of the entries of A_{k−1}. Next, notice that Steps 1-3 add three rows, and Step 4 adds fewer than 2m(n + 3) rows. We may bound the number of rows added by 10m · n. Consequently, the number of rows of A_k is bounded by (10m)^k · t. A simple inductive argument now shows that every entry of R is bounded by C″ · t^{4c^2 s} N_0^{(4c^2)^s}, where N_0 is the maximum of the absolute values of the entries of A_0 and C″ is a constant depending on m, c, E, and κ. Now N_0 ≤ max_i{κ|g_i|^c} ≤ κL^c. Moreover, t ≤ L and s ≤ m. Therefore the entries of R are bounded by C″ L^{4c^2 m} κ^{(4c^2)^m} L^{c(4c^2)^m} = C · L^{c(4c^2)^m + 4c^2 m}. We simplify the exponent of L by using the bound K = m(8c^2)^m.

Lemma 3.3 allows us to produce a logspace version of the matrix reduction algorithm. Note that the matrix reduction algorithm, as presented, may use more than logarithmic space, since the number of rows t of the initial matrix A_0 may be of the same order as the total input size.

Proof. Algorithm description. Since the coordinate matrix for h_1, …, h_n cannot be stored in logarithmic space, we adopt a piecewise approach, appending one row at a time and reducing. Form the coordinate matrix B_0 of the first m elements h_1, …, h_m and compute its full form B_1. Append to B_1 a row corresponding to the coordinates of h_{m+1} and compute the full form B_2 of this matrix. Append h_{m+2} to B_2 and continue in this way until there are no more rows to append. Since the subgroup generated by the rows is preserved under row operations, the last matrix thus obtained, B_{n−m}, is the full form of the matrix of coordinates of h_1, …, h_n.

Space and time complexity. We first show that at every step the matrix we work with can be stored in logarithmic space.
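In the simplest special case, G free abelian and T empty, Mal'cev coordinates simply add, Steps 3 and 4 are vacuous, and the full form is the familiar row-style Hermite normal form. The following self-contained sketch (our own code, not the paper's general algorithm) illustrates Steps 1, 2 and 5 in that toy setting, together with the back-substitution membership test of §3.2.

```python
# Our own toy illustration: G = Z^m, T empty, so the reduction is integer
# row reduction to a Hermite-normal-form-style standard form satisfying
# (i)-(iv), and membership is tested by back-substitution along the pivots.

def full_form(rows):
    """Reduce integer row vectors to the standard form; abelian case only."""
    rows = [r[:] for r in rows if any(r)]
    m = len(rows[0]) if rows else 0
    done, col = [], 0
    while rows and col < m:
        if all(r[col] == 0 for r in rows):
            col += 1
            continue
        # Steps 1-2: Euclid on column `col` until one non-zero entry remains
        while sum(1 for r in rows if r[col] != 0) > 1:
            rows.sort(key=lambda r: (r[col] == 0, abs(r[col])))
            p = rows[0]
            for r in rows[1:]:
                q = r[col] // p[col]
                for j in range(m):
                    r[j] -= q * p[j]
        rows.sort(key=lambda r: r[col] == 0)
        pivot = rows.pop(0)
        if pivot[col] < 0:
            pivot = [-x for x in pivot]             # (iii) positive pivot
        for r in done:                              # (iv) reduce entries above
            q = r[col] // pivot[col]
            for j in range(m):
                r[j] -= q * pivot[j]
        done.append(pivot)
        rows = [r for r in rows if any(r)]          # Step 5: drop zero rows
        col += 1
    return done

def member(h, full):
    """Back-substitution membership test along the pivot columns."""
    h = h[:]
    for row in full:
        p = next(j for j, x in enumerate(row) if x)
        if h[p] % row[p] != 0:
            return False
        q = h[p] // row[p]
        for j in range(len(h)):
            h[j] -= q * row[j]
    return all(x == 0 for x in h)

F = full_form([[2, 4, 1], [0, 3, 3], [2, 1, 4]])
assert member([2, 7, 4], F) and not member([1, 0, 0], F)
```

In the general nilpotent case the same outline applies, but each row operation must recompute Mal'cev coordinates of products (Theorem 2.3), and Steps 3-4 handle torsion and the closure condition (vi).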
Since each intermediate matrix B_l is in full form, Lemma 3.1 ensures that B_l has at most m rows. Hence the number of rows appended during the reduction of B_{l−1} to B_l is, as seen in the proof of Lemma 3.3, bounded by 10m^2. The size of the working matrix is therefore never more than 10m^2 × m (constant with respect to the input). As for the size of the entries, Theorem 2.3 shows that each entry α_{ij} of the matrix B_0 satisfies |α_{ij}| ≤ κ|h_i|^c ≤ κL^c. Each entry can be encoded using O(log L) bits and therefore B_0 can be stored in logarithmic space. Since the matrix B_l, 1 ≤ l ≤ n − m, is precisely the (unique) full-form matrix for the sequence h_1, …, h_{m+l}, Lemma 3.3 ensures that each entry of B_l is bounded in magnitude by C · L^K. The proof of Lemma 3.3 shows that the bound given there holds during all steps of the reduction algorithm, hence during the reduction of B_{l−1} to B_l no entry can be greater in magnitude than C(m^2 CL^K)^K. Consequently, all intermediate matrices can be stored in logarithmic space. It remains to show that the operations that we use can also be executed in logarithmic space.

Remark 3.5. The factor of L in the time complexity arises mostly from the fact that the number n of input elements can, in general, only be bounded by L. If n is regarded as a fixed number, the most time-consuming computation is that of the Mal'cev coordinates, and the overall time complexity is reduced to O(L log^2 L). This remark also applies to Theorems 3.8, 3.11, and 4.1.

Remark 3.6. The output of the algorithm uses binary numbers to encode the exponents which are the Mal'cev coordinates of g_i. If one is interested in the expression of g_i as words in the generators of G, the time complexity grows according to the time it takes to print out the corresponding words, similarly to Remark 2.8(b) after Theorem 2.7. Given the polynomial bound on the total length of g_i provided in the statement, the overall algorithm still runs in polynomial time.
This remark also applies to all the search problems considered below, that is, to Theorems 3.8, 3.11, 4.1, 4.3, 4.5, and 4.7. In order to solve the compressed-word version of the membership (and other) problems, we require a compressed version of matrix reduction that runs in polynomial time.

Proof. For the compressed-word version, let A_1, …, A_n be the input straight-line programs encoding h_1, …, h_n. We first compute, by Theorem 2.11, the coordinate vectors Coord(eval(A_i)) for i = 1, …, n. This operation occurs in time O(nL^3). Since |eval(A_i)| ≤ 2^L, we obtain from Theorem 2.3 that each entry of B_0 is bounded by κ · (2^L)^c and hence is encoded using O(L), rather than O(log L), bits. Therefore the reduction process described in Theorem 3.4 runs in time O(nL^3). Note that one may instead include all n rows in the initial matrix, proceeding as in Steps 1-5, instead of the piecewise approach of Theorem 3.4, since logspace is not an issue here. Since |eval(A_i)| ≤ 2^L, it follows from Lemma 3.3 that each coordinate in the full-form sequence for H is bounded in magnitude by a polynomial function of 2^L. Then each element of the sequence may be expressed as a compressed word of size polynomial in L. The binary Mal'cev coordinate version simply omits the initial computation of coordinates.

3.2. Membership problem. We can now apply the matrix reduction algorithm to solve the membership problem in logspace, and its compressed-word version in quartic time. We also solve the membership search problem using only logarithmic space.

Theorem 3.8. Let G be a finitely generated nilpotent group. There is an algorithm that, given elements h_1, …, h_n ∈ G and h ∈ G, decides whether or not h is an element of the subgroup H = ⟨h_1, …, h_n⟩. The algorithm runs in space logarithmic in L = |h| + Σ_{i=1}^{n} |h_i| and time O(L log^3 L). Moreover, the algorithm additionally returns the following: • the binary coordinate tuples of g_1, …, g_s ∈ G such that (g_1,
…, g_s) is a full-form sequence for the subgroup H (relative to a lower central Mal'cev basis), and • if h ∈ H, then also the unique binary tuple (γ_1, …, γ_s) such that h = g_1^{γ_1} · · · g_s^{γ_s}. Furthermore, the word length of g_1^{γ_1} · · · g_s^{γ_s} is bounded by a degree 2m(8c^3)^m polynomial function of L.

Proof. Algorithm. Compute the full form B of the coordinate matrix corresponding to H and the full-form sequence (g_1, …, g_s). As before, denote by α_{ij} the (i, j)-entry of B and by π_1, …, π_s its pivots. By Lemma 3.1, any element of H can be written as g_1^{γ_1} · · · g_s^{γ_s}. We show how to find these exponents. Denote h^{(1)} = h and Coord(h^{(j)}) = (β_1^{(j)}, …, β_m^{(j)}). Set γ_j = β_{π_j}^{(j)}/α_{jπ_j} and h^{(j+1)} = g_j^{−γ_j} h^{(j)}. If j < s, continue to j + 1. If j = s, then h = g_1^{γ_1} · · · g_s^{γ_s} ∈ H if h^{(s+1)} = 1, and h ∉ H otherwise.

Complexity and length bound. We first prove, by induction on s, that the length of the output g_1^{γ_1} · · · g_s^{γ_s} is bounded by a degree δ(s) polynomial function of L. The quantity |g_1^{γ_1} · · · g_{s−1}^{γ_{s−1}}| + |h| is also bounded by a degree δ(s − 1) polynomial function, so by Theorem 2.3 |β_{π_s}^{(s)}|, and therefore |γ_s|, is bounded by a polynomial function of degree c · δ(s − 1). It follows that |g_s^{γ_s}| is bounded by a degree c · δ(s − 1) + m(8c^2)^m = δ(s) polynomial function of L, and hence g_1^{γ_1} · · · g_s^{γ_s} obeys the same degree bound. The bound stated in the theorem may be obtained using s ≤ m. Regarding complexity, the initial computation of (g_1, …, g_s) is performed within the bounds by Lemma 3.4. Since the magnitude of each γ_j is bounded, for all j, by a polynomial function of L, the coordinates may be computed and stored using logarithmic space and in time O(log^2 L) by Lemma 2.10. The complexity bound then follows from the fact that the recursion has constant depth s.

Though the expression of h in terms of the full-form generators g_1, …, g_s of H provides a standardized representation, one may also wish to express h in terms of the given generators.

Corollary 3.9.
Let G be a finitely generated nilpotent group. There is an algorithm that, given elements h_1, …, h_n ∈ G and h ∈ ⟨h_1, …, h_n⟩, computes an expression h = h_{i_1}^{ε_1} · · · h_{i_t}^{ε_t} where i_j ∈ {1, …, n} and ε_j = ±1. The algorithm runs in time polynomial in L = |h| + Σ_{i=1}^{n} |h_i| and the output has length bounded by a polynomial function of L.

Proof. We modify the matrix reduction algorithm from Section 3.1 so as to be able to express each g_i as a product of h_1, …, h_n. To this end, along with the matrix A_k, we store at every step an array C_k which contains the elements corresponding to the rows of the matrix A_k, each written as a product of h_1, …, h_n. Thus, C_s will be an array containing in each entry i a product of h_1, …, h_n which is equal to g_i. To obtain the array C_k from the information in step k − 1, we perform on C_{k−1} the corresponding row operation that was performed on A_{k−1}, except that we record the result not in terms of Mal'cev coordinates, but directly in terms of the words in the array C_{k−1}. For 1 ≤ k ≤ s, denote by L_k the length (as a word over the alphabet {h_1^{±1}, …, h_n^{±1}}) of the largest entry of C_k. Note that at each step, every row either has at most two operations performed on it that may increase its length (one application of (2) in Step 2 and one in Step 3), or is newly created. Using the fact that all exponents involved are bounded in magnitude by C · L^K, where C and K are the constants from Lemma 3.3, it easily follows that there is a polynomial function f(L) such that L_k ≤ f(L) · L_{k−1}. Since L_0 = 1 and k ≤ s, it follows that L_s is bounded by a polynomial function of L and therefore the computation of C_s is performed in polynomial time. Finally, we can simply substitute the expression of g_i (for 1 ≤ i ≤ s) in terms of h_1, …, h_n into the expression h = g_1^{γ_1} · · · g_s^{γ_s} to obtain an expression h = h_{i_1}^{±1} · · · h_{i_t}^{±1} with i_1, …, i_t ∈ {1, …, n}.
Since the length (over the alphabet {g_1^{±1}, …, g_s^{±1}}) of the expression g_1^{γ_1} · · · g_s^{γ_s} is bounded by a polynomial function of L, producing the expression h = h_{i_1}^{±1} · · · h_{i_t}^{±1} occurs in polynomial time. Observe that this method will not yield a logspace algorithm, since the array C_s is too large to be stored in memory. The algorithm used in Theorem 3.8 may be combined with the compressed version of matrix reduction (Lemma 3.7) to give a polynomial-time solution to the compressed membership problem.

3.3. Subgroup presentations. We now apply matrix reduction to show that finitely generated nilpotent groups are logspace effectively coherent: given a finitely generated subgroup H, we can in logspace and quasilinear time compute a consistent nilpotent presentation for H. By the size of a presentation we mean the number of generators plus the sum of the lengths of the relators.

Proof. Algorithm. Begin by computing the full sequence (g_1, …, g_s) for H using Lemma 3.4. Let H_i = ⟨g_i, g_{i+1}, …, g_s⟩. We claim that H = H_1 ≥ H_2 ≥ … ≥ H_{s+1} = 1 is a cyclic central series for H. From property (vi) we have H_i = H ∩ ⟨a_{π_i}, a_{π_i+1}, …, a_m⟩. Since ⟨a_{π_i}, …, a_m⟩ is a normal subgroup of G, it follows that the above is a normal series, and since H_i/H_{i+1} = ⟨g_i H_{i+1}⟩ the series is cyclic. For i < j, [g_i, g_j] ∈ ⟨a_{π_j+1}, …, a_m⟩ ∩ H ≤ H_{j+1}, hence the series is central. We conclude that (g_1, …, g_s) is a Mal'cev basis for H, so it suffices to compute the relators (2)-(4) in order to give a consistent nilpotent presentation of H. The order e′_i of g_i modulo H_{i+1} is simply e_{π_i}/Coord_{π_i}(g_i). We establish each relation (2) by invoking Theorem 3.8 with input g_i^{e′_i} and H_{i+1} = ⟨g_{i+1}, …, g_s⟩. Since g_i^{e′_i} ∈ H_{i+1} and (g_{i+1}, …, g_s) is the unique full sequence for H_{i+1}, the membership algorithm returns the expression on the right side of (2). Relations (3) and (4) are established similarly.
Complexity and size of presentation. Computing the full sequence and invoking Theorem 3.8 occur within the stated space and time bounds. Regarding the size of the presentation, the number of generators is s ≤ m, the number of relations is bounded by s + 2(s choose 2), and each relation has length obeying the degree 2m(8c^3)^m bound of Theorem 3.8.

Remark 3.12. By Remark 2.4, the Mal'cev basis for H obtained in Theorem 3.11 may be used in Theorem 2.3. Indeed, one may group the Mal'cev generators g_1, …, g_s into the appropriate terms of the lower central series of G, i.e. set H′_i = ⟨g_{k_i}, g_{k_i+1}, …, g_s⟩ where k_i is the least index such that g_{k_i} ∈ Γ_i \ Γ_{i−1}. Then H = H′_1 ≥ H′_2 ≥ … ≥ H′_{c+1} = 1 is a central series for H in which [H′_i, H′_j] ≤ H′_{i+j} and g_1, …, g_s is an associated Mal'cev basis.

The compressed-word version of Theorem 3.11, running in polynomial time, follows immediately. However, the relators of the presentation are provided as straight-line programs. This is unconventional, but we may convert to an 'uncompressed' presentation, of polynomial size, using the following construction. Consider any presentation P of a group H in which each relator R is the output of a straight-line program A_R. We apply the following construction to P. Add the set of non-terminals A of all the programs A_R to the generators of P. For each production rule A → BC, add the relation A = BC to the relations of P; for each production A → x, add the relation A = x; for A → ε, add the relation A = 1. Replace the original relator eval(A_R) by the root non-terminal of A_R and denote the resulting presentation P′. It is clear that the sequence of Tietze transformations which at each step removes the greatest non-terminal and its production rule converts P′ back to P. Thus P and P′ present isomorphic groups.
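The construction can be sketched concretely; all identifiers below are our own, and productions are encoded as a dict from non-terminals to either a terminal letter or a pair of non-terminals.

```python
# Our own sketch of the P -> P' construction: the compressed relator
# eval(A_R) is replaced by the root non-terminal, and one short relation
# is added per production rule of the program A_R.

def compressed_presentation(gens, rules, root):
    """Return (generators, relations) of P' for a single compressed relator."""
    new_gens = list(gens) + [f"A{n}" for n in rules]
    rels = [(f"A{n}", rhs if isinstance(rhs, str) else f"A{rhs[0]} A{rhs[1]}")
            for n, rhs in rules.items()]
    rels.append((f"A{root}", ""))       # the relator itself, now a single symbol
    return new_gens, rels

def eval_nonterminal(rules, n):
    """Undo the Tietze moves: expand a non-terminal back into a plain word."""
    rhs = rules[n]
    if isinstance(rhs, str):
        return rhs
    return eval_nonterminal(rules, rhs[0]) + eval_nonterminal(rules, rhs[1])

# SLP producing a^4: A1 -> a, A2 -> A1 A1, A3 -> A2 A2
rules = {1: 'a', 2: (1, 1), 3: (2, 2)}
gens, rels = compressed_presentation(['a'], rules, root=3)
assert gens == ['a', 'A1', 'A2', 'A3'] and len(rels) == 4
assert eval_nonterminal(rules, 3) == 'aaaa'   # P' presents <a | a^4>
```

Here the presentation size is linear in the number of production rules, while the expanded relator eval(A_R) may be exponentially longer.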
As the outputs eval(A_R) are, in many cases, exponentially longer than the size of the programs A_R, this construction significantly reduces the size of the presentation. We can also use this construction to remove the binary numbers from the presentation found in Theorem 3.11 and at the same time produce a smaller presentation. However, the presentation will no longer fit the definition of a nilpotent presentation, due to the presence of the extra relators used for compressing the exponents.

Proof. The number of generators and relators is bounded by a constant, as observed above. Each power a^β appearing in a relator is compressed using a straight-line program of size O(log β). Since each exponent β is bounded by a polynomial function of L, the total size of the straight-line programs, and thus of the presentation, is logarithmic in L.

4. Homomorphisms and the conjugacy problem

Using matrix reduction, [35] showed how to compute the kernel of a homomorphism and how to compute a preimage of a given element under a homomorphism. We prove in §4.1 that these algorithms may be run in logarithmic space and their compressed-word versions in polynomial time. In §4.2 we apply these algorithms to solve the conjugacy problem.

4.1. Kernels. For fixed nilpotent groups G and H, we may specify a homomorphism φ from a subgroup K ≤ G to H via a generating set (g_1, …, g_n) of K and a list of elements h_1, …, h_n where φ(g_i) = h_i, i = 1, …, n. For such a homomorphism, we consider the problem of finding a generating set for its kernel, and, given h ∈ φ(K), finding g ∈ G such that φ(g) = h. Both problems are solved using matrix reduction in the group H × G. There is an algorithm that, given
• a subgroup K = ⟨g_1, …, g_n⟩ ≤ G,
• a list of elements h_1, . . .
, h_n ∈ H, such that the mapping g_i → h_i, 1 ≤ i ≤ n, extends to a homomorphism φ : K → H,
• optionally, an element h ∈ H guaranteed to be in the image of φ,
computes binary coordinate tuples (relative to the Mal'cev basis A) for
(i) a generating set X for the kernel of φ, and
(ii) an element g ∈ G such that φ(g) = h.

In fact, it is not necessary to assume in advance that the mapping g_i → h_i extends to a homomorphism. Instead, we can check whether this is the case within the same time and space bounds and, if not, output 'Not a homomorphism' together with a word r = g_{i_1} · · · g_{i_s} representing 1 in G such that h_{i_1} · · · h_{i_s} does not represent 1 in H (as in Theorem 3.11, binary numbers are used to encode the exponents appearing in r). We can apply Theorem 4.1 to solve the search version of the word problem in logspace and quasilinear time.

Proof. Let G = ⟨X | R⟩. Denote by F(X) the free nilpotent group on X of class c, by N the normal closure of R in F(X), and by φ the canonical epimorphism F(X) → G. Using Theorem 4.1, compute the full-form sequence Y = (y_1, …, y_s) for N = ker φ. By exhaustive search, write each y_i as a product of conjugates of elements of R. Note that this step is a precomputation. Since w represents the trivial element, w is in N, so we run the algorithm from Theorem 3.8 in order to express w in the form w = y_1^{γ_1} · · · y_s^{γ_s}. Then substitute the expression of each y_i as a product of conjugates of relators of G. The space complexity, time complexity, and length bound are as given in Theorem 3.8, noting that N is fixed and hence Remark 3.5 applies, giving O(L log^2 L) rather than O(L log^3 L).

In order to solve the compressed conjugacy problem, we need a compressed version of Theorem 4.1, in which all input and output words are encoded as straight-line programs. We follow the same algorithmic steps, using Lemma 3.7 to compute W in Mal'cev coordinate form and Theorem 3.10 to compute β_1, …, β_r.

4.2. Conjugacy problem and centralizers.
The decidability of the conjugacy problem in finitely generated nilpotent groups has been known since [33], and [7] proved that every polycyclic-by-finite group is conjugacy separable: if two elements are not conjugate, then there exists a finite quotient in which they are not conjugate. While this fact leads to an enumerative solution to the conjugacy problem, a much more practical approach, using matrix reduction and homomorphisms, was given in [35]. The computational complexity of this algorithm, however, was not analyzed. We show here that it may be run using logarithmic space, and that the compressed-word version runs in polynomial time. A necessary step in the solution is the computation of centralizers, which is achieved by induction on the nilpotency class c of G.

Theorem 4.5. Let G be a finitely generated nilpotent group. There is an algorithm that, given g ∈ G, computes a generating set X (in the form of the corresponding binary coordinate tuples) for the centralizer of g in G. The algorithm runs in space logarithmic in L = |g| and time O(L log^2 L), X contains at most m elements, and there is a degree (16m(c + 1)^2)^{3(c+1)m} polynomial function of L that bounds the length of each element of X.

Proof. Let G = Γ_0 ≥ Γ_1 ≥ … ≥ Γ_{c+1} = 1 be the lower central series of G (or another series satisfying (10)). We proceed by induction on c. If c = 1, then G is abelian and C(g) = G, so we return a_{11}, a_{12}, …, a_{1m_1}. Assume that the theorem holds for finitely presented nilpotent groups of class c − 1, in particular for G/Γ_c. We use the series {Γ_i/Γ_c}_{i=0}^{c} for G/Γ_c. Compute a generating set K = {k_1Γ_c, …, k_{m−m_c}Γ_c} for the centralizer of gΓ_c in G/Γ_c. Let J = ⟨k_1, …, k_{m−m_c}, a_{c1}, a_{c2}, …, a_{cm_c}⟩, which is the preimage of ⟨K⟩ under the homomorphism G → G/Γ_c. Define f : J → G by f(u) = [g, u]. Since u ∈ J, u commutes with g modulo Γ_c, hence [g, u] ∈ Γ_c and so Im(f) ⊆ Γ_c. We claim that f is a homomorphism.
Indeed, f(u_1 u_2) = [g, u_1 u_2] = [g, u_2][g, u_1][[g, u_1], u_2], but [g, u_1] ∈ Γ_c, hence [[g, u_1], u_2] ∈ Γ_{c+1} = 1, and [g, u_1] and [g, u_2] commute, both being elements of the abelian group Γ_c. The centralizer of g is precisely the kernel of f : J → Γ_c, since if h commutes with g, then hΓ_c ∈ ⟨K⟩, so h ∈ J. We compute a generating set using Theorem 4.1.

Complexity and length bound. We proceed to prove the length bound by induction on c. For c = 1, the algorithm returns a single symbol. In the inductive case, the algorithm is invoked with input gΓ_c, which has size L. Each returned generator k_i has length bounded by some number κ which is a polynomial function of L of degree (16mc^2)^{3cm}. Since for each i = 1, …, m − m_c, [g, k_i] has length at most 4κ, each input-output pair (k_i, [g, k_i]) or (a_{cj}, [g, a_{cj}]) has length at most 5κ, hence the total input to the kernel algorithm of Theorem 4.1 has length bounded by 5mκ. The length of the Mal'cev basis of Γ_c is m_c ≤ m, and its nilpotency class is 1, so we use 2m and c + 1 in place of m and c in the degree bound of Theorem 4.1. Hence each output generator has length bounded by a polynomial function of L of degree (16mc^2)^{3cm} · 2m(8(c + 1)^2)^{2m} < (16m(c + 1)^2)^{3(c+1)m}, as required. The logarithmic space bound follows from the fact that in each recursive call, the number c of which is constant, the total size of the input is bounded by a polynomial function of L and the kernel algorithm runs in logarithmic space. The time complexity of O(L log^2 L) arises entirely from the computation of the Mal'cev coordinates of gΓ_c. Indeed, the total bit-size of the coordinate tuple of g is logarithmic in L and each invocation of the kernel algorithm is made with at most m elements of bit-size logarithmic in a polynomial function of L, hence logarithmic in L, so each of the c recursive calls terminates in time O(log^3 L) by Theorem 4.4.
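The two facts driving the proof, that f(u) = [g, u] lands in Γ_c and is a homomorphism on J, can be checked concretely in the Heisenberg group UT_3(Z) (class c = 2). This is our own illustration, not part of the algorithm; mat(a, c, b) denotes the upper unitriangular matrix with a, c on the superdiagonal and b in the corner.

```python
# Our own concrete check in the Heisenberg group UT_3(Z), class c = 2:
# for u in J, f(u) = [g, u] lies in the center, f is a homomorphism,
# and C(g) = ker f. Commutator convention: [X, Y] = X^-1 Y^-1 X Y.

def mat(a, c, b):
    return [[1, a, b], [0, 1, c], [0, 0, 1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(X):
    a, c, b = X[0][1], X[1][2], X[0][2]
    return mat(-a, -c, a * c - b)

def comm(X, Y):
    return mul(mul(inv(X), inv(Y)), mul(X, Y))

g = mat(2, 3, 0)
f = lambda u: comm(g, u)
u1, u2 = mat(1, 0, 5), mat(0, 1, -2)

# f is a homomorphism: the values are central, so the correction term
# [[g, u1], u2] vanishes and [g, u1*u2] = [g, u1]*[g, u2]
assert f(mul(u1, u2)) == mul(f(u1), f(u2))
# f(u) depends only on u mod the center: [g, u] = z^(2*c_u - 3*a_u),
# so ker f = { (a, c, b) : 2c = 3a }, e.g. (2, 3, b) for any b
assert f(mat(2, 3, 7)) == mat(0, 0, 0)
```

Here the kernel computation of Theorem 4.1 would return generators for C(g) = ⟨g, z⟩, the elements whose images in the abelianization are proportional to that of g, together with the center.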
We also require a compressed version of Theorem 4.5 (Theorem 4.6 below). We may now solve the conjugacy and conjugacy search problems in logspace, again by induction on the nilpotency class of G. Theorem 4.7. Let G be a finitely presented nilpotent group. There is an algorithm that, given g, h ∈ G, either (i) produces a binary coordinate tuple for u ∈ G such that g = u^{−1}hu, or (ii) determines that no such element u exists. The algorithm runs in space logarithmic in L = |g| + |h| and time O(L log^2 L), and the word length of u is bounded by a degree 2^m(6mc^2)^{m^2} polynomial function of L. Proof. Algorithm. We proceed by induction on c. If c = 1, then G is abelian and g is conjugate to h if and only if g = h. If so, we return u = 1. Now assume c > 1, and that the theorem holds for any nilpotent group of class c − 1, in particular for G/Γ_c. Apply the algorithm to gΓ_c and hΓ_c, using the series {Γ_i/Γ_c}, i = 0, . . . , c, for G/Γ_c. If these elements are not conjugate, then g and h are not conjugate and we return 'No'. Otherwise, we obtain vΓ_c ∈ G/Γ_c such that gΓ_c = h^v Γ_c. Let φ : G → G/Γ_c be the canonical homomorphism, J = φ^{−1}(C(gΓ_c)), and define f : J → Γ_c by f(x) = [g, x]. As in the proof of Theorem 4.5, the image of f is indeed in Γ_c and f is a homomorphism. We claim that g and h are conjugate if and only if g^{−1}h^v ∈ f(J). Indeed, if there exists w ∈ G such that g = h^{vw}, then 1 · Γ_c = g^{−1}w^{−1}h^v w · Γ_c = [g, w] · Γ_c, hence w ∈ J, so w^{−1} ∈ J as well. Then g^{−1}h^v = [g, w^{−1}] ∈ f(J), as required. The converse is immediate. So it suffices to express, if possible, g^{−1}h^v as [g, w] with w ∈ J, in which case the conjugator is u = vw^{−1}. Compute a generating set {w_1 Γ_c, . . . , w_{m−m_c} Γ_c} for C(gΓ_c) using Theorem 4.6. Then J is generated by {w_1, . . . , w_{m−m_c}, a_{c1}, a_{c2}, . . . , a_{cm_c}}. Compute Coord(g^{−1}h^v) and use Theorem 3.10 to determine whether g^{−1}h^v ∈ f(J), and if so use Theorem 4.4 to find w ∈ G such that g^{−1}h^v = f(w). Return u = vw^{−1}.
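To make the reduction tangible, here is a toy instance (my own illustration, not from the paper) in the integer Heisenberg group: conjugating g = elt(a, b, c) by elt(x, y, z) changes only the middle coordinate, by az − cx, so deciding conjugacy of elt(a, b, c) and elt(a, b', c) reduces to whether gcd(a, c) divides b' − b, and a conjugator is read off an extended-Euclid certificate, mirroring the membership test in f(J).

```python
# Sketch: conjugacy in the Heisenberg group as a divisibility test.
# g^w = g * [g, w], and [g, elt(x, y, z)] shifts the corner entry by
# a*z - c*x, whose value set is gcd(a, c) * Z.

def ext_gcd(a, b):
    # (d, s, t) with s*a + t*b = d = gcd(a, b), for a, b >= 0
    if b == 0:
        return (a, 1, 0)
    d, s, t = ext_gcd(b, a % b)
    return (d, t, s - (a // b) * t)

def conjugator(a, b, c, b2):
    # find (x, z) with a*z - c*x == b2 - b, if possible (a, c >= 0)
    d, s, t = ext_gcd(a, c)     # s*a + t*c = d
    diff = b2 - b
    if d == 0:
        return (0, 0) if diff == 0 else None
    if diff % d:
        return None
    k = diff // d
    return (-t * k, s * k)      # (x, z)

sol = conjugator(2, 5, 3, -7)   # is elt(2,5,3) conjugate to elt(2,-7,3)?
assert sol is not None
x, z = sol
assert 2 * z - 3 * x == -7 - 5
assert conjugator(2, 5, 2, 6) is None   # gcd(2,2)=2 does not divide 1
```

The divisibility check plays the role of Theorem 3.10 (membership in the image of f), and the Bezout coefficients play the role of Theorem 4.4 (an explicit preimage).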
Complexity and length of u. We first prove the length bound by induction on c. For c = 1, we have u = 1. In the inductive case, each w_i has length bounded by a degree (16mc^2)^{3cm} polynomial function of L, and v has length bounded by a degree (16mc^3)^m polynomial function of L. Take δ = (16mc^3)^{3cm}, which bounds both of these degrees. Then the input to Theorem 4.1, which consists of the pairs (w_i, [g, w_i]) for i = 1, . . . , m − m_c, the pairs (a_{cj}, [g, a_{cj}]) for j = 1, . . . , m_c, and the element g^{−1}h^v, also has length bounded by a degree δ polynomial function of L. Hence w has length bounded by a polynomial function of degree (16mc^3)^{3cm} · 2m(8(c+1)^3)^m < (16m(c+1)^3)^{3(c+1)m}. Since this is greater than the degree bound for v, the output u = vw^{−1} satisfies this degree bound. Logarithmic space complexity follows immediately from the fact that the conjugator length only grows by a polynomial function of L and the depth of the recursion is constant. The time complexity arises entirely from the computation of Mal'cev coordinates in Theorem 4.5 and of g^{−1}h^v. Indeed, Theorems 3.10 and 4.4 are each invoked with a constant number of inputs, each having bit-size O(log L), and therefore their time complexity is O(log^3 L). We solve the compressed version of the conjugacy problem in the same way, using the compressed version of the centralizer algorithm. Our solution to the conjugacy problem in nilpotent groups allows us to take advantage of results in [4], where Bumagin showed that a group G hyperbolic relative to finitely generated subgroups P_1, . . . , P_k has polynomial-time solvable conjugacy (search) problem whenever the parabolic subgroups P_1, . . . , P_k have polynomial-time solvable conjugacy (search) problem. Together with Theorem 4.7 this immediately gives the following result. Theorem 4.9. Let G be a finitely generated group hyperbolic relative to finitely generated nilpotent subgroups P_1, . . . , P_k.
Then the word problem, the conjugacy problem, and the conjugacy search problem in G can be solved in polynomial time. Moreover, the time complexity of the word and conjugacy problems is O(L^3 log L), where L is the length of the input. Proof. Note that [4, Theorem 1.5] allows us to obtain specific estimates on the time complexity of the above algorithms in terms of the complexity of the word and conjugacy (search) problems in P_1, . . . , P_k. In a given nilpotent group, on an input of length L, the time complexity of the word problem is O(L) by [13], and that of the conjugacy problem is O(L log^2 L) by Theorem 4.7. By [4, Theorem 1.5], we obtain that the time complexity of the conjugacy problem in G is bounded by O(L^3 log L). For the conjugacy search problem, the bound depends also on C_s(L), a bound on the time complexity of the conjugacy search problem in all parabolic subgroups P_i; in our case, that gives a polynomial of a higher degree, according to Remark 3.6. Presentation-uniform algorithms The algorithms presented in the previous sections do not include the nilpotent group G as a part of the input. We now consider problems which do take G as a part of the input. Let N_c be the class of nilpotent groups of nilpotency class at most c. We consider groups in N_c presented using at most a fixed number, r, of generators. In the algorithms in this section, the integers c and r are constants. First, we use Theorem 3.4 to find consistent nilpotent presentations for such groups. Proposition 5.1. Let c and r be fixed integers. There is an algorithm that, given a finite presentation with r generators of a group G in N_c, produces a consistent nilpotent presentation of G and an explicit isomorphism, in space logarithmic in the size L of the given presentation. Further, the size of the presentation is bounded by a polynomial function of L, and if binary numbers are used in the output relators then the algorithm runs in time O(L log^3 L). Proof. Let G be presented as G = ⟨X | R⟩.
Let F = F (X) be the free nilpotent group of class c on generators X. As a precomputation, produce a consistent nilpotent presentation Y | S for F . One may do this in such a way that X ⊂ Y and elements of Y \ X are iterated commutators (so-called 'basic commutators') of elements of X. Consider the natural surjection φ : F → G and let N = ker(φ), which is the normal closure of R in F . Denoting R = {r 1 , . . . , r k }, N is generated by iterated commutators [. . . [[r i , x 1 ], x 2 ], . . . , x j ], where i = 1, . . . , k, j ≤ c, and x 1 , . . . , x j ∈ X ∪ X −1 . The total length of these generators is linear in L since c and r are constant. We now produce this generating set and apply Theorem 3.4 in F with this set, producing the full-form sequence T for N . Now G ≃ Y | S ∪ T , and we claim that this is a consistent nilpotent presentation. Since Y | S is a nilpotent presentation and the elements of T add relators of the form (2), the presentation is nilpotent. To prove that it is consistent, suppose some y i ∈ Y has order α i modulo y i+1 , . . . , y m in Y | S ∪ T . Since the order is infinite in F , there must be element of the form y αi i y αi+1 i+1 · · · y αm m in N . But then, by Lemma 3.1, T must contain an element y α ′ i i y α ′ i+1 i+1 · · · y α ′ m m where α ′ i divides α i . Hence α i cannot be smaller than α ′ i and so the presentation is consistent. The space and time complexity, and the polynomial bound on the size of the presentation, follow immediately from Theorem 3.4 since c is fixed and m depends only on c and r. If G is restricted to the class N c and presented with a fixed number r of generators, the above result can be employed to produce presentation-uniform versions of our algorithms for problems (I)-(VI) (see list on p. 3) that run in logarithmic space and quasi-linear time. Theorem 5.2. Let Π denote any of the problems (I)-(VI). 
For all c, r ∈ N, there is an algorithm that, given a finite presentation X|R with |X| ≤ r of a group in N c and input of Π as words over X, solves Π in X|R on that input in logarithmic space and quasi-linear time. In the case of (III), the second group may be specified in the input by a presentation but must also be in N c and use r generators. Proof. Let L denote the total input size. By Proposition 5.1, we can produce a consistent nilpotent presentation for G, of size polynomial in L, in logarithmic space and time O(L log 3 L). The number of generators m of this presentation depends only on c and r, as it is equal to the number of generators of the precomputed nilpotent presentation of the free nilpotent group F of class c. We compute, in advance, the functions p i and q i for any nilpotent presentation on m generators with the exponents α ijl , β ijl , µ il , e i appearing as variables (see the discussion following Lemma 2.2). Substituting the correct values for these exponents (obtained from the nilpotent presentation of G) we obtain the functions p i and q i for the specific group G. Note that the magnitude of all the numbers α ijl , β ijl , µ il , e i is bounded by a polynomial function of L, hence they may be encoded as O(log L)-bit numbers. We note that Theorem 2.3 is satisfied for the obtained presentation, by Remark 2.4. The constant κ depends on the presentation of G, but may be seen from the proof of Theorem 2.3 to be bounded by a polynomial function of L of degree depending on c and r. Hence the magnitude of each coordinate of a word w is bounded by a polynomial function of L of constant degree. One may substitute this bound in subsequent arguments in place of κ|w| i without affecting the logspace or quasi-linear time nature of the computations. In Lemma 3.3, the constants C and K will be greater, but remain bounded by a constant depending on c and r. 
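For a concrete feel for the multiplication polynomials p_i (my own toy example, not from the paper), consider the integer Heisenberg group with Mal'cev basis (a, b, c), where c = [a, b] is central: every element is uniquely a^x b^y c^z, and on coordinate triples multiplication is the polynomial map (x_1, y_1, z_1) · (x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2 − x_2 y_1). The code checks this against the 3×3 matrix model.

```python
# Sketch: Mal'cev coordinates and multiplication polynomials in the
# Heisenberg group. elt(p, q, r) has superdiagonal p, r and corner q.

def elt(p, q, r):
    return [[1, p, q], [0, 1, r], [0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def coords(M):
    # unique (x, y, z) with M = a^x b^y c^z, where a = elt(1,0,0),
    # b = elt(0,0,1), c = elt(0,1,0)
    x, y = M[0][1], M[1][2]
    return (x, y, M[0][2] - x * y)

def p(v, w):
    # multiplication polynomial on coordinate triples
    (x1, y1, z1), (x2, y2, z2) = v, w
    return (x1 + x2, y1 + y2, z1 + z2 - x2 * y1)

a, b = elt(1, 0, 0), elt(0, 0, 1)
ba = mul(b, a)
assert coords(ba) == (1, 1, -1)               # ba = a b c^-1
assert p(coords(b), coords(a)) == coords(ba)  # polynomial matches matrices
```

For a fixed presentation the exponents appearing in p are constants; in the presentation-uniform setting above they become O(log L)-bit parameters substituted into a precomputed template.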
Specifically, the algorithms of Theorems 2.7, 3.4, 3.8, 3.11, 4.1, 4.5, 4.7 as well as Corollaries 3.9 and 3.14 run in logspace and time O(L log^3 L) when G (from class N_c with r generators) is included in the input. The length bounds given in these results do not hold as stated, but do hold with polynomials of some higher (but still constant for fixed c and r) degree. 2.1. Nilpotent groups and Mal'cev coordinates. A group G is called nilpotent if it possesses a central series, i.e. a normal series Remark 2.8. (a) The time complexity is in fact O(L · f(log L)), where f(k) is the complexity of multiplying two k-bit numbers. Several bounds that are tighter than the k^2 bound obtained from 'long multiplication' are known. (b) By employing a binary counting variable, it is straightforward to output the Mal'cev normal form of w as a group word in space O(log L). However, in that case the algorithm will run in time O(max{L log^2 L, L^c}), where c is the nilpotency class of G, because of the size of the output. Lemma 2.10. Fix a positive integer k. Given integer vectors v_1, . . . , v_k representing the coordinates of group elements g_1, . . . , g_k and integers c_1, . . . , c_k, one may compute Coord(g_1^{c_1} g_2^{c_2} · · · g_k^{c_k}) in time O(L^2) and space O(L), where L is the maximum number of bits required to represent any of the v_i or any of the c_i, for i = 1, . . . , k. Theorem 2.11. Let G be a finitely generated nilpotent group with Mal'cev basis set A. There is an algorithm that, given a straight-line program A over A^±, computes in time O(L^3) the coordinate tuple Coord(eval(A)), where L = |A|. Each coordinate of eval(A) in the output is expressed as an O(L)-bit number. Proof. We may assume that A is a lower central Mal'cev basis. Let B be any non-terminal of A. Since |eval(B)| < 2^L, Theorem 2.3 gives the bound (12) |Coord_i(eval(B))| ≤ κ2^{Lc} for each i = 1, . . . , m, and so Coord_i(eval(B)) may be expressed as an O(L)-bit number.
For every i, we compute Coord_i(eval(B)) by induction. If the production for B has the form B → x, we simply report 0 or ±1, as the case may be. Otherwise, we have B → CD and we assume that Coord_j(eval(C)) and Coord_j(eval(D)) have been computed for j = 1, . . . , m, and each coordinate is an O(L)-bit number. Then we compute Coord_i(eval(B)) by evaluating p_i at Coord(eval(C)) and Coord(eval(D)). Since the inputs are O(L)-bit numbers, evaluation of p_i may be done in time (sub)quadratic in L. Repeating for each of the L non-terminals of A gives the O(L^3) bound. Lemma 3.1. Let H ≤ G. There is a unique full sequence U = (h_1, . . . , h_s) that generates H. Further, H = {h_1^{β_1} · · · h_s^{β_s} | β_i ∈ Z and 0 ≤ β_i < e_{π_i} if π_i ∈ T}, and s ≤ m. We define three operations on tuples (h_1, . . . , h_n) of elements of G, and the corresponding operations on the associated matrix, with the goal of converting (h_1, . . . , h_n) to the unique full-form sequence for H = ⟨h_1, . . . , h_n⟩. (1) Swap h_i with h_j. This corresponds to swapping row i with row j. (2) Replace h_i by h_i h_j^l (i ≠ j, l ∈ Z). This corresponds to replacing row i by Coord(h_i h_j^l). (3) Add or remove a trivial element from the tuple. This corresponds to adding or removing a row of zeros; or (3′) a row of the form (0 . . . 0 e_i α_{i+1} . . . α_m), where i ∈ T and a Theorem 3.4. Let G be a finitely generated nilpotent group with lower-central Mal'cev basis A. There is an algorithm that, given h_1, . . . , h_n ∈ G, computes the full form of the associated matrix of coordinates (relative to A) and hence the unique full-form sequence (g_1, . . . , g_s) generating ⟨h_1, . . . , h_n⟩. The algorithm runs in space O(log L), where L = Σ_{i=1}^n |h_i|, and in time O(L log^3 L), and the total length of the elements g_1, . . . , g_s is bounded by a polynomial function of L of degree m(8c^2)^m. space and time O(L log^3 L).
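The bottom-up evaluation of Coord on productions B → CD can be sketched in a toy instance (my own illustration, not the paper's code): a straight-line program of size n can denote a word of length 2^n, yet its coordinates are obtained in n applications of the multiplication polynomial. Here n doublings of the word ab in the Heisenberg group, whose coordinate multiplication is (x_1, y_1, z_1)(x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2 − x_2 y_1), are processed without ever expanding the word.

```python
# Sketch: coordinates of eval(A_n) for the SLP
#   A_0 -> ab,  A_i -> A_{i-1} A_{i-1},
# so eval(A_n) = (ab)^(2^n), computed production by production.

def p(v, w):
    # Heisenberg multiplication polynomial on coordinate triples
    (x1, y1, z1), (x2, y2, z2) = v, w
    return (x1 + x2, y1 + y2, z1 + z2 - x2 * y1)

coord = p((1, 0, 0), (0, 1, 0))   # A_0 -> ab
n = 20
for _ in range(n):                # A_i -> A_{i-1} A_{i-1}
    coord = p(coord, coord)

k = 2 ** n                        # eval(A_n) = (ab)^k, a word of length 2k
assert coord == (k, k, -k * (k - 1) // 2)
```

Note the coordinates grow only polynomially in bit-size (here O(n) bits), matching the bound (12).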
Computing the linear expression of a GCD is performed only on a bounded number of integers (the number being bounded by the number of rows, see Step 1 for the worst case), each encoded with O(log L) bits. It follows that the procedure for doing so described in Lemma 3.2 can be carried out in time O(log^3 L) and space O(log L). Computation of coordinates is performed initially for each h_i using Theorem 2.7. Subsequent coordinate computations involve finding the coordinates of a product of a bounded number of factors raised to powers that are O(log L)-bit integers (no larger than the greatest entry in the matrix). The coordinates of each factor are known, hence the computations are performed in time O(log^2 L) and space O(log L) by Lemma 2.10. The other operations (swapping rows, removing zero rows, locating the pivot, etc.) are trivial. Finally, for each reduction phase (computing B_l from B_{l−1}), the number of the above operations (GCD, coordinates, etc.) is bounded. The number of phases is bounded by n ≤ L, hence the time complexity is O(L log^3 L). Lemma 3.7. The compressed version of Theorem 3.4, in which the input elements h_1, . . . , h_n are given either as straight-line programs or binary Mal'cev coordinate tuples, runs in time O(nL^3), where L is the total input size. Each g_i is output as a straight-line program or binary Mal'cev coordinate tuple, of size polynomial in L. h^{(j)} being defined below. For j = 1, . . . , s, do the following. If β_l < π_j, then h ∉ H. Otherwise, check whether α_{jπ_j} divides β^{(j)}_{π_j}. If not, then h ∉ H. If yes, let of L. First observe that each g_i has length bounded by a degree m(8c^2)^m polynomial function of L by Lemma 3.3. For s = 1 we have, by Theorem 2.3, |γ_1| ≤ |β^{(1)}_{π_1}| ≤ κ|h|^c ≤ κL^c, hence h = g_1^{γ_1} has length bounded by a degree c + m(8c^2)^m polynomial function of L. Now assume that g_1^{γ_1} · · · g_{s−1}^{γ_{s−1}} has length bounded by a degree δ(s−1) polynomial function of L. Then |h^{(s)}| = |g Theorem 3.10. Let G be a finitely generated nilpotent group. There is an algorithm that, given compressed words A_1, . . . , A_n, B (or binary Mal'cev coordinate tuples) over G, decides whether or not eval(B) belongs to the subgroup generated by eval(A_1), . . . , eval(A_n). The algorithm runs in time O(nL^3), where L = |B| + |A_1| + . . . + |A_n|. As in Theorem 3.8, the algorithm may also compute the unique expression of eval(B) in terms of the standard-form sequence for ⟨eval(A_1), . . . , eval(A_n)⟩. Theorem 3.11. Let G be a finitely generated nilpotent group. There is an algorithm that, given h_1, . . . , h_n ∈ G, computes a consistent nilpotent presentation for the subgroup H = ⟨h_1, . . . , h_n⟩. The algorithm runs in space logarithmic in L = Σ_{i=1}^n |h_i| and time O(L log^3 L), the size of the presentation is bounded by a degree 2m(8c^3)^m polynomial function of L, and binary numbers are used to encode exponents appearing in relators of the presentation. Theorem 3.13. Let G be a finitely presented nilpotent group. There is an algorithm that, given a finite set A_1, . . . , A_n of straight-line programs over G (or binary Mal'cev coordinate tuples), computes a presentation for the subgroup ⟨eval(A_1), . . . , eval(A_n)⟩. The algorithm runs in time O(nL^3), where L = Σ_{i=1}^n |A_i|, and the size of the presentation is bounded by a polynomial function of L. Corollary 3.14. The algorithm of Theorem 3.11 may be modified to produce a presentation of H (without binary numbers) of size logarithmic in L. The algorithm runs in logarithmic space and time O(L log^3 L). Theorem 4.1. Fix finitely generated nilpotent groups G and H, with lower-central Mal'cev bases A and B, and let c be the sum of their nilpotency classes and m = |A| + |B|. Theorem 4.3. Let G be a finitely presented nilpotent group.
There is an algorithm that, given a word w in the generators of G which represents the identity element, produces a word w′ written as a product of conjugates of relators of G such that w′ is freely equivalent to w. The algorithm runs in space logarithmic in L = |w| and time O(L log^2 L), the length of w′ is bounded by a degree 2m(8c^3)^m polynomial function of L, and binary numbers are used to encode exponents appearing in w′. Theorem 4.4. The compressed version of Theorem 4.1, in which all input words are given as straight-line programs or binary Mal'cev coordinate tuples and the output is given in the same form, runs in time O(nL^3). Theorem 4.5: the algorithm is the same, with a time complexity of O(L^3) arising both from the initial computation of Mal'cev coordinates and the fact that Theorem 4.4 is invoked with at most m elements of bit-size O(L). Theorem 4.6. Let G be a finitely generated nilpotent group. There is an algorithm that, given a straight-line program A over G (or a binary Mal'cev coordinate tuple), computes in time O(L^3) a generating set of the centralizer of eval(A) in G, where L = |A|. The generators are given as straight-line programs (or binary Mal'cev coordinate tuples) of size polynomial in L. Theorem 4.8. Let G be a finitely generated nilpotent group. There is an algorithm that, given two straight-line programs A and B over G (or binary Mal'cev coordinate tuples), determines in time O(L^3) whether or not eval(A) and eval(B) are conjugate in G, where L = |A| + |B|. If so, a straight-line program over G of size polynomial in L producing a conjugating element is returned. T(L) = max{O(L log^2 L), O(L^2 · L · log L)} = O(L^3 log L), where L is the length of the input.
The corresponding estimate in the case of conjugacy search problem, by [4, Theorem 5.10] is given by max{T (L), O(C s (L))}, The algorithm runs in space logarithmic in L = |h| + n i=1 (|h i | + |g i |) and time O(L log 3 L), X consists of at most m elements, and there is a degree m(8c 2 ) m polynomial function of L that bounds the word length of each element of X and a degree 2m(8c 3 ) m polynomial function of L that bounds the word length of g.Proof. Let c 1 be the nilpotency class of G and c 2 that of H and consider the nilpotent group H × G. From the lower central seriesNotice that this series has the property that [∆ i , ∆ j ] ≤ ∆ i+j , since both lower central series have this property and the subgroups H and G commute in H × G. LettingLet W = (v 1 u 1 , . . . , v s u s ) be the sequence in full form for the subgroup Q, where u i ∈ G and v i ∈ H. Let 0 ≤ r ≤ s be the greatest integer such that v r = 1 (with r = 0 if all v i are 1). Set X = (u r+1 , . . . , u n ) and Y = (v 1 , . . . , v r ). We claim that X is the full-form sequence for the kernel of φ and Y is the full-form sequence for the image.From the fact that W is in full form, it follows that both X and Y are in full form. Since Q = P , it follows that v i = φ(u i ) for i = 1, . . . , s. Hence X is contained in the kernel and Y in the image. Now consider an arbitrary element φ(g)g of Q.There exist integers β 1 , . . . , β s such that φ(g)g = (v 1 u 1 ) β1 · · · (v s u s ) βs = v β1 1 · · · v βr r · u β1 1 · · · u βs s . If φ(g) is any element of the image, the above shows that φ(g) = v β1 1 · · · v βr r , hence Y generates the image. If g is any element of the kernel, then 1 = φ(g) = v β1 1 · · · v βr r . Since the integers β 1 , . . . , β s are unique (taking 0 ≤ β i < e i when e i ∈ T ), we have β 1 = · · · = β r = 0. Consequently g = u β1 1 · · · u βs s = u βr+1 r+1 · · · u βs s , hence X generates the kernel.Then to solve (i), it suffices to compute W using Lemma 3.4 and return X (or {1} if r = s). 
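The abelian shadow of this kernel computation is already instructive (my own toy version, not the paper's algorithm): for φ : Z^2 → Z with φ(e_1) = p and φ(e_2) = q, row-reducing the 'graph' rows (p, 1, 0) and (q, 0, 1) over Z leaves a row with first entry 0 whose tail generates ker φ, and the reduction is a single extended-Euclid step.

```python
# Sketch: kernel and image data for φ: Z^2 -> Z, φ(e1) = p, φ(e2) = q,
# via the graph-subgroup rows (φ(g), g) -- the abelian case of the
# full-form computation used in the proof above.

def ext_gcd(a, b):
    # (d, s, t) with s*a + t*b = d = gcd(a, b), for a, b >= 0
    if b == 0:
        return (a, 1, 0)
    d, s, t = ext_gcd(b, a % b)
    return (d, t, s - (a // b) * t)

def kernel_and_image(p, q):
    d, s, t = ext_gcd(p, q)
    # combined row s*(p,1,0) + t*(q,0,1) = (d, s, t) generates the image d*Z;
    # clearing the first column of the original rows leaves a kernel row
    ker = (q // d, -p // d) if d else None
    return d, (s, t), ker

d, (s, t), ker = kernel_and_image(6, 10)
assert d == 2 and s * 6 + t * 10 == 2
assert 6 * ker[0] + 10 * ker[1] == 0    # (5, -3) generates the kernel
```

In the nilpotent case the same three row operations run over Mal'cev coordinate matrices instead of integer rows, with Γ_c playing the role of the last column block.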
To solve (ii), use Theorem 3.8 to express h as h = v_1^{β_1} · · · v_r^{β_r} and return g = u_1^{β_1} · · · u_r^{β_r}. The length and complexity bounds are as given in Lemma 3.4 and Theorem 3.8. Remark 4.2. Notice that if G, H are as in Theorem 4.1, it is possible to check, given lists of elements g_1, . . . , g_n ∈ G and h_1, . . . , h_n ∈ H, whether the mapping g_i → h_i, 1 ≤ i ≤ n, extends to a homomorphism φ : ⟨g_1, . . . , g_n⟩ = K → H. Indeed, by Theorem 3.11 we can compute a finite presentation for K, and then for each relator r of that presentation check whether the induced value φ(r) is trivial in H using the solution to the word problem in H, by Theorem 2.7. Note that both of the mentioned theorems run within the time and space bounds of Theorem 4.1. Therefore, in Theorem 4.1 we can omit the requirement that g_i → h_i, 1 ≤ i ≤ n, extends to a homomorphism.

References

[1] G. Baumslag, F. B. Cannonito, D. J. Robinson, and D. Segal, The algorithmic theory of polycyclic-by-finite groups, J. Algebra 142 (1991), no. 1, 118-149.
[2] N. Blackburn, Conjugacy in nilpotent groups, Proc. Amer. Math. Soc. 16 (1965), no. 1, 143-148.
[3] K. Bou-Rabee and D. Studenmund, Full residual finiteness growths of nilpotent groups, arXiv:1406.3763 [math.GR], 2014.
[4] I. Bumagin, Time complexity of the conjugacy problem in relatively hyperbolic groups, arXiv:1407.4528 [math.GR], 2014.
[5] V. Diekert, J. Kausch, and M. Lohrey, Logspace computations in graph groups and Coxeter groups, LATIN 2012: Theoretical Informatics (D. Fernández-Baca, ed.), Lecture Notes in Computer Science, vol. 7256, Springer Berlin Heidelberg, 2012, pp. 243-254.
[6] M. Elder, G. Elston, and G. Ostheimer, On groups that have normal forms computable in logspace, J. Algebra 381 (2013), 260-281.
[7] E. Formanek, Conjugate separability in polycyclic groups, J. Algebra 42 (1976), no. 1, 1-10.
[8] M. Gromov, Groups of polynomial growth and expanding maps, Publ. Math. IHES 53 (1981), 53-73.
[9] F. Grunewald and D. Segal, Some general algorithms. II. Nilpotent groups, Ann. of Math. (2) 112 (1980), no. 3, 585-617.
[10] P. Hall, The Edmonton notes on nilpotent groups, Queen Mary College Mathematics Notes, Mathematics Department, Queen Mary College, London, 1969.
[11] N. Haubold, M. Lohrey, and C. Mathissen, Compressed decision problems for graph products and applications to (outer) automorphism groups, Internat. J. Algebra Comput. 22 (2012), no. 8, 1240007.
[12] D. Holt, M. Lohrey, and S. Schleimer, Compressed decision problems in hyperbolic groups, arXiv:1808.06886, 2018.
[13] D. F. Holt and S. Rees, Solving the word problem in real time, J. London Math. Soc. 63 (2001), 623-639.
[14] A. Jeż, Compressed membership for NFA (DFA) with compressed labels is in NP (P), 29th International Symposium on Theoretical Aspects of Computer Science (STACS), LIPIcs. Leibniz Int. Proc. Inform., vol. 14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Wadern, 2012, pp. 136-147.
[15] M. I. Kargapolov and Ju. I. Merzljakov, Fundamentals of the theory of groups, Graduate Texts in Mathematics, vol. 62, Springer-Verlag, New York, 1979.
[16] M. I. Kargapolov, V. N. Remeslennikov, N. S. Romanovskii, V. A. Roman'kov, and V. A. Curkin, Algorithmic questions for σ-powered groups, Algebra i Logika 8 (1969), 643-659 (Russian).
[17] D. König and M. Lohrey, Evaluating matrix circuits, arXiv:1502.03540 [cs.CC], 2015.
[18] C. R. Leedham-Green and L. H. Soicher, Symbolic collection using Deep Thought, LMS J. Comput. Math. 1 (1998), 9-24.
[19] R. J. Lipton and Y. Zalcstein, Word problems solvable in logspace, J. ACM 24 (1977), no. 3, 522-526.
[20] M. Lohrey and S. Schleimer, Efficient computation in groups via compression, Computer Science in Russia (CSR 2007), Lecture Notes in Computer Science, vol. 4649, Springer Berlin/Heidelberg, 2007, pp. 249-258.
[21] M. Lohrey, Word problems on compressed words, Automata, Languages and Programming (ICALP 2004), Lecture Notes in Comput. Sci., vol. 3142, Springer, Berlin, 2004, pp. 906-918.
[22] M. Lohrey, Algorithms on SLP-compressed strings: a survey, Groups Complex. Cryptol. 4 (2012), no. 2, 241-299.
[23] J. Macdonald, Compressed words and automorphisms in fully residually free groups, Internat. J. Algebra Comput. 20 (2010), no. 3, 343-355.
[24] K. Madlener and F. Otto, Pseudo-natural algorithms for the word problem for finitely presented monoids and groups, J. Symbolic Comput. 1 (1985), no. 4, 383-418.
[25] B. S. Majewski and G. Havas, The complexity of greatest common divisor computations, Algorithmic Number Theory (Ithaca, NY, 1994), Lecture Notes in Comput. Sci., vol. 877, Springer, Berlin, 1994, pp. 184-193.
[26] A. I. Mal'tsev, On homomorphisms onto finite groups, Amer. Math. Soc. Transl. Ser. 2, 119 (1983), 67-79; translation from Uch. Zap. Ivanov. Gos. Pedagog. Inst. 18 (1958), 49-60.
[27] A. G. Miasnikov and M. Sohrabi, Elementary bilinearization and coordinatization of finitely generated nilpotent groups, arXiv:1311.1391v3 [math.GR], 2016.
[28] A. Mostowski, Computational algorithms for deciding some problems for nilpotent groups, Fund. Math. 59 (1966), no. 2, 137-152.
[29] A. Myasnikov, A. Nikolaev, and A. Ushakov, The Post correspondence problem in groups, J. Group Theory 17 (2014), no. 6, 991-1008.
[30] A. Myasnikov, A. Nikolaev, and A. Ushakov, Non-commutative lattice problems, J. Group Theory 19 (2016), no. 3, 455-475.
[31] W. Nickel, Matrix representations for torsion-free nilpotent groups by Deep Thought, J. Algebra 300 (2006), no. 1, 376-383.
[32] W. Plandowski, Testing equivalence of morphisms on context-free languages, Algorithms - ESA '94 (Utrecht), Lecture Notes in Comput. Sci., vol. 855, Springer, Berlin, 1994, pp. 460-470.
[33] V. N. Remeslennikov, Conjugacy in polycyclic groups, Algebra i Logika 8 (1969), 712-725.
[34] S. Schleimer, Polynomial-time word problems, Comment. Math. Helv. 83 (2008), no. 4, 741-765.
[35] C. C. Sims, Computation with finitely presented groups, Encyclopedia of Mathematics and its Applications, vol. 48, Cambridge University Press, Cambridge, 1994.
[36] S. Vassileva, Space and Time Complexity of Algorithmic Problems in Groups, Ph.D. thesis, McGill University, Montreal, 2013.

De Maisonneuve Blvd. W., Montreal, Quebec, Canada, H3G 1M8
Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030
DExT: Detector Explanation Toolkit
Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, and Matias Valdenegro-Toro
Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany · University of Bremen, Bremen, Germany · University of Groningen, Groningen, The Netherlands

Abstract. State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.

Introduction

Object detection is imperative in applications such as autonomous driving [15], medical imaging [5], and text detection [18]. An object detector outputs bounding boxes to localize objects and categories for objects of interest in an input image. State-of-the-art detectors are deep convolutional neural networks [54] with high accuracy and fast processing compared to traditional detectors.
However, convolutional detectors are considered black boxes [37] due to over-parameterization and hierarchically non-linear internal computations. This non-intuitive decision-making process restricts the capability to debug and improve detection systems, decreases user trust in model predictions, and consequently limits the use of detectors in safety-critical applications. In addition, the process of verifying the model and developing secure systems is challenging [12] [52]. Numerous previous studies state that interpreting detectors by explaining the model decisions is crucial to earning the user's trust [48] [32] [40], estimating model accountability [20], and developing secure object detector systems [12] [52].
DOI: 10.48550/arXiv.2212.11409 · arXiv:2212.11409 [cs.CV]
Keywords: Object detectors · Explainability · Quantitative evaluation · Human-centric evaluation · Saliency methods
With a range of users utilizing detectors for safety-critical applications, providing humanly understandable explanations for the category and each bounding box coordinate prediction together is essential. In addition, as object detectors are prone to failures due to non-local effects [30], visualization techniques for detector explanations should integrate explanations for multiple objects in a single image at the same time. Previous saliency map-based methods explaining detectors [26] [46] [17] focus on classification or localization decisions individually, not both at the same time.
In this paper, we consider three deficits in the literature: methods to explain each category and bounding box coordinate decision made by an object detector, visualizing explanations of multiple bounding boxes in the same output explanation image, and a software toolkit integrating the previously mentioned aspects. This work concentrates on providing individual humanly understandable explanations for the bounding box and classification decisions made by an object detector for any particular detection, using gradient-based saliency maps. Figure 1 provides an illustration of the proposed solution, which considers the complete output information to generate explanations for the detector decisions. Explanations for all the decisions can be summarized by merging the saliency maps, enabling high-level analysis and increasing the flexibility to analyze detector decisions, improving model transparency and trustworthiness. We suggest methods to combine and visualize explanations of different bounding boxes in a single output explanation image, as well as an approach to analyze detector errors using explanations. This work contributes:
- DExT, a software toolkit to explain both decisions (bounding box regression and object classification jointly), evaluate explanations, and identify errors made by an object detector.
- A simple approach to extend gradient-based explanation methods to explain the bounding box and classification decisions of an object detector.
- An approach to identify reasons for detector failure using explanation methods.
- Multi-object visualization methods to summarize explanations for all output detections in a single output explanation.
- An evaluation of gradient-based saliency maps for object detector explanations, including quantitative results and a human user study.
We believe our work reveals some major conclusions about object detector explainability.
Overall quantitative metrics do not indicate that a particular object detector is more interpretable, but visual inspection of explanations indicates that recent detectors like EfficientDet seem to be better explained using gradient-based methods than older detectors (like SSD or Faster R-CNN, shown in Figure 2), based on the lack of artifacts in their heatmaps. The detector backbone has a large impact on explanation quality (Figure 6). The user study (Section 4.4) reveals that humans clearly prefer the convex polygon representation, and SmoothGrad with Guided Backpropagation provides the best detector explanations, which is consistent with the quantitative metrics. We believe these results are important for practitioners and researchers of object detection interpretability. The overall message is to explain both object classification and bounding box decisions, and it is possible to combine all explanations into a single image using the convex polygon representation of the heatmap pixels. The appendix of this paper is available at https://arxiv.org/abs/2212.11409.

Related Work

Interpretability is relatively underexplored in detectors compared to classifiers. There are post hoc [26] [46] [17] and intrinsic [21] [51] detector interpretability approaches. Detector Randomized Input Sampling for Explanation (D-RISE) [26] generates explanations for the complete detector output in a model-agnostic manner. However, the saliency map quality depends on the computation budget, the method is time consuming, and individual explanations for bounding boxes are not evaluated. Contrastive Relevance Propagation (CRP) [46] extends Layer-wise Relevance Propagation (LRP) [7] to individually explain the bounding box and classification decisions of the Single Shot MultiBox Detector (SSD). This procedure includes propagation rules specific to SSD.
Explain to Fix (E2X) [17] contributes a framework to explain SSD detections by approximating SHAP [24] feature importance values using Integrated Gradients (IG), Local Interpretable Model-agnostic Explanations (LIME), and Probability Difference Analysis (PDA) explanation methods. E2X identifies detection failures, such as false negative errors, using the generated explanations; however, individual explanations for bounding box and classification decisions are unavailable. The intrinsic approaches mainly focus on developing detectors that are inherently interpretable. Even though the explanations are provided for free, currently most of these methods are model-specific, do not provide any evaluation of the explanations generated, and include complex additional designs. Certain attention-based models such as the DEtection TRansformer (DETR) [10] and detectors using non-local neural networks [49] offer attention maps improving model transparency. A few previous works on attention reveal contradicting notions of using attention for interpreting model decisions: [35] and [19] illustrate that attention maps are not a reliable indicator of important input regions and that attention maps are not explanations, respectively. [8] reveal that saliency methods provide better explanations than attention modules. We select post hoc gradient-based explanation methods because they provide better model translucency and computational efficiency, do not affect model performance, and utilize the gradients in DNNs. Finally, saliency methods are widely studied in explaining DNN-based models [3]. A detailed evaluation of various detectors reporting robustness, accuracy, speed, inference time, as well as energy consumption across multiple domains has been carried out by [4]. In this work, we compare detectors from the perspective of explainability.
Proposed Approach

Explaining Object Detectors

This work explains various detectors using gradient-based explanation methods and evaluates different explanations for bounding box and classification decisions. The selected detectors are SSD512 (SSD) [23], Faster R-CNN (FRN) [28], and EfficientDet-D0 (ED0) [43]; the short-form tags are provided in brackets. SSD512 and Faster R-CNN are widely used single-stage and two-stage approaches, respectively. Explaining these traditional detectors will aid in extending the explanation procedure to numerous similar recent detectors. EfficientDet is a relatively recent state-of-the-art single-stage detector with higher accuracy and efficiency. It incorporates a multi-scale feature fusion layer called a Bi-directional Feature Pyramid Network (BiFPN). EfficientDet-D0 is selected to match the input size of SSD512. The variety of detectors selected aids in evaluating the explanation methods across different feature extractors such as VGG16 (SSD512), ResNet101 (Faster R-CNN), and EfficientNet (EfficientDet-D0). The gradient-based explanation methods selected in this work to explain detectors are Guided Backpropagation (GBP) [41], Integrated Gradients (IG) [42], SmoothGrad [39] + GBP (SGBP), and SmoothGrad + IG (SIG). GBP produces relatively less noisy saliency maps by obstructing the backward flow of negative gradients through a ReLU. For instance, an uncertainty estimate of the most important pixels influencing the model decisions can be carried out using GBP and certain uncertainty estimation methods [50]; this combines uncertainty estimation and interpretability to better understand DNN model decisions. IG satisfies the implementation invariance and sensitivity axioms that various other state-of-the-art interpretation methods fail. SmoothGrad aids in sharpening the saliency map generated by any interpretation method and improves the explanation quality.
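At their core, all four methods start from the gradient of a scalar output neuron with respect to the input image. The sketch below illustrates that core operation with a finite-difference gradient and toy linear "heads" standing in for a real detector (the weights, shapes, and names here are hypothetical illustrations, not DExT's API):

```python
import numpy as np

def saliency(scalar_fn, x, eps=1e-4):
    """Central finite-difference gradient of a scalar output w.r.t. each input pixel."""
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        grad[idx] = (scalar_fn(xp) - scalar_fn(xm)) / (2.0 * eps)
    return grad

rng = np.random.default_rng(0)
W_cls = rng.normal(size=(3, 16))   # toy 3-class logit head on a flattened 4x4 "image"
W_box = rng.normal(size=(4, 16))   # toy (x_min, y_min, x_max, y_max) regression head

x = rng.normal(size=(4, 4))
k = int(np.argmax(W_cls @ x.ravel()))                      # predicted class index

e_cls = saliency(lambda im: (W_cls @ im.ravel())[k], x)    # classification explanation
e_xmin = saliency(lambda im: (W_box @ im.ravel())[0], x)   # one box-coordinate explanation
```

For the linear heads above the gradient is exactly the corresponding weight row; a real implementation would use a framework's automatic differentiation instead of finite differences, with GBP, IG, or SmoothGrad modifying how that gradient is computed or aggregated.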
These four explanation methods explain a particular detector decision by computing the gradient of the predicted value at the output target neuron with respect to the input image. The object detector decisions for a particular detection are the bounding box coordinates (x_min, y_min, x_max, y_max) and the class probabilities (c_1, c_2, ..., c_k), where k is the total number of classes predicted by the detector. Usually these are output by heads at the last layer of the object detector. The classification head is denoted as model_cls(x), while the bounding box regression head is model_bbox(x). Considering that an explanation method computes a function expl(x, ŷ) of the input x and a scalar output prediction ŷ (one output layer neuron), a classification explanation e_cls is:

ĉ = model_cls(x),  k = argmax_i ĉ_i,  e_cls = expl(x, l_k).  (1)

A bounding box explanation consists of four different explanations, one for each bounding box component e_xmin, e_ymin, e_xmax, e_ymax:

x̂_min, ŷ_min, x̂_max, ŷ_max = model_bbox(x),  (2)
e_xmin = expl(x, x̂_min),  e_ymin = expl(x, ŷ_min),  (3)
e_xmax = expl(x, x̂_max),  e_ymax = expl(x, ŷ_max).  (4)

When explaining the bounding box coordinates, the box offsets predicted by an object detector are converted to normalized image coordinates before computing the gradient. For classification decisions, the logits l_k (before the softmax probability ĉ = softmax(l)) are used to compute the gradient. Figure 2 illustrates the explanations generated for each decision of the cat detection across detectors. Saliency explanations can be computed for each bounding box of interest in the image.

Multi-object Visualization

In order to summarize the saliency maps of all detections, the individual saliency maps corresponding to each detection are represented in a canonical form that illustrates the most important pixels for the decision explanation.
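One such canonical form can be approximated by taking the top-salient pixels and wrapping them in a convex hull. The sketch below is a minimal illustration under that assumption, not the toolkit's exact procedure; the `top_fraction` threshold is an arbitrary choice:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; points is a list of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_summary(saliency_map, top_fraction=0.05):
    """Convex polygon around the `top_fraction` most salient pixels of one detection."""
    flat = saliency_map.ravel()
    k = max(1, int(top_fraction * flat.size))
    idx = np.argsort(flat)[-k:]                      # indices of the strongest pixels
    ys, xs = np.unravel_index(idx, saliency_map.shape)
    return convex_hull(list(zip(xs.tolist(), ys.tolist())))
```

Drawing one such polygon per detection, in the color of its bounding box, yields a single summary image for all detections.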
This paper proposes four different methods for combining detection explanations into a single format: principal components, contours, density clustering, and convex polygons. Each method uses a different representation, allowing the detected bounding box and category to be marked using the same colors on the input image. The general process is described in Figure 3, and examples of the four multi-object visualizations are illustrated in Figure 4 (all detections from EfficientDet-D0 and the corresponding classification explanations generated using SIG are visualized jointly, with the combination approach specified in the sub-captions; explanation pixels are colored the same as the corresponding bounding box being explained). Appendix F provides additional details on the multi-object visualization approaches and how the different combination methods work, including explanation heatmap samples.

Experiments

Section 4.1 visually analyzes the explanations generated for different detector and explanation method combinations. Section 4.3 quantitatively evaluates different detector and explanation method combinations. Finally, Section 4.4 estimates an overall ranking for the explanation methods based on user preferences of the explanations produced for each decision; in addition, the multi-object visualization methods are ranked based on user understandability of the detections. In Section G of the appendix, the procedure to analyze detector failures using the proposed approach is discussed. Most of the experiments use the ED0, SSD, and FRN detectors detecting common objects from COCO [22]; additional details about these detectors are provided in Table 2. In cases requiring training a detector, different versions of SSD with various pre-trained backbones detecting marine debris, listed in Table 3, are used.
The marine debris detectors are trained using the train split of the Marine Debris dataset [47], and explanations are generated for the test images. These detectors are used only to study how the explanations change across different backbones and different performance levels (epochs) in Section 4.1.

Visual Analysis

Across target decisions and across detectors. The saliency maps for the classification and bounding box decisions generated using a particular explanation method for a specific object change across different detectors, as shown in Figure 2. All the bounding box explanations of EfficientDet-D0 in certain scenarios provide visual correspondence to the bounding box coordinates. Across different target objects. Figure 5 illustrates that the explanations highlight different regions corresponding to the objects explained. This behavior is consistent in most of the test set examples across the classification and bounding box explanations for all detectors. Figure 6 illustrates the classification explanations for the wall detection across the 6 different backbones. Apart from the attribution intensity changes, the explanations highlight different input image pixels, and the saliency map texture changes. MobileNet and VGG16 illustrate thin horizontal lines and highlight other object pixels, respectively. ResNet20 highlights the wall as a thick continuous segment. Figure 18 illustrates the y_min and y_max bounding box coordinate explanations for the chain detection across different backbones. The thin horizontal lines of MobileNet are consistent with the previous example. In addition, VGG16 illustrates a visual correspondence with the y_min and y_max bounding box coordinates by highlighting the upper half and lower half of the bounding box, respectively. However, this is not witnessed in the other detectors. This behavior is consistent over a set of 10 randomly sampled test set images from the Marine Debris dataset.
The explanations generated using SSD model instances with a ResNet20 backbone at different epochs are provided in Figure 7 (classification explanations for class "chain" across epochs, along columns, generated using GBP; the first column is the chain ground truth annotation, white-colored box). The model does not provide any final detections at lower epochs; therefore, the explanations are generated using the target neurons of the output box corresponding to the decision of interest in the final detections from the trained model. Figure 7 illustrates the variations in the saliency maps, from a randomly initialized model to a completely trained model, for the classification decision of the chain detection. The explanations extracted using the random model are dispersed around the features. The explanations slowly concentrate along the detected chain object and capture the object features to a considerable extent. This behavior is qualitatively analyzed by visualizing the explanations of 10 randomly sampled test set images from the Marine Debris dataset. In the case of the small hook explained in Figure 19, the variations between the random model and the trained model are not as considerable as in the previous chain example. This illustrates that the variations change with respect to each class.

Error Analysis

This section analyzes detector errors by generating explanations using the proposed detector explanation approach. The saliency map highlighting the important regions can be used as evidence to understand the reason for a detector failure, rather than assuming possible reasons. The failure modes of a detector are wrongly classifying an object, poorly localizing an object, or missing a detection in the image [26]. As the error analysis study requires ground truth annotations, the PASCAL VOC 2012 images are used.
Only the PASCAL VOC images with labels mapping semantically to COCO labels are considered, as the detectors are trained using the COCO dataset. For instance, the official VOC labels sofa and tvmonitor are semantically mapped to couch and tv, respectively, by the model output trained on COCO. The procedure to analyze an incorrectly classified detection is straightforward: the output bounding box information corresponding to the wrongly classified detection can be analyzed in two ways, using either the correct class or the wrongly classified class as the target neuron to generate the saliency maps (Figure 8). Figure 8 displays two saliency explanations (GBP and SIG); it is clear the model is imagining a long tail for the dog (GBP) and wrongly classifies the dog as a cat. The saliency map highlights certain features of the dog and the background stripe pattern along the edges of the dog's body (GBP and SIG). In order to clearly illustrate the tail, which is predominant in the cats available in the COCO dataset, the saliency map is shown without overlaying it on the input image. More examples of error analysis are available in Section G in the appendix.

Quantitative Evaluation

Evaluating detector explanations quantitatively provides substantial understanding for selecting the explanation method suitable for a specific detector. This section performs the quantitative evaluation of saliency explanations.

Evaluation Metrics. The quantitative evaluation of the explanations of a detector incorporates causal metrics to evaluate the bounding box and classification explanations. These work by causing a change to the input pixels and measuring the effect of the change on the model decisions. The evaluation aids in estimating the faithfulness or truthfulness of the explanation in representing the cause of the model decision. The causal metrics discussed in this work are adapted from previous work [33] [26] [25].
The two variants of causal evaluation metrics, based on the cause induced to alter the prediction, are the deletion and insertion metrics. The deletion metric evaluates the saliency map explanation by removing pixels from the input image and tracking the change in model output. The pixels are removed sequentially in order of decreasing attribution value, starting with the most important pixels, and the output probability of the predicted class is measured. The insertion metric works complementarily to the deletion metric by sequentially adding the most important pixels to the image and causing the model decision to change. Using the deletion metric, the explanation methods can be compared by plotting the fraction of pixels removed along the x-axis and the predicted class probability along the y-axis. A method with a lower Area Under the Curve (AUC) illustrates a sharp drop in probability for fewer removed pixels, signifying that the explanation method can find the most important pixels, which cause a significant change in model behavior; an explanation method with a lower AUC is therefore better. In the case of the insertion metric, the predicted class probability increases as the most relevant pixels are inserted, so an explanation method with a higher AUC is relatively better. [26] use constant gray pixel values and a blurred image as the start images for the deletion and insertion metric calculations, respectively.

Effects Tracked. Previous work evaluating detector explanations utilizes the insertion and deletion metrics to track the change in bounding box Intersection over Union (IoU) and classification probability together. [26] formulate a vector representation involving the box coordinates, class, and probability; the similarity score between the non-manipulated and manipulated vectors is tracked.
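The deletion metric described above can be sketched as follows; the toy `prob_fn`, the constant fill value, and the step count are illustrative assumptions standing in for a real detector and its evaluation configuration:

```python
import numpy as np

def deletion_curve(prob_fn, image, saliency_map, steps=10, fill=0.5):
    """Remove pixels in order of decreasing saliency, tracking the predicted-class
    probability, and return the curve plus its AUC (lower AUC = better explanation)."""
    order = np.argsort(saliency_map.ravel())[::-1]   # most important pixels first
    img = image.copy().ravel()
    fractions = np.linspace(0.0, 1.0, steps + 1)
    probs = [prob_fn(img.reshape(image.shape))]
    removed = 0
    for f in fractions[1:]:
        target = int(round(f * img.size))
        img[order[removed:target]] = fill            # constant-value replacement
        removed = target
        probs.append(prob_fn(img.reshape(image.shape)))
    probs = np.asarray(probs)
    # trapezoidal AUC over the uniformly spaced fractions
    auc = float(np.sum((probs[1:] + probs[:-1]) / 2.0) * (fractions[1] - fractions[0]))
    return fractions, probs, auc
```

With `prob_fn = lambda im: im.mean()` on an all-ones image and `fill=0.0`, the probability falls linearly and the AUC is 0.5. The insertion metric is the same loop run in the opposite direction, starting from a blurred image and restoring pixels instead of removing them.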
However, this work performs an extensive comparison of explanation methods for each decision of a detector by tracking the change in the maximum probability of the predicted class, the IoU, the distance moved by the bounding box (in pixels), the change in box height (in pixels), the change in box width (in pixels), the change in the top-left x coordinate of the box (in pixels), and the change in the top-left y coordinate of the box (in pixels). The box movement is the total movement of the left-top and right-bottom coordinates, represented as a Euclidean distance in pixels. The coordinate distances are computed between the interest box corresponding to the current manipulated image and the interest box corresponding to the non-manipulated image. This extensive evaluation illustrates that a few explanation methods are more suitable for explaining a particular decision. As declared in the previous sections, the image origin is at the top-left corner. Therefore, a total of 7 effects are tracked for each causal evaluation metric.

Evaluation Settings. The previous section establishes the causal deletion and insertion metrics along with the 7 different effects. In this section, two different settings used to evaluate the detectors using the causal metrics are discussed.

Single-box Evaluation Setting. The detector output changes drastically when manipulating the input image based on saliency values. We refer to the bounding box detecting the object in the original image as the principal box. In this setting, the seven principal box effects are tracked across insertion and deletion of input pixels. This aids in capturing how well the explanation captures the true causes of the principal box prediction. The effects measured for the single-box setting are bounded because the principal box value is always measurable. This is called the single-box setting because only changes in the principal box are tracked.

Realistic Evaluation Setting.
In this evaluation setting, all 7 effects are tracked for the complete object detector output, involving all bounding boxes after the post-processing steps of the detector. The current detection for a particular manipulated input image is matched to the interest detection by checking for the same class and an IoU greater than 0.9. For various manipulated input images, no current detection matches the interest detection; therefore, depending on the effect tracked, a suitable value is assigned so that the AUC can still be calculated. For instance, if the effect tracked is the class probability for the deletion metric and none of the current detections matches the interest detection, a zero class probability is assigned. Similarly, if the effect tracked is the box movement in pixels for the deletion metric, the error in pixels is set to a large value.

Interpretation Through Curves. Given the causes induced to change the model output, the effects tracked, and the evaluation settings for the detector, this work uses 28 causal evaluation metrics. These correspond to the causes, ↓ Deletion (D) and ↑ Insertion (I); the effects tracked, Class Maximum Probability (C), Box IoU (B), Box Movement Distance (M), Box X-top (X), Box Y-top (Y), Box Width (W), and Box Height (H); and the evaluation settings, Single-box (S) and Realistic (R). To interpret a causal evaluation metric, a graph is drawn tracking the change in the effect along the y-axis and the fraction of pixels manipulated along the x-axis. For instance, consider the scenario of deleting image pixels sequentially to track the maximum probability of the predicted class in the single-box evaluation setting: the x-axis is the fraction of pixels deleted, and the y-axis is the maximum probability of the predicted class at the output of the tracked box. In this work, each curve is named after the combination of the causal evaluation metric, effect tracked, and evaluation setting.
Examples are the DCS curve, DBS curve, and ICS curve. For instance, the DCS curve is the change in the maximum probability of the predicted class (C) at the single output box (S) due to removing pixels (D). The curves are the evaluation metrics used in this work, also called the DCS evaluation metric (deletion + class maximum probability + single-box setting), the DBS evaluation metric (deletion + box IoU + single-box setting), and so on. In order to compare the performance of explanation methods in explaining a single detection, as stated before, the AUC of a particular evaluation metric curve is estimated; the corresponding AUC is represented as AUC_<evaluation_metric>. In order to estimate a global metric to compare the explanation methods explaining a particular decision of a detector, the average AUC, represented as AAUC_<evaluation_metric>, is computed. As the explanations are provided for each detection, the evaluation set is given by the total number of detections, i.e., the sum of the detections in each image of the evaluation set. The average evaluation metric curve is computed by averaging the evaluation metric curve at each fraction of pixels manipulated across all detections, and the AAUC of a particular evaluation metric curve is the AUC of the average evaluation metric curve.

Results

Figure 9 illustrates that the AAUC computed by evaluating the explanations of each bounding box coordinate is similar across the different evaluation metric curves. This similarity is consistent for all the detector and explanation method combinations evaluated; therefore, the explanation methods quantitatively explain each bounding box coordinate decision with similar performance. In this work, the AAUC for the bounding box decision is computed by averaging the AUC of all the evaluation metric curves corresponding to all the box coordinate explanations.
This offers the means to evaluate the explanation methods across all the bounding box coordinate decisions. Figure 10 and Figure 11 illustrate quantitatively complementary trends in the evaluation metric curves plotted by tracking the box movement distance in pixels and the box IoU. The IoU decreases and the box movement distance increases as the pixels are deleted sequentially, as shown in Figure 10. Similarly, Figure 11 illustrates the increase in box IoU and decrease in box movement distance as pixels are inserted into a blurred version of the image. There is a large difference in the AAUC between the single-stage and two-stage detectors. This is primarily due to the RPN in the two-stage detectors: the proposals from the RPN are more sensitive to box coordinate changes than the predefined anchors of the single-stage detectors. In addition, Figure 10d and Figure 11d indicate a steady change of the box coordinates in the final detections of EfficientDet-D0, whereas SSD and Faster R-CNN saturate relatively sooner. In the remainder of this work, the box IoU effect is used for quantitative evaluation. This is only because the box IoU effect offers the same 0 to 1 scale as the class maximum probability effect, and both follow the trend that a lower AUC is better in the deletion case. However, it is recommended to consider both the box IoU and box movement distance effects at the level of each box coordinate for a more accurate evaluation. Figure 12 and Figure 17 help identify which explanation method interprets both the classification and bounding box decisions of a particular detector more faithfully than the other explanation methods. Figure 12a illustrates that SSD512 classification decisions are better explained by SGBP in the single-box setting for the deletion metrics. However, the bounding box decisions are not explained as well as the classification decisions.
Figure 12b illustrates a similar scenario for SGBP with EfficientDet-D0 and Faster R-CNN in the realistic setting for the deletion metrics. However, all selected explanation methods explain the bounding box and classification decisions of SSD512 relatively better in the single-box setting for the insertion metrics. In general, none of the selected explanation methods explains both the classification and bounding box regression decisions substantially better than the other methods for all detectors. This answers EQ13. Similarly, no single method explains both the classification and bounding box decisions of any of the selected detectors more faithfully across all the evaluation metrics discussed. This is illustrated by no explanation method (different colors) and no detector (different characters) being represented in the lower left rectangle or upper right rectangle in Figure 12 and Figure 17 respectively. Figure 14a and Figure 14c illustrate that the AAUC of the classification saliency maps and of the saliency maps combined using the different merging methods differ in certain scenarios while tracking the maximum probability. The AAUC of all the box coordinate saliency maps is provided for a baseline comparison; it denotes the effect on the maximum probability of removing pixels in the order of importance given by all the box coordinate saliency maps. Similarly, Figure 14b and Figure 14d compare the box coordinate explanations and the merged saliency maps while tracking the box IoU. In Figure 14a, the evaluation of the GBP classification saliency map is less faithful than the merged saliency map. Therefore, the merged saliency map represents the classification decision more faithfully than the standalone classification explanation in the case of EfficientDet-D0. However, Figure 14a and Figure 14c illustrate that, for SGBP explaining EfficientDet-D0 and in certain cases of Faster R-CNN, the standalone classification saliency maps are more faithful in depicting the classification decision. The larger AAUC of all the box coordinate saliency maps generated with each method for Faster R-CNN indicates that the box saliency maps are not faithful to the bounding box decisions of Faster R-CNN. This is coherent with the visual analysis. Therefore, in certain scenarios merging is helpful to represent the reason for a particular decision.
However, each individual saliency map provides peculiar information about the detection. For instance, the visual correspondence to each bounding box coordinate shown in Figure 2 is seen only at the level of the individual box coordinate explanations. An overall comparison of all quantitative metrics is shown in Figure 13. For ease of understanding, the ranking of the explanation methods explaining a particular detector is provided in Table 1. SGBP performs relatively better across all selected detectors. In addition, IG is ranked last across all the selected detectors. The SSD detector is better explained by all the explanation methods. One reason may be that SSD is a simpler architecture compared to EfficientDet-D0 and Faster R-CNN, which include a Bi-directional Feature Pyramid Network (BiFPN) and a Region Proposal Network (RPN) respectively. However, further experiments should be conducted for validation.

Human-centric Evaluation

The human-centric evaluation ranks the explanation methods for each detector and ranks the multi-object visualization methods with a user study. All important details of the user study are presented in Appendix H.

Ranking Explanation Methods. Previous works assess the user trust in the model explanations generated by a particular explanation method [26] [34] [29]. As user trust is difficult to evaluate precisely, this work, in contrast to previous works, estimates the user preferability of the explanation methods. The user preferability of the methods GBP, SGBP, IG, and SIG is evaluated by comparing two explanations corresponding to a particular prediction.

Table 1. Ranking of all the explanation methods for a particular detector based on the quantitative evaluation metrics. A lower value is a better rank. The explanation method better explaining a particular detector is awarded a better rank. Each explanation method is ranked with respect to each evaluation metric considering a particular detector. The columns other than the first two and the last represent the average AUC for the respective evaluation metric. The overall rank is computed by calculating the sum along the row and awarding the best rank to the lowest sum. OD -Object detectors, IM -Interpretation method.

In this study, the explanation methods are compared directly for a particular interest detection and interest decision across the SSD, EDO, and FRN detectors separately. The evaluation identifies the explanation method that users trust relatively more for a particular detector. The explanation methods are ranked by relatively rating the explanations generated by the different explanation methods for a particular detection made by a detector. The rating serves as a measure of user preference. A pair of explanations generated by different explanation methods for the same interest decision and same interest detection of the same detector is shown to a number of human users, as shown in Figure 38. The detector, interest decision, interest detection, and explanation methods used to generate the explanations are randomly sampled for each question and each user. In addition, the image chosen for a particular question is randomly sampled from an evaluation set, itself a randomly sampled set of 50 images from COCO test 2017. This avoids incorporating any bias into the question generation procedure. Each question is generated on the fly for each user performing the task. The explanations are named Robot A explanation and Robot B explanation to conceal the names of the explanation methods from the user. The robots are not detectors; in this study, the robots are treated as explanation methods. The Robot A and Robot B explanations for each question are randomly assigned a pair of explanation method outputs.
This is done to reduce the positioning and ordering bias of the explanations as shown to the users. The task given to the user is to rate the quality of the Robot A explanation relative to the Robot B explanation. The scoring gives scores in the range [−2, 2], depending on whether Robot A or B is better; the available options are provided in Table 5. A single question in the evaluation is treated as a game between two randomly matched players, where the explanation methods are the players. The game result depends on the quality of the explanations produced by the competing explanation methods for a particular detection decision. In case of a draw, both explanation methods receive the same score. In non-draw situations, the points won by one explanation method are the points lost by the other. By treating all the questions answered by the numerous users as individual games, the global ranking is obtained using the Elo rating system [13]. Each explanation method is awarded an initial Elo rating of 1000.

Ranking Multi-Object Visualization Methods. The rank for the multi-object visualization methods is obtained by voting for the method producing the most understandable explanation among the four methods. Each user is asked a set of questions showing the multi-object visualizations generated by all four methods. The user is provided with a None of the methods option to choose in scenarios where all the generated multi-object visualizations are confusing and incomprehensible. The methods are ranked by counting the total number of votes each method obtains. The experiment is performed using the COCO 2017 test split and VOC 2012.

Results

Each user is requested to answer 10 questions, split as 7 and 3 between Task 1 and Task 2, respectively. 52 participants answered the user study for both Task 1 and Task 2. The participants range across researchers, students, deep learning engineers, office secretaries, and software engineers.
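The Elo-based ranking described above can be sketched as follows. This is a hedged illustration: the paper only states the initial rating of 1000, so the k-factor and the mapping from the [−2, 2] user options to game scores are assumptions.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """One pairwise comparison treated as a game between two explanation
    methods. score_a is 1.0 if method A wins, 0.5 for a draw, and 0.0 if
    it loses (e.g. mapped from the sign of the [-2, 2] user options)."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    # A's gain is exactly B's loss, as in the user study scoring.
    return rating_a + delta, rating_b - delta

# All methods start at the initial Elo rating of 1000; each answered
# question updates only the two methods that were compared.
ratings = {"GBP": 1000.0, "SGBP": 1000.0, "IG": 1000.0, "SIG": 1000.0}
ratings["SGBP"], ratings["IG"] = elo_update(ratings["SGBP"], ratings["IG"], 1.0)
```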
Figure 15 indicates that SGBP provides relatively more reasonable explanations, with higher user preferability, for both single-stage detectors. Similarly, SIG is preferred for the two-stage detector. Figure 16a illustrates that the top two ranks are obtained by the SmoothGrad versions, SGBP and SIG, for all detectors. GBP performs in the middle rank in the majority of cases. SGBP achieves the first rank in both the human-centric evaluation and the functional evaluation. The ranking of the multi-object visualization methods clearly illustrates that the majority of the users are able to understand the convex polygon-based explanations. 18 answers among the total of 156 are None of the methods, given because none of the four methods provided a legible summary of all the explanations and detections. The users selected the principal component-based visualization in cases involving fewer than 3 detections in an image. In addition, None of the methods is chosen in most of the cases involving more than 9 detections or more than 3 overlapping detections in an image. Among the 156 total answers, only 89 (57%) agree with the convex polygon-based visualization. Therefore, considering the remaining 43%, there is substantial room to improve the multi-object visualization methods discussed in this work and achieve a better summary.

Conclusions and Future Work

Explaining convolutional object detectors is crucial given the ubiquity of detectors in autonomous driving, healthcare, and robotics. We extend post-hoc gradient-based explanation methods to explain both the classification and bounding box decisions of EfficientDet-D0, SSD512, and Faster R-CNN. In order to integrate explanations and summarize saliency maps into a single output image, we propose four multi-object visualization methods, PCA, Contours, Density clustering, and Convex polygons, to merge the explanations of a particular decision.
We evaluate these detectors and their explanations using a set of quantitative metrics (insertion and deletion of pixels according to saliency map importance) and with a user study to understand how useful these explanations are to humans. The insertion and deletion metrics indicate that SGBP provides the more faithful explanations in the overall ranking. In general, there is no detector that clearly provides better explanations, as the best depends on the criteria used, but visual inspection indicates a weak relationship that newer detectors (like EfficientDet) have better explanations without artifacts (Figure 2), and that different backbones do have an influence on the saliency map quality (Figure 6). The user study reveals a human preference for SGBP explanations for SSD and EfficientDet (and SIG for Faster R-CNN), which is consistent with the quantitative evaluation; for multi-object explanation visualizations, convex polygons are clearly preferred by humans. We analyze certain failure modes of a detector using the formulated explanation approach and provide several examples. The overall message of our work is to always explain both the object classification and bounding box decisions, and that it is possible to combine explanations into a single output image through a convex polygon representation of the saliency map. Finally, we developed an open-source toolkit, DExT, to explain decisions made by a detector using saliency maps, to generate multi-object visualizations, and to analyze failure modes. We expect that DExT and our evaluation will contribute to the development of holistic explanation methods for object detectors, considering all their output bounding boxes and both the object classification and bounding box decisions.

Limitations. Firstly, the pixel insertion/deletion metrics might be difficult to interpret [16] and more advanced metrics could be used [45].
However, the metric selected should consider the specifics of object detection and evaluate both classification and bounding box regression. Moreover, as detectors are prone to non-local effects, removing pixels from the image [30] can cause bounding boxes to appear or disappear, so special tracking of a particular box is needed. We extend the classic pixel insertion/deletion metrics [3] to object detection considering these two aspects. The second limitation concerns the user study. Given the challenges in formulating a bias-free question, we ask users to select which explanation method is better. This is a subjective human judgment and does not necessarily correspond to the true input feature attribution made by the explanation method. Another part of the user study compares the multi-object visualization methods, where we believe there is a much clearer conclusion. The novelty of our work is to combine quantitative and qualitative evaluation with a user study to empirically evaluate saliency explanations for detectors, considering both the object classification and bounding box regression decisions. In general, saliency methods are subject to heavy criticism questioning their reliability. This study extends a few gradient-based saliency methods to detectors and conducts an extensive evaluation. However, we acknowledge that there are other prominent saliency methods to study. Our work evaluates and explains real-world object detectors without any toy example; the literature has previously performed basic sanity checks on toy use cases that do not include multiple localization and classification outputs. In addition, object detectors are categorized on the basis of the number of stages (single-stage [23] [43] and two-stage [28]), the availability of anchors (anchor-based [23] [43] and anchor-free [27] [44]), and vision transformer based detectors [10] [9].
We explain detectors specific to certain groups (SSD512, Faster R-CNN, and EfficientDet) and leave anchor-free and transformer-based detectors for future work. Even though fully white-box interpretable models would be the best solution [31], this is not yet available at the model scale required for high object detection performance.

A Broader Impact Statement

As concerns about AI safety increase, explainable machine learning is imperative to gain human trust and to satisfy legal requirements. Any machine learning model used in human applications should be able to explain its predictions, in order to be audited and to decide whether the predictions are useful or further human processing is needed. Similarly, such explanations are pivotal to earning user trust, increasing applicability, and addressing safety concerns for complex object detection models. We expect that our work can improve the explainability of object detectors, by steering the community to explain all object detector decisions (bounding box and object classification), to visualize all saliency explanations in a single image per detector decision, and to evaluate the non-local effect of image pixels on particular detections. We believe that saliency methods can be used to partially debug object detection models. Consequently, saliency methods are useful to explain detectors and to address the trustworthiness and safety concerns in critical applications using detectors. However, additional validation of the explanations is needed. We also perform sanity checks on object detectors [11], with similar conclusions, to validate the saliency map quality. Additional large-scale user studies could be done to evaluate how useful these explanations are for humans, instead of just asking which explanation method is better.
In addition, the detectors are evaluated in various combinations with two settings: single-box and realistic. The former helps to understand the effects of the most relevant pixels on the predictions for the output box, and the latter their effects on the overall detector output. From the overall ranking based on the quantitative evaluation metrics, all the explanation methods interpret SSD more faithfully in comparison to the other detectors. SGBP provides the more faithful explanations in the overall ranking developed from the quantitative evaluation metrics. This is coherent with the user study: humans understand the explanations from SGBP better than the explanations generated by the other shortlisted explanation methods. Convex polygon-based multi-object visualizations are better understood and preferred by humans. However, there is substantial scope to improve the generated multi-object visualizations.

B Detectors Details

Detectors detecting the common objects available in the COCO dataset are provided in Table 2. Detectors trained on the Marine Debris Dataset are provided in Table 3.

C Explanation Methods

In this paper we use Guided Backpropagation (GBP), Integrated Gradients (IG), and their variations using SmoothGrad (SGBP and SIG). We describe these methods in detail below.

Guided Backpropagation. (GBP) [41] is a backpropagation-based attribution method. GBP provides information about the input image features utilized by a DNN for a particular prediction. The method calculates the gradient of the loss function for a specific object category with respect to the image pixels. In this approach, the activations at a higher-level unit under study are propagated backward to reconstruct the input image. The reconstructed input image illustrates the input image pixels that strongly activate the higher-level unit.
The feature map f after passing through a ReLU activation relu at layer l, where i denotes each feature, is given in Equation 5:

f^{l+1}_i = relu(f^l_i) = max(f^l_i, 0)    (5)

GBP handles backpropagation through the ReLU non-linearity by combining vanilla backpropagation and DeconvNets, as specified in Equation 6:

R^l_i = (f^l_i > 0) · (R^{l+1}_i > 0) · R^{l+1}_i    (6)

The reconstructed image R at any layer l is generated from the positive forward pass activations f^l_i and the positive error signal activations R^{l+1}_i. This guides the gradient by both the positive input and the positive error signals. The negative gradient flow is prevented in GBP, thereby giving more importance to the neurons increasing the activation of the higher-layer unit under study. In addition, this suppresses the image aspects negatively affecting the activation. Therefore, the regenerated images are relatively less noisy compared to the Gradients and DeconvNet methods. The explanation is computed as the gradient of a particular output neuron with respect to the input image, considering the previously mentioned modified ReLU gradient:

expl_GBP(x, ŷ) = ∂ŷ/∂x    (7)

Integrated Gradients. (IG) [42] satisfies both the implementation invariance and sensitivity axioms. Gradient-based attribution methods such as Gradients [38], DeconvNet [53], GBP [41], LRP [7], and DeepLIFT [36] fail one or both of these axioms. The sensitivity axiom states that, for a baseline and input image differing in a single feature and resulting in different predictions, the differing feature must be assigned a non-zero attribution. In addition, a zero attribution should be assigned to constant variables in the trained function. The implementation invariance axiom signifies that the attribution method's result should not depend on the network implementation: functionally equivalent models should have identical attributions.
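The modified ReLU backward rule of Equation 6 can be sketched directly in NumPy. This is a minimal illustration of the GBP gradient masking at one ReLU, not the full backpropagation through a network; the function name and array interface are assumptions.

```python
import numpy as np

def guided_relu_backward(forward_act, grad_in):
    """Eq. (6): the gradient passes only where both the forward activation
    f^l_i (vanilla-backprop mask) and the incoming error signal R^{l+1}_i
    (DeconvNet mask) are positive."""
    return (forward_act > 0) * (grad_in > 0) * grad_in
```

For instance, with forward activations [1, -1, 2] and incoming gradients [0.5, 0.5, -0.5], only the first entry survives: the second is blocked by the negative input, the third by the negative error signal.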
Furthermore, IG satisfies the completeness axiom: the difference in the model output between the input image and the baseline equals the sum of all feature attributions. IG integrates the local gradient for a particular image pixel over a linear path from the baseline x′ to the input image x. IG for feature i of the input image is calculated using Equation 8 [2]:

IntegratedGradients_i(x, F) = (x_i − x′_i) × ∫_{α=0}^{1} [∂F(x′ + α × (x − x′)) / ∂x_i] dα    (8)

where α is the interpolation constant for perturbing the features along the straight path between the baseline and the input image, and F(x) is the model function mapping the input image to the output prediction. The solution is obtained using numerical approximation because calculating the definite integral in Equation 8 is difficult. The full integrated gradients calculation is done over all input features:

expl_IG(x, ŷ) = [IntegratedGradients_i(x, ŷ) ∀ i ∈ 0 … dim(x)]    (9)

SmoothGrad. [39] is an approach to sharpen the saliency maps generated by any gradient-based explanation method. The idea is to estimate a saliency map by averaging the saliency maps generated for different image samples obtained by adding a small random noise. Given that expl_M(x, ŷ) is the unsmoothed saliency map explaining the decision to predict class c with any previous saliency method, the final saliency map expl_SM(x, ŷ) for the input image x is given by Equation 10, where N is the total number of image samples generated by adding Gaussian noise N(0, σ²) with standard deviation σ:

expl_SM(x, ŷ) = (1/N) Σ_{n=1}^{N} expl_M(x + ϵ, ŷ),  with ϵ ∼ N(0, σ²)    (10)

The hyperparameters are the sample size N over which the saliency maps are averaged and the standard deviation, or noise level, σ. [39] suggests that a noise level between 10-20% balances the saliency map sharpness and captures the object structure. This is followed by averaging the saliency maps obtained for different noise levels to generate the final smoothed saliency map.
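Equations 8 and 10 are usually computed with a Riemann-sum approximation of the path integral and an empirical average over noisy inputs. The sketch below illustrates both under stated assumptions: `grad_fn` and `saliency_fn` are hypothetical callables returning gradients or saliency maps, and the midpoint Riemann rule and default hyperparameters are choices, not the paper's exact settings.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Riemann (midpoint) approximation of Eq. (8): average the gradient
    along the straight path from the baseline to x, scaled elementwise
    by (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

def smoothgrad(x, saliency_fn, n=25, sigma=0.15, seed=0):
    """Eq. (10): average the saliency maps of n noisy copies of x, with
    Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    return np.mean(
        [saliency_fn(x + rng.normal(0.0, sigma, np.shape(x))) for _ in range(n)],
        axis=0)
```

For F(x) = Σ x_i², whose gradient is 2x, IG with a zero baseline recovers the per-feature contributions x_i², consistent with the completeness axiom.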
Table 4. Ranking of all detectors for a particular explanation method based on the quantitative evaluation metrics. A lower value is a better rank. The detector better explained by a particular explanation method is awarded a better rank. Each detector is ranked with respect to each evaluation metric considering a particular explanation method. The columns other than the first two and the last represent the AAUC for the respective evaluation metric. The overall rank is computed by calculating the sum along the row and awarding the best rank to the lowest sum. OD -Object detectors, IM -Interpretation method.

D Additional Comparison of Quantitative Metrics

E Visual Analysis

Figure 18 and Figure 19 illustrate the change in explanations across different backbones and performance levels.

F Multi-object Visualization

In order to summarize the explanations for a particular decision across all objects in an image, four multi-object visualization methods are proposed in Section 3.2. This procedure is concisely presented in Figure 20, Figure 21, Figure 22, and Figure 23. Figure 24 and Figure 25 illustrate the summarized visualizations for all objects predicted using all the proposed methods. The principal component-based method represents the maximum and minimum data spread of the saliency map pixel intensities as ellipses centered at the center of mass. The contour-based method draws the contour map with two levels, as depicting a full heatmap and the output detection in the same color is difficult to read. The density cluster-based method performs density clustering using DBSCAN [14], with the DBSCAN hyperparameters tuned using the method stated in [14]. Finally, the convex polygon-based method draws a convex polygon over the density-clustered saliency map pixels. This method provides a legible representation, as the convex polygon resembles an irregularly shaped bounding box.
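A minimal sketch of the convex polygon step above: here the density clustering is replaced by a simple saliency threshold (an assumption; the toolkit clusters with DBSCAN first), and the hull is computed with Andrew's monotone chain so the example stays dependency-free.

```python
def convex_polygon(saliency, threshold=0.5):
    """Convex hull (monotone chain) over the salient pixels, returned as a
    counterclockwise list of (x, y) vertices - the polygon drawn over the
    clustered saliency pixels in the visualization."""
    pts = sorted({(x, y) for y, row in enumerate(saliency)
                  for x, v in enumerate(row) if v >= threshold})
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(points):
        hull = []
        for p in points:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower, upper = half_hull(pts), half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]
```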
The saliency maps for each bounding box coordinate can provide visual evidence for poor localization (Figure 26). Finally, by generating saliency maps for the bounding box coordinate or classification decisions of the adjusted prior box closest to a missed ground truth detection, the reason for missing detections can be studied (Figure 29).

Fig. 31. Example error analysis using gradient-based explanations. EfficientDet-D0 misses the motorcycle (red-colored box) in the ground truth subplot. The red-colored box in the detection subplot is the closest output box to the ground truth.

The couch surface and the context of a cat lying on the surface of the couch are clearly highlighted. However, the detector does not have sufficient evidence to accept the classification result due to a lower confidence for the couch class (0.31) than the confidence threshold (0.5) for acceptable detections. This is likely due to the cat occluding part of the couch.

H Details of User Study

This section provides the task description given to the users and screenshots of the application developed to perform the user study and human-centered analysis.

Table 5. User study options and scores awarded to the respective explanations.

Options | A Score | B Score
Robot A explanation is much better | 2 | -2
Robot A explanation is slightly better | 1 | -1
Both explanations are same | 0 | 0
Robot A explanation is slightly worse | -1 | 1
Robot A explanation is much worse | -2 | 2

H.1 Task Description

Firstly, thank you for your time. I assure you that I will use the answers solely for research purposes without disclosing any user identity. The evaluation includes two tasks. Task I: Questions 1-7. Task II: Questions 8-10.

Task I: Which Robot's explanation is better?

- An artificial intelligence (AI) agent performing the task of localizing and classifying all the objects in an image is called an object detector.
- The output from an object detector to detect a single object includes the bounding box, representing the maximum rectangular area occupied by the object, and the class name, representing the category of the object inside the bounding box. The output is called a detection.
- (x_left_top, y_left_top) and (x_right_bottom, y_right_bottom) are the two coordinate points used to represent a bounding box. The class name of the object is represented as a text label near the bounding box, as shown in Figure 32.
- Therefore, each detection is made of two decisions (predictions), namely, the bounding box coordinates decision and the classification decision.
- In this study, the reason for a particular decision, say the bounding box coordinates or class prediction, in a single detection, is shown.
- This reason behind the decision-making process is given by the explanation. In this task, the explanations are generated by two different robots, Robot A and Robot B. The explanation images are provided for the classification and bounding box decisions separately.
- The explanation for a particular decision is provided by highlighting the pixels important for the decision-making process. The color bar provided in Figure 33 on the right of the explanation image indicates the pixel importance value.
- In Task 1, the author requests you to rate Robot A's explanation by comparing it against Robot B's explanation in terms of the understandability and meaningfulness of the explanation.
- A few classification decision explanations, Figure 34 and Figure 35, are provided below. Most of the important pixels highlight the object detected. However, the explanations also highlight pixels other than the detected object and are highly noisy.
- A few bounding box coordinate explanations, Figure 36 and Figure 37, are provided below.

Fig. 37. A person detection made by the detector (left). An explanation for the person x_right_bottom coordinate prediction (right). The important pixels highlight the object detected. However, the explanation highlights numerous pixels outside the detected object and is highly noisy.

Task II: Which method is better to summarize all detections and corresponding explanations?

- Each image shown in this task includes all the detections made by the detector. As in the previous task, each detection is represented as shown in Figure 32.
- In addition to all detections, each image illustrates the explanation for a particular decision, say a bounding box coordinate or the classification result, for all objects detected by the detector.
- In order to map a detection to its respective explanation, the same colors are used.
- The explanations are represented using 4 different methods. Visually, across the 4 methods, the important pixels responsible for a particular decision are highlighted using either dots, ellipses, or irregular polygons.
- For ellipses and irregular polygons, the pixels inside the ellipse or irregular polygon are the important pixels responsible for the decision-making process.
- One of the options is None of the methods. This option can be selected when the detections and corresponding explanations of the multiple objects illustrated in all 4 images are too confusing and illegible to coherently understand.

H.2 Application Screenshots

This section provides snapshots of the user study application. Figure 38 and Figure 39 show a sample Task 1 and Task 2 question. Figure 40 illustrates the additional questions asked to understand the background of the user.

I Screenshots of DExT

This section provides the screenshots of the DExT interactive application, which is available online at: https://share.streamlit.io/deepanchakravarthipadmanabhan/dext/app.py. The code to launch the application locally, along with the DExT Python-based package, is available at https://github.com/DeepanChakravarthiPadmanabhan/dext.
Figures 41, 42, 43, and 44 show the sequential process involved in analyzing an input image. Figure 45 illustrates the user interface provided to interactively generate explanations and evaluate the explanations for different detections across various explanation method and detector combinations.

Fig. 1. A depiction of the proposed approach to interpret all object detector decisions. The corresponding explanations are provided in the same colored boxes. This breakdown of explanations offers more flexibility to analyze decisions and serves as a holistic explanation for all the detections.

Fig. 2. Comparison of the classification and all bounding box coordinate explanations corresponding to the cat detection (red-colored box) across different detectors using SGBP is provided. The bounding box explanations from EfficientDet-D0 illustrate the visual correspondence to the respective bounding box coordinates. The explanations from Faster R-CNN illustrate a sharp checkerboard pattern.

Fig. 4. Multi-object visualizations generated to jointly visualize all detections from EfficientDet-D0 and the corresponding classification explanations generated using SIG in the same color. The combination approach is specified in the sub-captions. Explanation pixels are colored the same as the corresponding bounding box that is being explained.

Fig. 5. Comparison of classification and bounding box explanations for all detections from EfficientDet-D0 using SIG is provided. Each row provides the detection (red-colored box) followed by the corresponding classification and all bounding box explanation heatmaps.

Fig. 6. Comparison of class "wall" classification explanations for debris detections across different SSD backbones, using Guided Backpropagation. The detections from each SSD backbone are provided in the first row. The explanations of the wall detection (white-colored box) vary across each backbone.

Fig. 7. Classification explanation for class "chain" across different epochs (along columns) of SSD-ResNet20 using GBP is illustrated. The first column is the chain ground truth annotation (white-colored box).

Fig. 8. Example error analysis using gradient-based explanations. EfficientDet-D0 wrongly classifies the dog (red-colored box) in the ground truth as a cat (red-colored box).

Fig. 9. The figure illustrates the average AUC, AAUC, for the evaluation metric curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are deleted sequentially. Each bar corresponds to the AAUC estimated by evaluating the explanations generated for each bounding box coordinate decision, using the explanation methods specified on the x-axis, over all detections made by EfficientDet-D0 in the evaluation set images. AAUC is computed by averaging the AUC of all the evaluation metric curves generated using the combination specified in the sub-captions. Lower AAUC is better in all the plots.

Fig. 10. Comparison of average curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are deleted sequentially. Each average curve is the average of the evaluation curves plotted by evaluating the explanations of all bounding box coordinate decisions across all the detections by the respective detector. The explanations are generated using GBP. The evaluation metric curve is generated using the combination specified in the sub-captions.

Fig. 11. Comparison of average curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are inserted sequentially. Each average curve is the average of the evaluation curves plotted by evaluating the explanations of all bounding box coordinate decisions across all the detections by the respective detector. The explanations are generated using GBP. The evaluation metric curve is generated using the combination specified in the sub-captions.

Fig. 12. Comparison between the deletion AAUC of the evaluation metric curves for the classification and all bounding box coordinate explanations generated across the chosen explanation methods and detectors. Explanation methods (highlighted with different colors) placed at a lower value on the x-axis and y-axis perform relatively better at explaining the box coordinate and classification decisions respectively. Detectors (marked with different characters) placed at a lower value on the x-axis and y-axis are relatively better explained for the box coordinate and classification decisions respectively.

Fig. 13. Multi-metric comparison of quantitative results. According to these metrics, all methods perform similarly when considering all object detectors. The user study and visual inspection of explanation heatmaps reveal more information.

Figure 14c illustrates that in the case of SGBP explaining EfficientDet-D0, and in certain cases of Faster R-CNN, the classification saliency maps are more faithful in depicting the classification decision. The larger AAUC for all the box coordinate saliency maps generated using each method for Faster R-CNN indicates that the box saliency maps are not faithful to the bounding box decisions.

Fig. 14. Comparison of average AUC, AAUC, for the evaluation metric curves obtained by tracking maximum probability (a, c) and box IoU (b, d) as the most important pixels, based on the explanations generated using the explanation methods specified on the x-axis, are deleted sequentially. All the explanations are generated for detections made by EfficientDet-D0 (left) and Faster R-CNN (right) in the evaluation set images.
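The AAUC reported in these figures is an average of areas under such deletion curves. The following is a minimal sketch of computing one deletion curve and its AUC; the `metric_fn` callable (e.g. box IoU of the re-detected box against the original detection) and the zero-fill deletion are assumptions for illustration:

```python
import numpy as np

def deletion_auc(saliency, image, metric_fn, steps=20):
    """Delete pixels from most to least salient and track an evaluation metric.

    saliency  : 2-D per-pixel importance map (same shape as `image`)
    image     : 2-D array (single-channel sketch of the input)
    metric_fn : hypothetical callable scoring a manipulated image,
                e.g. box IoU of the re-detected box with the original one
    Returns the area under the metric-vs-fraction-deleted curve
    (lower = more faithful, as noted in the captions).
    """
    order = np.argsort(saliency.ravel())[::-1]   # most important pixels first
    flat = image.ravel().astype(float)
    n = flat.size
    fracs, scores = [], []
    for i in range(steps + 1):
        k = int(round(i / steps * n))
        manipulated = flat.copy()
        manipulated[order[:k]] = 0.0             # "delete" the top-k pixels
        fracs.append(i / steps)
        scores.append(metric_fn(manipulated.reshape(image.shape)))
    # Trapezoidal area under the metric curve
    return sum((scores[j] + scores[j + 1]) / 2.0 * (fracs[j + 1] - fracs[j])
               for j in range(steps))
```

Averaging this AUC over all detections and all bounding box coordinate decisions yields the AAUC bars plotted per explanation method.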
Lower AAUC is better in both plots.

Fig. 15. Ranking obtained for the explanation methods from the user trust study for each detector selected in this work. An initial Elo rating of 1000 is used for all explanation methods. The explanation method with a higher Elo rating has gained relatively more user preferability in the random pair-wise comparisons of explanations for each detector. The rank of a particular method is provided on the top of the bar corresponding to the method.

Figure 16a illustrates the overall ranking taking into account all the bounding box and classification explanations together. The ranking is similar when analyzing the bounding box and classification explanations separately.

Fig. 16. Ranking obtained from the user study considering all user answers. The rank of a particular method is provided on the top of the bar corresponding to the method.
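The Elo ranking used for the user study starts every explanation method at a rating of 1000 and updates the ratings after each pair-wise comparison. A minimal sketch of the standard Elo update rule follows; the K-factor of 32 is an assumption, since the text only fixes the initial rating:

```python
def elo_update(winner, loser, k=32.0):
    """One standard Elo update after a pair-wise comparison.

    winner, loser : current ratings of the preferred and non-preferred
                    explanation methods. Returns the updated (winner, loser).
    """
    # Expected score of the winner under the logistic Elo model
    expected = 1.0 / (1.0 + 10.0 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected)   # points transferred from loser to winner
    return winner + delta, loser - delta
```

Starting two methods at 1000, a single preference moves the pair to 1016 and 984; repeated comparisons accumulate into the per-detector rankings shown in the bars.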
SmoothGrad is combined with Guided Backpropagation to produce Smooth Guided Backpropagation (SGBP), and with Integrated Gradients to produce Smooth Integrated Gradients (SIG).

Fig. 17. Comparison between the insertion AAUC of the evaluation metric curves for the classification and all bounding box coordinate explanations generated using different explanation methods across all detectors. This offers a means to understand which explanation method generates more faithful explanations for both the classification decision and all bounding box coordinates. As the curves used to compute the respective AUC are computed using the insertion metric, higher values on both axes are better. The explanation methods (highlighted with different colors) placed at a higher value on the x-axis and y-axis perform relatively better at explaining the box coordinate and classification decisions respectively.
The detectors (marked with different characters) placed at a higher value on the x-axis and y-axis are relatively better explained for the box coordinate and classification decisions respectively.

Fig. 18. An illustration of the chain ymin and ymax explanations across different SSD backbones is provided. The detections from each SSD backbone are provided in the first row. The chain detection explained is marked using a white-colored box. The explanations vary across each backbone. SSD-VGG16 ymin and ymax explanations highlight the upper half and lower half of the chain respectively, corresponding to the bounding box coordinate locations.

Fig. 19. The hook classification explanation across different epochs (along columns) of SSD-ResNet20 using GBP is illustrated. The first column is the hook ground truth annotation (white-colored box).

Fig. 20. The detector predicts a dog and a frisbee in the input image. The saliency maps for the corresponding classification decisions are converted into a canonical form represented as elliptical principal components. The final multi-object visualization is generated by combining the ellipses, bounding boxes, and class predictions into a single image with a particular color for each object.

Fig. 21. The detector predicts a dog and a frisbee in the input image. The saliency maps for the corresponding classification decisions are converted into a canonical form represented as contours based on importance for the decision. The final multi-object visualization is generated by combining the contours, bounding boxes, and class predictions into a single image with a particular color for each object.

Fig. 22. The detector predicts a dog and a frisbee in the input image. The saliency maps for the corresponding classification decisions are converted into a canonical form represented as density clusters based on importance for the decision.
The final multi-object visualization is generated by combining the density clusters, bounding boxes, and class predictions into a single image with a particular color for each object.

Fig. 23. The detector predicts a dog and a frisbee in the input image. The saliency maps for the corresponding classification decisions are converted into a canonical form represented as a convex polygon. The final multi-object visualization is generated by combining the polygons, bounding boxes, and class predictions into a single image with a particular color for each object.

Fig. 24. Multi-object visualizations generated to visualize together all the detections from SSD512 and the corresponding classification explanations generated using SGBP in the same color. The multi-object visualization approach is specified in the sub-captions. The important pixels responsible for the decision explained in the case of the principal component-based and convex polygon-based visualizations are the pixels inside the ellipses and irregular polygons respectively, marked in the same color as the corresponding detection. The important pixels responsible for the decision explained in the case of the contour-based and density-based visualizations are the pixels highlighted in the same color as the corresponding detection.

Fig. 25. Multi-object visualizations generated to visualize together all the detections from Faster R-CNN and the corresponding classification explanations generated using SGBP in the same color. The multi-object visualization approach is specified in the sub-captions. The important pixels responsible for the decision explained in the case of the principal component-based and convex polygon-based visualizations are the pixels inside the ellipses and irregular polygons respectively, marked in the same color as the corresponding detection.
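The elliptical principal-component canonical form used in these visualizations can be sketched by thresholding the saliency map and fitting an ellipse from the mean and covariance of the surviving pixel coordinates. The 50% threshold and the two-standard-deviation axis lengths below are assumptions for illustration:

```python
import numpy as np

def saliency_to_ellipse(saliency, threshold=0.5):
    """Fit an ellipse (center, axis lengths, orientation) to high-saliency pixels.

    saliency  : 2-D per-pixel importance map
    threshold : fraction of the maximum saliency kept (assumption)
    Returns (center_xy, axis_lengths ascending, angle of major axis in degrees).
    """
    ys, xs = np.nonzero(saliency >= threshold * saliency.max())
    pts = np.stack([xs, ys], axis=1).astype(float)   # (x, y) coordinates
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)                   # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    axes = 2.0 * np.sqrt(np.maximum(eigvals, 0.0))   # ~2 std devs per axis
    # Orientation of the major (largest-variance) principal direction
    angle = float(np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1])))
    return center, axes, angle
```

Drawing this ellipse in the color of its detection, together with the bounding box and class label, reproduces the principal component-based multi-object view described above.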
The important pixels responsible for the decision explained in the case of the contour-based and density-based visualizations are the pixels highlighted in the same color as the corresponding detection.

G Additional Examples of Error Analysis

We provide six additional examples: two about poor localization (Figures 26 and 28) and four about misclassification or confusion with background (Figures 27, 29, 30, and 31).

Fig. 26. Example error analysis using gradient-based explanations. EfficientDet-D0 localizes the cat detection (red-colored box) poorly (IoU: 0.69) in the detection subplot. It is evident from the saliency map of the y_max bounding box coordinate that the detector is looking at the end part of the tail. However, the detector misses the tail of the cat because of other nearby features from the monitor display.

Fig. 27. Example error analysis using gradient-based explanations. SSD512 misses the person (red-colored box) in the ground truth subplot using the proposed approach. The red-colored box in the detection subplot is the closest output box to the ground truth. The saliency map highlights the entire person and part of the boat, possibly indicating that the person feature is not prominent in that region. The detector classifies the box as background. However, the second dominant class of the box is person.

Fig. 28. Example error analysis using gradient-based explanations. EfficientDet-D0 localizes only a single chair in the back (red-colored box). It is evident from the localization saliency maps that the detector is localizing all the nearby chairs together as a single instance, and the saliency indicates this clearly. The bounding box saliency should focus on a single chair instead of multiple ones.

Fig. 29. Example error analysis using gradient-based explanations. EfficientDet-D0 misses the motorcycle (red-colored box) in the ground truth subplot. The red-colored box in the detection subplot is the closest output box to the ground truth.
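The IoU figure quoted in these error-analysis examples is the standard intersection-over-union between the predicted and ground-truth boxes. A minimal sketch, assuming the `(x_min, y_min, x_max, y_max)` box layout:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 0.69, as in the poorly localized cat example, means the overlap covers 69% of the union of the predicted and ground-truth boxes.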
The motorcycle tank, right throttle, and certain other surfaces of the missed motorcycle are clearly highlighted. However, the detector does not have sufficient evidence to accept the classification result due to a lower confidence for the motorcycle class (0.13) than the confidence threshold (0.5) for acceptable detections.

Fig. 30. Example error analysis using gradient-based explanations. SSD512 misses the motorcycle (red-colored box) in the ground truth subplot using the proposed approach. The red-colored box in the detection subplot is the closest output box to the ground truth. The saliency map highlights the entire motorcycle, the person, and the edges of the lane divider. The detector classifies the box as background. However, the second dominant class of the box is motorcycle. This is probably due to the person occluding part of the motorcycle.

Fig. 32. An illustration of a detection output from a detector.

Fig. 33. A heatmap representing the importance of pixels for a particular decision.

Fig. 34. A bowl detection made by the detector (left). An explanation for the bowl classification decision (right). Most of the important pixels highlight the object detected. The pixel importance values of objects other than the detected object are very low and negligible.

Fig. 35. A person detection made by the detector (left). An explanation for the person classification decision (right).

Fig. 36. A person detection made by the detector (left). An explanation for the person y_left_top coordinate prediction (right). Most of the important pixels highlight the object detected. In addition, the explanation is coherent with the bounding box coordinate, as the explanation highlights the region near y_left_top.

Fig. 37. A person detection made by the detector (left). An explanation for the person x_right_bottom coordinate prediction (right). The important pixels highlight the object detected.
However, the explanation highlights numerous pixels outside the detected object and is highly noisy.

Fig. 38. Sample Task 1 question asked to rank explanation methods based on the user trust in the explanations for a particular detector decision. The figure is best viewed in digital form.

Fig. 39. Sample Task 2 question asked to rank the multi-object visualization methods depending on the user understandability. The figure is best viewed in digital form.

Fig. 40. Additional questions asked to understand the user background. The figure is best viewed in digital form.

Fig. 41. Illustration of the input image uploaded by the user (left) and the detections (right) made by SSD512, the detector selected by the user, in the input image. The detectors available for off-the-shelf analysis are EfficientDet-D[0-7, 7x], SSD512, and Faster R-CNN.

Fig. 42. Illustration of the interest detection (left) selected by the user to generate an explanation, and the saliency map (right) generated using GBP. The explanation interprets the classification decision for the interest detection. The interpretation methods available are GBP, SGBP, IG, and SIG. The interest detections are integer choices depending on the total detections in the image. The interest decisions are the classification decision for the detected class and the bounding box coordinates.

Fig. 43. Illustration of the single-box (left) and realistic (right) evaluation settings as shown in the DExT interactive application. Left: when the input image is the manipulated image obtained by removing 80% of the most important pixels, the prior box detected as the output box for the original input image is shown. Right: the output detections for the manipulated input image. There are no output detections after removing 80% of the most important pixels.

Fig. 44. Illustration of the manipulated image after removing 80% of the most important pixels depending on the generated saliency map.

Fig. 45.
User interface of the DExT interactive application. The user can select any detector, interpretation method, interest decision, and interest detection. In addition, a slider to control the fraction of input image pixels deleted is provided.

OD   IM    DCS  ICS  DBS  IBS  DCR  ICR  DBR  IBR  Overall Rank
ED0  GBP    4    3    1    2    4    3    3    1    3
ED0  SGBP   1    2    2    4    1    2    2    2    2
ED0  IG     3    4    4    3    3    4    4    4    4
ED0  SIG    2    1    3    1    2    1    1    3    1
SSD  GBP    2    3    2    3    1    3    2    3    3
SSD  SGBP   1    2    1    2    2    2    1    1    1
SSD  IG     4    4    4    4    4    4    7    4    4
SSD  SIG    3    1    3    1    3    1    3    2    2
FRN  GBP    4    3    1    2    2    1    1    1    1
FRN  SGBP   1    1    2    1    1    3    2    2    2
FRN  IG     3    4    4    4    4    4    4    4    4
FRN  SIG    2    2    3    3    3    2    3    3    3

Table 2. Summary of the object detector implementations used in this work. The detectors

Table 3. Details about the marine debris object detector used in this work. Reported mAP is at 0.5 IoU.

SSD Backbones   mAP (%)   Input Image Size
VGG16           91.69     300 x 300
ResNet20        89.85     96 x 96
MobileNet       70.30     96 x 96
DenseNet121     73.80     96 x 96
SqueezeNet      68.37     96 x 96
MiniXception    71.62     96 x 96

IM    OD   DCS  ICS  DBS  IBS  DCR  ICR  DBR  IBR  Overall Rank
GBP   ED0   2    2    2    2    2    3    3    3    3
GBP   SSD   1    1    3    1    3    2    1    2    1
GBP   FRN   3    3    1    3    1    1    2    1    2
SGBP  ED0   2    2    2    2    1    3    2    2    2
SGBP  SSD   1    1    3    1    3    2    1    1    1
SGBP  FRN   3    3    1    3    2    1    3    3    3
IG    ED0   1    2    2    2    1    3    2    2    2
IG    SSD   2    1    3    1    3    2    1    1    1
IG    FRN   3    3    1    3    2    1    3    3    3
SIG   ED0   2    2    2    2    1    3    2    2    2
SIG   SSD   1    1    3    1    3    2    1    1    1
SIG   FRN   3    3    1    3    2    1    3    3    3
In: 6th International Conference on Learning Representations (ICLR) Conference Track Proceedings (2018) Gradient-Based Attribution Methods. M Ancona, E Ceolini, C Öztireli, M H Gross, Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.Ancona, M., Ceolini, E., Öztireli, C., Gross, M.H.: Gradient-Based Attribution Methods. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. ChamSpringer11700Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, LNCS, vol. 11700, pp. 169-191. Springer, Cham (2019) A comprehensive study of real-time object detection networks across multiple domains: A survey. E Arani, S Gowda, R Mukherjee, O Magdy, S S Kathiresan, B Zonooz, Transactions on Machine Learning Research. survey CertificationArani, E., Gowda, S., Mukherjee, R., Magdy, O., Kathiresan, S.S., Zonooz, B.: A comprehensive study of real-time object detection networks across multiple domains: A survey. Transactions on Machine Learning Research (2022), survey Certification UOLO -Automatic Object Detection and Segmentation in Biomedical Images. T Araújo, G Aresta, A Galdran, P Costa, A M Mendonça, A Campilho, D Stoyanov, Z Taylor, G Carneiro, T F Syeda-Mahmood, A L Martel, L Maier-Hein, J M R S Tavares, A P Bradley, J P Papa, V Belagiannis, J C Nascimento, Z Lu, S Conjeti, M Moradi, H Greenspan, Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA 2018 and ML-CDS 2018. Madabhushi, A.ChamSpringer11045Araújo, T., Aresta, G., Galdran, A., Costa, P., Mendonça, A.M., Campilho, A.: UOLO -Automatic Object Detection and Segmentation in Biomedical Images. In: Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T.F., Martel, A.L., Maier- Hein, L., Tavares, J.M.R.S., Bradley, A.P., Papa, J.P., Belagiannis, V., Nascimento, J.C., Lu, Z., Conjeti, S., Moradi, M., Greenspan, H., Madabhushi, A. (eds.) 
Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA 2018 and ML-CDS 2018. LNCS, vol. 11045, pp. 165-173. Springer, Cham (2018) Perception for Autonomous Systems (PAZ). O Arriaga, M Valdenegro-Toro, M Muthuraja, S Devaramani, F Kirchner, CoRR) abs/2010.14541Computing Research. Arriaga, O., Valdenegro-Toro, M., Muthuraja, M., Devaramani, S., Kirchner, F.: Per- ception for Autonomous Systems (PAZ). Computing Research Repository (CoRR) abs/2010.14541 (2020) On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. S Bach, A Binder, G Montavon, F Klauschen, K Müller, W Samek, PLOS ONE. 107Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K., Samek, W.: On Pixel- Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE 10(7), 1-46 (07 2015) The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?. J Bastings, K Filippova, A Alishahi, Y Belinkov, G Chrupala, D Hupkes, Y Pinter, Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP. Sajjad, H.the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLPAssociation for Computational Linguistics ACLBastings, J., Filippova, K.: The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In: Alishahi, A., Belinkov, Y., Chrupala, G., Hupkes, D., Pinter, Y., Sajjad, H. (eds.) Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP. pp. 149-155. Association for Computational Linguistics ACL (2020) Toward transformerbased object detection. J Beal, E Kim, E Tzeng, D H Park, A Zhai, D Kislyuk, CoRR abs/2012.09958Beal, J., Kim, E., Tzeng, E., Park, D.H., Zhai, A., Kislyuk, D.: Toward transformer- based object detection. 
CoRR abs/2012.09958 (2020) Endto-End Object Detection with Transformers. N Carion, F Massa, G Synnaeve, N Usunier, A Kirillov, S Zagoruyko, Computer Vision -ECCV 2020. Vedaldi, A., Bischof, H., Brox, T., Frahm, J.Springer12346Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End- to-End Object Detection with Transformers. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J. (eds.) Computer Vision -ECCV 2020. LNCS, vol. 12346, pp. 213-229. Springer (2020) Sanity checks for saliency methods explaining object detectors. Paul G Deepan Chakravarthi Padmanabhan, O A Plöger, M Valdenegro-Toro, Proceedings of the 1st World Conference on eXplainable Artificial Intelligence. the 1st World Conference on eXplainable Artificial IntelligenceDeepan Chakravarthi Padmanabhan, Paul G. Plöger, O.A., Valdenegro-Toro, M.: Sanity checks for saliency methods explaining object detectors. In: Proceedings of the 1st World Conference on eXplainable Artificial Intelligence (2023) F Doshi-Velez, B Kim, arXiv:1702.08608Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprintDoshi-Velez, F., Kim, B.: Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608 (2017) The Rating of Chess Players, Past and Present. A E Elo, BT Batsford LimitedElo, A.E.: The Rating of Chess Players, Past and Present. BT Batsford Limited (1978) A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. M Ester, H Kriegel, J Sander, X Xu, Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD). Simoudis, E., Han, J., Fayyad, U.M.the Second International Conference on Knowledge Discovery and Data Mining (KDD)AAAI PressEster, M., Kriegel, H., Sander, J., Xu, X.: A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In: Simoudis, E., Han, J., Fayyad, U.M. (eds.) 
Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD). pp. 226-231. AAAI Press (1996) Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. D Feng, C Haase-Schütz, L Rosenbaum, H Hertlein, C Gläser, F Timm, W Wiesbeck, K Dietmayer, IEEE Transactions on Intelligent Transportation Systems (TITS). 223Feng, D., Haase-Schütz, C., Rosenbaum, L., Hertlein, H., Gläser, C., Timm, F., Wiesbeck, W., Dietmayer, K.: Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. IEEE Transactions on Intelligent Transportation Systems (TITS) 22(3), 1341-1360 (2021) Towards better visual explanations for deep image classifiers. A Grabska-Barwinska, A Rannen-Triki, O Rivasplata, A György, eXplainable AI approaches for debugging and diagnosis. Grabska-Barwinska, A., Rannen-Triki, A., Rivasplata, O., György, A.: Towards better visual explanations for deep image classifiers. In: eXplainable AI approaches for debugging and diagnosis. (2021) Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions. D A Gudovskiy, A Hodgkinson, T Yamaguchi, Y Ishii, S Tsukizawa, CoRR) abs/1811.08011Computing Research. Gudovskiy, D.A., Hodgkinson, A., Yamaguchi, T., Ishii, Y., Tsukizawa, S.: Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions. Computing Research Repository (CoRR) abs/1811.08011 (2018) Single Shot Text Detector with Regional Attention. P He, W Huang, T He, Q Zhu, Y Qiao, X Li, 2017 IEEE International Conference on Computer Vision (ICCV). IEEEHe, P., Huang, W., He, T., Zhu, Q., Qiao, Y., Li, X.: Single Shot Text Detector with Regional Attention. In: 2017 IEEE International Conference on Computer Vision (ICCV). pp. 3066-3074. IEEE (2017) Attention is not Explanation. 
S Jain, B C Wallace, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Burstein, J., Doran, C., Solorio, T.the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)ACL1Jain, S., Wallace, B.C.: Attention is not Explanation. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) Volume 1 (Long and Short Papers). pp. 3543-3556. Association for Computational Linguistics (ACL) (2019) Machine Learning Techniques for Accountability. B Kim, F Doshi-Velez, AI Magazine. 421Kim, B., Doshi-Velez, F.: Machine Learning Techniques for Accountability. AI Magazine 42(1), 47-52 (2021) Towards Human-Like Interpretable Object Detection Via Spatial Relation Encoding. J U Kim, S Park, Y M Ro, 2020 IEEE International Conference on Image Processing (ICIP). IEEEKim, J.U., Park, S., Ro, Y.M.: Towards Human-Like Interpretable Object Detection Via Spatial Relation Encoding. In: 2020 IEEE International Conference on Image Processing (ICIP). pp. 3284-3288. IEEE (2020) Microsoft COCO: Common Objects in Context. T Lin, M Maire, S J Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, Computer Vision -ECCV 2014. Fleet, D.J., Pajdla, T., Schiele, B., Tuytelaars, T.ChamSpringer8693Lin, T., Maire, M., Belongie, S.J., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common Objects in Context. In: Fleet, D.J., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) Computer Vision -ECCV 2014. LNCS, vol. 8693, pp. 740-755. Springer, Cham (2014) SSD: Single Shot MultiBox Detector. W Liu, D Anguelov, D Erhan, C Szegedy, S E Reed, C Fu, A C Berg, Computer Vision -ECCV 2016. 
Leibe, B., Matas, J., Sebe, N., Welling, M.ChamSpringer9905Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.E., Fu, C., Berg, A.C.: SSD: Single Shot MultiBox Detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) Computer Vision -ECCV 2016. LNCS, vol. 9905, pp. 21-37. Springer, Cham (2016) A Unified Approach to Interpreting Model Predictions. S M Lundberg, S Lee, I Guyon, U Von Luxburg, S Bengio, H M Wallach, R Fergus, S V N Vishwanathan, Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS). Garnett, R.the 31st International Conference on Neural Information Processing Systems (NIPS)Curran Associates, Inc17Lundberg, S.M., Lee, S.: A Unified Approach to Interpreting Model Predictions. In: Guyon, I., von Luxburg, U., Bengio, S., Wallach, H.M., Fergus, R., Vishwanathan, S.V.N., Garnett, R. (eds.) Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS). p. 4768-4777. NIPS'17, Curran Associates, Inc. (2017) RISE: Randomized Input Sampling for Explanation of Black-box Models. V Petsiuk, A Das, K Saenko, British Machine Vision Conference (BMVC). p. BMVA Press151Petsiuk, V., Das, A., Saenko, K.: RISE: Randomized Input Sampling for Explanation of Black-box Models. In: British Machine Vision Conference (BMVC). p. 151. BMVA Press (2018) Black-box Explanation of Object Detectors via Saliency Maps. V Petsiuk, R Jain, V Manjunatha, V I Morariu, A Mehra, V Ordonez, K Saenko, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Petsiuk, V., Jain, R., Manjunatha, V., Morariu, V.I., Mehra, A., Ordonez, V., Saenko, K.: Black-box Explanation of Object Detectors via Saliency Maps. In: Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 11443-11452 (2021) You Only Look Once: Unified, Real-Time Object Detection. 
J Redmon, S K Divvala, R B Girshick, A Farhadi, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEERedmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You Only Look Once: Unified, Real-Time Object Detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 779-788. IEEE (2016) Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. S Ren, K He, R B Girshick, J Sun, IEEE Transactions on Pattern Analysis Machine Intelligence (PAMI). 396Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis Machine Intelligence (PAMI) 39(6), 1137-1149 (2017) Why Should I Trust You?": Explaining the Predictions of Any Classifier. M T Ribeiro, S Singh, C Guestrin, B Krishnapuram, M Shah, A J Smola, C C Aggarwal, D Shen, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Rastogi, R.the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningACMRibeiro, M.T., Singh, S., Guestrin, C.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In: Krishnapuram, B., Shah, M., Smola, A.J., Aggarwal, C.C., Shen, D., Rastogi, R. (eds.) Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1135-1144. Association for Computing Machinery (ACM) (2016) The Elephant in the Room. A Rosenfeld, R S Zemel, J K Tsotsos, CoRR) abs/1808.03305Computing Research. Rosenfeld, A., Zemel, R.S., Tsotsos, J.K.: The Elephant in the Room. Computing Research Repository (CoRR) abs/1808.03305 (2018) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. C Rudin, Nature machine intelligence. 15Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. 
Nature machine intelligence 1(5), 206-215 (2019) Machine learning for science and society. C Rudin, K L Wagstaff, Machine Learning. 951Rudin, C., Wagstaff, K.L.: Machine learning for science and society. Machine Learning 95(1), 1-9 (2014) Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. W Samek, G Montavon, S Lapuschkin, C J Anders, K Müller, Proceedings of the IEEE. 1093Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., Müller, K.: Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proceedings of the IEEE 109(3), 247-278 (2021) Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. R R Selvaraju, M Cogswell, A Das, R Vedantam, D Parikh, D Batra, International Journal of Computer Vision. 1282Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad- CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision 128(2), 336-359 (2020) Is Attention Interpretable?. S Serrano, N A Smith, Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL). Korhonen, A., Traum, D.R., Màrquez, L.the 57th Conference of the Association for Computational Linguistics (ACL)ACLSerrano, S., Smith, N.A.: Is Attention Interpretable? In: Korhonen, A., Traum, D.R., Màrquez, L. (eds.) Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL). pp. 2931-2951. Association for Computational Linguistics (ACL) (2019) Learning Important Features Through Propagating Activation Differences. A Shrikumar, P Greenside, A Kundaje, Proceedings of the 34th International Conference on Machine Learning (ICML) 2017. Proceedings of Machine Learning Research. Precup, D., Teh, Y.W.the 34th International Conference on Machine Learning (ICML) 2017. 
Machine Learning ResearchPMLR70Proceedings of Machine Learning ResearchShrikumar, A., Greenside, P., Kundaje, A.: Learning Important Features Through Propagating Activation Differences. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning (ICML) 2017. Proceedings of Machine Learning Research, vol. 70, pp. 3145-3153. Proceedings of Machine Learning Research (PMLR) (2017) Opening the Black Box of Deep Neural Networks via Information. R Shwartz-Ziv, N Tishby, CoRR) abs/1703.00810Computing Research. Shwartz-Ziv, R., Tishby, N.: Opening the Black Box of Deep Neural Networks via Information. Computing Research Repository (CoRR) abs/1703.00810 (2017) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. K Simonyan, A Vedaldi, A Zisserman, 2nd International Conference on Learning Representations (ICLR) Workshop Track Proceedings. Bengio, Y., LeCun, Y.Simonyan, K., Vedaldi, A., Zisserman, A.: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations (ICLR) Workshop Track Proceedings (2014) Smooth-Grad: removing noise by adding noise. D Smilkov, N Thorat, B Kim, F B Viégas, M Wattenberg, CoRR) abs/1706.03825Computing Research. Smilkov, D., Thorat, N., Kim, B., Viégas, F.B., Wattenberg, M.: Smooth- Grad: removing noise by adding noise. Computing Research Repository (CoRR) abs/1706.03825 (2017) Should We Trust Algorithms?. D Spiegelhalter, Harvard Data Science Review. 21Spiegelhalter, D.: Should We Trust Algorithms? Harvard Data Science Review 2(1) (2020) Striving for Simplicity: The All Convolutional Net. J T Springenberg, A Dosovitskiy, T Brox, M A Riedmiller, 3rd International Conference on Learning Representations (ICLR) Workshop Track Proceedings. 
Bengio, Y., LeCun, Y.Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.A.: Striving for Simplic- ity: The All Convolutional Net. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations (ICLR) Workshop Track Proceedings (2015) Axiomatic Attribution for Deep Networks. M Sundararajan, A Taly, Q Yan, Proceedings of the 34th International Conference on Machine Learning (ICML) 2017. Proceedings of Machine Learning Research. Precup, D., Teh, Y.W.the 34th International Conference on Machine Learning (ICML) 2017. Machine Learning ResearchPMLR70Proceedings of Machine Learning ResearchSundararajan, M., Taly, A., Yan, Q.: Axiomatic Attribution for Deep Networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning (ICML) 2017. Proceedings of Machine Learning Research, vol. 70, pp. 3319-3328. Proceedings of Machine Learning Research (PMLR) (2017) EfficientDet: Scalable and Efficient Object Detection. M Tan, R Pang, Q V Le, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEETan, M., Pang, R., Le, Q.V.: EfficientDet: Scalable and Efficient Object Detection. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10778-10787. IEEE (2020)
[ "https://github.com/DeepanChakravarthiPadmanabhan/" ]
A Multi-task Multi-stage Transitional Training Framework for Neural Chat Translation

Journal of LaTeX Class Files
Neural chat translation (NCT) aims to translate a cross-lingual chat between speakers of different languages. Existing context-aware NMT models cannot achieve satisfactory performances due to the following inherent problems: 1) limited resources of annotated bilingual dialogues; 2) the neglect of modelling conversational properties; 3) training discrepancy between different stages. To address these issues, in this paper, we propose a multi-task multi-stage transitional (MMT) training framework, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. We elaborately design two auxiliary tasks, namely utterance discrimination and speaker discrimination, to introduce the modelling of dialogue coherence and speaker characteristic into the NCT model. The training process consists of three stages: 1) sentence-level pre-training on large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware fine-tuning with gradual transition. Particularly, the second stage serves as an intermediate phase that alleviates the training discrepancy between the pre-training and fine-tuning stages. Moreover, to make the stage transition smoother, we train the NCT model using a gradual transition strategy, i.e., gradually transiting from using monolingual to bilingual dialogues. Extensive experiments on two language pairs demonstrate the effectiveness and superiority of our proposed training framework.
DOI: 10.1109/tpami.2022.3233226
arXiv: 2301.11749 (https://export.arxiv.org/pdf/2301.11749v1.pdf)
Index Terms: Neural Chat Translation, Monolingual Dialogue, Dialogue Coherence, Speaker Characteristic, Gradual Transition
1 INTRODUCTION

Manuscript received July 25, 2021; revised May 6, 2022; accepted Dec 18, 2022.

Neural Chat Translation (NCT) aims to translate a cross-lingual chat between speakers of different languages into utterances in their individual mother tongues. Fig. 1 depicts an example of a cross-lingual chat where one speaker speaks in English and the other in Chinese, with their corresponding translations. With growing international communication and cooperation all around the world, the chat translation task is becoming more important and has broad applications in daily life.

In this task, sentence-level Neural Machine Translation (NMT) models [1], [2], [3] can be directly used to translate dialogue utterances sentence by sentence. Despite their practicality, sentence-level NMT models often generate unsatisfactory translations because they ignore the contextual information in the dialogue history. To address this problem, many studies [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14] adapt context-aware NMT models to chat translation through their capability of incorporating the dialogue history context. Generally, these methods adopt a pretrain-finetune paradigm, which first pre-trains a sentence-level NMT model on a large-scale parallel corpus and then fine-tunes it on the chat translation dataset in a context-aware way. However, they still cannot obtain satisfactory results in the scenario of chat translation, mainly due to the following limitations: 1) The bilingual chat translation corpus is usually limited in size, leaving an NCT model insufficiently trained to fully exploit dialogue context. 2) Conventional ways of incorporating dialogue context neglect to explicitly model its conversational properties, such as dialogue coherence and speaker characteristic, resulting in incoherent and speaker-inconsistent translations.
3) The abrupt transition from sentence-level pre-training to context-aware fine-tuning breaks the consistency of model training, which hurts the potential performance of the final NCT model. Therefore, it is of great significance to train a better NCT model by resolving the above three limitations.

In this paper, we propose a multi-task multi-stage transitional (MMT) training framework, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. Specifically, our proposed framework consists of three training stages, also following the pretrain-finetune paradigm. The first stage is still to pre-train the NCT model through sentence-level translation on the large-scale parallel corpus, resulting in the model M1. At the second stage, using M1 for model initialization, we continue to train the model through the previous sentence-level translation task along with two auxiliary dialogue-related tasks using additional monolingual dialogues, obtaining the model M2. The auxiliary tasks are related to dialogue coherence and speaker characteristic, which are two important conversational properties of dialogue context. For dialogue coherence, we design the task of Utterance Discrimination (UD), which is to judge whether an utterance and a given section of contextual utterances are within the same dialogue. For speaker characteristic, we design the Speaker Discrimination (SD) task, which is to discriminate whether a given utterance and a piece of speaker-specific dialogue history context are spoken by the same speaker. Finally, at the last stage, initialized by M2, the model is fine-tuned using a gradual transition strategy and eventually becomes a context-aware NCT model M3. Concretely, the NCT model is trained through an objective comprised of the chat translation, UD and SD tasks.
During this process, we initially construct training samples for the two auxiliary tasks from additional monolingual dialogues and gradually transit to using bilingual dialogues. The MMT training framework enhances the NCT model from the following aspects. Firstly, the relatively abundant monolingual dialogues function as a supplement to the scarce annotated bilingual dialogues, making the model more sufficiently trained to exploit dialogue context. Secondly, the UD and SD tasks are directly related to dialogue coherence and speaker characteristic, thus introducing the modelling of these two conversational properties into the NCT model. Thirdly, the second training stage serves as an intermediate phase that alleviates the discrepancy between sentence-level pre-training and context-aware fine-tuning. Particularly, it endows the model with the preliminary capability to capture dialogue context for the subsequent NCT training. It is notable that the two dialogue-related auxiliary tasks exist at both the second and third stages with different training data, which maintains the training consistency to some extent. Therefore, at the third stage, the NCT model can be more effectively fine-tuned to leverage dialogue context using the chat translation dataset with only a small number of annotated bilingual dialogues. In essence, the major contributions of our paper are as follows: • In NCT, our work is the first attempt to use additional relatively abundant monolingual dialogues for training, which helps the model more sufficiently trained to capture dialogue context for chat translation. • We elaborately design two dialogue-related auxiliary tasks, namely utterance discrimination and speaker discrimination. This makes the model more capable of modelling dialogue coherence and speaker characteristic, which are two important conversational properties of dialogue context. 
• We propose to alleviate the training discrepancy between pre-training and fine-tuning by introducing an intermediate stage (Stage 2) and adopting a gradual transition strategy for the context-aware fine-tuning (Stage 3). At the second stage, the model is simultaneously optimized with the two auxiliary tasks on the additional monolingual dialogues. Moreover, at the third stage, we train the NCT model by gradually transiting from using monolingual to bilingual dialogues, making the stage transition smoother. Thus, the NCT model can be more effectively fine-tuned on the small-scale bilingual chat translation dataset.
• We will release the code of this work on GitHub: https://github.com/DeepLearnXMU.

The remainder of this paper is organized as follows. Section 2 gives the NCT problem formalization, introduces the basic architecture of our NCT model and describes the conventional two-stage training, including sentence-level pre-training and context-aware fine-tuning. Section 3 elaborates our proposed MMT training framework. In Section 4, we report the experimental results and present an in-depth analysis. Section 5 summarizes the related work, mainly involving several existing studies on NCT and context-aware NMT models. Finally, in Section 6, we draw the conclusions of this paper.

2 BACKGROUND

In this section, we first give the NCT problem formalization (Section 2.1). Then, we describe the Flat-NCT model, which is the model architecture used in this work (Section 2.2). Finally, we introduce the dominant approach of training an NCT model, which consists of sentence-level pre-training (Section 2.3.1) and context-aware fine-tuning (Section 2.3.2).

2.1 Problem Formalization

In the scenario of this work, we denote the two speakers involved in a dialogue as s1 and s2. For a cross-lingual chat, as shown in the example in Fig. 1, the two speakers speak in the source and target language, respectively.
We assume they have alternately given utterances in their own languages for u turns, resulting in the source-language utterance sequence X = x_1, x_2, x_3, x_4, ..., x_{u-1}, x_u and the target-language utterance sequence Y = y_1, y_2, y_3, y_4, ..., y_{u-1}, y_u. Notably, X and Y contain both the utterances originally spoken by one speaker and the translated utterances from the other speaker.

[Fig. 2. The architecture of the Flat-NCT model (input representation layer, encoder and decoder). For illustration, we assume the input sequence [C_{x_u}; x_u] is the concatenation of C_{x_u} = x_1, x_2, x_3, x_4 and x_u = x_6, x_7, x_8, <eos>, separated by a special token <sep>. Notably, words in C_{x_u} can only be attended to by those in x_u at the first encoder layer. At the other encoder layers, C_{x_u} is masked and the self-attention is only conducted within words of x_u.]

TABLE 1: Definitions of Different Dialogue History Contexts

Bilingual dialogue (X, Y):
  C_{x_u} = x_1, x_2, x_3, ..., x_{u-1}: source-side context of x_u
  C_{y_u} = y_1, y_2, y_3, ..., y_{u-1}: target-side context of y_u
  C^{s1}_{x_u} = x_1, x_3, ..., x_{u-2}: s1-specific context of x_u
  C^{s2}_{x_u} = x_2, x_4, ..., x_{u-1}: s2-specific context of x_u
  C^{s1}_{y_u} = y_1, y_3, ..., y_{u-2}: s1-specific context of y_u
  C^{s2}_{y_u} = y_2, y_4, ..., y_{u-1}: s2-specific context of y_u

Monolingual dialogues: the same six context types are defined analogously, where x_u is an utterance from the source-language monolingual dialogue X and y_u is from the target-language monolingual dialogue Y.
Specifically, among these utterances, x 1 , x 3 , ..., x u are originally spoken by the source-language speaker s1 and y 1 , y 3 , ..., y u are the corresponding translations in the target language. Analogously, y 2 , y 4 , ..., y u−1 are originally spoken by the target-language speaker s2 and x 2 , x 4 , ..., x u−1 are the translated utterances in the source language. Besides the bilingual dialogues, our proposed training framework uses additional monolingual dialogues D X of the source language and D Y of the target language. Slightly different from the bilingual dialogue, the two speakers (s1 and s2) in a monolingual dialogue speak in the same language. We also assume a source-language monolingual dialogue X∈D X and a target-language monolingual Y ∈D Y proceed to the u-th turn, resulting in x 1 , x 2 , x 3 , x 4 , ..., x u−1 , x u and y 1 , y 2 , y 3 , y 4 , ..., y u−1 , y u , respectively. Then, we give the necessary definitions in the remainder of this paper. For clarity, we list all definitions 1 in Table 1. For a bilingual dialogue, we define the dialogue history context of x u on the source side as C xu =x 1 , x 2 , x 3 , ..., x u−1 and that of y u on the target side as C yu =y 1 , y 2 , y 3 , ..., y u−1 . According to original speakers, on the source side, we define the speaker s1-specific dialogue history context of x u as the partial sequence of its preceding utterances C s1 xu =x 1 , x 3 , ..., x u−2 and the speaker s2-specific dialogue history context of x u as C s2 xu =x 2 , x 4 , ..., x u−1 . On the target side, C s1 yu =y 1 , y 3 , ..., y u−2 and C s2 yu =y 2 , y 4 , ..., y u−1 denote the speaker s1-specific and s2-specific dialogue history contexts of y u , respectively. When it comes to a monolingual dialogue, we also formalize different types of dialogue history contexts {C xu , C y u , C s1 xu , C s2 xu , C s1 y u , C s2 y u } in a similar way. 
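The context definitions above can be made concrete with a small helper. The 1-indexed turn convention (s1 speaks the odd turns) follows the formalization; the function itself is our own illustration, not part of the released code.

```python
def dialogue_contexts(dialogue, u):
    """For the u-th utterance (1-indexed), return the full context C_xu
    and the speaker-specific contexts C^s1_xu and C^s2_xu of Table 1."""
    context = dialogue[:u - 1]   # C_xu = x_1, ..., x_{u-1}
    s1_ctx = context[0::2]       # C^s1_xu = x_1, x_3, ..., x_{u-2}
    s2_ctx = context[1::2]       # C^s2_xu = x_2, x_4, ..., x_{u-1}
    return context, s1_ctx, s2_ctx

ctx, s1, s2 = dialogue_contexts(["x1", "x2", "x3", "x4", "x5"], 5)
print(ctx)  # ['x1', 'x2', 'x3', 'x4']
print(s1)   # ['x1', 'x3']
print(s2)   # ['x2', 'x4']
```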
2.2 The NCT Model

We use the Flat-Transformer introduced in [14] as our basic NCT model, which we denote as Flat-NCT. Figure 2 shows the architecture of the Flat-NCT, mainly including the input representation layer, encoder and decoder.

2.2.1 Input Representation Layer

For each utterance x_u = x_1, x_2, ..., x_{|x_u|} to be translated, [C_{x_u}; x_u] is fed into the NCT model as input, where [;] denotes the concatenation. (For each of the context sequences listed in Table 1, taking C_{x_u} for instance, we prepend a special token '[cls]' to it and use another special token '[sep]' to delimit its included utterances, as implemented in [15].) Different from the conventional embedding layer that only includes a word embedding WE and a position embedding PE, we additionally add a speaker embedding SE and a turn embedding TE. The final embedding B(x_i) of each input word x_i can be written as

B(x_i) = WE(x_i) + PE(x_i) + SE(x_i) + TE(x_i),    (1)

where WE ∈ R^{|V|×d}, SE ∈ R^{2×d} and TE ∈ R^{|U|×d}. Here, |V|, |U| and d denote the size of the shared vocabulary, the maximum number of dialogue turns, and the hidden size, respectively.

2.2.2 Encoder

The encoder of our NCT model has L identical layers, each of which is composed of a self-attention (SelfAtt) sub-layer and a feed-forward network (FFN) sub-layer. Let h_e^{(l)} denote the hidden states of the l-th encoder layer; it is calculated using the following equations:

z_e^{(l)} = SelfAtt(h_e^{(l-1)}) + h_e^{(l-1)},
h_e^{(l)} = FFN(z_e^{(l)}) + z_e^{(l)},    (2)

where h_e^{(0)} is initialized as the embedding of the input words. Particularly, words in C_{x_u} can only be attended to by those in x_u at the first encoder layer, while C_{x_u} is masked at the other layers, as implemented in [14].

2.2.3 Decoder

The decoder also consists of L identical layers, each of which additionally has a cross-attention (CrossAtt) sub-layer compared to the encoder.
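The input representation of Eq. (1) can be sketched with toy lookup tables. The sizes and random initialisation below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy sizes (assumptions): vocab |V|, max turns |U|, hidden size d,
# and a maximum sequence length for the position table.
rng = np.random.default_rng(0)
V, U, d, max_len = 100, 16, 8, 32
WE = rng.normal(size=(V, d))        # word embedding
PE = rng.normal(size=(max_len, d))  # position embedding
SE = rng.normal(size=(2, d))        # speaker embedding (two speakers)
TE = rng.normal(size=(U, d))        # turn embedding

def embed(token_ids, speaker_ids, turn_ids):
    """B(x_i) = WE(x_i) + PE(x_i) + SE(x_i) + TE(x_i)  (Eq. 1)."""
    pos = np.arange(len(token_ids))
    return WE[token_ids] + PE[pos] + SE[speaker_ids] + TE[turn_ids]

# Three tokens, all spoken by speaker 0 in turn 2 of the dialogue.
B = embed([5, 7, 9], [0, 0, 0], [2, 2, 2])
print(B.shape)  # (3, 8): one d-dimensional vector per input token
```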
Let h_d^{(l)} denote the hidden states of the l-th decoder layer; it is computed as

z_d^{(l)} = SelfAtt(h_d^{(l-1)}) + h_d^{(l-1)},
c_d^{(l)} = CrossAtt(z_d^{(l)}, h_e^{(L)}) + z_d^{(l)},
h_d^{(l)} = FFN(c_d^{(l)}) + c_d^{(l)},    (3)

where h_e^{(L)} corresponds to the top-layer encoder hidden states. (The layer normalization is omitted for simplicity.) At each decoding time step t, the t-th decoder hidden state h_{d,t}^{(L)} is fed into a linear transformation layer and a softmax layer to predict the probability distribution of the next target token:

p(y_t | y_{<t}, x_u, C_{x_u}) = Softmax(W_o h_{d,t}^{(L)} + b_o),    (4)

where W_o ∈ R^{|V|×d} and b_o ∈ R^{|V|} are trainable parameters.

2.3 Two-stage Training

2.3.1 Sentence-level Pre-training

At this stage, the NCT model is pre-trained on a large-scale parallel corpus D_sent in the way of vanilla sentence-level translation. For each parallel sentence pair (x, y) ∈ D_sent, taking x as input, the model is optimized through the following objective:

L_sent(θ_nct) = Σ_{t=1}^{|y|} log p(y_t | x, y_{<t}),    (5)

where θ_nct denotes the parameters of the NCT model, y = y_1, y_2, ..., y_{|y|} is the target translation, y_t is the t-th word of y, and y_{<t} denotes the partial sequence y_1, ..., y_{t-1} of target words preceding y_t.

2.3.2 Context-aware Fine-tuning

After the sentence-level pre-training, the model is then fine-tuned on the bilingual chat translation dataset D_bct in a context-aware way. Concretely, given a piece of U-turn parallel bilingual dialogue utterances (X, Y) ∈ D_bct, where X = x_1, x_2, ..., x_U is in the source language while Y = y_1, y_2, ..., y_U is in the target language, the training objective at this stage can be formalized as

L_nct(θ_nct) = − Σ_{u=1}^{U} log p(y_u | x_u, x_{<u}, y_{<u}),    (6)

where x_{<u} and y_{<u} are the preceding utterance sequences of the u-th source-language utterance x_u and the u-th target-language utterance y_u, respectively.
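The log-likelihood objectives above (Eqs. (5) and (6)) reduce to sums of per-token log-probabilities. A minimal sketch, where the probability values are made up for illustration:

```python
import math

def sentence_log_likelihood(token_probs):
    """L_sent = sum over t of log p(y_t | x, y_<t)  (Eq. 5)."""
    return sum(math.log(p) for p in token_probs)

# Illustrative per-token probabilities p(y_t | x, y_<t) for a 3-token target.
probs = [0.9, 0.8, 0.95]
ll = sentence_log_likelihood(probs)
# The result is negative; the closer to 0, the more confident the model.
print(ll)
```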
More specifically, p(y_u | x_u, x_{<u}, y_{<u}) is calculated as

p(y_u | x_u, x_{<u}, y_{<u}) = Π_{t=1}^{|y_u|} p(y_t | y_{<t}, x_u, x_{<u}, y_{<u}),    (7)

where y_t is the t-th target word in y_u and y_{<t} denotes the preceding tokens y_1, y_2, ..., y_{t-1} before the t-th time step.

3 MULTI-TASK MULTI-STAGE TRANSITIONAL TRAINING FRAMEWORK

In this section, we give a detailed description of our proposed multi-task multi-stage transitional (MMT) training framework for NCT, which aims to improve the NCT model with dialogue-related auxiliary tasks using additional monolingual dialogues. In the following subsections, we first introduce the two proposed dialogue-related auxiliary tasks (Section 3.1) in detail. Then, we elaborate the procedures of our proposed training framework (Section 3.2).

3.1 Auxiliary Tasks

In our proposed training framework, we elaborately design two auxiliary tasks related to two important conversational properties of dialogue context, namely dialogue coherence and speaker characteristic. The first task, for dialogue coherence, is utterance discrimination (UD); the second, for speaker characteristic, is speaker discrimination (SD). Together with the main chat translation task, the NCT model can be enhanced to generate more coherent and speaker-consistent translations through multi-task learning. In the following subsections, in order to clearly describe the two auxiliary tasks, we take a source-language dialogue X = x_1, x_2, x_3, x_4, ..., x_{u-1}, x_u for instance, which can be generalized to the other types of dialogues (Y, X and Y).

3.1.1 Utterance Discrimination (UD)

A series of previous studies [16], [17], [18], [19], [20] have indicated that modelling global contextual coherence leads to more coherent generated text. From this perspective, we design the UD task to introduce the modelling of dialogue coherence into the NCT model. As shown in Fig.
3(a), our UD task aims to distinguish whether an utterance and a given section of contextual utterances are within the same dialogue. To this end, we construct positive and negative training samples from the monolingual and bilingual dialogues, where a training sample (C xu , x) contains a section of dialogue history context C xu and a selected utterance x with the label X ud . For a positive sample with label X ud = 1, x is exactly x u , while for a negative sample with label X ud = 0, x is a randomly selected utterance from any other irrelevant dialogue. Formally, the training objective of UD is defined as follows: L X ud (θ nct , θ ud ) = −log(p(ˆ X ud = X ud |C xu , x)),(8) where θ nct and θ ud are the trainable parameters of the NCT model and UD classifier, respectively. To estimate the probability in Eq. 8, we first obtain the representations H x of the utterance x and H Cx u of the dialogue history context C xu using the NCT encoder. Specifically, H x is calculated as 1 | x| | x| i=1 h (L) e,i while H Cx u is defined as the encoder hidden state h (L) e,0 of the prepended special token '[cls]' in C xu . Then, the concatenation of H x and H Cx u is fed into a binary UD classifier, which is an extra fully-connected layer on top of the NCT encoder: p(ˆ X ud = 1|C xu , x) = sigmoid(W ud [H x ; H Cx u ]), p(ˆ X ud = 0|C xu , x) = 1 − p(ˆ X ud = 1|C xu , x),(9) where W ud is the trainable parameter matrix of the UD classifier and the bias term is omitted for simplicity. Speaker Discrimination (SD) Generally, a dialogue may involve speakers with different characteristics, which is a salient conversational property. Therefore, we design the SD task to incorporate the modelling of speaking style into the NCT model, making the translated utterance more speaker-consistent. As shown in Fig. 3(b), the SD task is to discriminate whether a given utterance and a piece of speaker-specific dialogue history contexts are spoken by the same speaker. 
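Both discrimination heads (Eqs. (9) and (11)) share the same shape: a single fully-connected layer over concatenated representations followed by a sigmoid. A toy sketch, with random weights and a made-up hidden size standing in for the jointly trained classifier on top of the NCT encoder:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                             # toy hidden size (assumption)
W = rng.normal(size=(2 * d,))     # weights of the binary classifier

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_probability(H_utt, H_ctx):
    """p(label = 1 | context, utterance) = sigmoid(W [H_utt ; H_ctx]),
    the common form of the UD (Eq. 9) and SD (Eq. 11) heads."""
    return float(sigmoid(W @ np.concatenate([H_utt, H_ctx])))

# H_utt would be the mean of the encoder states of the utterance;
# H_ctx the encoder state of the prepended '[cls]' token.
p = discriminator_probability(rng.normal(size=d), rng.normal(size=d))
print(0.0 < p < 1.0)  # True: the head always outputs a valid probability
```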
Similarly, we construct positive and negative training samples from the monolingual and bilingual dialogues. Specifically, an SD training sample (C s xu , x u ) is comprised of the speaker s-specific dialogue history context (s ∈ {s1, s2}) and the utterance x u with the corresponding label X sd . For a positive sample with label X sd = 1, the dialogue history context is specific to the speaker s1 (x u is spoken by s1), while for a negative sample with label X sd = 0, it is specific to the other speaker s2. Formally, the training objective of SD is defined as follows: L X sd (θ nct , θ sd ) = −log(p(ˆ X sd = X sd |C s xu , x u )),(10) where θ nct and θ sd are the trainable parameters of the NCT model and SD classifier, respectively. Analogously, we use the NCT encoder to obtain the representations H xu of x u and H C s xu of C s xu , where H xu = 1 |xu| |xu| i=1 h (L) e,i and the h (L) e,0 of C s xu is used as H C s xu . Then, to estimate the probability in Eq. 10, the concatenation of H xu and H C s xu is fed into a binary SD classifier, which is another fully-connected layer on top of the NCT encoder: p(ˆ X sd = 1|C s xu , x u ) = sigmoid(W sd [H xu ; H C s xu ]), p(ˆ X sd = 0|C s xu , x u ) = 1 − p(ˆ X sd = 1|C s xu , x u ),(11) where W sd is the trainable parameter matrix of the SD classifier and the bias term is omitted for simplicity. Three-stage Training Then, we elaborate the procedures of our proposed MMT training framework. The training totally consists of three stages: 1) sentence-level pre-training on large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware finetuning with gradual transition. During inference, the auxiliary tasks (UD and SD) are not involved and only the NCT model (θ nct ) is used to conduct chat translation. Concretely, for UD and SD tasks, we construct training instances from X ∈ D X and Y ∈ D Y in the way described in Section 3.1.1 and Section 3.1.2. 
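The construction of positive and negative samples for the two auxiliary tasks can be sketched as follows. The 1-indexed turn convention (s1 on odd turns) follows Section 2.1; the function names and the toy dialogues are our own.

```python
import random

def make_ud_samples(dialogue, u, other_dialogue, rng):
    """UD (Section 3.1.1): the positive sample pairs the context with x_u
    itself; the negative pairs it with an utterance drawn from an
    irrelevant dialogue."""
    context = dialogue[:u - 1]
    positive = (context, dialogue[u - 1], 1)
    negative = (context, rng.choice(other_dialogue), 0)
    return positive, negative

def make_sd_samples(dialogue, u):
    """SD (Section 3.1.2): the positive sample uses the context of the
    speaker of x_u (s1, odd turns); the negative uses the other
    speaker's context."""
    context = dialogue[:u - 1]
    positive = (context[0::2], dialogue[u - 1], 1)   # x_1, x_3, ...
    negative = (context[1::2], dialogue[u - 1], 0)   # x_2, x_4, ...
    return positive, negative

rng = random.Random(0)
ud_pos, ud_neg = make_ud_samples(["x1", "x2", "x3"], 3, ["z1", "z2"], rng)
sd_pos, sd_neg = make_sd_samples(["x1", "x2", "x3", "x4", "x5"], 5)
print(sd_pos)  # (['x1', 'x3'], 'x5', 1)
print(sd_neg)  # (['x2', 'x4'], 'x5', 0)
```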
Together with the sentencelevel translation, the training objective at this stage can be written as L 2 = L sent + α 1 L ud + β 1 L sd ,(12) where L ud = L X ud (θ nct , θ ud ) + L Y ud (θ nct , θ ud ), L sd = L X sd (θ nct , θ sd ) + L Y sd (θ nct , θ sd ) , and α 1 and β 1 are balancing hyper-parameters for the tradeoff between L sent and the other auxiliary objectives. Here, as similarly defined in Eq. 8 and Eq. 10, L X ud (θ nct , θ ud ) and L Y ud (θ nct , θ ud ) represent the training objectives of the UD task on the source-language monolingual dialogue X and target-language monolingual dialogue Y respectively, which is analogous to L X sd (θ nct , θ ud ) and L Y sd (θ nct , θ ud ) of the SD task. In this way, the tasks of UD and SD introduce the modelling of dialogue coherence and speaker characteristic into the sentence-level translation model. Meanwhile, we still use the objective L sent so as to avoid undermining the pre-trained translation capability of the model, providing a better starting point for the subsequent NCT fine-tuning. Stage 3: Context-aware Fine-tuning with Gradual Transition Using the bilingual chat translation dataset D bct , the third stage is to obtain the final NCT model M 3 through contextaware fine-tuning, where the two auxiliary tasks (UD and SD) are still involved. Particularly, different from the second stage, we construct the training instances of UD and SD tasks from X and Y . Given a bilingual dialogue pair (X, Y ) ∈ D bct , we optimize the model (initialized by M 2 ) through the following objective: L 3 = L nct + α 2 L ud + β 2 L sd ,(13) where L ud = L X ud (θ nct , θ ud ) + L Y ud (θ nct , θ ud ), L sd = L X sd (θ nct , θ sd ) + L Y sd (θ nct , θ sd ) , and α 2 and β 2 are also the hyper-parameters controlling the balance between L nct and the other auxiliary objectives analogously defined as in Eq. 8 or Eq. 10. 
Notably, under our proposed training framework, UD and SD tasks exist both at the second and the third stages, which can benefit the NCT model in the following two aspects. On the one hand, the two auxiliary tasks maintain the training consistency, making the transition from sentence-level pre-training to context-aware fine-tuning smoother. On the other hand, because the model has acquired the preliminary capability of capturing dialogue context obtained at the second stage, it can be more effectively fine-tuned on D bct with only a small number of annotated bilingual dialogues. However, although the above strategy maintains the training consistency to some extent, the transition of training stage is still abrupt because the NCT model is trained with the two auxiliary tasks using totally different data at the second and third stages. To further alleviate the training discrepancy, we propose to train the NCT model by gradually transiting from using monolingual to bilingual dialogues. Specifically, we keep on using the additional monolingual dialogues (X and Y ) to accomplish a smoother transition of training stages. Therefore, the training objective of this stage can be formalized as L 3 = L nct + λ(α 2 L ud + β 2 L sd ) + (1 − λ)(α 1 L ud + β 1 L sd ),(14) where λ=n/N denotes the coefficient controlling the balance between monolingual and bilingual dialogues with n being the current training step at the third stage and N being the maximum steps of this stage. Note that α 1 and β 1 are kept fixed as the values in Eq. 12. Considering that the additional monolingual dialogues are much more than the available annotated bilingual dialogues, they can function as a supplement to the scarce annotated bilingual dialogues, helping the model learn to better exploit dialogue context. EXPERIMENTS To investigate the effectiveness of our proposed training framework, we conducted experiments on English⇔German (En⇔De) and English⇔Chinese (En⇔Zh) chat translation datasets. 
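The gradual transition of Eq. (14) can be sketched as a step-dependent mixing of the two auxiliary-loss groups, with λ = n/N growing over the N fine-tuning steps. All loss values and hyper-parameter settings below are illustrative, not the paper's.

```python
def stage3_loss(l_nct, bi_ud, bi_sd, mono_ud, mono_sd,
                n, N, a1=1.0, b1=1.0, a2=1.0, b2=1.0):
    """L_3 = L_nct + lam*(a2*L_ud + b2*L_sd) + (1-lam)*(a1*L_ud' + b1*L_sd'),
    where the primed losses come from monolingual dialogues and
    lam = n / N  (Eq. 14)."""
    lam = n / N
    return (l_nct
            + lam * (a2 * bi_ud + b2 * bi_sd)
            + (1.0 - lam) * (a1 * mono_ud + b1 * mono_sd))

# At step 0 only the monolingual-dialogue auxiliary terms contribute;
# by the final step N only the bilingual-dialogue terms remain.
print(stage3_loss(1.0, 0.5, 0.5, 0.4, 0.4, n=0, N=100))    # 1.8
print(stage3_loss(1.0, 0.5, 0.5, 0.4, 0.4, n=100, N=100))  # 2.0
```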
Datasets

As described in Section 3.2, our proposed training framework consists of three stages, involving the large-scale sentence-level parallel corpus (WMT20), the additional monolingual dialogues (Taskmaster-1) and the annotated bilingual dialogues (BConTrasT and BMELD). We first filter out duplicate sentence pairs and remove those whose length exceeds 80. Then, we employ a series of open-source/in-house scripts, including full-/half-width conversion, unicode conversion, punctuation normalization, and tokenization [21], to pre-process the raw data. Finally, we apply byte-pair encoding (BPE) [22] with 32K merge operations to tokenize the sentences into subwords. By doing so, we obtain 45,541,367 sentence pairs for En⇔De and 22,244,006 sentence pairs for En⇔Zh, respectively.

Taskmaster-1 [23].5 This dataset consists of English dialogues created via two distinct procedures: either the "Wizard of Oz" (WOz) approach, in which trained agents and crowd-sourced workers interact with each other, or the "self-dialog" approach, in which crowd-sourced workers write the entire dialog themselves. Given these monolingual dialogues in English, we first pre-process them using the same procedures as for WMT20. Then, because we do not have the needed German/Chinese monolingual dialogues for our En⇔De/En⇔Zh experiments, we use in-house En⇒De and En⇒Zh translation models to obtain German/Chinese translations of the original English monolingual dialogues.

4. http://www.statmt.org/wmt20/translation-task.html
5. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019

BConTrasT [24].6 This dataset is based on the monolingual Taskmaster-1 corpus [23] and is provided by the WMT20 Shared Task on Chat Translation [24], containing chats for the English-German language pair. A subset of dialogues in Taskmaster-1 are first automatically translated from English into German and then manually post-edited by native German speakers on Unbabel.
The conversations in BConTrasT involve two speakers of different languages, where one (the customer) speaks in German and the other (the agent) responds in English.

BMELD. This is a recently released English-Chinese bilingual chat translation dataset. Based on the original English dialogues in MELD (Multimodal EmotionLines Dataset) [25], the dataset authors first crawled the corresponding Chinese translations from a movie subtitle website and then had the crawled translations manually post-edited by native postgraduate Chinese students majoring in English. Finally, following [24], they assume 50% of the utterances are originally spoken by Chinese speakers to keep the data balanced for Zh⇒En translation, and build the bilingual MELD (BMELD). For the Chinese utterances, we follow the authors in segmenting the sentences using the Stanford CoreNLP toolkit.

Contrast Models

We compare the Flat-NCT model trained under our proposed MMT training framework with baseline sentence-level NMT models and several existing context-aware NMT models.

Sentence-level NMT Models.
• Transformer [3]: The vanilla Transformer model trained on the sentence-level NMT corpus.
• Transformer+FT [3]: The vanilla Transformer model that is first pre-trained on the sentence-level NMT corpus and then directly fine-tuned on the bilingual chat translation dataset.

Context-Aware NMT Models.
• Dia-Transformer+FT [26]: The original model is an RNN-based document-level NMT model with an additional encoder to incorporate the mixed-language dialogue history. We re-implement it based on Transformer, where an additional encoder layer is used.
• Flat-NCT+FT: The Flat-NCT model trained through sentence-level pre-training (Section 2.3.1) and context-aware fine-tuning (Section 2.3.2). Please note that it is our most related baseline.

Our Model.
• Flat-NCT+MMT: The Flat-NCT model trained under our proposed MMT training framework with Eq.
14 used at the third stage, i.e., gradually transitioning from monolingual to bilingual dialogues.

Implementation Details

We develop our NCT model based on the open-source toolkit THUMT [28]. In our experiments, we adopt the Transformer-Base and Transformer-Big settings of [3]. In Transformer-Base, we use a hidden size (i.e., d) of 512, a filter size of 2,048 and 8 heads in multi-head attention. In Transformer-Big, we use a hidden size of 1,024, a filter size of 4,096, and 16 heads in multi-head attention. Both Transformer-Base and Transformer-Big contain L=6 encoder layers and the same number of decoder layers. As for the number of training steps of each stage, following the implementation in [29], we set the training steps of the first and second stages to 200,000 and 5,000, respectively. For the third stage, we conducted trial experiments on the En⇒De validation set, where performance no longer improved after about 5,000 steps. Therefore, we set the total number of training steps of the third stage to 5,000 (i.e., N=5,000 in Eq. 14). During training, we allocate 4,096 tokens to each NVIDIA Tesla V100 GPU. At the first stage, we use 8 GPUs to pre-train the model in parallel, resulting in 8*4,096 tokens per update. To test the performance of the pre-trained model, we measure its BLEU scores on newstest2019; the results are shown in Table 3. At the second and third stages, we use only 4 GPUs, resulting in about 4*4,096 tokens per update for all experiments at these two stages. All models are optimized using Adam [30] with the learning rate set to 1.0 and label smoothing set to 0.1. The dropout rates for Transformer-Base and Transformer-Big are set to 0.1 and 0.3, respectively. The results are reported with a statistical significance test [31].

Effects of Hyper-parameters

For the Flat-NCT model under our proposed training framework, the context length for C_xu and the balancing factors (α_1, β_1, α_2 and β_2; see Eq. 12 and Eq.
14) of the auxiliary tasks are the hyper-parameters we need to tune manually.

Context Length

In practice, for each x_u, the NCT model only takes a fixed number of preceding utterances as its dialogue history context C_xu. We investigate the effect of the context length using the Flat-NCT+FT model with the Transformer-Base setting. Fig. 4 shows that the model achieves the best performance on the En⇒De validation set when the number of preceding source utterances used as dialogue history context is set to 3. However, taking in more preceding utterances not only increases computational costs but also adversely affects performance. The underlying reason is that distant dialogue utterances usually have a low correlation with the current utterance and are likely to introduce harmful noise. Therefore, we set the context length to 3 in all subsequent experiments.

Balancing Factors of Auxiliary Tasks

To determine the best balancing factors (α_1, β_1, α_2, β_2) of the auxiliary tasks, we evaluate model performance on the corresponding validation sets using a grid-search strategy. First, at the second training stage, we vary α_1 and β_1 from 0 to 1.0 with an interval of 0.1. Then, at the third training stage, given the selected α_1 and β_1, we also search α_2 and β_2 over values from 0 to 1.0 with an interval of 0.1. Finally, we obtain the sets of determined balancing factors for the different translation directions (En⇒De, De⇒En, En⇒Zh and Zh⇒En), as listed in Table 4.

Overall Performance

In Table 5, we report the experimental results on En⇔De and En⇔Zh under the Transformer-Base and Transformer-Big settings.

Sentence-level Models vs. Context-aware Models

From Table 5, in terms of both BLEU and TER, we can observe that the sentence-level model "Transformer+FT" achieves comparable or even better results compared with the existing context-aware models

Table 5 caption: Results on the test sets of BConTrasT (En⇔De) and BMELD (En⇔Zh) in terms of BLEU (%) and TER (%). ↑: The higher the better.
↓: The lower the better. The best results are shown in bold. "†" and "††" indicate that the results are statistically better than the best results of all other contrast NMT models with t-test p < 0.05 and p < 0.01, respectively. All the contrast models with "+FT" are trained using the conventional two-stage strategy. "Flat-NCT+MMT" is our model.

("Dia-Transformer+FT", "Gate-Transformer+FT" and "Flat-NCT+FT"), which were originally proposed for document-level translation. This suggests that if conventional approaches to exploiting context are not well adapted to the chat scenario, the NCT model is negatively affected. This may be because, when the training data for chat translation is extremely small, the NCT model is insufficiently trained and its poor use of dialogue history context introduces harmful noise.

Results on En⇔De

Under the Transformer-Base setting, our NCT model outperforms the sentence-level and context-aware models in most cases. In terms of BLEU, compared with "Flat-NCT+FT", "Flat-NCT+MMT" gains 1.18↑ on En⇒De and 0.71↑ on De⇒En, showing the advantage of our proposed MMT training framework over the conventional two-stage training strategy. In terms of TER, "Flat-NCT+MMT" also shows an advantage over the other contrast models. Under the Transformer-Big setting, "Flat-NCT+MMT" still performs best in most cases on both En⇒De and De⇒En.

Results on En⇔Zh

We also conducted experiments on the BMELD dataset. Under the Transformer-Base setting, on En⇔Zh, "Flat-NCT+MMT" substantially outperforms the other sentence-level and context-aware models. Concretely, "Flat-NCT+MMT" gains at least 2.09↑ and 0.62↑ BLEU over the other contrast models on En⇒Zh and Zh⇒En, respectively. In terms of TER, it also achieves the best results in both translation directions. Under the Transformer-Big setting, "Flat-NCT+MMT" again exhibits notable performance gains.
All the above results demonstrate the effectiveness and generalizability of our proposed MMT training framework across different language pairs.

Result Analysis

To better understand the advantages of our proposed training framework, we conduct a series of analytical experiments to investigate the effectiveness of using additional monolingual dialogues and the introduced auxiliary tasks.

Table 7 caption: Results (BLEU↑/TER↓) on the validation set of BConTrasT (En⇔De) with ablations of the UD/SD tasks. The left half lists ablation results of the UD task while the right half lists those of the SD task. "w/o.": the specified training objectives are ablated in our proposed training framework. For instance, "w/o. L^X_ud" means that the objective of the UD task L^X_ud on the source-language monolingual dialogues X is ablated in Eq. 12 and Eq. 14 at the second and third training stages. The last row (Row 8) corresponds to the setting in which all training objectives of the auxiliary tasks are ablated, i.e., all UD and SD objectives on both the monolingual and the bilingual dialogues.

Effects of Monolingual Dialogues

In our proposed training framework, we use both source- and target-language additional monolingual dialogues (X and Y) at the second and third stages. First, we investigate the effect of the monolingual dialogues on the En⇔De validation set by partially removing different groups of them. From Table 6, grouped by training stage, we observe that removing the monolingual dialogues at either the second or the third stage results in performance drops (Rows 1 and 2). This indicates that the additional monolingual dialogues benefit the NCT model at both training stages. Next, grouped by language, when we entirely remove either the source-language or the target-language monolingual dialogues at the two stages, model performance also declines (Rows 3 and 4). These results show that both the source- and target-language monolingual dialogues have a positive effect during training.
Lastly, if no monolingual data is used during the whole training process, performance degrades more drastically (Row 5), echoing the aforementioned findings. Then, we investigate how the amount of additional monolingual dialogues affects the NCT model. Fig. 5 illustrates model performance with different proportions (100%, 50%, 10% and 0%) of the monolingual dialogues used. The results show that the performance of the NCT model consistently declines as fewer monolingual dialogues are used in our proposed training framework. All these results demonstrate the effectiveness and necessity of using relatively abundant monolingual dialogues in our framework.

Effects of Auxiliary Tasks

The two auxiliary tasks (UD and SD) play an important role in our proposed training framework. Therefore, we investigate their effects by ablating them under different settings. Table 7 lists the results on the validation set of BConTrasT (En⇔De) with ablations of the UD/SD tasks. First, we successively exclude the objectives of the UD/SD tasks on monolingual dialogues from the MMT training of our NCT model. When only one of L^X_ud, L^Y_ud, L^X_sd and L^Y_sd is excluded, performance drops (Rows 1 and 2) compared to "Flat-NCT+MMT" (Row 0). Moreover, if we exclude the UD or SD task on both the source- and target-language monolingual dialogues at once, the NCT model mostly performs worse than in the above settings (i.e., Row 3 vs. Rows 0, 1, 2). It is also notable that the ablations of the UD/SD tasks have a greater influence on the En⇒De direction than on De⇒En. We conjecture that this is because the German monolingual dialogues are automatically translated from English by in-house sentence-level NMT models, losing their original conversational properties to some extent. Thus, the two dialogue-related auxiliary tasks bring smaller improvements during MMT training.
These results show that both the UD and SD tasks on source- and target-language monolingual dialogues bring improvements, indicating that the preliminary capability of capturing dialogue context acquired from additional monolingual dialogues does enhance the NCT model.

Table 8 caption: Results on the test set of BMELD (En⇔Zh) in terms of BLEU (%) and TER (%). "Flat-NCT+MMT(Pseudo) w/o. SD" represents using pseudo Chinese monolingual dialogues without any SD objective. "Flat-NCT+MMT(Authentic) w/o. SD" represents using authentic Chinese monolingual dialogues without any SD objective.

Then, we turn to successively excluding the objectives of the UD/SD tasks on bilingual dialogues. We reach a similar conclusion: these exclusions lead to performance declines (i.e., Row 0 vs. Rows 4, 5, 6). Likewise, the two auxiliary tasks on source- and target-language bilingual dialogues have a greater effect in most cases on the En⇒De direction than on De⇒En, again supporting the above conjecture. Lastly, we completely ablate either the UD or the SD task from MMT training and observe that performance drops more severely (Row 7). Moreover, if we remove all auxiliary objectives of the UD and SD tasks, the training of our NCT model degenerates into the conventional two-stage training and thus obtains the worst performance (Row 8). These ablation results under different settings strongly confirm that the two auxiliary tasks contribute considerably during MMT training by incorporating the modelling of conversational properties into our NCT model.

Effects of Pseudo/Authentic Monolingual Dialogues

In our previous experiments, since most German and Chinese dialogue datasets do not contain annotated speaker labels, they are not suitable for our Flat-NCT model to perform the SD task. Therefore, we use in-house NMT models to obtain pseudo German/Chinese monolingual dialogues from the authentic English Taskmaster-1 dataset, which has speaker labels available.
To investigate how the authenticity of the monolingual dialogues affects our proposed training framework, we turn to entirely authentic monolingual dialogues. Specifically, besides the authentic English Taskmaster-1 dataset, we introduce authentic Chinese dialogues from the recently released MSCTD dataset [32].12 Since MSCTD still has no speaker labels for the SD task, we only include the UD task, i.e., we exclude all SD objectives (on both monolingual and bilingual dialogues) from MMT training, which is denoted as "Flat-NCT+MMT(Authentic) w/o. SD". Table 8 compares it with the model using pseudo Chinese monolingual

12. The MSCTD dataset has a total of 132,741 Chinese utterances.

Effects of BT-augmented Chat Translation Corpus

Instead of only being used for the auxiliary tasks, the additional monolingual dialogues can alternatively be used to augment the bilingual chat translation dataset D_bct for the context-aware fine-tuning of all contrast models and ours. To further validate the effectiveness of our proposed training framework, we compare MMT training with the conventional two-stage pretrain-finetune paradigm using a BT-augmented bilingual chat translation dataset. Concretely, as a common technique, we employ back-translation to augment the original dataset D_bct into D′_bct. For En⇒Zh, the target-side additional Chinese dialogues from the MSCTD dataset are translated into English. Conversely, for Zh⇒En, the target-side additional English dialogues from the Taskmaster-1 dataset are translated into Chinese. Due to the lack of speaker labels in the MSCTD dataset, we also exclude all SD objectives in MMT training and compare "Flat-NCT+MMT(D′_bct) w/o. SD" with the sentence-level "Transformer+FT(D′_bct)" and "Gate-Transformer+FT(D′_bct)".13 From Table 9, we can observe that "Flat-NCT+MMT(D′_bct) w/o.
SD" outperforms "Transformer+FT(D′_bct)" and "Gate-Transformer+FT(D′_bct)" under both the Transformer-Base and Transformer-Big settings, which demonstrates that our proposed training framework still has a notable effect when the bilingual chat translation corpus for context-aware fine-tuning is adequately augmented.

13. "Gate-Transformer+FT" is chosen because it is the most competitive among all context-aware contrast models with two-stage training, as shown in Table 5.

Table 10 caption: Results on the test sets of BConTrasT (En⇔De) and BMELD (En⇔Zh) in terms of BLEU (%) and TER (%). "Flat-NCT+MMT": the Flat-NCT model trained using the gradual transition strategy from monolingual to bilingual dialogues (Eq. 14). "Flat-NCT+MMT w/o. GT": the Flat-NCT model trained without the gradual transition strategy (Eq. 13).

Effects of Gradual Transition Strategy

At the third stage of our proposed framework, the Flat-NCT model is trained through Eq. 14, i.e., gradually transitioning from monolingual to bilingual dialogues. This strategy makes the transition from the second to the third stage smoother, further alleviating the training discrepancy described in Section 3.2.3. To investigate its effectiveness, we also train the NCT model through Eq. 13, i.e., without the gradual transition strategy. As shown in Table 10, under the Transformer-Big setting, the performance of "Flat-NCT+MMT w/o. GT" is significantly worse than that of "Flat-NCT+MMT" across all translation directions. These results indicate that the gradual transition strategy makes better use of the additional monolingual dialogues, benefiting the training of our NCT model.

Evaluation of Translation Quality

To further verify the benefits of our proposed training framework, we assess the quality of translations generated by different NCT models using automatic and human evaluations.
Automatic Evaluation of Dialogue Coherence

Following [18], [33], we use the cosine similarity between each translated utterance x_u and its corresponding dialogue context C_xu to automatically measure dialogue coherence, defined as

sim(x_u, C_xu) = cos(f(x_u), f(C_xu)),

where f(·) denotes the sequence representation obtained by averaging the word vectors of the tokens it contains. We use Word2Vec14 [34] trained on Taskmaster-115 to obtain the distributed word vectors, whose dimension is set to 100.

14. https://code.google.com/archive/p/word2vec/
15. The English utterances in BConTrasT come from Taskmaster-1.

Table 11 caption: Results of dialogue coherence in terms of sentence similarity (−1∼1) on the test set of BConTrasT in the De⇒En direction under the Transformer-Base setting. The "#-th Pr." denotes the #-th preceding utterance to the current one. "††" indicates that the improvement over the best result of all other contrast models is statistically significant (p < 0.01).

Table 11 shows the measured coherence of translated utterances with their corresponding dialogue context on the De⇒En test set of BConTrasT. Our "Flat-NCT+MMT" produces more coherent translations than the other contrast models (significance test, p < 0.01).

Human Evaluation

Table 12 lists the results of human evaluation on the test set of BMELD (Zh⇒En). Following [24], [35], we conduct evaluations using three criteria: 1) Dialogue Coherence (DC.) measures whether the translation is semantically coherent with the dialogue history context in a chat; 2) Speaker Consistency (SC.) evaluates whether the translation preserves the characteristics of its original speaker; 3) Fluency (Flu.) measures whether the translation is fluent and grammatically correct. First, we randomly sample 200 dialogues from the test set of BMELD in the Zh⇒En direction. Then, we use each of the models in Table 12 to generate translations of these sampled dialogues.
Finally, we assign the translated utterances and their corresponding dialogues in the target language to three postgraduate evaluators, native Chinese speakers majoring in English with qualified certificates, and ask them to assess the translations according to the above three criteria. The results in Table 12 show that the translations generated by our model ("Flat-NCT+MMT") are more coherent with the corresponding dialogue context, better preserve the characteristics of the original speakers, and are more fluent, indicating the superiority of our model. The inter-annotator agreements calculated by Fleiss' kappa [36] are 0.535, 0.507, and 0.548 for DC., SC. and Flu., respectively.

Case Study

In Fig. 6, we present illustrative examples from the test set of BMELD (En⇒Zh) to compare the translations generated by different models.

Dialogue Coherence. In the first example of Fig. 6, all contrast models translate the word "game" into its surface meaning "yóu xì" in Chinese. However, considering that the word "antique" in the dialogue history generally refers to physical assets rather than virtual objects, what the speaker s1 really means is "yóu xì jī" ("arcade game machine"), as in the reference, and this is correctly translated by our "Flat-NCT+MMT" model. From the second example, we find that the translations generated by all contrast models neglect the crucial item "boat" ("chuán") inside the dialogue. On the contrary, our model "Flat-NCT+MMT" successfully generates the translation of "boat", which exists only in the dialogue history context and not in the current utterance, making the whole translated utterance more coherent with the dialogue. For these two examples, the underlying reason why our model generates more coherent translations is that the UD task in our proposed training framework introduces the modelling of dialogue coherence into the NCT model.

Speaker Characteristic.
We also observe that the translations generated by our model "Flat-NCT+MMT" better preserve the characteristics of their original speakers. Specifically, in the second example of Fig. 6, the speaker s1 is highly excited and obviously speaking in a tone of showing off. Consequently, our model converts the translation of the second "What?" from its Chinese surface meaning "shén me?" into the more speaker-consistent Chinese expression "bù xìn?" (literally, "don't you believe?"), which makes the translated utterance more vivid and closer to the reference. This may be credited to the SD task, which introduces the modelling of speaker characteristics into the NCT model during training. These examples indicate that our proposed training framework makes the NCT model more capable of capturing important conversational properties of the dialogue context, showing its superiority over other contrast models.

RELATED WORK

The work most related to ours includes studies of neural chat translation and context-aware NMT, described in the following subsections.

Neural Chat Translation

Due to the lack of publicly available annotated bilingual dialogues, there are only a few relevant studies on this task. To address the data scarcity issue, some studies [26], [37], [38] design methods to automatically construct subtitle corpora that may contain low-quality bilingual dialogue utterances. Recently, Farajian et al. [24] organized the WMT20 shared task on chat translation and first provided a chat corpus post-edited by human annotators. In the competition, the submitted NCT systems [21], [39], [40] were trained with typical engineering techniques such as ensembling for higher performance. All these systems adhere to the conventional two-stage pretrain-finetune paradigm, mainly fine-tuning existing models or using large pre-trained language models such as BERT [15].
During pre-training on the large-scale parallel corpus, they either use all the available data or adopt data selection methods to select more in-domain data for training. More recently, Wang et al. [41] proposed to utilize context to translate dialogue utterances while jointly identifying omissions and typos in the process of translating. Different from these works, our proposed framework focuses on utilizing additional monolingual dialogues and introducing an intermediate stage to alleviate the training discrepancy.

Context-aware NMT

In a sense, NCT can be viewed as a special case of context-aware NMT, which has recently attracted much attention [4], [14], [27], [42], [43], [44], [45], [46], [47]. Typically, dominant approaches extend conventional NMT models by incorporating cross-sentence global context, and can be roughly classified into two categories: 1) concatenating the context and the current sentence to construct context-aware inputs [4], [14], [44]; 2) using additional modules or modifying model architectures to encode context sentences [9], [27], [42], [43], [45]. Besides, Kang et al. [46] considered the relevance of context sentences to the source sentence in document-level NMT and proposed to dynamically select relevant contextual sentences for each source sentence via reinforcement learning. Although these context-aware NMT models can be directly applied to the chat translation scenario, they cannot overcome the previously mentioned limitations of NCT models. Apart from improving context-aware NMT models, some studies [10], [47] have investigated the effect of context in the translation process. Voita et al. [10] addressed the issue that plausible translations of isolated sentences produced by context-agnostic NMT systems often end up being inconsistent with each other within a document.
They investigated various linguistic phenomena and identified deixis, ellipsis and lexical cohesion as the three main sources of inconsistency. Li et al. [47] looked into how context brings improvements to conventional document-level multi-encoder NMT models. They found that the context encoder behaves as a noise generator and improves NMT models through robust training, especially when the training data is small. Not only are these findings applicable to context-aware NMT models in document translation, they also inspire follow-up research on NCT to explore better ways of utilizing dialogue contexts, such as explicitly modelling the conversational properties of utterances.

[Fig. 6 (excerpt, first example): X2 (s1): "I just don't think arcade games go in the beautiful guest room. The beautiful guest room is gonna be filled with antiques." Y2: 漂亮的客房不适合放游戏机。漂亮的客房里会有很多古董。(piào liang de kè fáng bù shì hé fàng yóu xì jī. piào liang de kè fáng lǐ huì yǒu hěn duō gǔ dǒng.) X3 (s1): "Which is why 'Asteroids' is perfect. It's the oldest game." Y3: 这是为什么小行星是完美的。这是最古老的游戏。(zhè shì wèi shén me xiǎo xíng xīng shì wán měi de. zhè shì zuì gǔ lǎo de yóu xì.)]

CONCLUSION

In this paper, we have proposed a multi-task multi-stage transitional (MMT) training framework for neural chat translation, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues. In particular, we design the UD and SD tasks to incorporate the modelling of dialogue coherence and speaker characteristics into the NCT model, respectively. Moreover, our proposed training framework consists of three stages: 1) sentence-level pre-training on a large-scale parallel corpus; 2) intermediate training with auxiliary tasks using additional monolingual dialogues; 3) context-aware fine-tuning with gradual transition. Experimental results and in-depth analysis demonstrate the effectiveness of our proposed training framework.
Fig. 1. An example of cross-lingual chat (En⇔Zh). The speaker s1-specific utterance x_u is being translated from English to Chinese with the corresponding dialogue history context.

Fig. 2. The architecture of the Flat-NCT model used in this work. The left part depicts the attention mechanism inside the Flat-NCT encoder.

Fig. 3. Overview of the auxiliary tasks and the MMT training framework. To show the two auxiliary tasks, we take the source-language dialogue X = x_1, x_2, x_3, x_4, ..., x_{u−1}, x_u as an example, which generalizes analogously to the other types of dialogues. (a): The utterance discrimination (UD) task. (b): The speaker discrimination (SD) task. (c): The three training stages of our proposed framework. Note that the NCT encoder is shared across the chat translation and the two auxiliary tasks.

Fig. 4. The effect of the context length for C_xu: BLEU scores of the Flat-NCT+FT model on the En⇒De validation set (under the Transformer-Base setting).

Table 4 caption fragment: The determined values of the balancing factors for the auxiliary tasks.

Fig. 5. Results (Left: BLEU↑ / Right: TER↓) on the validation set of BConTrasT (En⇔De) using different proportions of monolingual dialogues (under the Transformer-Base setting).

Table 9 (excerpt): Results on the test set of BMELD (En⇔Zh) in terms of BLEU (%) and TER (%).

Model                          | En⇒Zh BLEU↑ / TER↓ | Zh⇒En BLEU↑ / TER↓
Gate-Transformer+FT(D′_bct)    | 27.65 / 59.9        | 22.45 / 55.6
Flat-NCT+MMT(D′_bct) w/o. SD   | 28.81 / 58.7        | 23.17 / 55.1

"Transformer+FT(D′_bct)" and "Gate-Transformer+FT(D′_bct)" represent using the BT-augmented dataset D′_bct to fine-tune the Transformer model and the Gate-Transformer model, respectively. "Flat-NCT+MMT(D′_bct) w/o. SD" represents using D′_bct to train the Flat-NCT model through the MMT training framework without any SD objective.

dialogues, i.e., "Flat-NCT+MMT(Pseudo) w/o. SD". From the table, we can see that "Flat-NCT+MMT(Authentic) w/o. SD" outperforms "Flat-NCT+MMT(Pseudo) w/o. SD" under both the Transformer-Base and Transformer-Big settings. This shows that authentic monolingual dialogues are indeed more beneficial to the NCT model, indicating that our MMT training framework has the potential to further boost model performance if suitable monolingual dialogue datasets with speaker labels exist in both the source and target languages.

3.2.1 Stage 1: Sentence-level Pre-training on Large-scale Parallel Corpus

As described in Section 2.3.1, the first stage grants the NCT model the basic capability of translating sentences. Given the large-scale parallel corpus D_sent, we pre-train the model M_1 using the same training objective as Eq. 5, i.e., L_1 = L_sent(θ_nct).

3.2.2 Stage 2: Intermediate Training with Auxiliary Tasks using Additional Monolingual Dialogues

Under our proposed training framework, the second stage serves as an intermediate phase involving additional monolingual dialogues, endowing the originally context-agnostic model with a preliminary capability of capturing dialogue context. Using the pre-trained M_1 for model initialization, we continue to train the model through the previous sentence-level translation objective along with the two designed auxiliary tasks (UD and SD) on additional monolingual dialogues, obtaining the model M_2.
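The gradual transition of Eq. 14 applied at the subsequent third stage can also be read stochastically: instead of weighting both auxiliary terms at every step, each auxiliary-task batch can be drawn from bilingual dialogues with probability λ = n/N and from monolingual ones otherwise, which matches the interpolation of Eq. 14 in expectation. A minimal sketch (our illustration, not the authors' implementation):

```python
import random


def sample_aux_source(step, max_steps, rng=None):
    """Pick the data source of an auxiliary-task batch at the given step.

    With probability lam = step / max_steps choose bilingual dialogues
    (the Eq. 13 regime); otherwise fall back to the additional
    monolingual dialogues (the Eq. 12 regime).
    """
    rng = rng or random.Random()
    lam = step / max_steps
    return "bilingual" if rng.random() < lam else "monolingual"
```

At step 0 this always yields monolingual batches and at step N always bilingual ones, so the schedule interpolates smoothly between the second-stage and third-stage auxiliary supervision.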
TABLE 2: Dataset Statistics

Dataset/Split          Train       Valid  Test
WMT20 (En⇔De)          45,541,367  -      -
WMT20 (En⇔Zh)          22,244,006  -      -
Taskmaster-1 (En)      153,774     -      -
BConTrasT (En⇒De)      7,629       1,040  1,133
BConTrasT (De⇒En)      6,216       862    967
BMELD (En⇒Zh)          5,560       567    1,466
BMELD (Zh⇒En)          4,427       517    1,135

Train/Valid/Test splits corresponding to different usages and translation directions. WMT20 is used for sentence-level pre-training on both En⇔De and En⇔Zh. Taskmaster-1 is the additional English dialogue corpus, which is then translated into German and Chinese. BConTrasT and BMELD are used to fine-tune the NCT model on En⇔De and En⇔Zh, respectively.

Table 2 lists the statistics of the involved datasets corresponding to different usages and translation directions.

WMT20. This large-scale sentence-level parallel corpus is used at the first and second stages under our framework. For English⇔German, we use and combine six corpora: Europarl, ParaCrawl, CommonCrawl, TildeRapid, NewsCommentary, and WikiMatrix. For En⇔Zh, the corpora we use comprise News Commentary v15, Wiki Titles v2, UN Parallel Corpus V1.0, CCMT Corpus, and WikiMatrix.

TABLE 3: Model Performance after Sentence-level Pre-training

Methods             En⇒De  De⇒En  En⇒Zh  Zh⇒En
Transformer (Base)  39.88  40.72  32.55  24.42
Transformer (Big)   41.35  41.56  33.85  24.86

The BLEU scores on newstest2019 of the model M_1 after sentence-level pre-training, corresponding to Section 3.2.1.

TABLE 4: Balancing Factor Determination (values of the balancing factors for the auxiliary tasks).

TABLE 5: Overall Evaluation (BLEU↑/TER↓) of En⇔De and En⇔Zh Chat Translation Tasks

Models (Base)             En⇒De           De⇒En           En⇒Zh           Zh⇒En
                          BLEU↑  TER↓     BLEU↑  TER↓     BLEU↑  TER↓     BLEU↑  TER↓
Sentence-level NMT Models
Transformer               40.02  42.5     48.38  33.4     21.40  72.4     18.52  59.1
Transformer+FT            58.43  26.7     59.57  26.2     25.22  62.8     21.59  56.7
Context-aware NMT Models
Dia-Transformer+FT        58.33  26.8     59.09  26.2     24.96  63.7     20.49  60.1
Gate-Transformer+FT       58.48  26.6     59.53  26.1     25.34  62.5     21.03  56.9
Flat-NCT+FT               58.15  27.1     59.46  25.7     24.76  63.4     20.61  59.8
Our Model
Flat-NCT+MMT              59.33†† 26.2    60.17† 25.1†    27.43†† 60.4††  22.21† 56.1†

Models (Big)
Sentence-level NMT Models
Transformer               40.53  42.2     49.90  33.3     22.81  69.6     19.58  57.7
Transformer+FT            59.01  26.0     59.98  25.9     26.95  60.7     22.15  56.1
Context-aware NMT Models
Dia-Transformer+FT        58.68  26.8     59.63  26.0     26.72  62.4     21.09  58.1
Gate-Transformer+FT       58.94  26.2     60.08  25.5     27.10  60.3     22.26  55.8
Flat-NCT+FT               58.61  26.5     59.98  25.4     26.45  62.6     21.38  57.7
Our Model
Flat-NCT+MMT              60.11†† 25.8    61.04†† 25.0    28.62†† 59.6†   23.08† 54.9††

TABLE 6: Performance with Different Monolingual Dialogue Groups Removed

Models (Base)                        En⇒De (BLEU↑/TER↓)  De⇒En (BLEU↑/TER↓)
0  Flat-NCT+MMT                      60.86 / 24.6        60.94 / 25.3
1  3: w/o. X, Y                      60.51 / 24.6        60.72 / 25.5
2  2: w/o. X, Y                      60.46 / 24.9        60.64 / 25.2
3  2: w/o. X; 3: w/o. X              60.18 / 24.9        60.50 / 25.8
4  2: w/o. Y; 3: w/o. Y              59.83 / 25.3        59.69 / 25.9
5  2: w/o. X, Y; 3: w/o. X, Y        59.74 / 25.6        60.11 / 25.9

Results on the validation set of BConTrasT (En⇔De) when different groups of monolingual dialogues are removed from the MMT training framework. 2 and 3 denote the second and third training stages, respectively. "w/o.": the specified group of monolingual dialogues is removed. For instance, "2: w/o. X, Y" means X and Y are removed at the second training stage.
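The tables above report BLEU↑ and TER↓ scores. As a reference point for how the BLEU side of such a table is computed, here is a compact, self-contained sketch of corpus-level BLEU (modified n-gram precision up to 4-grams with a brevity penalty). It is an unsmoothed illustration only; the evaluations themselves would be run with a standard toolkit such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams of a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    # hypotheses, references: lists of token lists (one reference per hypothesis).
    clipped = [0] * max_n  # reference-clipped n-gram matches
    totals = [0] * max_n   # total hypothesis n-grams
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            totals[n - 1] += sum(h.values())
            clipped[n - 1] += sum(min(c, r[g]) for g, c in h.items())
    if 0 in totals or 0 in clipped:
        return 0.0  # no smoothing: any empty order collapses the score
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty punishes hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100.0 * bp * math.exp(log_prec)
```

A hypothesis identical to its reference scores 100, while any order of n-grams with zero overlap collapses this unsmoothed variant to 0, which is why production toolkits add smoothing for short or sentence-level inputs.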
TABLE 7: Performance with Ablations of UD/SD Tasks

UD ablations (Base)                     En⇒De (BLEU↑/TER↓)  De⇒En (BLEU↑/TER↓)
0  Flat-NCT+MMT                         60.86 / 24.6        60.94 / 25.3
1  w/o. L^X_ud                          60.80 / 24.7        60.72 / 25.7
2  w/o. L^Y_ud                          60.47 / 24.9        60.43 / 26.1
3  w/o. L^X_ud, L^Y_ud                  59.96 / 25.3        60.41 / 25.9
4  w/o. L^X_ud                          60.43 / 24.9        60.20 / 26.1
5  w/o. L^Y_ud                          60.25 / 24.8        60.56 / 25.5
6  w/o. L^X_ud, L^Y_ud                  59.89 / 25.1        60.25 / 25.7
7  w/o. L^X_ud, L^Y_ud, L^X_ud, L^Y_ud  59.86 / 25.3        60.04 / 26.0
8  w/o. any UD/SD task                  59.79 / 25.5        59.97 / 26.5

SD ablations (Base)                     En⇒De (BLEU↑/TER↓)  De⇒En (BLEU↑/TER↓)
0  Flat-NCT+MMT                         60.86 / 24.6        60.94 / 25.3
1  w/o. L^X_sd                          60.51 / 25.0        60.43 / 26.1
2  w/o. L^Y_sd                          60.29 / 24.7        60.83 / 25.6
3  w/o. L^X_sd, L^Y_sd                  60.13 / 25.0        60.66 / 25.6
4  w/o. L^X_sd                          60.36 / 25.2        60.76 / 26.0
5  w/o. L^Y_sd                          60.22 / 25.0        60.47 / 26.0
6  w/o. L^X_sd, L^Y_sd                  60.27 / 25.3        60.56 / 25.3
7  w/o. L^X_sd, L^Y_sd, L^X_sd, L^Y_sd  59.97 / 25.5        60.39 / 25.9
8  w/o. any UD/SD task                  59.79 / 25.5        59.97 / 26.5

TABLE 8: Performance with Pseudo/Authentic Monolingual Dialogues

Models (Base)                      En⇒Zh (BLEU↑/TER↓)  Zh⇒En (BLEU↑/TER↓)
Flat-NCT+MMT(Pseudo) w/o. SD       27.35 / 60.6        22.12 / 56.4
Flat-NCT+MMT(Authentic) w/o. SD    27.80 / 59.7        22.82 / 55.8

Models (Big)
Flat-NCT+MMT(Pseudo) w/o. SD       28.31 / 59.7        22.87 / 55.3
Flat-NCT+MMT(Authentic) w/o. SD    28.55 / 59.0        23.36 / 54.0

TABLE 9: Performance with BT-augmented Chat Translation Corpus D_bct

Models (Base)                      En⇒Zh (BLEU↑/TER↓)  Zh⇒En (BLEU↑/TER↓)
Transformer + FT(D_bct)            26.04 / 61.7        21.77 / 56.2
Gate-Transformer + FT(D_bct)       26.36 / 61.2        21.61 / 55.8
Flat-NCT+MMT(D_bct) w/o. SD        28.15 / 59.6        22.44 / 55.6

Models (Big)
Transformer + FT(D_bct)            27.29 / 60.3        22.38 / 55.9

TABLE 10: Performance with/without Gradual Transition Strategy

Models (Big)                       En⇒De (BLEU↑/TER↓)  De⇒En (BLEU↑/TER↓)
Flat-NCT+MMT                       60.11 / 25.8        61.04 / 25.0
Flat-NCT+MMT w/o. GT               59.62 / 26.2        60.76 / 25.2

Models (Big)                       En⇒Zh (BLEU↑/TER↓)  Zh⇒En (BLEU↑/TER↓)
Flat-NCT+MMT                       28.62 / 59.6        23.08 / 54.9
Flat-NCT+MMT w/o. GT               28.18 / 59.8        22.50 / 55.9

TABLE 11: Automatic Evaluation of Dialogue Coherence

Models (Base)        1-th Pr.  2-th Pr.  3-th Pr.  ctx.
Transformer          0.650     0.604     0.566     0.612
Transformer+FT       0.658     0.610     0.571     0.619
Dia-Transformer+FT   0.655     0.608     0.571     0.617
Gate-Transformer+FT  0.660     0.614     0.575     0.620
Flat-NCT+FT          0.657     0.610     0.571     0.616
Flat-NCT+MMT         0.665††   0.617††   0.578††   0.629††
Human Reference      0.666     0.620     0.580     0.633

TABLE 12: Human Evaluation

Models (Base)        DC.    SC.    Flu.
Transformer          0.540  0.485  0.590
Transformer+FT       0.590  0.530  0.635
Dia-Transformer+FT   0.580  0.525  0.625
Gate-Transformer+FT  0.605  0.540  0.635
Flat-NCT+FT          0.595  0.525  0.630
Flat-NCT+MMT         0.640  0.570  0.665

Results on the test set of BMELD (Zh⇒En) under the Transformer-Base setting. "DC.": Dialogue Coherence. "SC.": Speaker Consistency. "Flu.": Fluency. The values for these three criteria range from 0 to 1.
Fig. 6. Two illustrative case examples from the test set of BMELD (En⇒Zh), showing the dialogue history context and the outputs of the sentence-level models (Transformer, Transformer+FT), the context-aware models (Dia-Transformer+FT, Gate-Transformer+FT, Flat-NCT+FT), our Flat-NCT+MMT, and the reference. Note that X contains both the utterances originally spoken by the source-language speaker and the translations of those originally spoken by the other speaker of the target language, which is the same for Y.

ACKNOWLEDGMENTS
Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems, 2014, pp. 3104-3112. Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, 3rd International Conference on Learning Representations. D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in 3rd International Conference on Learning Representations, 2015. Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 2017, pp. 4831- 4836. Neural machine translation with extended context. J Tiedemann, Y Scherrer, Proceedings of the Third Workshop on Discourse in Machine Translation. the Third Workshop on Discourse in Machine TranslationJ. Tiedemann and Y. Scherrer, "Neural machine translation with extended context," in Proceedings of the Third Workshop on Discourse in Machine Translation, DiscoMT@EMNLP, 2017, pp. 82-92. Document context neural machine translation with memory networks. S Maruf, G Haffari, Proceedings of the 56th. the 56thS. Maruf and G. Haffari, "Document context neural machine translation with memory networks," in Proceedings of the 56th Annual Meeting of the Association for ComputationalLinguistics. Annual Meeting of the Association for ComputationalLinguistics, 2018, pp. 1275-1284. Evaluating discourse phenomena in neural machine translation. R Bawden, R Sennrich, A Birch, B Haddow, Proceedings of Conference of the North American Chapter. 
Conference of the North American ChapterR. Bawden, R. Sennrich, A. Birch, and B. Haddow, "Evaluating discourse phenomena in neural machine translation," in Proceed- ings of Conference of the North American Chapter of the Association for Computational Linguistics, 2018, pp. 1304-1313. Documentlevel neural machine translation with hierarchical attention networks. L M Werlen, D Ram, N Pappas, J Henderson, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingL. M. Werlen, D. Ram, N. Pappas, and J. Henderson, "Document- level neural machine translation with hierarchical attention net- works," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2018, pp. 2947-2954. Learning to remember translation history with a continuous cache. Z Tu, Y Liu, S Shi, T Zhang, Trans. Assoc. Comput. Linguistics. 6Z. Tu, Y. Liu, S. Shi, and T. Zhang, "Learning to remember translation history with a continuous cache," Trans. Assoc. Comput. Linguistics, vol. 6, pp. 407-420, 2018. Context-aware neural machine translation learns anaphora resolution. E Voita, P Serdyukov, R Sennrich, I Titov, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsE. Voita, P. Serdyukov, R. Sennrich, and I. Titov, "Context-aware neural machine translation learns anaphora resolution," in Pro- ceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics, 2018, pp. 1264-1274. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. E Voita, R Sennrich, I Titov, Proceedings of the 57th Conference of the Association for Computational Linguistics. the 57th Conference of the Association for Computational LinguisticsE. Voita, R. Sennrich, and I. 
Titov, "When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion," in Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019, pp. 1198-1212. Context-aware monolingual repair for neural machine translation. 10.18653/v1/D19-1081Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing--, "Context-aware monolingual repair for neural machine translation," in Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019, pp. 877-886. [Online]. Available: https://doi.org/10.18653/v1/D19-1081 One model to learn both: Zero pronoun prediction and translation. L Wang, Z Tu, X Wang, S Shi, Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingL. Wang, Z. Tu, X. Wang, and S. Shi, "One model to learn both: Zero pronoun prediction and translation," in Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019, pp. 921-930. Selective attention for context-aware neural machine translation. S Maruf, A F T Martins, G Haffari, 10.18653/v1/n19-1313Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics. J. Burstein, C. Doran, and T. SolorioConference of the North American Chapter of the Association for Computational LinguisticsS. Maruf, A. F. T. 
Martins, and G. Haffari, "Selective attention for context-aware neural machine translation," in Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics, J. Burstein, C. Doran, and T. Solorio, Eds., 2019, pp. 3092-3102. [Online]. Available: https://doi.org/10.18653/v1/n19-1313 A simple and effective unified encoder for document-level machine translation. S Ma, D Zhang, M Zhou, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsS. Ma, D. Zhang, and M. Zhou, "A simple and effective unified encoder for document-level machine translation," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 3505-3511. BERT: pre-training of deep bidirectional transformers for language understanding. J Devlin, M Chang, K Lee, K Toutanova, Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics. Conference of the North American Chapter of the Association for Computational LinguisticsJ. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics, 2019, pp. 4171-4186. Modeling coherence for neural machine translation with dynamic and topic caches. S Kuang, D Xiong, W Luo, G Zhou, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsS. Kuang, D. Xiong, W. Luo, and G. Zhou, "Modeling coherence for neural machine translation with dynamic and topic caches," in Proceedings of the 27th International Conference on Computational Linguistics, 2018, pp. 596-606. Answer-guided and semantic coherent question generation in open-domain conversation. 
W Wang, S Feng, D Wang, Y Zhang, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingW. Wang, S. Feng, D. Wang, and Y. Zhang, "Answer-guided and semantic coherent question generation in open-domain conversa- tion," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019, pp. 5065-5075. Modeling coherence for discourse neural machine translation. H Xiong, Z He, H Wu, H Wang, The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence. H. Xiong, Z. He, H. Wu, and H. Wang, "Modeling coherence for discourse neural machine translation," in The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 2019, pp. 7338-7345. T-CVAE: transformer-based conditioned variational autoencoder for story completion. T Wang, X Wan, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, S. Kraus. the Twenty-Eighth International Joint Conference on Artificial Intelligence, S. KrausT. Wang and X. Wan, "T-CVAE: transformer-based conditioned variational autoencoder for story completion," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, S. Kraus, Ed., 2019, pp. 5233-5239. GRADE: automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. 
L Huang, Z Ye, J Qin, L Lin, X Liang, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingL. Huang, Z. Ye, J. Qin, L. Lin, and X. Liang, "GRADE: automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020, pp. 9230-9240. Tencent AI lab machine translation systems for WMT20 chat translation task. L Wang, Z Tu, X Wang, L Ding, L Ding, S Shi, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationL. Wang, Z. Tu, X. Wang, L. Ding, L. Ding, and S. Shi, "Tencent AI lab machine translation systems for WMT20 chat translation task," in Proceedings of the Fifth Conference on Machine Translation, WMT@EMNLP, 2020, pp. 483-491. Neural machine translation of rare words with subword units. R Sennrich, B Haddow, A Birch, Proceedings of the 54th. the 54thR. Sennrich, B. Haddow, and A. Birch, "Neural machine transla- tion of rare words with subword units," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Annual Meeting of the Association for Computational Linguistics, 2016. Taskmaster-1: Toward a realistic and diverse dialog dataset. B Byrne, K Krishnamoorthi, C Sankar, A Neelakantan, B Goodrich, D Duckworth, S Yavuz, A Dubey, K Kim, A Cedilnik, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingB. Byrne, K. Krishnamoorthi, C. Sankar, A. Neelakantan, B. Goodrich, D. Duckworth, S. Yavuz, A. Dubey, K. Kim, and A. 
Cedilnik, "Taskmaster-1: Toward a realistic and diverse dialog dataset," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing, 2019, pp. 4515-4524. Findings of the WMT 2020 shared task on chat translation. M A Farajian, A V Lopes, A F T Martins, S Maruf, G Haffari, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationM. A. Farajian, A. V. Lopes, A. F. T. Martins, S. Maruf, and G. Haffari, "Findings of the WMT 2020 shared task on chat trans- lation," in Proceedings of the Fifth Conference on Machine Translation, WMT@EMNLP, 2020, pp. 65-75. MELD: A multimodal multi-party dataset for emotion recognition in conversations. S Poria, D Hazarika, N Majumder, G Naik, E Cambria, R Mihalcea, Proceedings of the 57th Conference of the Association for Computational Linguistics. the 57th Conference of the Association for Computational LinguisticsS. Poria, D. Hazarika, N. Majumder, G. Naik, E. Cambria, and R. Mihalcea, "MELD: A multimodal multi-party dataset for emo- tion recognition in conversations," in Proceedings of the 57th Confer- ence of the Association for Computational Linguistics, 2019, pp. 527- 536. Contextual neural model for translating bilingual multi-speaker conversations. S Maruf, A F T Martins, G Haffari, Proceedings of the Third Conference on Machine Translation. the Third Conference on Machine TranslationS. Maruf, A. F. T. Martins, and G. Haffari, "Contextual neural model for translating bilingual multi-speaker conversations," in Proceedings of the Third Conference on Machine Translation: Research Papers, 2018, pp. 101-112. Improving the transformer translation model with documentlevel context. J Zhang, H Luan, M Sun, F Zhai, J Xu, M Zhang, Y Liu, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingJ. 
Zhang, H. Luan, M. Sun, F. Zhai, J. Xu, M. Zhang, and Y. Liu, "Improving the transformer translation model with document- level context," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2018, pp. 533-542. THUMT: an open-source toolkit for neural machine translation. Z Tan, J Zhang, X Huang, G Chen, S Wang, M Sun, H Luan, Y Liu, Proceedings of the 14th Conference of the Association for Machine Translation in the Americas. the 14th Conference of the Association for Machine Translation in the AmericasZ. Tan, J. Zhang, X. Huang, G. Chen, S. Wang, M. Sun, H. Luan, and Y. Liu, "THUMT: an open-source toolkit for neural machine translation," in Proceedings of the 14th Conference of the Association for Machine Translation in the Americas, 2020, pp. 116-122. Modeling bilingual conversational characteristics for neural chat translation. Y Liang, F Meng, Y Chen, J Xu, J Zhou, Proceedings of ACL. ACLY. Liang, F. Meng, Y. Chen, J. Xu, and J. Zhou, "Modeling bilingual conversational characteristics for neural chat translation," in Proceedings of ACL, Aug. 2021, pp. 5711-5724. [Online]. Available: https://aclanthology.org/2021.acl-long.444 Adam: A method for stochastic optimization. D P Kingma, J Ba, 3rd International Conference on Learning Representations. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimiza- tion," in 3rd International Conference on Learning Representations, 2015. Statistical significance tests for machine translation evaluation. P Koehn, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. the 2004 Conference on Empirical Methods in Natural Language ProcessingP. Koehn, "Statistical significance tests for machine translation evaluation," in Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004, pp. 388-395. MSCTD: A multimodal sentiment chat translation dataset. Y Liang, F Meng, J Xu, Y Chen, J Zhou, Proceedings of ACL. 
ACLDublin, IrelandAssociation for Computational LinguisticsY. Liang, F. Meng, J. Xu, Y. Chen, and J. Zhou, "MSCTD: A multimodal sentiment chat translation dataset," in Proceedings of ACL. Dublin, Ireland: Association for Computational Linguistics, May 2022, pp. 2601-2613. [Online]. Available: https://aclanthology.org/2022.acl-long.186 Automatic evaluation of text coherence: Models and representations. M Lapata, R Barzilay, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence. the Nineteenth International Joint Conference on Artificial IntelligenceM. Lapata and R. Barzilay, "Automatic evaluation of text coher- ence: Models and representations," in Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, 2005, pp. 1085- 1090. Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, 1st International Conference on Learning Representations. T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estima- tion of word representations in vector space," in 1st International Conference on Learning Representations, 2013. The university of maryland's submissions to the wmt20 chat translation task: Searching for more data to adapt discourse-aware neural machine translation. C Bao, Y Shiue, C Song, J Li, M Carpuat, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationWMT@EMNLPC. Bao, Y. Shiue, C. Song, J. Li, and M. Carpuat, "The university of maryland's submissions to the wmt20 chat translation task: Searching for more data to adapt discourse-aware neural machine translation," in Proceedings of the Fifth Conference on Machine Trans- lation, WMT@EMNLP, 2020, pp. 456-461. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. J L Fleiss, J Cohen, 10.1177/001316447303300309Educational and Psychological Measurement. J. L. Fleiss and J. 
Cohen, "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability," Educational and Psychological Measurement, pp. 613-619, 1973. [Online]. Available: https://doi.org/10.1177/001316447303300309 Automatic construction of discourse corpora for dialogue translation. L Wang, X Zhang, Z Tu, A Way, Q Liu, Proceedings of the Tenth International Conference on Language Resources and Evaluation. the Tenth International Conference on Language Resources and EvaluationL. Wang, X. Zhang, Z. Tu, A. Way, and Q. Liu, "Automatic construction of discourse corpora for dialogue translation," in Pro- ceedings of the Tenth International Conference on Language Resources and Evaluation, 2016. Automatically annotate TV series subtitles for dialogue corpus construction. L Zhang, Q Zhou, 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. L. Zhang and Q. Zhou, "Automatically annotate TV series subtitles for dialogue corpus construction," in 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2019, pp. 1029-1035. Naver labs europe's participation in the robustness, chat, and biomedical tasks at WMT 2020. A Berard, I Calapodescu, V Nikoulina, J Philip, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationA. Berard, I. Calapodescu, V. Nikoulina, and J. Philip, "Naver labs europe's participation in the robustness, chat, and biomedical tasks at WMT 2020," in Proceedings of the Fifth Conference on Machine Translation, WMT@EMNLP, 2020, pp. 462-472. JUST system for WMT20 chat translation task. R Mohammed, M Al-Ayyoub, M Abdullah, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationR. Mohammed, M. Al-Ayyoub, and M. Abdullah, "JUST system for WMT20 chat translation task," in Proceedings of the Fifth Confer- ence on Machine Translation, WMT@EMNLP, 2020, pp. 479-482. 
Autocorrect in the process of translation -multi-task learning improves dialogue machine translation. T Wang, C Zhao, M Wang, L Li, D Xiong, T. Wang, C. Zhao, M. Wang, L. Li, and D. Xiong, "Autocorrect in the process of translation -multi-task learning improves dialogue machine translation," 2021. Does neural machine translation benefit from larger context. S Jean, S Lauly, O Firat, K Cho, CoRRS. Jean, S. Lauly, O. Firat, and K. Cho, "Does neural machine translation benefit from larger context?" CoRR, 2017. Exploiting cross-sentence context for neural machine translation. L Wang, Z Tu, A Way, Q Liu, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingL. Wang, Z. Tu, A. Way, and Q. Liu, "Exploiting cross-sentence context for neural machine translation," in Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, 2017, pp. 2826-2831. Contextual handling in neural machine translation: Look behind, ahead and on both sides. R R Agrawal, M Turchi, M Negri, 21st Annual Conference of the European Association for Machine Translation. R. R. Agrawal, M. Turchi, and M. Negri, "Contextual handling in neural machine translation: Look behind, ahead and on both sides," in 21st Annual Conference of the European Association for Machine Translation, 2018, pp. 11-20. Towards making the most of context in neural machine translation. Z Zheng, X Yue, S Huang, J Chen, A Birch, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. the Twenty-Ninth International Joint Conference on Artificial Intelligence2020Z. Zheng, X. Yue, S. Huang, J. Chen, and A. Birch, "Towards making the most of context in neural machine translation," in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020, pp. 3983-3989. 
Dynamic context selection for document-level neural machine translation via reinforcement learning. X Kang, Y Zhao, J Zhang, C Zong, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingX. Kang, Y. Zhao, J. Zhang, and C. Zong, "Dynamic context selection for document-level neural machine translation via re- inforcement learning," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020, pp. 2242- 2254. Does multi-encoder help? A case study on context-aware neural machine translation. B Li, H Liu, Z Wang, Y Jiang, T Xiao, J Zhu, T Liu, C Li, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsB. Li, H. Liu, Z. Wang, Y. Jiang, T. Xiao, J. Zhu, T. Liu, and C. Li, "Does multi-encoder help? A case study on context-aware neural machine translation," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 3512-3518.
[ "https://github.com/google-researchdatasets/Taskmaster/tree/master/TM-1-2019" ]
[ "Using Pressure to Unravel the Structure-Dynamic-Disorder Relationship in Metal Halide Perovskites", "Using Pressure to Unravel the Structure-Dynamic-Disorder Relationship in Metal Halide Perovskites" ]
[ "Kai Xu \nInstitut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain\n", "Luis Pérez-Fidalgo \nInstitut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain\n", "Bethan L Charles \nDept. of Chemistry & Centre for Sustainable Chemical Technologies\nUniversity of Bath\nClaverton DownBA2 7AYBathUK\n\nDept. of Mechanical Engineering, Queens Building\nUniversity of Bristol\nBS8 1TRBristolUK\n", "Mark T Weller \nDept. of Chemistry & Centre for Sustainable Chemical Technologies\nUniversity of Bath\nClaverton DownBA2 7AYBathUK\n\nDept. of Chemistry\nCardiff University\nCF10 3ATWales, UK\n", "M Isabel Alonso \nInstitut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain\n", "Alejandro R Goñi \nInstitut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain\n\nICREA\nPasseig Lluís Companys 2308010BarcelonaSpain\n" ]
[ "Institut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain", "Institut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain", "Dept. of Chemistry & Centre for Sustainable Chemical Technologies\nUniversity of Bath\nClaverton DownBA2 7AYBathUK", "Dept. of Mechanical Engineering, Queens Building\nUniversity of Bristol\nBS8 1TRBristolUK", "Dept. of Chemistry & Centre for Sustainable Chemical Technologies\nUniversity of Bath\nClaverton DownBA2 7AYBathUK", "Dept. of Chemistry\nCardiff University\nCF10 3ATWales, UK", "Institut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain", "Institut de Ciència de Materials de Barcelona\nICMAB-CSIC\nCampus UAB08193BellaterraSpain", "ICREA\nPasseig Lluís Companys 2308010BarcelonaSpain" ]
[]
The exceptional optoelectronic properties of metal halide perovskites (MHPs) are presumed to arise, at least in part, from the peculiar interplay between the inorganic metal-halide sublattice and the atomic or molecular cations enclosed in the cage voids. The latter can exhibit a roto-translative dynamics, which is shown here to be at the origin of the structural behavior of MHPs as a function of temperature, pressure and composition. The application of high hydrostatic pressure allows for unraveling the nature of the interaction between both sublattices, characterized by the simultaneous action of hydrogen bonding and steric hindrance. In particular, we find that under the conditions of unleashed cation dynamics, the key factor that determines the structural stability of MHPs is the repulsive steric interaction rather than hydrogen bonding. Taking as example the results from pressure- and temperature-dependent photoluminescence and Raman experiments on MAPbBr3, but also considering the pertinent MHP literature, we provide a general picture of the relationship between the crystal structure and the presence or absence of cationic dynamic disorder.
10.1038/s41598-023-36501-w
[ "https://export.arxiv.org/pdf/2305.07020v1.pdf" ]
258,615,195
2305.07020
e7c9833b821e6a9fbe80b13f6d00787ea540732e
Using Pressure to Unravel the Structure-Dynamic-Disorder Relationship in Metal Halide Perovskites

May 12, 2023

Kai Xu (Institut de Ciència de Materials de Barcelona, ICMAB-CSIC, Campus UAB, 08193 Bellaterra, Spain), Luis Pérez-Fidalgo (ICMAB-CSIC), Bethan L. Charles (Dept. of Chemistry & Centre for Sustainable Chemical Technologies, University of Bath, Claverton Down, BA2 7AY Bath, UK; Dept. of Mechanical Engineering, Queens Building, University of Bristol, BS8 1TR Bristol, UK), Mark T. Weller (Dept. of Chemistry & Centre for Sustainable Chemical Technologies, University of Bath; Dept. of Chemistry, Cardiff University, CF10 3AT, Wales, UK), M. Isabel Alonso (ICMAB-CSIC), Alejandro R. Goñi (ICMAB-CSIC; ICREA, Passeig Lluís Companys 23, 08010 Barcelona, Spain)

Keywords: metal halide perovskites, high pressures, dynamic disorder, photoluminescence, Raman scattering, low temperatures

The exceptional optoelectronic properties of metal halide perovskites (MHPs) are presumed to arise, at least in part, from the peculiar interplay between the inorganic metal-halide sublattice and the atomic or molecular cations enclosed in the cage voids. The latter can exhibit a roto-translative dynamics, which is shown here to be at the origin of the structural behavior of MHPs as a function of temperature, pressure and composition. The application of high hydrostatic pressure allows for unraveling the nature of the interaction between both sublattices, characterized by the simultaneous action of hydrogen bonding and steric hindrance.
In particular, we find that under the conditions of unleashed cation dynamics, the key factor that determines the structural stability of MHPs is the repulsive steric interaction rather than hydrogen bonding. Taking as example the results from pressure- and temperature-dependent photoluminescence and Raman experiments on MAPbBr3, but also considering the pertinent MHP literature, we provide a general picture of the relationship between the crystal structure and the presence or absence of cationic dynamic disorder. The reason for the structural sequences observed in MHPs with increasing temperature, pressure, A-site cation size or decreasing halide ionic radius is found principally in the strengthening of the dynamic steric interaction with the increase of the dynamic disorder. In this way, we have deepened our fundamental understanding of MHPs; knowledge that could be exploited to improve performance in future optoelectronic devices based on this promising class of semiconductors.

Introduction

Metal halide perovskites (MHPs) are nowadays the focus of intense fundamental as well as applied research, mainly for their exceptional photovoltaic properties, which have catapulted solar-cell efficiencies to values in excess of 25% [1] while using low-cost, solution-processing methods. MHPs with the general formula ABX3, where B is a metal (Pb or Sn) and X a halogen atom (Cl, Br, I), are characterized by a labile inorganic cage of corner-sharing BX6 octahedrons, enclosing the loosely bound atomic or molecular A-site cations in its voids. According to Goldschmidt's tolerance-factor criterion, [2] the A-site cations fitting in the inorganic cage voids are Cs and organic molecules such as methylammonium (MA) or formamidinium (FA). Because the A-site cations are only loosely bound to the inorganic cage by electrostatic forces, they are able to freely move (translate, rotate and librate) inside the cage voids.
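The Goldschmidt criterion mentioned above can be sketched numerically. The formula t = (r_A + r_X) / (√2 (r_B + r_X)) is the standard one; the ionic radii used below are commonly quoted literature values, not taken from this paper, and serve only as an illustration.

```python
import math

# Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)).
def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    """Tolerance factor of an ABX3 perovskite from ionic radii (same units)."""
    return (r_a + r_x) / (math.sqrt(2.0) * (r_b + r_x))

# Commonly quoted radii in angstrom (assumed values, not from this paper):
# effective r(MA+) ~ 2.17, Shannon r(Pb2+) = 1.19, Shannon r(Br-) = 1.96.
t_mapbbr3 = tolerance_factor(r_a=2.17, r_b=1.19, r_x=1.96)
print(f"t(MAPbBr3) = {t_mapbbr3:.3f}")  # ~0.93, inside the ~0.8-1.0 perovskite window
```

A tolerance factor close to unity indicates that the A-site cation fits the cage void, consistent with MA and FA (and Cs) being the viable A-site cations named in the text.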
It is an experimentally and theoretically well-established fact that in the cubic and tetragonal phases of MHPs such dynamics is fully or partially (in-plane) unfolded, respectively, whereas in the less symmetric orthorhombic phases the A-site cations are locked in certain positions and orientations inside the voids. [3] For example, experimentally the MA and/or FA dynamics has been directly assessed by ultra-fast vibrational spectroscopy [4,5] or indirectly inferred from the analysis of the atomic displacement parameter in neutron scattering [6,7] and X-ray diffraction experiments. [8] In the case of the MA+ ions in pure lead halide perovskites, the dynamics consists essentially of a fast (ca. 0.3 ps) wobbling-in-a-cone motion and much slower, jump-like reorientation rotations of the molecules by 90°. [5] The latter, which are the main cause of dynamic disorder, exhibit characteristic jump times ranging from 1 to 3 ps, depending on the halide atom. However, in mixed-halide compounds, these times can be as long as 15 ps. [5] Theoretically, the A-site cation dynamics has been well accounted for within molecular-dynamics calculations. [9,10,11] Using a diffusive model, [12] ab-initio molecular dynamics simulations yield for MAPbBr3 at 300 K [11] a relaxation time of ca. 0.34 ps for the fast motion and about 2 ps for the jump-like rotations, in excellent agreement with experiment. This dynamics has a direct impact on one of the distinctive features of MHPs, namely the interplay between the inorganic network and the atomic or molecular cations enclosed in the cage voids, which determines, at least in part, the outstanding optoelectronic properties of these semiconductor materials. The interplay between the inorganic metal-halide sublattice and the network of A-site cations picks up contributions from two interactions of different origin, acting at different length scales: hydrogen bonding and steric effects.
Hydrogen bonding results from the electrostatic interaction between the hydrogen atoms of the organic cations and the negatively charged halide anions. In the case of the Cs+ cations, H bonding is replaced by the bare electrostatic anion-cation attraction. In contrast, steric effects correspond to non-bonding dipole-dipole interactions between molecules and/or atoms, which are well described by a Lennard-Jones potential. At large distances the steric interaction reduces to the weak van der Waals attraction, which is much weaker than electrostatic interactions and thus negligible against H bonding. However, at short distances the repulsion between the electronic clouds of neighboring atoms or molecules comes into play and the steric interaction becomes strongly repulsive. In the case of MHPs, steric effects are intimately related to the movement of the A-site cations inside the cage voids, which provides the necessary kinetic energy to bring cations and anions sufficiently close together. Hence, at the risk of being redundant, the steric repulsion will hereafter be called the dynamic steric interaction (DSI). H bonding is ubiquitous in hybrid halide perovskites and has been repeatedly invoked to explain the structural phase behavior of MA lead halides as a function of temperature [13,11] and pressure. [14,15] Apart from contributing to the structural stability of the low-temperature orthorhombic phases of MHPs, first-principles calculations have shown that H bonding is instrumental for the tilting of the PbX6 octahedrons. [16,17] Furthermore, molecular dynamics simulations have highlighted the role that H bonding plays in the one-to-one connection of octahedral tilting and local structural deformations with the roto-translational dynamics of the molecular cations. [9,10,11] This is the origin of the dynamic disorder caused by unleashed A-site cation dynamics.
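The Lennard-Jones form invoked above for the steric interaction can be made concrete. The 12-6 potential below uses generic dimensionless parameters (ε and σ are illustrative, not fitted to any MHP); it reproduces the two regimes described in the text: weak van der Waals attraction at large separations and steep repulsion once the electron clouds overlap.

```python
# Lennard-Jones 12-6 potential: V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6].
def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Steric (non-bonding) pair potential; epsilon and sigma are illustrative."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r_min = 2.0 ** (1.0 / 6.0)           # the attractive well minimum sits at 2^(1/6)*sigma
print(lennard_jones(r_min))          # -epsilon: shallow van der Waals well
print(lennard_jones(0.9) > 0.0)      # inside sigma the interaction is strongly repulsive
```

The sign change at r = σ is exactly the crossover the text describes: attraction (negligible against H bonding) beyond it, dominant repulsion inside it.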
Curiously, apart from its consideration to explain phase stability in inorganic MHPs, [16] the dynamic steric interaction has been widely ignored in the literature. Yet, here we will show that the DSI is crucial for a final understanding of the structural phase sequences observed in MHPs as a function of pressure and halide composition. For MHPs, Raman scattering turns out to be a very powerful technique, since it grants easy, experimentally uncomplicated access to the degree of dynamic disorder present in the sample for given temperature and pressure conditions. In previous temperature-dependent experiments on the three MA lead halides, we found evidence of the aforementioned coupling between the vibrations of the anionic network PbX3 (X=I, Br, Cl) and the MA cations in the Raman scattering signature. [18,19] As a consequence of the steric interaction between the MA molecules and the halogen atoms of the inorganic cage and due to dynamic disorder, the vibrational modes of the cage exhibit a wide statistical distribution of frequencies, which in turn leads to a strong inhomogeneous broadening of the Raman peaks. In contrast, in the low-temperature orthorhombic phase, when the organic cations are locked and ordered inside the cage voids, becoming well oriented along high-symmetry directions of the perovskite crystal, the dynamic disorder just disappears. The result is a pronounced reduction of the linewidths of the Raman peaks, which is readily observed in low-temperature Raman spectra. [13,18,19,20] Interestingly, a similar locking effect of the MA cations and the concomitant reduction in linewidth of the inorganic cage phonons can be induced at room temperature through the application of high hydrostatic pressure. [10,14,21] Here we make explicit use of this spectroscopic tool to monitor the appearance or disappearance of structural disorder as a function of pressure and temperature in relation to the A-site cation dynamics.
Figure 1: (a) PL spectra of a MAPbBr3 single crystal recorded at different temperatures using the green laser line (514.5 nm) for excitation. The spectra were normalized to their maximum intensity and plotted with a vertical shift for clarity. The temperature range is indicated (temperature step ca. 5 K). The different colors represent the different structural phases adopted as a function of temperature (Cubic: Pm3m, Tetra.: I4/mcm, Ortho.: Pnma). (b) The maximum PL peak energy position plotted as a function of temperature, obtained from the PL lineshape fits to the spectra shown in (a), using a cross-product function (Eq. (1) of Supporting Information). Dashed lines indicate the phase transition temperatures.

In this work, we present a systematic study of the structural phase behavior of high-quality MAPbBr3 single crystals as a function of temperature in the range 80 to 320 K at ambient pressure, as well as a function of pressure up to ca. 7 GPa at room temperature. This has been accomplished by monitoring the temperature- and pressure-induced changes in the fundamental band gap and vibrational spectrum of MAPbBr3, as observed in PL and Raman experiments, respectively, following the procedure reported elsewhere. [21,22] By combining the results obtained here for MAPbBr3 with data from the available literature on temperature- and/or pressure-dependent studies for MAPbI3, [6,8,19,21,23,24,25,26,27,28,29,30,31] MAPbBr3, [11,13,14,15,19,23,24,25,28,31,32,33,34] [33] and the data from two recent reviews, [41,42] we were able to conceive a general picture of the relationship between crystal structure and dynamic disorder in MHPs.
One particularly important finding is that at temperatures where the A-site cation dynamics is unfolded, the structural stability of the crystalline phases observed for increasing pressure, A-site cation size or decreasing halogen atomic radius can only be understood in terms of a strengthening of the DSI, rather than as due to H-bonding effects. Moreover, we offer an explanation for the onset of the pressure-induced amorphization, i.e. static disorder, based on the amount of vacancies present in each particular sample. We note that we have intentionally excluded results on thin films and nanocrystals from the discussion to avoid complications due to the effects on the structural behavior of grain boundaries, interfaces, surfaces, and/or confinement, which make the underlying physics difficult to understand.

Results

Temperature and Pressure-Dependent Photoluminescence (PL) Spectra

Figure 1a shows the evolution with temperature of the PL spectra of a MAPbBr3 high-quality single crystal in the range from 310 to 80 K. All spectra were normalized to their absolute maximum intensity and vertically offset to ease their comparison. The different colors correspond to the temperature ranges of stability of the different crystalline phases of MAPbBr3, as indicated. According to X-ray diffraction results, [23,32,13] starting at ambient conditions, the phase sequence for decreasing temperature is α-cubic → β-tetragonal-I → γ-tetragonal-II → δ-orthorhombic. The γ phase is not indicated in Fig. 1a because we missed it in our experiments, since it has a very narrow stability range of 5 K. At all temperatures a single peak dominates the PL spectra of MAPbBr3, corresponding to the free-exciton emission. [22,43] With decreasing temperature the exciton peak exhibits a monotonic redshift of its energy, except for the sudden jumps at the phase transitions, and a clear decrease in linewidth. In view of the relatively small binding energy of ca.
15 meV, [44] the redshift can be taken as representative of the temperature dependence of the fundamental band gap. The linewidth reduction, in turn, is indicative of a homogeneously broadened emission peak, which means it is lifetime limited. In high-quality crystals, non-radiative exciton decay is mainly associated with scattering by phonons, thus being strongly temperature dependent. At low temperatures below ca. 110 K, several peaks become apparent at the low-energy side of the main exciton peak (see Fig. 1a), which are ascribed to emission from bound (acceptor/donor) exciton complexes, as reported elsewhere. [45] To analyze the PL spectra of the hybrid perovskites we used a Gaussian-Lorentzian cross-product function to describe the main emission peak, as successfully employed for the analysis of the PL spectra of MAPbI3 [21] and MA/FA mixed crystals. [22] The expression for the cross-product function is given in the Supporting Information. It contains three adjustable parameters: the amplitude prefactor A, the peak energy position E0, and the full width at half maximum (FWHM) Γ. This function is a useful simplification of a Voigt function, which corresponds to the mathematical convolution of a Lorentzian and a Gaussian. There is an additional lineshape parameter s, which takes the values s = 0 for a pure Gaussian and s = 1 for a pure Lorentzian. For MAPbBr3 the exciton emission lineshape turned out to be mainly Gaussian with little Lorentzian admixture. The values of the peak energy E0 are plotted as a function of temperature in Fig. 1b (the PL linewidths and intensities are shown in Fig. S1 of the Supporting Information). As mentioned above, we consider the shift of the PL peak energy E0 with temperature representative of the temperature change of the gap.
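The cross-product lineshape analysis described above can be sketched as follows. The exact expression used by the authors is given in their Supporting Information; the function below is one common Gaussian-Lorentzian product parameterization with the same four parameters (A, E0, Γ, s), fitted here to synthetic data only.

```python
import numpy as np
from scipy.optimize import curve_fit

LN2 = np.log(2.0)

def cross_product(E, A, E0, gamma, s):
    """Gaussian-Lorentzian product peak: s=0 pure Gaussian, s=1 pure Lorentzian."""
    d2 = ((E - E0) / gamma) ** 2
    gauss = np.exp(-4.0 * LN2 * (1.0 - s) * d2)
    lorentz = 1.0 / (1.0 + 4.0 * s * d2)
    return A * gauss * lorentz

# Synthetic, mostly Gaussian "exciton peak" (illustrative numbers only).
rng = np.random.default_rng(0)
E = np.linspace(2.15, 2.35, 400)                               # photon energy (eV)
y = cross_product(E, 1.0, 2.25, 0.030, 0.1) + rng.normal(0.0, 0.01, E.size)

popt, _ = curve_fit(cross_product, E, y, p0=[0.9, 2.24, 0.05, 0.5],
                    bounds=([0.0, 2.1, 1e-3, 0.0], [10.0, 2.4, 0.2, 1.0]))
A_fit, E0_fit, gamma_fit, s_fit = popt
print(f"E0 = {E0_fit:.4f} eV, FWHM = {gamma_fit * 1000:.1f} meV")
```

Fitting each spectrum in the temperature series with such a function yields the E0(T) and Γ(T) traces plotted in Fig. 1b and Fig. S1.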
The linear increase of the gap with increasing temperature observed for the cubic and tetragonal phases is a common trend of MHPs, which was explained as due to two equally contributing effects, namely thermal expansion and enhanced electron-phonon interaction. [46] Furthermore, the temperatures at which the jumps in the gap energy occur are in excellent agreement with the phase transition temperatures from X-ray data (dashed lines in Fig. 1b). At the phase transitions, the gap always increases for the phase with lower symmetry. This is due to the sudden increment in the overall octahedral tilting of the PbBr6 octahedrons, which leads to a strong reduction of the Pb-Br-Pb bond angle, reducing the overlap between valence Pb and Br orbitals and increasing the bandgap. [28,31] Figure 2a shows representative PL spectra of MAPbBr3 measured at different pressures up to about 6.5 GPa. Spectra were again normalized to their absolute maximum intensity and vertically offset to ease their comparison. The main PL peak exhibits abrupt changes in the position of its maximum, which are indicative of the occurrence of three phase transitions in the pressure range of the experiment. This can be better appreciated in Fig. 2b, where the values of the peak energy E0 are plotted as a function of pressure. Different colors correspond to the four observed phases and vertical dashed lines (except for the first one) mark the corresponding phase transition pressures. We note that, solely at the very beginning of the first pressure upstroke, a sudden redshift of the PL emission, i.e. of E0, is observed to occur in the range from 0 to ∼0.25 GPa. As shown below, this effect is accompanied by changes in the linewidth of the Raman peaks. Since this happens only once, we believe it is not related to a phase transition. We speculate that this behavior might arise from an initial strain relaxation the first time the sample is pressurized in the diamond anvil cell (DAC).
Such a strain could have been introduced by the way the small chips to be loaded into the DAC are produced (see Methods section). Within the cubic-I (Pm3m) phase, stable from ambient conditions, the PL spectra exhibit a clear redshift and the gap energy of MAPbBr3 displays a negative linear dependence on pressure. A linear regression to the data points yields a pressure coefficient of (−54 ± 5) meV/GPa, which is very similar to that of other counterparts like MAPbI3 [46] and MAPbCl3 [35] (for comparison see the survey of pressure coefficients of MHPs published in Ref. [46]). As previously argued for MAPbI3, [21] such a negative pressure dependence of the gap can be readily explained using the well-established systematics of the pressure coefficients of conventional semiconductors [47] and accounting for the bonding/antibonding and atomic orbital character of the valence and conduction-band states. Relativistic band-structure calculations [48,49] for a pseudocubic phase of MAPbI3 predict that due to the huge spin-orbit interaction present in heavy atoms like Pb, there is a so-called band inversion. For MHPs this means that the top of the valence band is predominantly composed of antibonding Pb 6s orbitals, which shift up in energy with pressure, whereas the bottom of the conduction band is formed by the antibonding split-off Pb 6p orbitals, which are fairly pressure insensitive. A totally similar result is expected for MAPbBr3, which explains the negative sign and magnitude of the gap pressure coefficient.

[Figure 2: PL peak energy E0 of MAPbBr3 at 300 K as a function of pressure (GPa); phase regions: Cubic I, Cubic II, Ortho. I, Ortho. II.]

In MAPbBr3 the first phase transition thus occurs at a low pressure of ca. 0.75 GPa, as signalled by a turnover in the change of the PL peak energy E0 with pressure. This is an isostructural transition because the new high-pressure phase, which is stable up to 2.2 GPa, corresponds to the cubic-II (Im3) phase, as reported elsewhere.
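The extraction of the gap pressure coefficient by linear regression can be sketched as below; the data points are synthetic numbers generated around the reported −54 meV/GPa slope, not the measured values.

```python
import numpy as np

# Illustrative (pressure, energy) points for the cubic-I range; NOT the
# measured data, just generated around the reported slope of -54 meV/GPa.
pressure = np.array([0.05, 0.20, 0.35, 0.50, 0.65])            # GPa
noise = np.random.default_rng(1).normal(0.0, 0.0005, pressure.size)
e0 = 2.30 - 0.054 * pressure + noise                           # eV

slope, intercept = np.polyfit(pressure, e0, 1)
print(f"dE/dP = {slope * 1000:.1f} meV/GPa")                   # close to -54
```

Restricting the fit to one phase region (here cubic-I, below 0.75 GPa) is essential, since each phase transition changes the slope and offsets E0.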
[14,15,28] This phase is characterized by a stepwise linear increase of E0 with increasing pressure, exhibiting a kink in the pressure dependence at about 1.2 GPa, also observed by Yesudhas et al. [15] However, there is no hint of a phase transition in the Raman data at this pressure; thus, the reason for the kink remains elusive to us. In contrast, the fact that in the cubic-II phase the gap energy gradually but steadily increases with pressure can be understood as arising from a pressure-induced increase in octahedral tilting. The cubic-II phase (Im3) is obtained from the cubic-I (Pm3m) by an alternate tilting of the PbBr6 octahedrons in the direction of the cube diagonals. This doubles the unit cell in all three directions, while the lattice remains cubic. Once the tilting starts, it increases gradually with pressure, causing the observed incremental opening of the gap. The second phase transformation occurs at 2.2 GPa and is characterized by an abrupt increase of the PL peak energy. As explained in the discussion of the Raman results, this new phase is perfectly crystalline and probably orthorhombic in nature, in agreement with previous reports. [14,15] The PL spectra clearly indicate the occurrence of a third phase transition at about 3.75 GPa, characterized by a dramatic change in PL lineshape (gray spectra in Fig. 2a). Two broad peaks appear at much lower energies and there is an overall loss of intensity together with a pronounced broadening of the main peak (see Fig. S2 of the Supporting Information). This points to a large heterogeneity in the sample, as far as the electronic states involved in the optical transitions are concerned. In fact, this is the pressure range for which a pressure-induced amorphization is reported for MAPbBr3. [14,15,28,34] However, we anticipate that the Raman data indicate again that this phase is crystalline and probably orthorhombic up to the highest pressure of this experiment, close to 6.5 GPa.
We will return to the amorphization and how it might be generated under pressure in the discussion of the Raman results. Finally, we remark that the changes in the PL emission (and in the Raman too) are fully reversible only provided the pressure was kept below that of the transition into the ortho-I phase. Otherwise there is a certain degree of hysteresis in the PL peak energy upon releasing the pressure. Figure 3a summarizes the Raman results obtained on single-crystalline MAPbBr3 as a function of temperature in a similar range as for the PL measurements (80-320 K), using the 785-nm line for excitation. The Raman spectra shown here correspond to the spectral region of the inorganic cage phonon modes below 300 cm−1. [18] As reported before, [19] in the high-temperature cubic phase (red spectra), the MA dynamics is fully unfolded, resulting in a strong inhomogeneous broadening of the inorganic cage phonons due to the strong coupling to the molecular cations. In fact, the Raman spectra are quite featureless, exhibiting essentially a broad peak centered at around 70 cm−1. The width of this Raman band decreases slightly when the sample transforms into the tetragonal phase (blue spectra), for which the MA cations are free to move only in the tetragonal plane. The partial reduction of the dynamic disorder in the tetragonal phase leads to a slight decrease of the inhomogeneous broadening. In stark contrast, several well-defined peaks are apparent in the Raman spectra of the orthorhombic phase (black curves in Fig. 3a), a phase in which the MA cations are locked inside the cage voids in a state of static order. Concomitant with the disappearance of dynamic disorder, the inhomogeneous broadening vanishes, such that the Raman peaks just display their lifetime-limited homogeneous linewidth.
An instructive digression on the relative importance of homogeneous versus inhomogeneous broadening in the Raman spectra of the three halide compounds MAPbX3 with X=Cl, Br and I in relation to dynamic disorder is given in the Supporting Information.

Temperature and Pressure-Dependent Raman Spectra

The raw Raman spectra recorded for MAPbBr3 under pressure are shown in Fig. S3 of the Supporting Information. For a quantitative assessment of the effect of pressure on the vibrational spectrum of MAPbBr3 we have decomposed each Raman spectrum into its different mode components by a lineshape analysis, as illustrated in Fig. 3b, where a representative example of the fits for each of the observed phases is displayed. We note that at room temperature, and mainly for the first two phases (P < 2.2 GPa), all Raman lineshapes are affected by the presence of a very broad and intense peak at very small Raman shifts and by the edge-like attenuation caused by the dichroic filter used to screen the laser. The former is interpreted as a broad central Raman peak originating from local polar fluctuations in the perovskite structure caused by dynamic disorder. [50] A special function was constructed to describe such a background, [22] which has been subtracted from the Raman spectra for a better visualization of the phonon modes. This simplifies the fitting of the Raman spectra using Gaussian functions, as illustrated in Fig. 3b. The number of Gaussian peaks and the approximate frequency positions are consistent with previous full assignments of Raman [18,19] and far-infrared spectra [51] as well as with those observed at low temperature for the orthorhombic phase (Fig. 3a). The results of the Raman lineshape fits for the frequency and FWHM of the main peaks apparent in the Raman spectra of MAPbBr3 (see Figs. S3 and 3b) are depicted in Fig. 4 as a function of pressure.
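The multi-Gaussian decomposition of the Raman spectra can be sketched as follows; peak positions, widths and amplitudes below are illustrative placeholders, not the fitted MAPbBr3 values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of N Gaussians; p = (A1, x1, w1, A2, x2, w2, ...)."""
    y = np.zeros_like(x)
    for A, x0, w in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((x - x0) / w) ** 2)
    return y

# Synthetic low-frequency spectrum with two "cage modes" (illustrative only);
# in practice the broad central-peak background is subtracted first.
shift = np.linspace(20, 300, 600)                               # Raman shift (cm^-1)
true = [1.0, 70.0, 15.0, 0.4, 140.0, 20.0]
spec = gaussians(shift, *true) + np.random.default_rng(2).normal(0.0, 0.01, shift.size)

popt, _ = curve_fit(gaussians, shift, spec, p0=[0.8, 65.0, 10.0, 0.3, 150.0, 15.0])
print("fitted centres (cm^-1):", round(popt[1], 1), round(popt[4], 1))
```

Tracking the fitted centres and FWHMs of each component against pressure is what produces the traces shown in Fig. 4.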
Unlike what happened with MAPbI3, [21] the changes in the Raman frequencies and mainly in the linewidths were much less pronounced in MAPbBr3. The latter is a consequence of the greater homogeneous broadening of the Raman peaks due to a stronger coupling with the MA molecules within the narrower voids of the lead bromide cage (see the digression in the Supporting Information), which hampers the observation of the variations of the inhomogeneous part of the linewidth with the amount of disorder. However, the clear changes in Raman lineshape observed with increasing pressure, like the reduction of the linewidths and/or the appearance of additional well-resolved peaks, allowed us to corroborate the occurrence of the phase transitions previously ascertained from the PL experiments. Therefore, the dashed lines in Fig. 4a also denote the phase transition pressures. An exception is the first abrupt and irreversible decrease in linewidth, which occurs only once at the start of the pressure experiments. An important result of this work concerns the observation of a sharpening of the Raman modes also for the orthorhombic-II phase (see Figs. 3b and 4b), a phase that appears to be stable above the onset of amorphization reported by most of the high-pressure work on MAPbBr3 [14,15,28,34] and other halide perovskites. [8,26,29,30,31,35] On the contrary, our Raman linewidths remain narrow up to ca. 6.5 GPa, the highest pressure of these experiments, exactly as we previously reported for MAPbI3 too. [21] In this respect, it is interesting to compare our Raman results with those of Capitani et al., [14] where the low-frequency Raman spectra exhibit a broad, featureless band for the cubic low-pressure phases and well-defined, sharp peaks for the orthorhombic-I phase, exactly like ours. The key difference lies in the strong broadening that Capitani et al. observe in MAPbBr3 above ca. 4 GPa, which was ascribed to an amorphous-like state of static disorder.
[14,42] Considering that the pressure-induced changes in the Raman linewidth are due to variations of its inhomogeneous part, this provides a tool to monitor the degree of disorder of the crystal lattice. Moreover, we note that the inhomogeneous broadening is unable to distinguish between static and dynamic disorder. At least in this particular case, this is so because the typical duration times of the Raman scattering processes are in the sub-picosecond regime, [52] i.e. much faster than the times required for the jump-like reorientations of the MA molecules causing dynamic disorder. Hence, a Raman measurement just corresponds to a sampling of 10^12 to 10^13 different but static MA orientational motifs per second, occurring in the sample throughout the molecular-cation dynamics. In fact, a marked increase in inhomogeneous broadening has also been observed in the Raman spectra of the low-temperature phases of FAPbBr3 [53] and FAxMA1−xPbI3 [22] with x > 0.4, which exhibit static structural disorder. Hence, according to the Raman data, with increasing pressure MAPbBr3 transforms from a state of dynamic disorder in the cubic phases, due to an unleashed MA-cation dynamics, to a state of static order with all MA molecules locked inside the cage voids and orderly oriented in the repeated unit cell of the orthorhombic-I phase. Further increase of pressure above ca. 4 GPa can either induce a transformation into a statically disordered phase [14] or into a presumably orthorhombic phase, as reported here, where the short-range order is still preserved (sharp Raman peaks) but the optical emission shows clear signs of carrier localization effects compatible with an incipient amorphization (see Fig. 2a). The first question is: what triggers the amorphization? Obviously, the MA cations cannot be the trigger, since they are locked and ordered throughout the crystal structure.
By combining density functional theory and ab-initio molecular dynamics calculations, [40] an answer to this question has recently been provided for CsPbI3, although it is valid for MHPs in general. Essentially, high pressure induces a phase instability driven by the softening of lattice vibrational modes associated with the tilting of the PbI6 octahedrons, i.e. with a strong reduction of the Pb-I-Pb bond angle. The deformation caused by the stark octahedral tilting starts at different seeding points across the sample and extends gradually, leading to a loss of the long- and short-range crystalline order. If so, the next question that arises is why the onset of amorphization is so sample dependent (reports range from 2 to 7 GPa). One possibility might be the different experimental conditions, for the phase behavior of MHPs under compression can be very sensitive to the degree of hydrostaticity of the pressure-transmitting medium used. [54] However, we propose an alternative explanation based on the amount of vacancies (mainly Pb vacancies) present in the sample. In a recent study on the optical emission of FAxMA1−xPbI3 mixed crystals, [45] we have shown that the most common shallow defects in single-crystalline MHPs are vacancies, mainly of Pb but also of the halogen and the A-site cation. In view of the fact that the crystal structure is already deformed around a vacancy, we can foresee vacancies acting as seeding points that trigger the proposed pressure-induced lattice instability, leading to static disorder. [40] We finally turn to a key result concerning the temperature and pressure dependence of the N-H symmetric stretching vibration [ν_s(NH3+)] of the MA cations, as determined by Raman scattering. This vibrational mode corresponds to the strongest peak in the Raman spectrum of MAPbBr3 in the spectral range of the N-H stretching vibrations around 3000 cm−1, as shown in the inset to Fig. 5.
This vibrational mode provides direct information about the coupling between the inorganic cage and the A-site cations, in particular allowing one to unravel the relative weight of H-bonding and steric hindrance. This is so because the frequency of this mode shifts up or down upon changes in temperature, pressure or composition, depending on whether the coupling between both sublattices is dominated by steric or H-bonding effects, respectively. The frequency of the ν_s(NH3+) vibration is determined by the strength of the covalent bond between nitrogen and hydrogen. In the H-bonding case, the electrostatic attraction between the H+ and the negative halide ion of the inorganic cage weakens the N-H bond by elongating it, thus causing a redshift. [55] On the contrary, the DSI is repulsive and becomes stronger the closer the H and halide atoms approach each other, which in turn shortens the N-H bond, causing a blueshift. As shown in Fig. 5a, the frequency of the N-H stretching vibration decreases slightly with decreasing temperature (by about 2 cm^-1 from room temperature down to 80 K). This is a clear indication that H-bonding increases in importance with decreasing temperature, while the gradual freezing of the MA dynamics diminishes the steric effects. In fact, in the low-temperature orthorhombic phase, H-bonding is crucial in determining the arrangement (position and orientation) of the MA molecules within the cage voids. [40] However, at room temperature the application of moderate pressure causes a strong increase in frequency of the N-H stretching mode (ca. 8 cm^-1 up to 1.2 GPa), as displayed in Fig. 5b. This is compelling evidence that, when the MA dynamics is fully unfolded, the DSI dominates the inter-sublattice coupling.
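The red-/blueshift argument can be made concrete with a simple diatomic-oscillator estimate: the stretching wavenumber scales as the square root of the N-H force constant, so weakening the bond (H-bonding) lowers the frequency while stiffening it (DSI) raises it. The force constant used below is an assumed, typical magnitude chosen for illustration only, not a value fitted to the data of this work:

```python
from math import pi, sqrt

# Toy diatomic-oscillator estimate of the N-H stretching wavenumber,
#   nu_tilde = sqrt(k / mu) / (2 * pi * c),
# illustrating why a weakened (H-bonded) N-H bond redshifts and a
# stiffened one (DSI) blueshifts. The force constant is an assumed value.
C_CM_S = 2.998e10          # speed of light in cm/s
AMU_KG = 1.660539e-27      # atomic mass unit in kg
MU_NH = (14.003 * 1.008) / (14.003 + 1.008) * AMU_KG  # N-H reduced mass (kg)

def stretch_wavenumber(k_n_per_m: float) -> float:
    """Harmonic stretching wavenumber in cm^-1 for force constant k (N/m)."""
    return sqrt(k_n_per_m / MU_NH) / (2.0 * pi * C_CM_S)

print(f"{stretch_wavenumber(500.0):.0f} cm^-1")  # lands near 3000 cm^-1
```

In this picture the measured shift of ca. 8 cm^-1 on a 3000 cm^-1 mode corresponds to a fractional force-constant change of only about 0.5%, i.e. a rather subtle stiffening of the N-H bond.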
We point out that the prominent role of the DSI at ambient conditions is also demonstrated by the theoretically predicted and experimentally assessed blueshift of the ν_s(NH3+) vibration upon reduction of the lattice parameter by substitution of the halide atom from I to Br to Cl. [19] This idea gathers additional support from a recent study that combines Raman scattering and density functional theory, in which the entire vibrational spectrum of isolated MA+ and FA+ molecules is compared with that of MAPbX3 and FAPbX3 (X = I and Br), respectively. [56] This comparison clearly shows that there are no hydrogen bonds in MHPs at room temperature.

Discussion

At this point, it is worth offering a general discussion of the relationship between the crystal structures adopted by MHPs and the magnitude of dynamic disorder, as a function of important parameters such as temperature, pressure and composition. For this purpose we use the sketch of Fig. 6, which serves both as a guide for the discussion and as a graphical summary of the "take-home" message of this work. The main hypothesis is that the strength of the dynamic steric interaction and, hence, its leading role in the coupling between the inorganic metal-halide network and the A-site cation sublattice increase in direct proportion to the amount of dynamic disorder caused by the unfolded A-site cation dynamics. In this sense, the arrows in Fig. 6 indicate the direction of increase of the DSI linked to the variation of the corresponding parameter. We first discuss the effect of temperature, represented by the blue circle in Fig. 6. We next consider the impact of replacing the A-site cation on the structural behavior of MHPs (orange circle in Fig. 6). The same O→T→C sequence is observed for increasing cation size, when going from an atom like Cs to a molecule like MA and on to a larger one such as FA.
Even though the lattice parameter of the perovskite increases slightly with A-site cation size, the effective volume filled by the A-site cation (V_A), the so-called steric bulk, increases faster than the void volume (V_v) itself. The volume V_A can be inferred, for example, from atomic displacement parameter plots at 50% probability from neutron scattering [6,7] or from molecular dynamics calculations. [9,10] The key point is that the strength of the DSI is proportional to the ratio V_A/V_v and, being repulsive in nature, the fast movement of the A-site cations produces the same effect as an internal pressure acting outwards on the imaginary walls of the cage voids. As clearly shown by molecular dynamics simulations, [9,10] the spherical atomic-density cloud generated by the movement of the A-site cations in the three spatial directions favors a cubic void environment, thus stabilizing the cubic phase. In contrast, a free movement solely within a plane favors the stabilization of the tetragonal phase, whereas the orthorhombic phase is only compatible with the locking of the A-site cations inside the cage voids. A nice example of the effect of the A-site cation size can be appreciated in the series of lead bromide compounds. [33,59] We now turn to the discussion of the effects of halogen substitution, illustrated by the green circle in Fig. 6. In this case, the effects on the structural behavior are more subtle than for the preceding parameters. However, one can recognize a certain correlation between the structural sequence O→T→C and the reduction of the ionic radius of the halogen atom. The heavier the halogen atom, the larger its ionic radius, which leads to an increase of the lattice parameter, i.e. of V_v. This means that the DSI decreases with increasing ionic radius of the halogen, which explains why chloride compounds are more prone to stabilize in the cubic phase than their bromide and iodide counterparts.
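The size-matching trend behind the ratio V_A/V_v is the same one captured by the classic Goldschmidt tolerance factor [1], t = (r_A + r_X)/[sqrt(2) (r_B + r_X)]. As an illustration of the cation- and halide-size trends discussed here (the effective ionic radii below are assumed values from standard tabulations, not data from this work; molecular-cation radii are the commonly used effective estimates):

```python
from math import sqrt

# Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)).
# Effective ionic radii in pm: assumed values from standard tabulations;
# the MA and FA radii are commonly used effective estimates.
R_A = {"Cs": 188, "MA": 217, "FA": 253}
R_PB = 119
R_X = {"Cl": 181, "Br": 196, "I": 220}

def tolerance_factor(cation: str, halide: str) -> float:
    """Return the Goldschmidt tolerance factor for an APbX3 perovskite."""
    return (R_A[cation] + R_X[halide]) / (sqrt(2) * (R_PB + R_X[halide]))

for a in ("Cs", "MA", "FA"):
    print(f"{a}PbBr3: t = {tolerance_factor(a, 'Br'):.3f}")
```

With these radii, t grows from Cs to MA to FA at fixed halide, and for a given A-site cation it decreases from Cl to I, mirroring the filling-fraction and DSI trends discussed above.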
Indeed, the T→C transition temperature, for example in the MAPbX3 family, shifts to higher temperatures as the halogen ionic radius increases. [19] As shown below for the case of applied pressure, halogen substitution works in a similar way to what is known as chemical pressure. Finally, we discuss the structural effect of an external hydrostatic pressure. The corresponding grey circle in Fig. 6 appears out of phase with respect to the others, because the observed structural sequence under compression is T→C→O, as in the emblematic case of MAPbI3. [8,21,26,28] This seems a priori counterintuitive. In fact, since the effect of compression on the void volume V_v is opposite to that of thermal expansion, one would expect the pressure effect to be represented in the sketch of Fig. 6 by a circle similar to that for temperature but running clockwise instead of anticlockwise. The only way to understand such behavior is by considering the DSI as the dominant interaction over H-bonding. As mentioned before, when the A-site cation dynamics is fully unfolded and due to the repulsive character of the DSI, the moving A-site cations exert an outward force on the surrounding octahedrons, which partly counteracts the effect of the applied pressure. In the tetragonal phase, stable at ambient conditions for MAPbI3, where the dynamics of the MA cations is restricted to the tetragonal plane, such a reaction of the MA molecules to compression is only expected along the (a,b) tetragonal axes. The contraction of the tetragonal axes under pressure is thus followed by a reaction of the moving MA cations, mediated by the DSI, that repels the tilted octahedrons, slowing down further pressure-induced tilting. This leads to an effective asymmetry in the compressibility of the inorganic cage, in view of the fact that the longer (unperturbed) c axis would be more compressible than the (distorted) tetragonal ones.
Thus, with increasing pressure the tetragonal distortion diminishes up to the point where the compressed crystal structure is almost cubic. At that point the MA dynamics becomes unleashed in all three directions in space, which in turn stabilizes the cubic phase at finite pressure. [9,10] Further compression will eventually induce a transformation into an orthorhombic phase, which is thermodynamically more stable at reduced volumes and after the MA dynamics has collapsed. This phenomenology is a unique signature of the DSI present in MHPs. We point out that, from a purely structural point of view, ferroelectric perovskites like CsGeX3 with X = I, Br, and Cl also exhibit a similar behavior under pressure. [60,61] However, the reason for it is the pressure-induced reduction, up to a full collapse, of the Jahn-Teller distortion giving rise to the ferroelectric polarization. Lead halide perovskites, in contrast, are not ferroelectric but ferroelastic, [62] and the transformation from the tetragonal to the cubic structure under compression is the consequence of a gradual, pressure-induced, DSI-aided reduction of the tetragonal distortion.

Conclusion

In summary, we have performed a systematic study of the optical emission and vibrational properties of single crystalline MAPbBr3 as a function of temperature and hydrostatic pressure using photoluminescence and Raman scattering spectroscopy. These results, combined with the available literature data on other closely related MHPs, allowed us to unravel the underlying physics relating the crystal structure stability, depending on composition as well as on temperature and pressure conditions, to the dynamic disorder caused by the fast A-site cation dynamics. The main finding is that a full understanding of the relationship between structure and dynamic disorder in MHPs can only be achieved if dynamic steric effects are taken into account; H-bonding alone is insufficient.
The leitmotif of the observed trends regarding the crystal-phase sequences obtained with increasing temperature, pressure and A-site cation size, or with decreasing halogen ionic radius, is a strengthening of the DSI, which is directly linked to the magnitude of the dynamic disorder induced by the unfolded A-site cation roto-translational dynamics. Furthermore, we offer an explanation for the large spread in the reported values of the onset of the pressure-induced amorphization or static-disordered state, ubiquitous in MHPs. Here we suggest that vacancies (mainly of lead) act as seeding points for the pressure-induced lattice instability due to the softening of phonon modes related to octahedral tilting, proposed to trigger amorphization. [40] Since the lattice is already deformed at a vacancy, the number of vacancies would then determine the onset of amorphization, making its observation fairly sample dependent. In this way, we believe we have deepened our understanding of a very fundamental issue for MHPs, namely the crystal-structure/dynamic-disorder relationship, thus contributing to advance the development of optoelectronic applications of this exceptional class of materials.

Methods

Growth of the MAPbBr3 single crystals

The inverse solubility method of Saidaminov et al. [63] was employed to produce crystals of MAPbBr3. Stoichiometric quantities of MABr (GreatCell Solar) and PbBr2 (Merck, 99%) were dissolved at 20 °C in dry dimethylformamide (Alfa Aesar). When fully dissolved, the solution was heated to 80 °C and left undisturbed for 3 hours to allow crystallisation. The remaining solution was filtered off and large single crystals were oven dried at 100 °C overnight.

High-pressure experiments

The high-pressure photoluminescence and micro-Raman scattering measurements were performed at room temperature employing a gasketed diamond anvil cell (DAC).
Anhydrous propanol was used as pressure transmitting medium, which ensures good hydrostatic conditions in the pressure range of the present experiments (perfectly hydrostatic up to 4.2 GPa [64]) and proved chemically inert to MAPbBr3. For loading the DAC, small chips with a thickness below ca. 30 µm were produced by crushing a big MAPbBr3 single crystal between two glass slides. By close inspection of the debris we were able to pick out small enough, good-quality single crystals, recognized by their flat and shiny surface under the microscope. This simple but effective procedure allowed us to avoid thinning the sample by either mechanical polishing or chemical etching, which are known to spoil the quality of such soft crystals. Small pieces of about 100×100 µm^2 in size were placed into the DAC together with a ruby sphere for pressure calibration. [65] Here we point out that the Inconel gasket was intentionally pre-indented to a fairly large thickness of 120 µm, before drilling a hole of ca. 250 µm with a spark-gap machine from EasyLab. The reason was to be able to adjust the pressure with the DAC in steps of less than 0.05 GPa, mainly at very low pressures (below 1 GPa). For this purpose an electric motor drive was used to change the pressure in a continuous manner and at low speed (ca. 0.05 GPa/min). As a trade-off, the maximum pressure reached in our experiments was about 7 GPa. Regarding the high accuracy claimed for the pressure measurement, we point out that we always loaded more than one ruby sphere into the DAC for a multi-point determination of the pressure. The excitation of the ruby fluorescence was performed using extremely low laser powers in the range of a few tens of nW, in order to avoid any heating-induced shift of the ruby emission. Furthermore, the pressure was determined immediately before and after each PL or Raman measurement, to account for effects of mechanical relaxation of the DAC upon changes in pressure.
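For reference, ruby gauges convert the redshift of the ruby R1 fluorescence line into pressure. Which specific calibration Ref. [65] provides is not spelled out here; as a sketch, the widely used quasi-hydrostatic scale of Mao et al. (1986) can be evaluated as follows (the constants A, B and the ambient wavelength belong to that scale and are not taken from this work):

```python
# Illustrative ruby pressure gauge. The paper calibrates pressure via the
# ruby R1 fluorescence shift [65]; as an assumed example, the widely used
# quasi-hydrostatic scale of Mao et al. (1986) reads
#   P(GPa) = (A / B) * ((lambda / lambda0)**B - 1)
A_GPA = 1904.0        # scale constant A in GPa
B_EXP = 7.665         # quasi-hydrostatic exponent B
LAMBDA0_NM = 694.24   # ambient-pressure R1 wavelength in nm

def ruby_pressure(lambda_nm: float) -> float:
    """Pressure in GPa from the measured R1 line wavelength in nm."""
    return (A_GPA / B_EXP) * ((lambda_nm / LAMBDA0_NM) ** B_EXP - 1.0)

print(f"{ruby_pressure(696.0):.2f} GPa")  # ~1.8 nm redshift, roughly 5 GPa
```

The steep dependence (a few GPa per nm of redshift) is why the multi-point ruby reading and the sub-nW excitation described above matter for the claimed 0.05 GPa step resolution.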
The room temperature was also frequently monitored to correct for any temperature increase, for example from the morning to the evening or when other heat-generating equipment (laser, vacuum pump, etc.) was switched on nearby. Finally, the backlash of the spectrometer was also taken into account; to minimize its effect, the ruby fluorescence spectra were always acquired by forcing the spectrometer to approach its final position from the same side.

PL & Raman measurements

For the high-pressure experiments, the PL spectra were excited with the 405 nm line of a laser diode, whereas for the PL measurements at low temperatures the 514.5 nm line of an Ar+-ion laser was employed, using a very low incident light power below 2 µW. The latter was selected as the closest available laser line to the MAPbBr3 gap. This turned out to be very important for attaining long-term stability and reproducibility of the PL emission by reducing as much as possible laser heating effects due to the thermalization of photo-generated hot carriers. For the Raman measurements, either an infrared diode laser emitting at 785 nm or the 633 nm line of a He-Ne laser was employed for excitation of the low-frequency spectra (below 500 cm^-1) and the high-frequency ones (around 3000 cm^-1), respectively. The former turned out to be most suitable for exciting the vibrational modes of the inorganic cage, providing also the highest spectral resolution and stray-light rejection. In all cases, a very low incident light power density below 15 W/cm^2 was used to avoid any photo-degradation of the samples, such that thermal damage by the laser can be safely ruled out. Spectra were collected using a 20× long working distance objective with NA = 0.35 and dispersed with a high-resolution LabRam HR800 grating spectrometer equipped with a charge-coupled device detector.
PL spectra were corrected for the spectral response of the spectrometer by normalizing each spectrum using the characteristics of the detector and of the 600-grooves/mm grating. Temperature-dependent measurements on large single crystals exhibiting flat surfaces were carried out between 80 and 320 K using a gas-flow cryostat from CryoVac with optical access that fits under the microscope of the LabRam setup.

Supporting Information

The Supporting Information contains a set of PL spectra recorded at five different temperatures in the range of approx. 10 to 50 K for each of the ten compositions of the FAxMA1−xPbI3 system studied here, showing details of the lineshape fits performed to the PL spectra using multiple Gaussian-Lorentzian cross-product functions. The results of the lineshape fits concerning the peak energy, linewidth and intensity of the main emission features, plotted as a function of temperature, are also included.

Figure 2: (a) PL spectra of MAPbBr3 obtained for incremental steps of pressure up to about 6.5 GPa using the 405-nm laser line for excitation. The spectra were normalized to their maximum intensity and plotted with a vertical shift for increasing pressure. The different colors indicate the subsequent phases adopted by the material during the pressure upstroke (Cubic I: Pm-3m, Cubic II: Im-3, Ortho. I: Pnma, Ortho. II: unknown). (b) The PL peak energy E_0 plotted as a function of pressure, obtained from the PL lineshape fits using Eq. (1) of the Supporting Information. The pressures at which the phase transitions occur are indicated by vertical dashed lines. See text for details.

Figure 3: (a) Raman spectra of MAPbBr3 measured at different temperatures from 320 K to 80 K (steps of 10 K) in the spectral range of the inorganic cage phonon modes using the 785-nm line for excitation. The spectra were normalized to their maximum intensity and shifted vertically for clarity.
The different colors of the spectra indicate the changes in phase after every transition. (b) Examples of the performed lineshape fits (green solid curves) to the Raman spectra (black closed symbols) using Gaussian functions for different pressures, as indicated, each one representing a different high-pressure phase of the material. The solid curves in the color of the corresponding phase represent the different phonon components. The shown spectra were obtained by subtracting a special function used for describing the combined effect of the dichroic filter and the broad central peak (see text for details).

Figure 4: (a) The frequency and (b) the full width at half maximum (FWHM) of the Raman peaks in the spectral region of the inorganic cage phonons below 300 cm^-1, as obtained from the lineshape fits as a function of pressure up to ca. 6.5 GPa. The pressures at which the phase transitions occur are marked with vertical dashed lines. The different phases are indicated (C: cubic, O: orthorhombic).

Figure 5: The frequency of the N-H symmetric stretching vibration [ν_s(NH3+)] of the methylammonium cation (a) as a function of temperature at ambient pressure and (b) as a function of pressure at room temperature. The pressures at which the phase transitions occur are marked with vertical dashed lines and the different phases are indicated. The inset shows a representative Raman spectrum in the range of the N-H stretching vibrations around 3000 cm^-1.

With increasing temperature, the structural sequence exhibited by MHPs is typically: orthorhombic (O)→tetragonal (T)→cubic (C). Concomitantly with the thermal activation of vibrations, rotations and translations of the A-site cations within the cage voids, there is an increase in dynamic disorder and, thus, of the DSI. The increase in entropy from dynamic disorder overcompensates both the decrease in structural entropy for the more symmetric structures and the detrimental effect of the lattice thermal expansion on the DSI.

Figure 6: Schematic representation of the relationship between crystal structure and dynamic disorder in lead halide perovskites with formula APbX3 as a function of temperature, pressure and composition (A-site cation and halogen anion types). The arrow pointing counterclockwise represents the direction of increase of the DSI. Numbers in parentheses correspond to the cation/anion ionic radii from the literature (in pm). [57,58]

Acknowledgements

The Spanish "Ministerio de Ciencia e Innovación (MICINN)" is gratefully acknowledged for its support through grant CEX2019-000917-S (FUNFUTURE) in the framework of the Spanish Severo Ochoa Centre of Excellence program and the AEI/FEDER(UE) grants PGC2018-095411-B-100 (RAINBOW) and PID2021-128924OB-I00 (ISOSCELLES). The authors also thank the Catalan agency AGAUR for grant 2017-SGR-00488 and the National Network "Red Perovskitas" (MICINN funded). K.X. acknowledges a fellowship (CSC201806950006) from the China Scholarship Council and the PhD programme in Materials Science from Universitat Autònoma de Barcelona in which he was enrolled. B.C. thanks the EPSRC for PhD studentship funding via the University of Bath, CSCT CDT (EP/G03768X/1).

Data availability

All data generated or analysed during this study are either included in this published article and its supplementary information files or are available from the corresponding author on reasonable request.

Author contributions

References

Goldschmidt, V. M. Die Gesetze der Krystallochemie. Die Naturwissenschaften 14, 477-485 (1926).
Frost, J. M., Walsh, A. What is moving in hybrid halide perovskite solar cells? Acc. Chem. Res. 49, 528-535 (2016).
Bakulin, A. A., Selig, O., Bakker, H. J., Rezus, Y. L., Müller, C., Glaser, T., Lovrincic, R., Sun, Z., Chen, Z., Walsh, A., et al. Real-time observation of organic cation reorientation in methylammonium lead iodide perovskites. J. Phys. Chem. Lett. 6, 3663-3669 (2015).
Selig, O., Sadhanala, A., Müller, C., Lovrincic, R., Chen, Z., Rezus, Y. L., Frost, J. M., Jansen, T. L., Bakulin, A. A. Organic cation rotation and immobilization in pure and mixed methylammonium lead-halide perovskites. J. Am. Chem. Soc. 139, 4068-4074 (2017).
Weller, M. T., Weber, O. J., Henry, P. F., Di Pumpo, A. M., Hansen, T. C. Complete structure and cation orientation in the perovskite photovoltaic methylammonium lead iodide between 100 and 352 K. Chem. Commun. 51, 4180-4183 (2015).
Weber, O. J., Ghosh, D., Gaines, S., Henry, P. F., Walker, A. B., Islam, M. S., Weller, M. T. Phase behavior and polymorphism of formamidinium lead iodide. Chem. Mater. 30, 3768-3778 (2018).
Szafrański, M., Katrusiak, A. Mechanism of pressure-induced phase transitions, amorphization, and absorption-edge shift in photovoltaic methylammonium lead iodide. J. Phys. Chem. Lett. 7, 3458-3466 (2016).
Ghosh, D., Atkins, P. W., Islam, M. S., Walker, A. B., Eames, C. Good vibrations: Locking of octahedral tilting in mixed-cation iodide perovskites for solar cells. ACS Energy Lett. 2, 2424-2429 (2017).
Ghosh, D., Aziz, A., Dawson, J. A., Walker, A. B., Islam, M. S. Putting the squeeze on lead iodide perovskites: Pressure-induced effects to tune their structural and optoelectronic behavior. Chem. Mater. 31, 4063-4071 (2019).
Maity, S., Verma, S., Ramaniah, L. M., Srinivasan, V. Deciphering the nature of temperature-induced phases of MAPbBr3 by ab initio molecular dynamics. Chem. Mater. 34, 10459-10469 (2022).
Mattoni, A., Filippetti, A., Saba, M., Delugas, P. Methylammonium rotational dynamics in lead halide perovskite by classical molecular dynamics: the role of temperature. J. Phys. Chem. C 119, 17421-17428 (2015).
Yin, T., Fang, Y., Fan, X., Zhang, B., Kuo, J.-L., White, T. J., Chow, G. M., Yan, J., Shen, Z. X. Hydrogen-bonding evolution during the polymorphic transformations in CH3NH3PbBr3: Experiment and theory. Chem. Mater. 29, 5974-5981 (2017).
Capitani, F., Marini, C., Caramazza, S., Dore, P., Pisanu, A., Malavasi, L., Nataf, L., Baudelet, F., Brubach, J.-B., Roy, P., Postorino, P. Locking of methylammonium by pressure-enhanced H-bonding in (CH3NH3)PbBr3 hybrid perovskite. J. Phys. Chem. C 121, 28125-28131 (2017).
Yesudhas, S., Burns, R., Lavina, B., Tkachev, S. N., Sun, J., Ullrich, C. A., Guha, S. Coupling of organic cation and inorganic lattice in methylammonium lead halide perovskites: Insights into a pressure-induced isostructural phase transition. Phys. Rev. Mater. 4, 105403 (2020).
Lee, J.-H., Bristowe, N. C., Lee, J. H., Lee, S.-H., Bristowe, P. D., Cheetham, A. K., Jang, H. M. Resolving the physical origin of octahedral tilting in halide perovskites. Chem. Mater. 28, 4259-4266 (2016).
Lee, J. H., Lee, J.-H., Kong, E.-H., Jang, H. M. The nature of hydrogen-bonding interaction in the prototypic hybrid halide perovskite, tetragonal CH3NH3PbI3. Sci. Rep. 6, 21687 (2016).
Brivio, F., Frost, J. M., Skelton, J. M., Jackson, A. J., Weber, O. J., Weller, M. T., Goñi, A. R., Leguy, A. M. A., Barnes, P. R. F., Walsh, A. Lattice dynamics and vibrational spectra of the orthorhombic, tetragonal, and cubic phases of methylammonium lead iodide. Phys. Rev. B 92, 144308 (2015).
Leguy, A. M. A., Goñi, A. R., Frost, J. M., Skelton, J., Brivio, F., Rodríguez-Martínez, X., Weber, O. J., Pallipurath, A., Alonso, M. I., Campoy-Quiles, M., Weller, M. T., Nelson, J., Walsh, A., Barnes, P. R. F. Dynamic disorder, phonon lifetimes, and the assignment of modes to the vibrational spectra of methylammonium lead halide perovskites. Phys. Chem. Chem. Phys. 18, 27051-27066 (2016).
Sharma, R., Menahem, M., Dai, Z., Gao, L., Brenner, T. M., Yadgarov, L., Zhang, J., Rakita, Y., Korobko, R., Pinkas, I., Rappe, A. M., Yaffe, O. Lattice mode symmetry analysis of the orthorhombic phase of methylammonium lead iodide using polarized Raman. Phys. Rev. Mater. 4, 051601(R) (2020).
Francisco-López, A., Charles, B., Weber, O. J., Alonso, M. I., Garriga, M., Campoy-Quiles, M., Weller, M. T., Goñi, A. R. Pressure-induced locking of methylammonium cations versus amorphization in hybrid lead iodide perovskites. J. Phys. Chem. C 122, 22073-22082 (2018).
Francisco-López, A., Charles, B., Alonso, M. I., Garriga, M., Campoy-Quiles, M., Weller, M. T., Goñi, A. R. Phase diagram of methylammonium/formamidinium lead iodide perovskite solid solutions from temperature-dependent photoluminescence and Raman spectroscopies. J. Phys. Chem. C 124, 3448-3458 (2020).
Poglitsch, A., Weber, D. Dynamic disorder in methylammoniumtrihalogenoplumbates (II) observed by millimeter-wave spectroscopy. J. Chem. Phys. 87, 6373-6378 (1987).
Onoda-Yamamuro, N., Matsuo, T., Suga, H. Calorimetric and IR spectroscopic studies of phase transitions in methylammonium trihalogenoplumbates (II). J. Phys. Chem. Solids 51, 1383-1395 (1990).
Onoda-Yamamuro, N., Matsuo, T., Suga, H. Dielectric study of CH3NH3PbX3 (X = Cl, Br, I). J. Phys. Chem. Solids 53, 935-939 (1992).
Capitani, F., Marini, C., Caramazza, S., Postorino, P., Garbarino, G., Hanfland, M., Pisanu, A., Quadrelli, P., Malavasi, L. High-pressure behavior of methylammonium lead iodide (MAPbI3) hybrid perovskite. J. Appl. Phys. 119, 185901 (2016).
Wang, T., Daiber, B., Frost, J. M., Mann, S. A., Garnett, E. C., Walsh, A., Ehrler, B. Indirect to direct bandgap transition in methylammonium lead halide perovskite. Energy Environ. Sci. 10, 509-515 (2017).
Jaffe, A., Lin, Y., Beavers, C. M., Voss, J., Mao, W. L., Karunadasa, H. I. High-pressure single-crystal structures of 3D lead-halide hybrid perovskites and pressure effects on their electronic and optical properties. ACS Cent. Sci. 2, 201-209 (2016).
Ou, T., Yan, J., Xiao, C., Shen, W., Liu, C., Liu, X., Han, Y., Ma, Y., Gao, C. Visible light response, electrical transport, and amorphization in compressed organo lead iodine perovskites. Nanoscale 8, 11426-11431 (2016).
Jiang, S., Fang, Y., Li, R., Xiao, H., Crowley, J., Wang, C., White, T. J., Goddard III, W. A., Wang, Z., Baikie, T., Fang, J. Pressure-dependent polymorphism and band-gap tuning of methylammonium lead iodide perovskite. Angew. Chem. Int. Ed. 55, 6540-6544 (2016).
Kong, L., Liu, G., Gong, J., Hu, Q., Schaller, R. D., Dera, P., Zhang, D., Liu, Z., Yang, W., Tang, Y., Wang, C., Wei, S.-H., Xu, T., Mao, H.-K. Simultaneous band-gap narrowing and carrier-lifetime prolongation of organic-inorganic trihalide perovskites. PNAS 113, 8910-8915 (2016).
Swainson, I. P., Tucker, M. G., Wilson, D. J., Winkler, B., Milman, V. Pressure response of an organic-inorganic perovskite: Methylammonium lead bromide. Chem. Mater. 19, 2401-2405 (2007).
Mannino, G., Deretzis, I., Smecca, E., La Magna, A., Alberti, A., Ceratti, D., Cahen, D. Temperature-dependent optical band gap in CsPbBr3, MAPbBr3, and FAPbBr3 single crystals. J. Phys. Chem. Lett. 11, 2490-2496 (2020).
Wang, Y., Lü, X., Yang, W., Wen, T., Yang, L., Ren, X., Wang, L., Lin, Z., Zhao, Y. Pressure-induced phase transformation, reversible amorphization, and anomalous visible light response in organolead bromide perovskite. J. Am. Chem. Soc. 137, 11144-11149 (2015).
Wang, L., Wang, K., Xiao, G., Zeng, Q., Zou, B. Pressure-induced structural evolution and band gap shifts of organometal halide perovskite-based methylammonium lead chloride. J. Phys. Chem. Lett. 7, 5273-5279 (2016).
Carpenella, V., Ripanti, F., Stellino, E., Fasolato, C., Nucara, A., Petrillo, C., Malavasi, L., Postorino, P. High pressure behavior of δ-phase of formamidinium lead iodide by optical spectroscopy. J. Phys. Chem. C (2023). DOI: 10.1021/acs.jpcc.2c08253
Wang, L., Wang, K., Zou, B. Pressure-induced structural and optical properties of organometal halide perovskite-based formamidinium lead bromide. J. Phys. Chem. Lett. 7, 2556-2562 (2016).
Mohanty, A., Swain, D., Govinda, S., Row, T. N. G., Sarma, D. D. Phase diagram and dielectric properties of MA1−xFAxPbI3. ACS Energy Lett. 4, 2045-2051 (2019).
Sharma, V. K., Mukhopadhyay, R., Mohanty, A., García Sakai, V., Tyagi, M., Sarma, D. D. Contrasting effects of FA substitution on MA/FA rotational dynamics in FAxMA1−xPbI3. J. Phys. Chem. C 125, 13666-13676 (2021).
Degenerate lattice-instability-driven amorphization under compression in metal halide perovskite CsPbI 3. S Yi, J.-H Lee, J. Phys. Che. Lett. 13Yi, S., and Lee, J.-H. Degenerate lattice-instability-driven amorphization under compression in metal halide perovskite CsPbI 3 . J. Phys. Che. Lett. 13, 9449-9455 (2022). Pressure responses of halide perovskites with various compositions, dimensionalities, and morphologies. M Li, T Liu, Y Wang, W Yang, X Lü, Matter Radiat. Extremes. 518201Li, M., Liu, T., Wang, Y., Yang, W., and Lü, X. Pressure responses of halide per- ovskites with various compositions, dimensionalities, and morphologies. Matter Radiat. Extremes 5, 018201 (2020). Hybrid perovskites under pressure: Present and future directions. A Celeste, F Capitani, J. Appl. Phys. 132220903Celeste, A., and Capitani, F. Hybrid perovskites under pressure: Present and fu- ture directions. J. Appl. Phys. 132, 220903 (2022). Spatially resolved studies of the phases and morphology of methylammonium and formamidinium lead tri-halide perovskites. K Galkowski, A A Mitioglu, A Surrente, Z Yang, D K Maude, P Kossaki, G E Eperon, J T Wang, .-W Snaith, H J Plochocka, P Nicholas, R J , Nanoscale. 9Galkowski, K., Mitioglu, A. A., Surrente, A., Yang, Z., Maude, D. K., Kossaki, P., Eperon, G. E., Wang, J. T.-W., Snaith, H. J., Plochocka, P., Nicholas, R. J. Spatially resolved studies of the phases and morphology of methylammonium and formamidinium lead tri-halide perovskites. Nanoscale 9, 3222-3230 (2017). Hydrogen-like Wannier-Mott excitons in single crystal of methylammonium lead bromide perovskite. J Tilchin, D N Dirin, G I Maikov, A Sashchiuk, M V Kovalenko, E Lifshitz, ACS Nano. 10Tilchin, J., Dirin, D. N., Maikov, G. I., Sashchiuk, A., Kovalenko, M. V., Lifshitz, E. Hydrogen-like Wannier-Mott excitons in single crystal of methylammonium lead bromide perovskite. ACS Nano 10, 6363-6371 (2016). 
Photoluminescence of bound-exciton complexes and assignment to shallow defects in methylammonium/formamidinium lead iodide mixed crystals. A Francisco-López, B Charles, M I Alonso, M Garriga, M T Weller, A R Goñi, Adv. Optical Mater. Francisco-López, A., Charles, B., Alonso, M. I., Garriga, M., Weller, M. T., Goñi, A. R. Photoluminescence of bound-exciton complexes and assignment to shallow defects in methylammonium/formamidinium lead iodide mixed crystals. it Adv. Optical Mater., 2001969/1-9 (2021). Equal footing of thermal expansion and electron-phonon interaction in the temperature dependence of lead halide perovskite band gaps. A Francisco López, B Charles, O J Weber, M I Alonso, M Garriga, M Campoy-Quiles, M T Weller, A R Goñi, J. Phys. Chem. Lett. 10Francisco López, A., Charles, B., Weber, O. J., Alonso, M. I., Garriga, M., Campoy-Quiles, M., Weller, M. T., Goñi, A. R. Equal footing of thermal ex- pansion and electron-phonon interaction in the temperature dependence of lead halide perovskite band gaps. J. Phys. Chem. Lett. 10, 2971-2977 (2019). Optical properties of semiconductors under pressure. A R Goñi, K Syassen, Semicond. Semimetals. 54and references thereinGoñi, A. R., and Syassen, K. Optical properties of semiconductors under pressure. Semicond. Semimetals 54, 247-425, (1998) and references therein. Atomistic origins of high-performance in hybrid halide perovskite solar cells. J M Frost, K T Butler, F Brivio, C H Hendon, M Van Schilgaarde, A Walsh, Nano Lett. 14Frost, J. M. Butler, K. T., Brivio, F., Hendon, C. H., van Schilgaarde, M., Walsh, A. Atomistic origins of high-performance in hybrid halide perovskite solar cells. Nano Lett. 14, 2584-2590 (2014). Solid-state physics perspective on hybrid perovskite semiconductors. J Even, L Pedesseau, C Katan, M Kepenekian, J.-S Lauret, D Sapori, E Deleporte, J. Phys. Chem. C. 119Even, J., Pedesseau, L., Katan, C., Kepenekian, M., Lauret, J.-S., Sapori, D., Deleporte, E. 
Solid-state physics perspective on hybrid perovskite semiconduc- tors. J. Phys. Chem. C 119, 10161-10177 (2015). Local polar fluctuations in lead halide perovskite crystals. O Yaffe, Y Guo, L Z Tan, D A Egger, T Hull, C C Stoumpos, F Zheng, T F Heinz, L Kronik, M G Kanatzidis, J S Owen, A M Rappe, M A Pimenta, L E Brus, Phys. Rev. Lett. 118Yaffe, O., Guo, Y., Tan, L. Z., Egger, D. A., Hull, T., Stoumpos, C. C., Zheng, F., Heinz, T. F., Kronik, L., Kanatzidis, M. G., Owen, J. S., Rappe, A. M., Pimenta, M. A., Brus, L. E. Local polar fluctuations in lead halide perovskite crystals. Phys. Rev. Lett. 118, 136001/1-6 (2017). Lovrincic, R. Optical phonons in methylammonium lead halide perovskites and implications for charge transport. M Sendner, P K Nayak, D A Egger, S Beck, C Müller, B Epding, W Kowalsky, L Kronik, H J Snaith, A Puccia, Mater. Horiz. 3Sendner, M., Nayak, P. K., Egger, D. A., Beck, S., Müller, C., Epding, B., Kowal- sky, W., Kronik, L., Snaith, H. J., Puccia, A., Lovrincic, R. Optical phonons in methylammonium lead halide perovskites and implications for charge transport. Mater. Horiz. 3, 613-620 (2016). Exploring resonance Raman spectroscopy. D Tuschel, Spectros. 33Tuschel, D. Exploring resonance Raman spectroscopy. Spectros. 33, 12-19 (2018). G Reuveni, Y Diskin-Posner, C Gehrmann, S Godse, G G Gkikas, I Buchine, S Aharon, R Korobko, C C Stoumpos, D A Egger, Yaffe , arXiv:2211.06904v1[cond-mat.mtrl-sci]13Static and dynamic disorder in formamidinium lead bromide single crystals. Reuveni, G., Diskin-Posner, Y., Gehrmann, C., Godse, S., Gkikas, G. G., Bu- chine, I., Aharon, S., Korobko, R., Stoumpos, C. C., Egger, D. A., and Yaffe, O. Static and dynamic disorder in formamidinium lead bromide single crystals. arXiv:2211.06904v1 [cond-mat.mtrl-sci] 13 Nov 2022. Effects of nonhydrostatic stress on structural and optoelectronic properties of methylammonium lead bromide perovskite. 
R Zhang, W Cai, T Bi, N Zarifi, T Terpstra, C Zhang, Z V Verdeny, E Zurek, S Deemyad, J. Phys. Chem. Lett. 8Zhang, R., Cai, W., Bi, T., Zarifi, N., Terpstra, T., Zhang, C., Verdeny, Z. V., Zurek, E., and Deemyad, S. Effects of nonhydrostatic stress on structural and optoelectronic properties of methylammonium lead bromide perovskite. J. Phys. Chem. Lett. 8, 3457-3465 (2017). NH 2 stretching vibration absorption and association mechanism of methylamine in n-hexane and carbon tetrachloride. Spectrochimica Acta 46A. H Wolff, U Schmidt, E Wolff, Wolff, H., Schmidt, U., and Wolff, E. NH 2 stretching vibration absorption and as- sociation mechanism of methylamine in n-hexane and carbon tetrachloride. Spec- trochimica Acta 46A, 85-89 (1990). Do lead halide hybrid perovskites have hydrogen bonds?. J Ibaceta-Jaña, M Chugh, A S Novikov, H Mirhosseini, T D Kühne, B Szyszka, M R Wagner, R Muydinov, J. Phys. Chem. C. 126Ibaceta-Jaña, J., Chugh, M., Novikov, A. S., Mirhosseini, H., Kühne, T. D., Szyszka, B., Wagner, M. R., and Muydinov, R. Do lead halide hybrid perovskites have hydrogen bonds? J. Phys. Chem. C 126, 16215-16226 (2022). On the application of the tolerance factor to inorganic and hybrid halide perovskites: a revised system. W Travis, E N K Glover, H Bronstein, D O Scanlon, R G Palgrave, Chem. Sci. 7Travis, W., Glover, E. N. K., Bronstein, H., Scanlon, D. O., and Palgrave, R. G. On the application of the tolerance factor to inorganic and hybrid halide per- ovskites: a revised system. Chem. Sci. 7, 4548-4556 (2016). Cesium-containing triple cation perovskite solar cells: improved stability, reproducibility and high efficiency. M Saliba, T Matsui, J.-Y Seo, K Domanski, J.-P Correa-Baena, M K Nazeeruddin, S M Zakeeruddin, W Tress, A Abate, A Hagfeldt, M Grätzel, Energy Environ. Sci. 9Saliba, M., Matsui, T., Seo, J.-Y., Domanski, K., Correa-Baena, J.-P., Nazeerud- din, M. K., Zakeeruddin, S. M., Tress, W., Abate, A., Hagfeldt, A., and Grätzel, M. 
Cesium-containing triple cation perovskite solar cells: improved stability, re- producibility and high efficiency. Energy Environ. Sci. 9, 1989-1997 (2016). A-site cation effect on optical phonon modes and thermal stability in lead-based perovskite bromide single crystals using Raman spectroscopy. F H Naqvi, J.-H Ko, T H Kim, C W Ahn, Y Hwang, M Sheraz, S Kim, J. Korean Phys. Soc. 81Naqvi, F. H., Ko, J.-H., Kim, T. H., Ahn, C. W., Hwang, Y., Sheraz, M., Kim, S. A-site cation effect on optical phonon modes and thermal stability in lead-based perovskite bromide single crystals using Raman spectroscopy. J. Korean Phys. Soc. 81, 230-240 (2022). Ferroelectric CsGeI 3 single crystals with a perovskite structure grown from aqueous solution. R Chen, C Liu, Y Chen, C Ye, S Chen, J Cheng, S Cao, S Wang, A Cui, Z Hu, H Lin, J Wu, X Y Kong, W Ren, 10.1021/acs.jpcc.2c06818J. Phys. Chem. C. Chen, R., Liu, C., Chen, Y., Ye, C., Chen, S., Cheng, J., Cao, S., Wang, S., Cui, A., Hu, Z., Lin, H., Wu, J., Kong, X. Y., and Ren, W. Ferroelectric CsGeI 3 single crystals with a perovskite structure grown from aqueous solution. J. Phys. Chem. C (2023). DOI:10.1021/acs.jpcc.2c06818 Effect of pressure on the optical-absorption edges of CsGeBr 3 and CsGeCl 3. U Schwarz, F Wagner, K Syassen, H Hillebrecht, Phys. Rev. B. 53Schwarz, U., Wagner, F., Syassen, K., and Hillebrecht, H. Effect of pressure on the optical-absorption edges of CsGeBr 3 and CsGeCl 3 . Phys. Rev. B 53, 12545- 12548 (1996). The ferroelectric-ferroelastic debate about metal halide perovskites. F Ambrosio, F De Angelis, A R Goñi, J. Phys. Chem. Lett. 13Ambrosio, F., De Angelis, F., and Goñi, A. R. The ferroelectric-ferroelastic de- bate about metal halide perovskites. J. Phys. Chem. Lett. 13, 7731-7740 (2022). High-quality bulk hybrid perovskite single crystals within minutes by inverse temperature crystallization. 
M I Saidaminov, A L Abdelhady, B Murali, E Alarousu, V M Burlakov, W Peng, I Dursun, L Wang, Y He, G Maculan, A Goriely, T Wu, O F Mohammed, O M Bakr, Nat. Commun. 67586M. I. Saidaminov, A. L. Abdelhady, B. Murali, E. Alarousu, V. M. Burlakov, W. Peng, I. Dursun, L. Wang, Y. He, G. Maculan, A. Goriely, T. Wu, O. F. Mo- hammed, O. M. Bakr, High-quality bulk hybrid perovskite single crystals within minutes by inverse temperature crystallization. Nat. Commun. 6, 7586 (2015). Effective hydrostatic limits of pressure media for high-pressure crystallographic studies. R J Angel, M Bujak, J Zhao, G D Gatta, S D Jacobsen, J. Appl. Crystallogr. 40Angel, R. J., Bujak, M., Zhao, J., Gatta, G. D., Jacobsen, S. D. Effective hydro- static limits of pressure media for high-pressure crystallographic studies, J. Appl. Crystallogr. 40, 26-32 (2007). Calibration of the ruby pressure gauge to 800 kbar under quasi-hydrostatic conditions. H.-K Mao, J Xu, P M Bell, J. Geophys. Res. 91Mao, H.-K., Xu, J., Bell, P. M. Calibration of the ruby pressure gauge to 800 kbar under quasi-hydrostatic conditions. J. Geophys. Res. 91, 4673-4676 (1986).
Illuminating Pedestrians via Simultaneous Detection & Segmentation

Garrick Brazil ([email protected]), Xi Yin ([email protected]), Xiaoming Liu
Michigan State University, East Lansing, MI 48824

DOI: 10.1109/iccv.2017.530
arXiv: 1706.08564 (https://arxiv.org/pdf/1706.08564v1.pdf)

Abstract

Pedestrian detection is a critical problem in computer vision with significant impact on safety in urban autonomous driving. In this work, we explore how semantic segmentation can be used to boost pedestrian detection accuracy while having little to no impact on network efficiency. We propose a segmentation infusion network to enable joint supervision on semantic segmentation and pedestrian detection. When placed properly, the additional supervision helps guide features in shared layers to become more sophisticated and helpful for the downstream pedestrian detector. Using this approach, we find weakly annotated boxes to be sufficient for considerable performance gains. We provide an in-depth analysis to demonstrate how shared layers are shaped by the segmentation supervision. In doing so, we show that the resulting feature maps become more semantically meaningful and robust to shape and occlusion. Overall, our simultaneous detection and segmentation framework achieves a considerable gain over the state-of-the-art on the Caltech pedestrian dataset, competitive performance on KITTI, and executes 2× faster than competitive methods.

Introduction

Pedestrian detection from an image is a core capability of computer vision, due to its applications such as autonomous driving and robotics [14]. It is also a long-standing vision problem because of its distinct challenges including low resolution, occlusion, cloth variations, etc. [30]. There are two central approaches for detecting pedestrians: object detection [2,29] and semantic segmentation [4,5].
The two approaches are highly related by nature but have their own strengths and weaknesses. For instance, object detection is designed to perform well at localizing distinct objects but typically provides little information on object boundaries. In contrast, semantic segmentation does well at distinguishing pixel-wise boundaries among classes but struggles to separate objects within the same class. Intuitively, we expect that knowledge from either task will make the other substantially easier.

Figure 1: Notice that our feature map substantially illuminates the pedestrian shape while suppressing the background region, both of which make a positive impact on downstream pedestrian detection.

This has been demonstrated for generic object detection, since having segmentation masks of objects would clearly facilitate detection. For example, Fidler et al. [13] utilize predicted segmentation masks to boost object detection performance via a deformable part-based model. Hariharan et al. [18] show how segmentation masks generated from MCG [1] can be used to mask background regions and thus simplify detection. Dai et al. [6] utilize the two tasks in a 3-stage cascaded network consisting of box regression, foreground segmentation, and classification. Their architecture allows each task to share features and feed into one another.

In contrast, the pairing of these two tasks is rarely studied in pedestrian detection, despite the recent advances [2,21,29]. This is due in part to the lack of pixel-wise annotations available in classic pedestrian datasets such as Caltech [8] and KITTI [14], unlike the detailed segmentation labels in the COCO [22] dataset for generic object detection. With the release of Cityscapes [5], a high-quality dataset for urban semantic segmentation, it is expected that substantial research efforts will focus on how to leverage semantic segmentation to boost the performance of pedestrian detection, which is the core problem to be studied in this paper.
Given this objective, we start by presenting a competitive two-stage baseline framework for pedestrian detection derived from RPN+BF [29] and Faster R-CNN [23]. We contribute a number of key changes to enable the second-stage classifier to specialize in stricter supervision and additionally fuse the refined scores with the first-stage RPN. These changes alone lead to state-of-the-art performance on the Caltech benchmark. We further present a simple, but surprisingly powerful, scheme to utilize multi-task learning on pedestrian detection and semantic segmentation. Specifically, we infuse the semantic segmentation mask into shared layers using a segmentation infusion layer in both stages of our network. We term our approach "simultaneous detection and segmentation R-CNN (SDS-RCNN)".

We provide an in-depth analysis of the effects of joint training by examining the shared feature maps, e.g., Fig. 1. Through infusion, the shared feature maps begin to illuminate pedestrian regions. Further, since we infuse the semantic features during training only, the network efficiency at inference is unaffected. We demonstrate the effectiveness of SDS-RCNN by reporting a considerable improvement (23% relative reduction of the error) over the published state-of-the-art on Caltech [8], competitive performance on KITTI [14], and a runtime roughly 2× faster than competitive methods.

In summary, our contributions are as follows:
1) An improved baseline derived from [23,29], enforcing stricter supervision in the second-stage classification network and further fusing scores between stages.
2) A multi-task infusion framework for joint supervision on pedestrian detection and semantic segmentation, with the goal of illuminating pedestrians in shared feature maps and easing downstream classification.
3) New state-of-the-art performance on the Caltech pedestrian dataset, competitive performance on KITTI, and a 2× faster runtime.
Prior work

Object Detection: Deep convolutional neural networks have had extensive success in the domain of object detection. Notably, derivations of Fast [16] and Faster R-CNN [23] are widely used in both generic object detection [2,15,28] and pedestrian detection [21,26,29]. Faster R-CNN consists of two key components: a region proposal network (RPN) and a classification sub-network. The RPN works as a sliding-window detector by determining the objectness across a set of predefined anchors (box shapes defined by aspect ratio and scale) at each spatial location of an image. After object proposals are generated, the second-stage classifier determines the precise class each object belongs to. Faster R-CNN has been shown to reach state-of-the-art performance on the PASCAL VOC 2012 [12] dataset for generic object detection and continues to serve as a frequent baseline framework for a variety of related problems [15,18,19,30].

Pedestrian Detection: Pedestrian detection is one of the most extensively studied problems in object detection due to its real-world significance. The most notable challenges are caused by small scale, pose variations, cyclists, and occlusion [30]. For instance, in the Caltech pedestrian dataset [8], 70% of pedestrians are occluded in at least one frame. The top-performing approaches on the Caltech pedestrian benchmark are variations of Fast or Faster R-CNN. SA-FastRCNN [16] and MS-CNN [2] reach competitive performance by directly addressing the scale problem using specialized multi-scale networks integrated into Fast and Faster R-CNN, respectively. Furthermore, RPN+BF [29] shows that the RPN of Faster R-CNN performs well as a standalone detector while the downstream classifier degrades performance due to collapsing bins of small-scale pedestrians. By using higher-resolution features and replacing the downstream classifier with a boosted forest, RPN+BF is able to alleviate the problem and achieve a 9.58% miss rate on the Caltech reasonable [9] setting.
F-DNN [10] also uses a derivation of the Faster R-CNN framework. Rather than using a single downstream classifier, F-DNN fuses multiple parallel classifiers, including ResNet [19] and GoogLeNet [25], using soft-reject, and further incorporates multiple training datasets to achieve an 8.65% miss rate on the Caltech reasonable setting. The majority of top-performing approaches utilize some form of an RPN, whose scores are typically discarded after selecting the proposals. In contrast, our work shows that fusing the score with the second-stage network can lead to substantial performance improvement.

Simultaneous Detection & Segmentation: There are two lines of research on simultaneous detection and segmentation. The first aims to improve the performance of both tasks, and formulates a problem commonly known as instance-aware semantic segmentation [5]. Hariharan et al. [18] predict segmentation masks using MCG [1], then get object instances using "slow" R-CNN [17] on masked image proposals. Dai et al. [6] achieve high performance on instance segmentation using an extension of Faster R-CNN in a 3-stage cascaded network including mask supervision. The second line aims to explicitly improve object detection by using segmentation as a strong cue. Early work on the topic by Fidler et al. [13] demonstrates how semantic segmentation masks can be used to extract strong features for improved object detection via a deformable part-based model. Du et al. [10] use segmentation as a strong cue in their F-DNN+SS framework. Given the segmentation mask predicted by a third parallel network, their ensemble network uses the mask in a post-processing manner to suppress background proposals, and pushes performance on the Caltech pedestrian dataset from 8.65% to 8.18% miss rate. However, the segmentation network degrades the efficiency of F-DNN+SS from 0.30 to 2.48 seconds per image, and requires multiple GPUs at inference.
In contrast, our novel framework infuses the semantic segmentation masks into shared feature maps and thus does not require a separate segmentation network, which outperforms [10] in both accuracy and network efficiency. Furthermore, our use of weak box-based segmentation masks addresses the issue of lacking pixel-wise segmentation annotations in [8,14].

Proposed method

Our proposed architecture consists of two key stages: a region proposal network (RPN) to generate candidate bounding boxes and corresponding scores, and a binary classification network (BCN) to refine their scores. In both stages, we propose a semantic segmentation infusion layer with the objective of making downstream classification a substantially easier task. The infusion layer aims to encode semantic masks into shared feature maps which naturally serve as strong cues for pedestrian classification. Due to the impressive performance of the RPN as a standalone detector, we elect to fuse the scores between stages rather than discarding them as done in prior work [2,10,27,29]. An overview of the SDS-RCNN framework is depicted in Fig. 2.

Region Proposal Network

The RPN aims to propose a set of bounding boxes with associated confidence scores around potential pedestrians. We adopt the RPN of Faster R-CNN [23] following the settings in [29]. We tailor the RPN for pedestrian detection by configuring $N_a = 9$ anchors with a fixed aspect ratio of 0.41 and spanning a scale range from 25-350 pixels, corresponding to the pedestrian statistics of Caltech [8]. Since each anchor box acts as a sliding-window detector across a pooled image space, there are $N_p = N_a \times W/f_s \times H/f_s$ total pedestrian proposals, where $f_s$ corresponds to the feature stride of the network. Hence, each proposal box $i$ corresponds to an anchor and a spatial location of image $I$. The RPN architecture uses conv1-5 from VGG-16 [24] as the backbone.
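The anchor layout described above can be sketched in a few lines. The helper below is illustrative only: the exact spacing of the 9 anchor scales between 25 and 350 pixels is not specified here, so log spacing is our assumption, and the function name is ours.

```python
import numpy as np

def generate_anchors(img_w, img_h, feat_stride=16, scales=None, aspect_ratio=0.41):
    """Enumerate anchor boxes over the pooled feature grid.

    Sketch of the layout in the text: N_a = 9 anchor heights spanning
    roughly 25-350 px with a fixed aspect ratio (w/h) of 0.41, replicated
    at every feature-map location. Log-spaced scales are an assumption.
    """
    if scales is None:
        # 9 anchor heights, log-spaced between 25 and 350 pixels
        scales = np.logspace(np.log10(25), np.log10(350), num=9)
    anchors = []
    for y in range(0, img_h, feat_stride):
        for x in range(0, img_w, feat_stride):
            for h in scales:
                w = aspect_ratio * h
                # box centered on this feature-map cell: (x1, y1, x2, y2)
                anchors.append((x - w / 2, y - h / 2, x + w / 2, y + h / 2))
    return np.array(anchors)

# Total proposal count matches N_p = N_a * (W / f_s) * (H / f_s)
boxes = generate_anchors(640, 480)
assert len(boxes) == 9 * (640 // 16) * (480 // 16)
```

Each of the $N_p$ boxes then receives an objectness score and a regression offset from the cls and bbox output layers.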
Following [23], we attach a proposal feature extraction layer to the end of the network with two sibling output layers for box classification (cls) and bounding box regression (bbox). We further add a segmentation infusion layer to conv5 as detailed in Sec. 3.3. For every proposal box $i$, the RPN aims to minimize the following joint loss function with three terms:

$L = \lambda_c \sum_i L_c(c_i, \hat{c}_i) + \lambda_r \sum_i L_r(t_i, \hat{t}_i) + \lambda_s L_s.$  (1)

The first term is the classification loss $L_c$, which is a softmax logistic loss over two classes (pedestrian vs. background). We use the standard labeling policy which considers a proposal box at location $i$ to be a pedestrian ($c_i = 1$) if it has at least 0.5 intersection over union (IoU) with a ground-truth pedestrian box, and otherwise background ($c_i = 0$). The second term seeks to improve localization via bounding box regression, which learns a transformation for each proposal box to the nearest pedestrian ground truth. Specifically, we use $L_r(t_i, \hat{t}_i) = R(t_i - \hat{t}_i)$, where $R$ is the robust (smooth $L_1$) loss defined in [16]. The bounding box transformation is defined as a 4-tuple consisting of shifts in $x, y$ and scales in $w, h$, denoted as $t = [t_x, t_y, t_w, t_h]$. The third term $L_s$ is the segmentation loss presented in Sec. 3.3.

In order to reduce multiple detections of the same pedestrian, we apply non-maximum suppression (NMS) greedily to all pairs of proposals after the transformations have been applied. We use an IoU threshold of 0.5 for NMS. We train the RPN in the Caffe [20] framework using SGD with a learning rate of 0.001, momentum of 0.9, and a mini-batch of 1 full image. During training, we randomly sample 120 proposals per image at a ratio of 1:5 for pedestrian and background proposals to help alleviate the class imbalance. All other proposals are treated as ignore. We initialize conv1-5 from a VGG-16 model pretrained on ImageNet [7], and all remaining layers randomly. Our network has four max-pooling layers (within conv1-5), hence $f_s = 16$. In our experiments, we regularize our multi-task loss terms by setting $\lambda_c = \lambda_s = 1$, $\lambda_r = 5$.
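As a concrete sketch of the regression term in Eq. (1), the smooth $L_1$ loss and the 4-tuple box transformation can be written as follows. The parameterization is the standard Fast/Faster R-CNN convention, which we assume here; function names are ours.

```python
import numpy as np

def smooth_l1(x):
    """Robust (smooth L1) loss R from Fast R-CNN, applied elementwise:
    0.5 * x^2 for |x| < 1, and |x| - 0.5 otherwise."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)

def bbox_transform(anchor, gt):
    """Regression targets t = [t_x, t_y, t_w, t_h] mapping an anchor
    (x1, y1, x2, y2) onto its matched ground-truth box, following the
    standard Faster R-CNN parameterization (assumed here)."""
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]
    acx, acy = anchor[0] + 0.5 * aw, anchor[1] + 0.5 * ah
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    gcx, gcy = gt[0] + 0.5 * gw, gt[1] + 0.5 * gh
    return np.array([(gcx - acx) / aw, (gcy - acy) / ah,
                     np.log(gw / aw), np.log(gh / ah)])

# A perfectly matched anchor yields zero targets and zero loss
t = bbox_transform((10, 10, 50, 110), (10, 10, 50, 110))
assert np.allclose(t, 0) and np.allclose(smooth_l1(t), 0)
```

During training, $L_r$ is evaluated only on proposals labeled as pedestrian, so background boxes contribute no regression gradient.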
Our network has four max-pooling layers (within conv1-5), hence f s = 16. In our experiments, we regularize our multi-task loss terms by setting λ c = λ s = 1, λ r = 5. There is no discernible difference between the non-padded masks of well-localized (a) and poorly localized (b) proposals. Binary Classification Network The BCN aims to perform pedestrian classification over the proposals of the RPN. For generic object detection, the BCN usually uses the downstream classifier of Faster R-CNN by sharing conv1-5 with the RPN, but was shown by [29] to degrade pedestrian detection accuracy. Thus, we choose to construct a separate network using VGG-16. The primary advantage of a separate network is to allow the BCN freedom to specialize in the types of "harder" samples left over from the RPN. While sharing computation is highly desirable for the sake of efficiency, the shared networks are more predestined to predict similar scores which are redundant when fused. Therefore, rather than cropping and warping a shared feature space, our BCN directly crops the top N b proposals from the RGB input image. For each proposal image i, the BCN aims to minimize the following joint loss function with two terms: L = λ c i w i L c (c i ,ĉ i ) + λ s L s .(2) Similar to RPN, the first term is the classification loss L c where c i is the class label for the ith proposal. A costsensitive weight w i is used to give precedence to detect large pedestrians over small pedestrians. There are two key motivations for this weighting policy. First, large pedestrians typically imply close proximity and are thus significantly more important to detect. Secondly, we presume that features of large pedestrians may be more helpful for detecting small pedestrians. We define the weighting function given the ith proposal with height h i and a pre-computed mean heighth as w i = 1 + hī h . The second term is the segmentation loss presented in Sec. 3.3. We make a number of significant contributions to the BCN. 
First, we change the labeling policy to encourage higher precision and further diversification from the RPN. We enforce a stricter labeling policy, requiring a proposal to have IoU > 0.7 with a ground truth pedestrian box to be considered pedestrian (c_i = 1), and background (c_i = 0) otherwise. This encourages the network to suppress poorly localized proposals and reduces false positives in the form of double detections.

Secondly, we choose to fuse the scores of the BCN with the confidence scores of the RPN at test time. Since our design explicitly encourages the two stages to diversify, we expect the classification characteristics of each network to be complementary when fused. We fuse the scores at the feature level prior to softmax. Formally, the fused score for the ith proposal, given the predicted 2-class scores from the RPN = {ĉ^r_i0, ĉ^r_i1} and BCN = {ĉ^b_i0, ĉ^b_i1}, is computed via the following softmax function:

ĉ_i = exp(ĉ^r_i1 + ĉ^b_i1) / [exp(ĉ^r_i1 + ĉ^b_i1) + exp(ĉ^r_i0 + ĉ^b_i0)].  (3)

In effect, the fused scores become more confident when the stages agree, and otherwise lean towards the dominant score. Thus, it is ideal for each network to diversify in its classification capabilities such that at least one network may be very confident for each proposal.

For a modest improvement to efficiency, we remove the pool5 layer from the VGG-16 architecture, then adjust the input size to 112 × 112 to keep the fully-connected layers intact. This is a fair trade-off since most pedestrian heights fall in the range of 30-80 pixels [8]. Hence, small pedestrian proposals are upscaled by a factor of ∼2×, allowing space for finer discrimination. We further propose to pad each proposal by 20% on all sides to provide background context and avoid partial detections, as shown in Fig. 3. We train the BCN in the Caffe [20] framework using the same settings as the RPN. We initialize conv1-5 from the trained RPN model, and all remaining layers randomly.
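The score fusion of Eq. (3) reduces to a two-class softmax over the summed stage logits; a minimal sketch (function name ours, logits given as (background, pedestrian) pairs):

```python
import math

def fuse_scores(rpn_scores, bcn_scores):
    """Fuse the per-class (background, pedestrian) logits of the RPN and
    BCN by summing them before a softmax, as in Eq. (3)."""
    bg = math.exp(rpn_scores[0] + bcn_scores[0])
    fg = math.exp(rpn_scores[1] + bcn_scores[1])
    return fg / (fg + bg)
```

When both stages are neutral the fused score is 0.5; when both agree on pedestrian, the fused confidence is sharper than either stage alone.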
During training, we set N_b = 20. During inference, we set N_b = 15 for a moderate improvement to efficiency. We regularize the multi-task loss by setting λ_c = λ_s = 1.

Figure 5. Visualization of the similarity between pixel-wise segmentation masks (from Cityscapes [5]) and weak box-based masks when downsampled in both the BCN (top) and RPN (bottom).

Simultaneous Detection & Segmentation

We approach simultaneous detection and segmentation with the motivation of making our downstream pedestrian detection task easier. We propose a segmentation infusion layer, trained on weakly annotated pedestrian boxes, which illuminates pedestrians in the shared feature maps preceding the classification layers. We integrate the infusion layer into both stages of our SDS-RCNN framework.

Segmentation Infusion Layer: The segmentation infusion layer aims to output two masks indicating the likelihood of residing on pedestrian or background segments. We choose to use only a single layer and a 1 × 1 kernel so that the impact on the shared layers will be as high as possible. This forces the network to directly infuse semantic features into the shared feature maps, as visualized in Fig. 4. A deeper network could achieve higher segmentation accuracy, but would infer less from the shared layers and diminish the overall impact on downstream pedestrian classification. Further, we choose to attach the infusion layer to conv5, since it is the deepest layer which precedes both the proposal layers of the RPN and the fully connected layers of the BCN. Formally, the final loss term L_s of both the RPN and BCN is a softmax logistic loss over two classes (pedestrian vs. background), applied to each location i, where w_i is the cost-sensitive weight introduced in Sec. 3.2:

λ_s Σ_i w_i L_s(S_i, Ŝ_i).  (4)

We choose to leverage the abundance of bounding box annotations available in popular pedestrian datasets (e.g., Caltech [8], KITTI [14]) by forming weak segmentation ground truth masks.
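As a toy illustration of such box-based weak masks (pure Python; boxes assumed to be (x1, y1, x2, y2) pixel coordinates, with the half-open convention that x2 and y2 are exclusive):

```python
def weak_mask(width, height, boxes):
    """Build a weak segmentation ground truth: 1 inside any pedestrian
    box, 0 elsewhere. Boxes are (x1, y1, x2, y2) in pixels."""
    mask = [[0] * width for _ in range(height)]
    for x1, y1, x2, y2 in boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                mask[y][x] = 1
    return mask
```

After the heavy pooling described in the text (stride f_s = 16 at conv5), such a coarse mask is nearly indistinguishable from a pixel-wise one at typical pedestrian scales.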
Each mask S ∈ R^(W×H) is generated by labeling all pedestrian box regions as S_i = 1, and background otherwise (S_i = 0). In most cases, box-based annotations would be considered too noisy for semantic segmentation. However, since we place the infusion layer at conv5, which has been pooled significantly, the differences between box-based and pixel-wise annotations diminish rapidly w.r.t. the pedestrian height (Fig. 5). For example, in the Caltech dataset 68% of pedestrians are less than 80 pixels tall, which corresponds to 3 × 5 pixels at conv5 of the RPN. Further, each of the BCN proposals is pooled to 7 × 7 at conv5. Hence, pixel-wise annotations may not offer a significant advantage over boxes at the high levels of pooling our networks undertake.

Benefits Over Detection: A significant advantage of segmentation supervision over detection is its simplicity. For detection, sensitive hyperparameters must be set, such as the anchor selection and the IoU thresholds used for labeling and NMS. If the chosen anchor scales are too sparse or the IoU threshold is too high, certain ground truths that fall near the midpoint of two anchors could be missed or receive low supervision. In contrast, semantic segmentation treats all ground truths equally, regardless of how well a pedestrian's shape or occlusion level matches the chosen set of anchors. In theory, the incorporation of semantic segmentation infusion may help reduce the sensitivity of conv1-5 to such hyperparameters. Furthermore, the segmentation supervision is especially beneficial for the second-stage BCN, which on its own would only know whether a pedestrian is present. The infusion of semantic segmentation features informs the BCN where the pedestrian is, which is critical for differentiating poorly vs. well-localized proposals.

Experiments

We evaluate our proposed SDS-RCNN on the popular Caltech [8] and KITTI [14] datasets. We perform comprehensive analysis and ablation experiments using the Caltech dataset.
We refer to our collective method as SDS-RCNN and our region proposal network as SDS-RPN. We show the performance curves compared to the state-of-the-art pedestrian detectors on Caltech in Fig. 6, and report a comprehensive overview across datasets in Table 1.

Benchmark Comparison

Caltech: The Caltech dataset [8] contains ∼350K pedestrian bounding box annotations across 10 hours of urban driving. The log-average miss rate sampled against a false positives per image (FPPI) range of [10^-2, 10^0] is used for measuring performance. A minimum IoU threshold of 0.5 is required for a detected box to match a ground truth box. For training, we sample from the standard training set according to Caltech10× [31], which contains 42,782 training images. We evaluate on the standard 4,024 images of the Caltech 1× test set using the reasonable [9] setting, which only considers pedestrians with at least 50 pixels in height and with less than 35% occlusion. SDS-RCNN achieves an impressive 7.36% miss rate, a relative improvement of 23% over the best published method, RPN+BF (9.58%). In Fig. 6, we show the ROC plot of miss rate against FPPI for the current top performing methods reported on Caltech.

We further report the performance of SDS-RPN alone (without cost-sensitive weighting, Sec. 4.2) on Caltech, as shown in Table 1. The RPN performs quite well by itself, reaching a 9.63% miss rate while processing images at roughly 3× the speed of competitive methods. Our RPN is already on par with other top detectors, which themselves contain a RPN, and it significantly outperforms other standalone RPNs such as that of [29] (14.9%). Hence, the RPN can be leveraged by other researchers to build better detectors in the future.

KITTI: The KITTI dataset [14] contains ∼80K annotations of cars, pedestrians, and cyclists. Since our focus is on pedestrian detection, we continue to use only the pedestrian class for training and evaluation.
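For reference, the Caltech log-average miss rate described above can be approximated as follows. This is a simplification (names ours): we take the nearest curve value at or below each of 9 log-spaced FPPI reference points and average in log space; the official MATLAB evaluation differs in interpolation details.

```python
import math

def log_average_miss_rate(fppi, miss_rate, lo=1e-2, hi=1.0, n=9):
    """Geometric mean of the miss rate sampled at n FPPI reference points
    spaced evenly in log space over [lo, hi]. The curve is given as
    parallel lists sorted by increasing FPPI."""
    refs = [lo * (hi / lo) ** (k / (n - 1)) for k in range(n)]
    logs = []
    for r in refs:
        mr = miss_rate[0]
        for f, m in zip(fppi, miss_rate):
            if f <= r:
                mr = m  # last curve sample not exceeding the reference
        logs.append(math.log(max(mr, 1e-10)))
    return math.exp(sum(logs) / n)
```

A flat curve at 10% miss rate, for instance, yields a log-average miss rate of 10%.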
The mean Average Precision (mAP) [11] sampled across a recall range of [0, 1] is used to measure performance. We use the standard training set of 7,481 images and evaluate on the designated test set of 7,518 images. Our method reaches a score of 63.05 mAP on the moderate setting for the pedestrian class. Surprisingly, we observe that many models which perform well on Caltech do not generalize well to KITTI, as detailed in Table 1. We expect this is due to both sensitivity to hyperparameters and the smaller training set of KITTI (∼6× smaller than Caltech10×). MS-CNN [2] is the current top performing method for pedestrian detection on KITTI. Aside from its novelty as a multi-scale object detector, MS-CNN augments the KITTI dataset by random cropping and scaling. Thus, incorporating data augmentation could alleviate the smaller training set and lead to better generalization across datasets. Furthermore, as described in the ablation study of Sec. 4.2, our weak segmentation supervision primarily improves the detection of unusual shapes and poses (e.g., cyclists, people sitting, bent over). However, in the KITTI evaluation, the person-sitting class is ignored and cyclists are counted as false positives, hence such advantages are less helpful.

Table 1. Comparison across datasets (Caltech miss rate, KITTI mAP, runtime):

  Method             Caltech  KITTI  Runtime
  DeepParts [26]      11.89   58.67  1s
  CompACT-Deep [3]    11.75   58.74  1s
  MS-CNN [2]           9.95   73.70  0.4s
  SA-FastRCNN [21]     9.68   65.01  0.59s
  RPN+BF [29]          9.58   61.29  0.60s
  F-DNN [10]           8.65   -      0.30s
  F-DNN+SS [10]        8…

Efficiency: The runtime performance of SDS-RCNN is ∼0.21s/image. We use images of size 720 × 960 pixels and a single Titan X GPU for computation. The efficiency of SDS-RCNN surpasses the current state-of-the-art methods for pedestrian detection, often by a factor of 2×. Compared to F-DNN+SS [10], which also utilizes segmentation cues, our method executes ∼10× faster. The next fastest runtime is F-DNN, which takes 0.30s/image with the caveat of requiring multiple GPUs to process networks in parallel.
Further, our SDS-RPN method achieves very competitive accuracy while taking only 0.13s/image (frequently ∼3× faster than competitive methods using a single GPU).

Ablation Study

In this section, we evaluate how each significant component of our network contributes to performance, using the reasonable set of Caltech [8]. First, we examine the impact of four components: weak segmentation supervision, proposal padding, cost-sensitive weighting, and stricter supervision. For each experiment, we start with SDS-RCNN and disable one component at a time, as summarized in Table 2. For simplicity, we disable components globally when applicable. We then provide detailed discussion of the benefits of stage-wise fusion and comprehensively report the RPN, BCN, and fused performances for all experiments. Finally, since our BCN is designed not to share features with the RPN, we closely examine how sharing weights between stages impacts network diversification and efficiency.

Weak Segmentation: The infusion of semantic features into shared layers is the most critical component of SDS-RCNN. The fused miss rate degrades by a full 3.05% when the segmentation supervision is disabled, while both individual stages degrade similarly. To better understand the types of improvements gained by weak segmentation, we perform a failure analysis between SDS-RCNN and the "baseline" (non-weak segmentation) network. For this analysis, we examine the 43 pedestrian cases which are missed when weak segmentation is disabled, but corrected otherwise. Example error corrections are shown in Fig. 7. We find that ∼48% of corrected pedestrians are at least partially occluded, and that ∼28% are pedestrians in unusual poses (e.g., sitting, cycling, or bent over). Hence, the feature maps infused with semantic features become more robust to atypical pedestrian shapes.
These benefits are likely gained because semantic segmentation has indiscriminate coverage of all pedestrians, unlike object detection, which requires specific alignment between pedestrians and anchor shapes. A similar advantage could be gained for object detection by expanding the coverage of anchors, but at the cost of computational complexity.

Proposal Padding: While padding proposals is an intuitive design choice for providing background context (Fig. 3), the benefit in practice is minor. Specifically, when proposal padding is disabled, the fused performance only worsens from 7.36% to 7.69% miss rate. Interestingly, proposal padding remains critical for the individual BCN performance, which degrades heavily from 10.98% to 13.09% without padding. The low sensitivity of the fused score to padding suggests that the RPN is already capable of localizing and differentiating between partial and full pedestrians, so improving the BCN in this respect is less significant.

Cost-sensitive: The cost-sensitive weighting scheme used to regularize the importance of large pedestrians over small pedestrians has an interesting effect on SDS-RCNN. When the cost-sensitive weighting is disabled, the RPN performance actually improves to an impressive 9.63% miss rate. In contrast, without cost-sensitive weighting the BCN degrades heavily, while the fused score degrades mildly. A logical explanation is that imposing a precedence on a single scale is counter-intuitive to the RPN achieving high recall across all scales. Further, the RPN has the freedom to learn scale-dependent features, unlike the BCN, which warps every proposal to a fixed size. Hence, the BCN gains a significant boost when encouraged to focus on large pedestrian features, which may be more scale-independent than the features of small pedestrians.

Strict Supervision: Using a stricter labeling policy while training the BCN has a substantial impact on the performance of both the BCN and the fused scores.
Recall that the strict labeling policy requires a box to have IoU > 0.7 to be considered foreground, while the standard policy requires IoU > 0.5. When the stricter labeling policy is relaxed to the standard policy, the fused performance degrades by 1.35%, and the individual BCN degrades by 6.43%, which is on par with the degradation observed when weak segmentation is disabled. We examine the failure cases of the strict versus non-strict BCN and observe that false positives caused by double detections are reduced by ∼22%. Hence, the stricter policy enables more aggressive suppression of poorly localized boxes and therefore reduces the double detections produced as localization errors of the RPN.

Stage Fusion: The power of stage-wise fusion relies on the assumption that each network will diversify in its classification characteristics. Our design explicitly encourages this diversification by using separate labeling policies and training distributions for the RPN and BCN. Table 2 shows that although fusion is useful in every case, it is difficult to anticipate how well any two stages will perform when fused without examining their specific strengths and weaknesses. To better understand this effect, we visualize how fusion behaves when the RPN and BCN disagree (Fig. 8), considering only boxes for which the two networks disagree at a decision threshold of 0.5. Both networks agree on the majority of boxes (∼80K), but we observe an interesting trend when they disagree. The visualization clearly shows that the RPN tends to predict a significant number of background proposals with high scores, which are corrected after being fused with the BCN scores. The inverse is true for disagreements on the foreground, where fusion corrects the majority of pedestrian boxes given low scores by the BCN. Whenever the two networks disagree, the fused result tends toward the true score in more than ∼80% of the conflicts.
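To illustrate the strict vs. standard labeling policies discussed above, the sketch below (helper names ours) labels a slightly shifted proposal as pedestrian under the standard IoU > 0.5 rule but as background under the stricter IoU > 0.7 rule:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def label(proposal, gt, strict=True):
    """RPN-style labeling uses IoU > 0.5; the BCN's stricter policy
    requires IoU > 0.7 before a proposal counts as pedestrian."""
    return 1 if iou(proposal, gt) > (0.7 if strict else 0.5) else 0
```

A proposal shifted two pixels off a 10 × 20 ground truth box has IoU ≈ 0.67, so it is treated as foreground by the RPN but suppressed as background by the strict BCN policy.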
Sharing Features: Since we train a separate RPN and BCN without sharing features, we conduct comprehensive experiments using different levels of stage-wise sharing in order to understand the value of diversification as a trade-off against efficiency. We adopt the Faster R-CNN feature sharing scheme with five variations differing in the point of sharing (conv1-5), as detailed in Table 3. In each experiment, we keep all layers of the BCN except those before and including the shared layer; doing so keeps the effective depth of the BCN unchanged. For example, if the shared layer is conv4, then we replace conv1-4 of the BCN with a RoIPooling layer connected to conv4 of the RPN. We configure the RoIPooling layer to pool to the resolution of the BCN at the shared layer (e.g., conv4 → 14 × 14, conv5 → 7 × 7).

Table 3. Stage-wise sharing experiments, which demonstrate the trade-off between runtime efficiency and accuracy on the Caltech dataset. As sharing is increased from RGB (no sharing) to conv5, both the BCN and fused miss rates (MR) become less effective.

We observe that as the amount of sharing is increased, the overall fused performance degrades quickly. Overall, the results suggest that forcing the networks to share feature maps lowers their freedom to diversify and complement each other in fusion. In other words, the more the networks share, the more susceptible they become to redundancies. Further, sharing features up to conv1 becomes slower than no stage-wise sharing (i.e., RGB). This is caused by the increased number of channels and the higher resolution of the conv1 feature map (e.g., 720 × 960 × 64), which must be cropped and warped. Compared to sharing feature maps at conv3, using no sharing results in a very minor slowdown of 0.03 seconds while providing a 1.30% improvement in miss rate. Hence, our network design favors maximum precision for a reasonable trade-off in efficiency, and obtains speeds generally 2× faster than competitive methods.
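The RoIPooling resolutions quoted above follow from matching the BCN's feature size at the shared layer for a 112 × 112 input. A sketch, assuming standard VGG-16 cumulative strides at each block (the conv2/conv3 strides are an assumption; conv1 = 1, conv4 = 8, and conv5 = 16 are consistent with the text):

```python
# Cumulative downsampling stride at the output of each VGG-16 conv block.
STRIDES = {"conv1": 1, "conv2": 2, "conv3": 4, "conv4": 8, "conv5": 16}

def roi_pool_size(shared_layer, input_size=112):
    """RoIPooling target resolution so the cropped RPN feature map matches
    what the BCN would produce at that layer from a 112x112 input."""
    return input_size // STRIDES[shared_layer]
```

This reproduces the conv4 → 14 × 14 and conv5 → 7 × 7 configurations mentioned above, and also shows why sharing at conv1 is expensive: the pooled crop would be the full 112 × 112.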
Conclusion

We present a multi-task infusion framework for joint supervision of pedestrian detection and semantic segmentation. The segmentation infusion layer results in more sophisticated shared feature maps which tend to illuminate pedestrians and make downstream pedestrian detection easier. We analyze how infusing segmentation masks into feature maps helps correct pedestrian detection errors, and observe that the network becomes more robust to pedestrian poses and occlusion than it is without infusion. We further demonstrate the effectiveness of fusing stage-wise scores and encouraging network diversification between stages, such that the second-stage classifier can learn a stricter filter to suppress background proposals and become more robust to poorly localized boxes. With our SDS-RCNN framework, we report new state-of-the-art performance on the Caltech pedestrian dataset (a 23% relative reduction in error), achieve competitive results on the KITTI dataset, and obtain an impressive runtime approximately 2× faster than competitive methods.

Figure 1. Detection results on the Caltech test set (left), feature map visualization from the RPN of conventional Faster R-CNN (middle), and feature map visualization of SDS-RCNN (right).

Figure 2. Overview of the proposed SDS-RCNN framework. The segmentation layer infuses semantic features into the shared conv1-5 layers of each stage, thus illuminating pedestrians and easing downstream pedestrian detection (proposal layers in the RPN, and FC1-2 in the BCN).

Figure 3. Example proposal masks with and without padding.

Figure 4. Feature map visualizations of conv5 and the proposal layer for the baseline RPN (left) and the RPN infused with weak segmentation supervision (right).

Figure 6. Comparison of SDS-RCNN with the state-of-the-art methods on the Caltech dataset using the reasonable setting.

Figure 7. Example error sources which are corrected by infusing semantic segmentation into shared layers.
Row 1 shows the test images from Caltech1×. Row 2 shows a visualization of the RPN proposal layer using the baseline network, which fails on these examples. Row 3 shows a visualization of the proposal layer from SDS-RCNN, which corrects the errors. Collectively, occlusion and unusual poses of pedestrians (sitting, cyclist, bent over) make up 75% of the corrections, suggesting that the segmentation supervision naturally informs the shared features about robust pedestrian parts and shape information.

Figure 8. Visualization of the diversification between the RPN and BCN classification characteristics. We plot only boxes which the RPN and BCN of SDS-RCNN disagree on using a threshold of 0.5. The BCN drastically reduces false positives of the RPN, while the RPN corrects many missed detections by the BCN.

Table 3:

  Shared Layer  BCN MR  Fused MR  Runtime
  conv5         16.24   10.87     0.15s
  conv4         15.53   10.42     0.16s
  conv3         14.28    8.66     0.18s
  conv2         13.71    8.33     0.21s
  conv1         14.02    8.28     0.25s
  RGB           10.98    7.36     0.21s

References

[1] P. Arbeláez, J. Pont-Tuset, J. T. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, pages 328-335, 2014.
[2] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos. A unified multi-scale deep convolutional neural network for fast object detection. In ECCV, pages 354-370. Springer, 2016.
[3] Z. Cai, M. Saberian, and N. Vasconcelos. Learning complexity-aware cascades for deep pedestrian detection. In ICCV, pages 3361-3369, 2015.
[4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv preprint arXiv:1606.00915, 2016.
[5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, pages 3213-3223, 2016.
[6] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, pages 3150-3158, 2016.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
[8] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: A benchmark. In CVPR, pages 304-311. IEEE, 2009.
[9] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):743-761, 2012.
[10] X. Du, M. El-Khamy, J. Lee, and L. S. Davis. Fused dnn: A deep neural network fusion approach to fast and robust pedestrian detection. arXiv preprint arXiv:1610.03466, 2016.
[11] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results.
[12] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results.
[13] S. Fidler, R. Mottaghi, A. Yuille, and R. Urtasun. Bottom-up segmentation for top-down detection. In CVPR, pages 3294-3301, 2013.
[14] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The kitti vision benchmark suite. In CVPR, 2012.
[15] S. Gidaris and N. Komodakis. Object detection via a multi-region and semantic segmentation-aware cnn model. In ICCV, pages 1134-1142, 2015.
[16] R. Girshick. Fast r-cnn. In ICCV, pages 1440-1448, 2015.
[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580-587, 2014.
[18] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, pages 297-312. Springer, 2014.
[19] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
[20] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan. Scale-aware fast r-cnn for pedestrian detection. arXiv preprint arXiv:1510.08160, 2015.
[22] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740-755. Springer, 2014.
[23] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1-9, 2015.
[26] Y. Tian, P. Luo, X. Wang, and X. Tang. Deep learning strong parts for pedestrian detection. In ICCV, pages 1904-1912, 2015.
[27] S. Tsogkas, I. Kokkinos, G. Papandreou, and A. Vedaldi. Deep learning for semantic part segmentation with high-level guidance. arXiv preprint arXiv:1505.02438, 2015.
[28] F. Yang, W. Choi, and Y. Lin. Exploit all the layers: Fast and accurate cnn object detector with scale dependent pooling and cascaded rejection classifiers. In CVPR, pages 2129-2137, 2016.
[29] L. Zhang, L. Lin, X. Liang, and K. He. Is faster r-cnn doing well for pedestrian detection? arXiv preprint arXiv:1607.07032, 2016.
[30] S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele. How far are we from solving pedestrian detection? In CVPR, pages 1259-1267, 2016.
[31] S. Zhang, R. Benenson, and B. Schiele. Filtered channel features for pedestrian detection. In CVPR, pages 1751-1760. IEEE, 2015.
[]
[ "End-to-End Training of Deep Visuomotor Policies", "End-to-End Training of Deep Visuomotor Policies" ]
[ "Sergey Levine [email protected] \nDepartment of Electrical Engineering and Computer Sciences\nUC Berkeley\n\n", "Chelsea Finn [email protected] \nDepartment of Electrical Engineering and Computer Sciences\nUC Berkeley\n\n", "Trevor Darrell [email protected] \nDepartment of Electrical Engineering and Computer Sciences\nUC Berkeley\n\n", "Pieter Abbeel [email protected] \nDepartment of Electrical Engineering and Computer Sciences\nUC Berkeley\n\n" ]
[ "Department of Electrical Engineering and Computer Sciences\nUC Berkeley\n", "Department of Electrical Engineering and Computer Sciences\nUC Berkeley\n", "Department of Electrical Engineering and Computer Sciences\nUC Berkeley\n", "Department of Electrical Engineering and Computer Sciences\nUC Berkeley\n" ]
[]
Policy search methods based on reinforcement learning and optimal control can allow robots to automatically learn a wide range of tasks. However, practical applications of policy search tend to require the policy to be supported by hand-engineered components for perception, state estimation, and low-level control. We propose a method for learning policies that map raw, low-level observations, consisting of joint angles and camera images, directly to the torques at the robot's joints. The policies are represented as deep convolutional neural networks (CNNs) with 92,000 parameters. The high dimensionality of such policies poses a tremendous challenge for policy search. To address this challenge, we develop a sensorimotor guided policy search method that can handle high-dimensional policies and partially observed tasks. We use BADMM to decompose policy search into an optimal control phase and supervised learning phase, allowing CNN policies to be trained with standard supervised learning techniques. This method can learn a number of manipulation tasks that require close coordination between vision and control, including inserting a block into a shape sorting cube, screwing on a bottle cap, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a clothes rack.
null
[ "https://arxiv.org/pdf/1504.00702v1.pdf" ]
7,242,892
1504.00702
46770857849d76076954a9a9f508ae887693b59e
End-to-End Training of Deep Visuomotor Policies

I. INTRODUCTION

Reinforcement learning and policy search methods hold the promise of allowing robots to acquire new behaviors through experience.
They have been applied to a range of robotic tasks, including manipulation [2,13] and locomotion [5,7,15,39]. However, policies learned using such methods often rely on a number of hand-engineered components for perception and low-level control. The policy might specify a trajectory in task space, relying on hand-designed PD controllers to execute the desired motion, and a policy for manipulating objects might rely on an existing vision system to localize these objects [29]. The vision system in particular can be complex and prone to errors, and its performance is typically not improved during policy training, nor adapted to the goal of the task. We propose a method for learning policies that directly map raw observations, including joint angles and camera images, to motor torques. The policies are trained end-to-end using real-world experience, optimizing both the control and perception components on the same measure of task performance. This allows the policy to learn goal-driven perception, which avoids the mistakes that are most costly for task performance. Learning perception and control in a general and flexible way requires a large, expressive model. Our policies are represented with convolutional neural networks (CNNs), which have 92,000 parameters and 7 layers. Deep CNN models have been shown to achieve state-of-the-art results on a number of supervised vision tasks [8,16,40], but sensorimotor deep learning remains a challenging prospect. The policies are extremely high dimensional, and the control task is partially observed, since part of the state must be inferred from images. To address these challenges, we extend the framework of guided policy search to sensorimotor deep learning. (Technical report published 2/22/2015 as BVLC Report Series #100. Asterisk (*) denotes authors who contributed equally to the work.)
Guided policy search decomposes the policy learning problem into two phases: a trajectory optimization phase that determines how to solve the task in a few specific conditions, and a supervised learning phase that trains the policy from these successful executions with supervised learning [22]. Since the CNN policy is trained with supervised learning, we can use the tools developed in the deep learning community to make this phase simple and efficient. We handle the partial observability of visuomotor control by optimizing the trajectories with full state information, while providing only partial observations (consisting of images and robot configurations) to the policy. The trajectories are optimized under unknown dynamics, using real-world experience and minimal prior knowledge. The main contribution of our work is a method for end-to-end training of deep visuomotor policies for robotic manipulation. We propose a partially observed guided policy search algorithm that can train high-dimensional policies for tasks where part of the state must be determined from camera images. We also introduce a novel CNN architecture designed for robotic control, shown in Figure 2. The vision layers of this CNN are designed for localizing points of interest in an image, unlike standard vision architectures that discard locational information to induce translational invariance [16]. We evaluate our method by learning policies for inserting a block into a shape sorting cube, screwing a cap onto a bottle, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a rack (see Figure 1). Our results demonstrate clear improvements in consistency and generalization from training visuomotor policies end-to-end, when compared to using the poses or features produced by a CNN trained for 3D object localization. II.
RELATED WORK Reinforcement learning and policy search have been applied in robotics for playing games such as table tennis [13], object manipulation [2,29], and locomotion [5,7,15,39]. Several recent papers provide surveys of policy search in robotics [3,14]. Such methods are typically applied to one component of the robot control pipeline, which often sits on top of a hand-designed controller, such as a PD controller, and accepts processed input, for example from an existing vision pipeline [29]. Our method trains policies that map visual input and joint encoder signals directly to the torques at the robot's joints. By learning the entire mapping from perception to control, the perception layers can be adapted to optimize task performance. We represent our policies with convolutional neural networks (CNNs). CNNs have recently achieved dramatic improvements on a number of vision benchmarks [8,16,40]. Most applications of CNNs focus on classification, where locational information is intentionally discarded by means of successive pooling layers [18]. Applications to localization typically either use a sliding window to localize the object, reducing the task to classification [8], perform regression to a heatmap of manually labeled keypoints [40], requiring precise knowledge of the object position in the image and camera calibration, or use 3D models to localize previously scanned objects [30,36]. We use a novel CNN architecture that automatically learns feature points without any supervision beyond the information from the robot's encoders and camera. Unlike with visual recognition, applications of deep networks to robotic control have been comparatively limited. Backpropagation through the dynamics and the image formation process is impractical, since they are often nondifferentiable, and such long-range backpropagation leads to extreme numerical instability. The high dimensionality of the network also makes reinforcement learning very difficult [3]. 
Pioneering early work on neural network control used small, simple networks [10,33], and has largely been supplanted by methods that use carefully designed policies that can be learned efficiently with reinforcement learning [14]. More recent work on sensorimotor deep learning has tackled simple task-space motion [17] and used unsupervised learning to obtain low-dimensional state spaces from images [34], but such methods are limited to tasks with a low-dimensional structure. CNNs have also been trained to play video games with temporal difference learning and Monte Carlo tree search [9,26]. However, such methods have only been demonstrated on discrete, synthetic domains, and require an impractical number of samples for real-world robotic applications. Our method is sample efficient, requiring only minutes of interaction time. To the best of our knowledge, this is the first method that can train deep visuomotor policies for complex, high-dimensional manipulation skills with direct torque control. Learning visuomotor policies end-to-end introduces two key challenges: partial observability and the high dimensionality of the policy. We tackle these challenges using guided policy search. In guided policy search, the policy is optimized using supervised learning, which scales gracefully with the dimensionality of the function approximator. The training set for this supervised learning procedure can be constructed from example demonstrations [20], trajectory optimization under known dynamics [21,22,28], and trajectory-centric reinforcement learning methods that operate under unknown dynamics [19,23], which is the approach taken in this work. We propose a new, partially observed guided policy search method based on the Bregman alternating direction method of multipliers (BADMM) that makes it practical to train complex, generalizable policies under partial observation.
The goal of our approach is also similar to visual servoing, which performs feedback control on feature points in a camera image [6,27,42]. However, our visuomotor policies are entirely learned from real-world data, and do not require feature points or feedback controllers to be specified by hand. This gives our method considerable flexibility in choosing how to use the visual signal. Furthermore, our approach does not require any sort of camera calibration, in contrast to many visual servoing methods (though not all; see e.g. [11,43]).

III. OVERVIEW

The aim of our method is to learn a policy π_θ(u_t | o_t) that specifies a distribution over actions u_t conditioned on the observation o_t, which includes a camera image and the configuration of the robot. The policy parameters θ are optimized to minimize a cost function ℓ(x_t, u_t) over the course of a fixed-length episode. The actions u_t are the motor torques, and the state x_t includes the known robot configuration as well as (for example) the target position for an object placement task or the grasp pose. The latter information is not observed directly by the policy, and must be inferred from the camera image. We represent π_θ(u_t | o_t) as a Gaussian, with the mean given by a nonlinear function approximator. Since this function approximator needs to operate directly on raw images, we use convolutional neural networks (CNNs), which have enjoyed considerable success in computer vision [16]. The architecture of our CNN is shown in Figure 2. This network has 7 layers and around 92,000 parameters, which presents a tremendous challenge for standard policy search methods [3]. To handle this high dimensionality and the challenge of partial observability, we extend the framework of guided policy search. In guided policy search, the policy is trained with supervised learning, which scales well even to very high-dimensional function approximators.
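The conditionally Gaussian policy described above, with its mean given by a nonlinear function approximator, can be sketched in a few lines. This is an illustration only: the tiny fully connected network below is a stand-in for the paper's CNN mean, and the fixed covariance, layer sizes, and random weights are all assumptions.

```python
import numpy as np

class GaussianPolicy:
    """Conditionally Gaussian policy: u ~ N(mu(o), Sigma), with the mean given
    by a small fully connected network (random placeholder weights, standing in
    for the CNN mean used in the paper)."""

    def __init__(self, obs_dim, act_dim, hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, size=(hidden, obs_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, size=(act_dim, hidden))
        self.b2 = np.zeros(act_dim)
        self.Sigma = 0.01 * np.eye(act_dim)  # fixed covariance (an assumption)

    def mean(self, o):
        # One hidden layer of rectified units, then a linear output.
        h = np.maximum(0.0, self.W1 @ o + self.b1)
        return self.W2 @ h + self.b2

    def sample(self, o, rng):
        return rng.multivariate_normal(self.mean(o), self.Sigma)
```

Any nonlinear mean function can be dropped in here; the point is only that the stochastic policy is fully specified by a mean network and a covariance.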
To construct the training set for supervised learning, we employ a trajectory-centric reinforcement learning method that finds good trajectories from a number of initial states. This phase does not require knowledge of the system dynamics, but does use the full state x_t, which makes learning very efficient.

[Fig. 2: Visuomotor policy architecture. The network contains three convolutional layers, followed by a spatial softmax and an expected position layer that converts pixel-wise features to feature points, which are better suited for spatial computations. The points are concatenated with the robot configuration, then passed through three fully connected layers to produce the torques.]

For example, if the unobserved part of x_t is the position of a target object, such as the bottle, we can hold this object in the robot's left gripper, while the right arm performs the task. This type of instrumented training is a natural fit for many robotic tasks, where the training is performed in a controlled environment, but the final policy must be able to succeed "in the wild." Naïve supervised learning will often fail to produce a good policy, since a small mistake on the part of the policy will put it in states that are not part of the training, causing compounding errors. To avoid this problem, the training data must come from the policy's own state distribution [35]. We use BADMM [41] to adapt the trajectories to the policy, alternating between optimizing the policy to match the trajectories, and optimizing the trajectories to minimize cost and match the policy, such that at convergence, they have the same state distribution.

IV. PARTIALLY OBSERVED GUIDED POLICY SEARCH

Guided policy search methods transform policy search into a supervised learning problem, where the training set is generated by simple trajectory-centric algorithms. The trajectory phase produces Gaussian trajectory distributions p_i(τ), which correspond to a mean trajectory with linear feedback.
Each p_i(τ) succeeds from a specific initial state. For example, in the task of placing a cap on a bottle, these initial states correspond to the positions of the bottle. By training on multiple trajectories for multiple bottle positions, the final CNN policy can succeed from all initial states, and can generalize to other states from the same distribution.

[Algorithm diagram: in the outer loop, each p_i(u_t | x_t) is run on the robot to collect samples τ_i; in the inner loop, the dynamics are fitted, each p_i(τ) is optimized with respect to L_p, and π_θ is optimized with respect to L_θ.]

We present a partially observed guided policy search method that uses BADMM to iteratively enforce agreement between the policy π_θ(u_t | o_t) and the trajectory distributions p_i(τ). A diagram of this method is shown above. In the outer loop, we draw a sample for each initial state on the real system. The samples are used to fit the dynamics for trajectory optimization, and serve as training data for the policy. The inner loop alternates between optimizing each p_i(τ) and optimizing the policy. Unlike prior guided policy search methods, the policy is trained on observations o_t, allowing the method to handle partial observability, while the trajectories are optimized on the full state x_t. Using BADMM allows us to formulate simple and efficient optimizations for both inner loop phases. We derive this algorithm below, followed by a discussion of the two inner loop phases and a comparison with prior guided policy search methods.

A. Algorithm Derivation

Policy search methods minimize the expected cost E_{π_θ}[ℓ(τ)] of the policy π_θ, where τ = {x_1, u_1, . . . , x_T, u_T} is a trajectory, and ℓ(τ) = Σ_{t=1}^T ℓ(x_t, u_t) is the cost of an episode. In the fully observed case, the expectation is taken under π_θ(τ) = p(x_1) Π_{t=1}^T π_θ(u_t | x_t) p(x_{t+1} | x_t, u_t). For a partially observed task, we only know π_θ(u_t | o_t), but π_θ(u_t | x_t) can be recovered as π_θ(u_t | x_t) = ∫ π_θ(u_t | o_t) p(o_t | x_t) do_t.
We will present the derivation in this section for π_θ(u_t | x_t), but we do not require knowledge of p(o_t | x_t) in the final algorithm. As discussed in Section IV-C, the integral will be evaluated with samples from the real system. We begin by rewriting the expected cost minimization as a constrained problem:

min_{p, π_θ} E_p[ℓ(τ)] s.t. p(u_t | x_t) = π_θ(u_t | x_t) ∀ x_t, u_t, t,   (1)

where p(τ) is another distribution. This formulation is equivalent to the original problem, since the constraint forces the two distributions to be identical. However, if we approximate the initial state distribution p(x_1) with samples x_1^i, we can choose p(τ) to be a class of distributions that is much easier to optimize than π_θ, as we will show later. The constrained problem can be solved by a dual descent method, which alternates between minimizing the Lagrangian with respect to the primal variables, and incrementing the Lagrange multipliers by their subgradient. Minimization of the Lagrangian with respect to p(τ) and θ is done in alternating fashion: minimizing with respect to θ corresponds to supervised learning (making π_θ match p(τ)), and minimizing with respect to p(τ) consists of one or more trajectory optimization problems. The dual descent method we use is based on BADMM [41], a variant of ADMM that augments the Lagrangian with a Bregman divergence between the constrained variables.
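The dual-descent structure just described can be illustrated on a toy problem. The sketch below is an illustration, not the paper's implementation: the distributions are collapsed to scalar "action means" for the trajectory (u_p) and policy (u_pi), the Bregman/KL terms are simplified to quadratic penalties, and the target, weights, and step size are all assumptions.

```python
# Toy dual-descent alternation: a scalar "trajectory action" u_p minimizing a
# quadratic cost, and a scalar "policy action" u_pi constrained to match it.
# The KL penalty terms of BADMM are replaced by quadratic penalties here,
# purely for illustration.

def badmm_toy(target=1.0, nu=1.0, alpha=1.0, iters=100):
    u_p, u_pi, lam = 0.0, 0.0, 0.0
    for _ in range(iters):
        # theta-step: minimize lam*u_pi + nu*(u_pi - u_p)^2 in closed form.
        u_pi = u_p - lam / (2.0 * nu)
        # p-step: minimize (u_p - target)^2 - lam*u_p + nu*(u_p - u_pi)^2.
        u_p = (2.0 * target + lam + 2.0 * nu * u_pi) / (2.0 + 2.0 * nu)
        # Dual step: increment the multiplier by the disagreement.
        lam += alpha * nu * (u_pi - u_p)
    return u_p, u_pi, lam
```

At convergence the two "distributions" agree and sit at the cost minimum, mirroring the claim that at convergence the trajectories and the policy match while the trajectories still minimize cost.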
We use the KL-divergence and replace the constraint with the equivalent constraint p(u_t | x_t) p(x_t) = π_θ(u_t | x_t) p(x_t), which yields iterations with the following steps:

θ ← argmin_θ Σ_{t=1}^T E_{p(x_t) π_θ(u_t | x_t)}[λ_{x_t, u_t}] + ν_t φ_t^θ(θ, p)

p ← argmin_p Σ_{t=1}^T E_{p(x_t, u_t)}[ℓ(x_t, u_t) − λ_{x_t, u_t}] + ν_t φ_t^p(p, θ)

λ_{x_t, u_t} ← λ_{x_t, u_t} + α ν_t (π_θ(u_t | x_t) p(x_t) − p(u_t | x_t) p(x_t)),

where λ_{x_t, u_t} is the Lagrange multiplier for state x_t and action u_t at time t, φ_t^p(p, θ) = E_{p(x_t)}[D_KL(p(u_t | x_t) ‖ π_θ(u_t | x_t))] and φ_t^θ(θ, p) = E_{p(x_t)}[D_KL(π_θ(u_t | x_t) ‖ p(u_t | x_t))] are the KL-divergence terms, and α is a step size. The dynamics only affect the optimization with respect to p(τ). In order to make this optimization efficient, we choose p(τ) to be a mixture of N Gaussians p_i(τ), one for each initial state sample x_1^i. This makes the action conditionals p_i(u_t | x_t) and the dynamics p_i(x_{t+1} | x_t, u_t) linear Gaussian. This is a reasonable choice when the system is deterministic, or the noise is Gaussian or small, and we found that this approach is sufficiently tolerant to noise for use on real physical systems. Our choice of p also assumes that the policy π_θ(u_t | o_t) is conditionally Gaussian. This is also reasonable, since the mean and covariance of π_θ(u_t | o_t) can be any nonlinear function of the observations o_t, which themselves are a function of the unobserved state x_t. In Section IV-B, we show how these assumptions enable each p_i(τ) to be optimized very efficiently. But first, we must choose a tractable way to represent the infinite set of constraints p(u_t | x_t) p(x_t) = π_θ(u_t | x_t) p(x_t). One approach proposed in prior work on policy search for approximating such infinite constraints is to replace them with expectations of features [32].
When the features consist of linear, quadratic, or higher order monomial functions of the random variable, this can be viewed as a constraint on the moments of the distributions. If we only use the first moment, we get a constraint on the expected action: E_{p(u_t | x_t) p(x_t)}[u_t] = E_{π_θ(u_t | x_t) p(x_t)}[u_t]. If the stochasticity in the dynamics is low, as we assumed previously, the optimal solution for each p_i(τ) will have low entropy, making this first moment constraint a reasonable approximation. Furthermore, the KL-divergence terms in the augmented Lagrangians will still serve to softly enforce agreement between the higher moments. While this simplification is quite drastic, we found that it was more stable in practice than including higher moments. The alternating optimization is now given by

θ ← argmin_θ Σ_{t=1}^T E_{p(x_t) π_θ(u_t | x_t)}[u_t^T λ_{μt}] + ν_t φ_t^θ(θ, p)

p ← argmin_p Σ_{t=1}^T E_{p(x_t, u_t)}[ℓ(x_t, u_t) − u_t^T λ_{μt}] + ν_t φ_t^p(p, θ)

λ_{μt} ← λ_{μt} + α ν_t (E_{π_θ(u_t | x_t) p(x_t)}[u_t] − E_{p(u_t | x_t) p(x_t)}[u_t]),

where λ_{μt} is the Lagrange multiplier on the expected action at time t. In the algorithm diagram, we use L_θ(θ, p) and L_p(p, θ) as shorthand for the two augmented Lagrangians minimized with respect to θ and p, respectively. In the next two sections, we will describe how L_p(p, θ) can be optimized with respect to p under unknown dynamics, and how L_θ(θ, p) can be optimized for complex, high-dimensional policies. Implementation details of the BADMM optimization are presented in the supplementary appendix.

B. Trajectory Optimization under Unknown Dynamics

Since the Lagrangian L_p(p, θ) in the previous section factorizes over the mixture elements in p(τ) = Σ_i p_i(τ), we describe the trajectory optimization method for a single Gaussian p(τ). When there are multiple mixture elements, this procedure is applied in parallel to each p_i(τ).
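Under unknown dynamics, the first step in optimizing each p_i(τ) is fitting time-varying linear-Gaussian dynamics to sampled transitions, which reduces to linear regression. A minimal sketch of such a fit (my own function and variable names, and omitting the Gaussian mixture model prior that the paper uses to reduce sample complexity):

```python
import numpy as np

def fit_linear_gaussian_dynamics(X, U, X_next):
    """Fit x_{t+1} ~ N(F [x; u] + f, Sigma) by least squares.
    X: (N, dx) states, U: (N, du) actions, X_next: (N, dx) next states."""
    XU = np.hstack([X, U, np.ones((X.shape[0], 1))])  # augment with a bias column
    W, *_ = np.linalg.lstsq(XU, X_next, rcond=None)
    F, f = W[:-1].T, W[-1]                            # linear term and offset
    resid = X_next - XU @ W
    # Residual covariance estimate (Gaussian noise model).
    Sigma = resid.T @ resid / max(X.shape[0] - XU.shape[1], 1)
    return F, f, Sigma
```

In practice one such fit would be made per timestep from the sampled rollouts; the regression recovers the local linearization of the true dynamics when the noise is small.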
The derivation follows prior work [19], and we describe novel modifications for improving performance and sample efficiency at the end. Since p(τ) is Gaussian, the conditionals p(x_{t+1} | x_t, u_t) and p(u_t | x_t), which correspond to the dynamics and the controller, are linear-Gaussian. The dynamics are determined by the environment. If the dynamics are known, p(u_t | x_t) can be optimized with a variant of the iterative linear-quadratic regulator [20,24]. In the case of unknown dynamics, we can fit p(x_{t+1} | x_t, u_t) to sample trajectories gathered from the trajectory distribution at the previous iteration, denoted p̂(τ). If p̂(τ) is too different from p(τ), these samples will not give a good estimate of p(x_{t+1} | x_t, u_t), and the optimization will diverge. To avoid this, we can bound the change from p̂(τ) to p(τ) in terms of their KL-divergence by a step size ε, resulting in the following constrained problem:

min_{p(τ) ∈ N(τ)} L_p(p, θ) s.t. D_KL(p(τ) ‖ p̂(τ)) ≤ ε.

This type of policy update has previously been proposed by several authors in the context of policy search [1,19,31,32]. In the case when p(τ) is Gaussian, this problem can be solved efficiently using dual gradient descent, while the dynamics p(x_{t+1} | x_t, u_t) are fitted to samples gathered by running the previous controller p̂(u_t | x_t) on the robot. Fitting a global Gaussian mixture model to tuples (x_t, u_t, x_{t+1}) and using it as a prior for fitting the dynamics p(x_{t+1} | x_t, u_t) serves to greatly reduce the sample complexity. Note that this constrained optimization is performed in the "inner loop" of the optimization described in the previous section. The overall algorithm then becomes an instance of generalized BADMM [41]. Note that the augmented Lagrangian L_p(p, θ) consists of an expectation under p(τ) of a quantity that is independent of p.
We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(x_t, u_t), and fitting a linear-Gaussian to π_θ(u_t | x_t) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. This is significantly simpler and much faster than the forward-backward dynamic programming procedure employed in previous work [19,22]. This improvement is enabled by the use of BADMM, which allows us to always formulate the KL-divergence term in the Lagrangian with the distribution being optimized as the first argument. Since the KL-divergence is convex in its first argument, this makes the corresponding optimization significantly easier. We also depart from previous work by allowing samples from multiple trajectories p_i(τ) to be used to fit a shared dynamics p(x_{t+1} | x_t, u_t), while the controllers p_i(u_t | x_t) are allowed to vary. This makes sense when the initial states of these trajectories are similar, and they therefore visit similar regions. This allows us to draw just a single sample from each p_i(τ) at each iteration, allowing us to handle many more initial states. Prior work required around 5 samples per initial state, and was therefore limited to only a few initial states [23].

C. Supervised Policy Optimization

Since the policy parameters θ participate only in the constraints of the optimization problem in Equation 1, optimizing the policy corresponds to minimizing the KL-divergence between the policy and trajectory distribution, as well as the expectation of λ_{μt}^T u_t.
For a conditional Gaussian policy of the form π_θ(u_t | o_t) = N(μ^π(o_t), Σ^π(o_t)), the objective is

L_θ(θ, p) = (1 / 2N) Σ_{i=1}^N Σ_{t=1}^T E_{p_i(x_t, o_t)}[ tr[C_ti^{-1} Σ^π(o_t)] − log |Σ^π(o_t)| + (μ^π(o_t) − μ_ti^p(x_t))^T C_ti^{-1} (μ^π(o_t) − μ_ti^p(x_t)) + 2 λ_{μt}^T μ^π(o_t) ],

where μ_ti^p(x_t) is the mean of p_i(u_t | x_t) and C_ti is the covariance, and the expectation is evaluated using samples from each p_i(τ) with corresponding observations o_t. The observations are sampled from p(o_t | x_t) by recording camera images on the real system. Since the input to μ^π(o_t) and Σ^π(o_t) is not the state x_t, but only an observation o_t, we can train the policy for partially observed tasks. Note that L_θ(θ, p) is simply a weighted quadratic loss on the difference between the policy mean and the mean action of the trajectory distribution, offset by the Lagrange multiplier. The weighting is the precision matrix of the conditional in the trajectory distribution, which is equal to the curvature of its cost-to-go function [20]. This has an intuitive interpretation: L_θ(θ, p) penalizes deviation from the trajectory distribution, with a penalty that is proportional to the approximate cost-to-go function. We optimize L_θ(θ, p) with respect to θ using stochastic gradient descent (SGD), a standard method for neural network training. The covariance of the Gaussian policy does not depend on the observation in our prototype, though adding this dependence would be straightforward. Since training complex neural networks requires a substantial number of samples, we found it beneficial to include sampled observations from previous iterations into the policy optimization, evaluating the action μ_ti^p(x_t) at their corresponding states using the current trajectory distributions.
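The mean terms of the objective L_θ(θ, p) above reduce to a precision-weighted quadratic with a linear Lagrange-multiplier offset. A sketch of that loss (my own function and variable names; the covariance terms, which do not involve the policy mean, are dropped):

```python
import numpy as np

def policy_mean_loss(mu_pi, mu_p, prec, lam):
    """Mean terms of the supervised policy objective: for each sample,
    (mu_pi - mu_p)^T prec (mu_pi - mu_p) + 2 lam^T mu_pi, averaged over samples.
    mu_pi, mu_p: (N, d) policy and trajectory action means;
    prec: (N, d, d) precision matrices; lam: (d,) multiplier."""
    diff = mu_pi - mu_p
    quad = np.einsum('ni,nij,nj->n', diff, prec, diff)  # per-sample quadratic
    return float(np.mean(quad + 2.0 * (mu_pi @ lam)))
```

Per sample the minimizer is mu_pi = mu_p − C lam (with C the covariance), which is exactly the "match the trajectory mean, offset by the Lagrange multiplier" behavior described in the text; in the full method this loss is minimized over network parameters with SGD rather than in closed form.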
Since these samples come from the wrong state distribution, we use importance sampling and weight them according to the ratio of their probability under the current distribution p(x_t) and the one they were sampled from, which is straightforward to evaluate under linear-Gaussian dynamics [21].

D. Comparison with Prior Guided Policy Search Methods

We presented the first guided policy search method where the policy is trained on observations, while the trajectories are trained on the full state. The BADMM formulation of guided policy search is also new to this work, though several prior guided policy search methods based on constrained optimization have been proposed. Levine et al. proposed a formulation similar to Equation 1, but with a constraint on the KL-divergence between p(τ) and π_θ [22]. This results in a more complex forward-backward trajectory optimization phase. We found that our method was more effective at finding a good policy in fewer iterations, due to the Lagrange multipliers on the actions that more aggressively correct discrepancies between the policy and trajectories. We also found our method to be much more computationally efficient, especially when the number of trajectories p_i(τ) is large, since the trajectories are optimized with standard LQR backward passes. The use of ADMM for guided policy search was also proposed by Mordatch et al. for deterministic policies under known dynamics [28]. This approach requires known, deterministic dynamics and trains deterministic policies. Furthermore, because this approach uses a simple quadratic augmented Lagrangian term, it further requires penalty terms on the gradient of the policy to account for local feedback. Our approach enforces this feedback behavior due to the higher moments included in the KL-divergence term, but does not require computing the second derivative of the policy. V.
END-TO-END VISUOMOTOR POLICIES

Guided policy search allows us to optimize complex, high-dimensional policies that act under partial observation. In this section, we describe a policy architecture that uses images from a monocular camera to execute a variety of manipulation tasks, as well as our training procedure.

A. Visuomotor Policy Architecture

Our visuomotor policy runs at 20 Hz on the robot, mapping monocular RGB images and the robot configurations to joint torques on a 7 DoF arm. The configuration includes the angles of the joints and the pose of the end-effector (defined by 3 points), as well as their velocities, but does not include the position of the target object or goal, which must be determined from the image. CNNs often use pooling to discard the locational information that is necessary to determine positions, since it is an irrelevant distractor for tasks such as object classification [18]. Because locational information is important for control, our policy does not use pooling. Additionally, CNNs built for spatial tasks such as human pose estimation often also rely on the availability of location labels in image-space, such as hand-labeled keypoints [40]. We propose a novel CNN architecture capable of estimating spatial information from an image without direct supervision in image space. Our pose estimation experiments, discussed in Section V-B, show that this network can learn useful visual features using only 3D positional information provided by the robot and no camera calibration. Furthermore, by training our network with guided policy search, it can acquire task-specific visual features that improve policy performance. Our network architecture is shown in Figure 2. The visual processing layers of the network consist of three convolutional layers, each of which learns a bank of filters that are applied to patches centered on every pixel of its input. These filters form a hierarchy of local image features.
Each convolutional layer is followed by a rectifying nonlinearity of the form a_{cij} = max(0, z_{cij}) for each channel c and each pixel coordinate (i, j). The third convolutional layer contains 32 response maps with resolution 109 × 109. These response maps are passed through a spatial softmax function of the form s_{cij} = exp(a_{cij}) / Σ_{i′} Σ_{j′} exp(a_{ci′j′}). Each output channel of the softmax is a probability distribution over the location of a feature in the image. To convert from this distribution to a spatial representation, the network calculates the expected image position of each feature, yielding a 2D coordinate for each channel. These feature points are concatenated with the robot's configuration and fed through two fully connected layers, each with 40 rectified units, followed by linear connections to the torques. The full visuomotor policy contains about 92,000 parameters, of which 86,000 are in the convolutional layers. The spatial softmax and the expected position computation serve to convert pixel-wise representations in the convolutional layers to spatial coordinate representations, which can be manipulated by the fully connected layers into 3D positions or motor torques. The softmax also provides lateral inhibition, which suppresses low, erroneous activations. This makes our policy more robust to distractors, providing generalization to novel visual variation. We compare our architecture with more standard alternatives in Section VI-C.

B. Visuomotor Policy Training

We train the policy using the full state during the trajectory optimization phase, though the final policy acts under partial observations. This type of instrumented training is a natural choice for many robotics tasks, where the robot is trained under controlled conditions, but must then act intelligently in uncontrolled, real-world situations. In our tasks, the unobserved variables are the pose of a target object (e.g. the bottle on which a cap must be placed).
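The spatial softmax and expected-position computation described in Section V-A can be written down directly. A NumPy sketch of the mapping from response maps to feature points (coordinates normalized to [-1, 1]; the normalization convention is an assumption):

```python
import numpy as np

def spatial_softmax_points(features):
    """features: (C, H, W) response maps -> (C, 2) expected (x, y) feature points.
    Softmax over all pixels of each channel, then expectation of pixel coords."""
    C, H, W = features.shape
    flat = features.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)  # subtract max for stability
    probs = np.exp(flat)
    probs /= probs.sum(axis=1, keepdims=True)      # per-channel distribution
    probs = probs.reshape(C, H, W)
    xs = np.linspace(-1.0, 1.0, W)
    ys = np.linspace(-1.0, 1.0, H)
    ex = (probs.sum(axis=1) * xs).sum(axis=1)      # E[x] per channel
    ey = (probs.sum(axis=2) * ys).sum(axis=1)      # E[y] per channel
    return np.stack([ex, ey], axis=1)
```

A sharply peaked response map yields a feature point at the peak, while a diffuse map yields a blend of candidate locations, which is the lateral-inhibition behavior noted above.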
During training, this target object is held in the robot's left gripper, while the robot's right arm performs the task, as shown above. This allows the robot to move the target through a range of known positions. The final visuomotor policy does not receive this position as input, but must instead use the camera images. The left arm is covered with cloth to prevent the policy from associating its appearance with the object's position. While we can train the visuomotor policy entirely on the robot, the algorithm would spend a large number of iterations learning basic visual features and arm motions that can more efficiently be learned by themselves, before being incorporated into the policy. To speed up learning, we initialize both the vision layers in the policy and the trajectory distributions for guided policy search by leveraging the fully observed training setup. This initialization does not use any additional information that is not already available from the robot. To initialize the vision layers, the robot moves the target object through a range of random positions, recording the object's pose and camera images. This dataset is used to train a pose regression CNN, which consists of the same vision layers as the policy, followed by a fully connected layer that outputs the 3D points that define the target. Since the training set is still quite limited, we initialize the filters in the first layer with weights from the model of Szegedy et al. [38], which is trained on ImageNet [4] classification. After training on pose regression, the weights in the convolutional layers are transferred to the policy CNN. This enables the robot to learn the appearance of the objects prior to learning the behavior. To initialize the trajectories, we take 15 iterations of guided policy search without optimizing the visuomotor policy. 
This allows for much faster training in the early iterations, when the trajectories are not yet successful and optimizing the full visuomotor policy is unnecessarily time consuming. Since we still want the trajectories to arrive at compatible strategies for each target position, we replace the visuomotor policy during these iterations with a small network that receives the full state. This network serves only to constrain the trajectories and avoid divergent behaviors from emerging for similar initial states, which would make subsequent policy learning difficult. As shown in the diagram above, the trajectories can be pre-trained in parallel with the vision layer pre-training, which does not require the robot.

After initialization, we train the full visuomotor policy with guided policy search. During the supervised policy optimization phase, the fully connected motor control layers are first optimized by themselves, since they are not initialized with pre-training. Then, the entire network is further optimized end-to-end. We found that this setup prevents the convolutional layers from forgetting the useful features learned during pre-training while still training all layers of the policy.

VI. EXPERIMENTAL RESULTS

We evaluated our method by training policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, fitting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. The cost function for these tasks encourages low distance between three points on the end-effector and corresponding target points, low torques, and, for the bottle task, spinning the wrist. The equations for these cost functions follow prior work [23]. The tasks are illustrated in Figure 3. Each task involved variation of about 10-20 cm in each direction in the position of the target object (the rack, shape sorting cube, nail, and bottle).
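A deliberately simplified sketch of this kind of cost is shown below: it penalizes squared distance between three end-effector points and their targets plus a small torque penalty. The quadratic distance term and the weights are invented here for illustration; the actual cost follows the more elaborate distance penalty of prior work [23].

```python
import numpy as np

def manipulation_cost(ee_points, target_points, torques,
                      w_dist=1.0, w_torque=1e-3):
    """Simplified per-time-step cost: squared distance between three
    end-effector points and their targets, plus a torque penalty.
    Weights and the purely quadratic distance term are illustrative
    assumptions, not the paper's exact cost."""
    d = np.asarray(ee_points) - np.asarray(target_points)
    return w_dist * np.sum(d ** 2) + w_torque * np.sum(np.asarray(torques) ** 2)

# Three end-effector points, each 0.1 m from its target, zero torque:
ee = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
cost = manipulation_cost(ee, np.zeros((3, 3)), np.zeros(7))
print(cost)  # ≈ 0.03
```

Using three points rather than a single position makes the cost sensitive to end-effector orientation as well as position, which matters for insertion tasks.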
In addition, the coat hanger and hammer tasks were trained with two and three grasps, respectively. All tasks used the same policy architecture and model parameters.

A. Visuomotor Policy Generalization

We evaluated the visuomotor policies in three conditions: (1) the training target positions and grasps, (2) new target positions not seen during training and, for the hammer, new grasps (spatial test), and (3) training positions with visual distractors (visual test). A selection of these experiments is shown in the supplementary video. For the visual test, the shape sorting cube was placed on a table rather than held in the gripper, the coat hanger was placed on a rack with clothes, and the bottle and hammer tasks were done in the presence of clutter. Illustrations of this test are shown in Figure 4. The success rates for each test are shown in Table I.

We compared to two baselines, both of which train the vision layers in advance for pose prediction, instead of training the entire policy end-to-end. The features baseline discards the last layer of the pose predictor and uses the feature points, resulting in the same architecture as our policy, while the prediction baseline feeds the predicted pose into the control layers. The pose prediction baseline is analogous to a standard modular approach to policy learning, where the vision system is first trained to localize the target, and the policy is trained on top of it. This variant achieves poor performance, because although the pose is accurate to about 1 cm, this is insufficient for such precise tasks. As shown in the video, the shape sorting cube and bottle cap insertions have tolerances of just a few millimeters. Such accuracy is difficult to achieve even with calibrated cameras and checkerboards. Indeed, prior work has reported that the PR2 can maintain a camera-to-end-effector accuracy of about 2 cm during open loop motion [25].
This suggests that the failure of this baseline is not atypical, and that our visuomotor policies are learning visual features and control strategies that improve the robot's accuracy. When provided with pose estimation features, the policy has more freedom in how it uses the visual information, and achieves somewhat higher success rates. However, full end-to-end training performs significantly better, achieving high accuracy even on the challenging bottle task, and successfully adapting to the variety of grasps on the hammer task. This suggests that, although the vision layer pre-training is clearly beneficial for reducing computation time, it is not sufficient by itself for discovering good features for visuomotor policies.

The policies exhibit moderate tolerance to distractors that are visually separated from the target object. However, as expected, they tend to perform poorly under drastic changes to the backdrop, or when the distractors are adjacent to or occluding the manipulated objects, as shown in the supplementary video. In future work, this could be mitigated by varying the scene at training time, or by artificially augmenting the image samples with synthetic transformations, as discussed in prior work in computer vision [37].

B. Features Learned with End-to-End Training

In Figure 5, we compare the feature points learned through guided policy search to those learned by a CNN trained for pose prediction. After end-to-end training, the policy acquired a distinctly different set of feature points compared to the pose prediction CNN used for initialization. The end-to-end trained model finds more feature points on task-relevant objects and fewer points on background objects. This suggests that the policy improves its performance by acquiring task-specific visual features that differ from those learned for object localization. We further analyze the features learned by our policies in the supplementary appendix.

C. CNN Architecture Evaluation

To evaluate the visual processing portion of our architecture, we measured its accuracy on the pose estimation pre-training task discussed in Section V-B.
We compare to a network where the fixed transformation from the softmax to the feature points is replaced with a conventional learned fully connected layer, as well as to networks that omit the softmax and use 3 × 3 max pooling with stride 2 at the first two layers. These alternative architectures have many more parameters, since the new fully connected layer takes as input the entire bank of response maps from the third convolutional layer. The results in Table II indicate that using the softmax and the fixed transformation from the softmax output to the spatial feature representation improves pose estimation accuracy and reduces overfitting. Our network is able to outperform the more standard architectures because it is forced by the softmax and expected position layers to learn feature points, which provide a concise representation suitable for spatial inference. The lower number of parameters also results in an easier optimization and reduces overfitting.

D. Implementation and Computational Performance

CNN training was implemented using the Caffe [12] deep learning library. Each visuomotor policy required 3-4 hours of training time: 20-30 minutes for the pose prediction data collection on the robot, 40-60 minutes for the fully observed trajectory pre-training on the robot and offline pose pre-training (which can be done in parallel), and between 1.5 and 2.5 hours for end-to-end training with guided policy search. The coat hanger task required two iterations of guided policy search, the shape sorting cube and the hammer required three, and the bottle task required four. Training time was dominated by computation rather than robot interaction time, and we expect significant speedup from a more efficient implementation.

VII. DISCUSSION AND FUTURE WORK

In this paper, we presented a method for learning robotic control policies that use raw input from a monocular camera.
These policies are represented by a novel convolutional neural network architecture, and can be trained end-to-end using our partially observed guided policy search algorithm, which decomposes the policy search problem into a trajectory optimization phase that uses full state information and a supervised learning phase that only uses partial observations. This decomposition allows us to leverage state-of-the-art tools from supervised learning, making it straightforward to optimize extremely high-dimensional policies. Our experimental results show that our method can execute complex manipulation skills, and that end-to-end training produces significant improvements in policy performance compared to using fixed vision layers trained for pose prediction.

Although we demonstrate moderate generalization over variations in the scene, our current method does not generalize to dramatically different settings, especially when visual distractors occlude the manipulated object or break up its silhouette in ways that differ from the training. The success of CNNs on exceedingly challenging vision tasks suggests that this class of models is capable of learning invariance to irrelevant distractor features [8, 16, 40], and in principle this issue can be addressed by training the policy in a variety of environments, though this poses certain logistical challenges. More practical alternatives that could be explored in future work include simultaneously training the policy on multiple robots, each of which is located in a different environment, developing more sophisticated regularization and pre-training techniques to avoid overfitting, and introducing artificial data augmentation to encourage the policy to be invariant to irrelevant clutter.
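The decomposition into a fully observed teacher phase and a partially observed supervised phase can be illustrated with a deliberately tiny linear stand-in: a "teacher" controller acting on the full state generates state-action pairs, and the supervised phase fits a policy from observations alone (here the observation simply equals the state). All dimensions and data below are invented for illustration; the actual method alternates these phases with BADMM and fits a deep network rather than a linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the two phases: a linear-Gaussian "teacher" computed
# with full state, and a policy fit to it by supervised regression.
T, N, dx, du = 20, 50, 4, 2
K = rng.normal(size=(du, dx))          # teacher feedback gains
X = rng.normal(size=(N * T, dx))       # states visited by the teacher
U = X @ K.T                            # teacher actions u = K x
O = X.copy()                           # observation = state in this toy

# Supervised phase: least-squares fit of a linear policy u = W^T o.
W, *_ = np.linalg.lstsq(O, U, rcond=None)
print(np.allclose(W.T, K, atol=1e-8))  # the policy recovers the teacher
```

The point of the toy is the division of labor: the hard optimization (here, trivially, the teacher itself) runs with privileged state, while the deployed policy only ever needs the data the teacher generated.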
However, even without these improvements, our method has numerous applications in, for example, an industrial setting where the robot must repeatedly and efficiently perform a task that requires visual feedback under moderate variation in background and clutter conditions. In future work, we hope to explore more complex policy architectures, such as recurrent policies that can deal with extensive occlusions by keeping a memory of past observations. We also hope to extend our method to a wider range of tasks that can benefit from visual input, as well as a variety of other rich sensory modalities, including haptic input from pressure sensors and auditory input. With a wider range of sensory modalities, end-to-end training of sensorimotor policies will become increasingly important: while it is often straightforward to imagine how vision might help to localize the position of an object in the scene, it is much less apparent how sound can be integrated into robotic control. A learned sensorimotor policy would be able to naturally integrate a wide range of modalities and utilize them to directly aid in control.

$$L_\theta(\theta, p) = \frac{1}{2N}\sum_{i=1}^{N}\sum_{t=1}^{T} E_{p_i(x_t, o_t)}\left[\operatorname{tr}(C_{ti}^{-1}\Sigma^\pi) - \log|\Sigma^\pi|\right].$$

Differentiating and setting the derivative to zero, we obtain the following equation for $\Sigma^\pi$:

$$\Sigma^\pi = \left(\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T} C_{ti}^{-1}\right)^{-1},$$

where the expectation under $p_i(x_t)$ is omitted, since $C_{ti}$ does not depend on $x_t$.

3) Trajectory optimization: In this section, we review how the LQR backward pass can be used to optimize the constrained objective in Section IV-B. This derivation follows previous work [19]. The constrained trajectory optimization problem is given by

$$\min_{p(\tau)\in\mathcal{N}(\tau)} L_p(p, \theta) \quad \text{s.t.} \quad D_{\mathrm{KL}}(p(\tau)\,\|\,\hat{p}(\tau)) \le \epsilon.$$

As discussed in the paper, $L_p(p, \theta)$ can be written as the expectation of some function $c(\tau)$ that is independent of $p$, such that $L_p(p, \theta) = E_{p(\tau)}[c(\tau)]$.
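The closed-form update for $\Sigma^\pi$ is easy to check numerically: average the inverse trajectory covariances $C_{ti}^{-1}$ over samples and time steps, then invert the result. The sketch below uses invented dimensions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def policy_covariance(C_inv):
    """Closed-form policy covariance from the text:
    Sigma_pi = ( (1/NT) * sum_{i,t} C_ti^{-1} )^{-1}.
    C_inv has shape (N, T, du, du), holding each C_ti^{-1}."""
    N, T = C_inv.shape[:2]
    return np.linalg.inv(C_inv.sum(axis=(0, 1)) / (N * T))

# Sanity check: with identical trajectory covariances C_ti = C for all
# i and t, the optimal policy covariance is C itself.
du = 3
A = rng.normal(size=(du, du))
C = A @ A.T + du * np.eye(du)          # a fixed SPD covariance
C_inv = np.broadcast_to(np.linalg.inv(C), (5, 10, du, du))
print(np.allclose(policy_covariance(C_inv), C))  # True
```

When the $C_{ti}$ differ, the update is a harmonic-style mean of the covariances, so the policy variance is pulled toward the most confident (smallest-covariance) time steps.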
Writing the Lagrangian of the constrained optimization, we have

$$\mathcal{L}(p) = E_{p(\tau)}[c(\tau)] - \eta E_{p(\tau)}[\log \hat{p}(\tau)] - \eta H(p(\tau)) - \eta\epsilon,$$

where $\eta$ is the Lagrange multiplier. Note that $\mathcal{L}(p)$ is the Lagrangian of the constrained trajectory optimization, which is not related to the augmented Lagrangian $L_p(\tau, \theta)$. Grouping the terms in the expectation and omitting constants, we can rewrite the minimization of the Lagrangian with respect to the primal variables as

$$\min_{p(\tau)\in\mathcal{N}(\tau)} E_{p(\tau)}\left[\tfrac{1}{\eta}c(\tau) - \log \hat{p}(\tau)\right] - H(p(\tau)).$$

Let $\tilde{c}(\tau) = \frac{1}{\eta}c(\tau) - \log \hat{p}(\tau)$. The above optimization then corresponds to minimizing $E_{p(\tau)}[\tilde{c}(\tau)] - H(p(\tau))$. As shown in prior work (see, e.g., [20]), this type of maximum entropy problem can be solved using the LQR algorithm, and the solution is given by $p(u_t|x_t) = \mathcal{N}(K_t x_t + k_t,\, Q_{u,u,t}^{-1})$, where $K_t$ and $k_t$ are the feedback and open loop terms of the optimal linear feedback controller corresponding to the cost $\tilde{c}(x_t, u_t)$ and the dynamics $p(x_{t+1}|x_t, u_t)$, and $Q_{u,u,t}$ is the quadratic term in the Q-function at time step $t$. All of these terms are obtained as a result of the standard LQR backward pass (see, e.g., [24]).

B. Feature Point Analysis

The visual processing layers of our architecture automatically learn feature points using the fixed transformation from the softmax to spatial coordinates. These feature points encapsulate all of the visual information received by the motor layers of the policy. In Figure 6, we show the feature points discovered by our visuomotor policy through guided policy search. Each policy learns features on the target object and the robot manipulator, both clearly relevant to task execution. The policy tends to pick out robust, distinctive features on the objects, such as the left pole of the clothes rack, the left corners of the shape-sorting cube, the bottom-left corner of the toy tool bench, and the edges of the bottle. In Figure 7, we compare the feature points learned through guided policy search to those learned by a CNN trained for pose prediction.
Note that in all tasks, the end-to-end trained model produces fewer points on the background compared to the model trained on object pose. In the bottle task, the end-to-end trained policy outputs points on both sides of the bottle, including one on the cap, while the pose prediction network only finds points on the right edge of the bottle. The feature point representation is very simple, since it assumes that the learned features are present at all times. While this is a drastic simplification, both the pose predictor and the policy still achieve good results. A more flexible architecture that still learns a concise feature point representation could further improve policy performance. We hope to explore this in future work.

Fig. 6: Feature points tracked by the policy during task execution for each of the four tasks ((a) hanger, (b) cube, (c) hammer, (d) bottle). Each feature point is displayed in a different random color, with consistent coloring across images. The policy finds features on the target object and the robot gripper and arm. In the bottle cap task, note that the policy correctly ignores the distractor bottle in the background, even though it was not present during training.

Fig. 7: Feature points learned for each task ((a) hanger, (b) cube, (c) hammer, (d) bottle). For each input image, the feature points produced by the policy are shown in blue, while the feature points of the pose prediction network are shown in red. The end-to-end trained policy tends to discover more feature points on the target object and the robot arm than the pose prediction network.

Fig. 1: Our method learns visuomotor policies that directly use camera image observations (left) to set motor torques on a PR2 robot (right).

Fig. 3: Illustration of the tasks in our experiments, showing the variation in the position of the target for the hanger, cube, and bottle tasks, as well as two of the three grasps for the hammer, which also included variation in position (not shown).
Fig. 5: Feature points learned by the shape sorting cube policy. Two of the 32 conv3 response maps are shown in (a), and the corresponding softmax distributions are displayed in (b). In (c), we show the output feature points for this input image in blue, while the feature points of the pose prediction network are shown in red. The end-to-end trained model discovers more feature points on the cube and the gripper.

The video can be viewed at http://sites.google.com/site/visuomotorpolicy

Fig. 4: Training and visual test scenes as seen by the policy at the ends of successful episodes ((a) hanger, (b) cube, (c) hammer, (d) bottle; training and visual test conditions). The hammer and bottle images were cropped for visualization only.

TABLE I: Success rates on training positions, on novel test positions, and in the presence of visual distractors. The number of trials per test is shown in parentheses.

coat hanger           training (18)   spatial test (24)   visual test (18)
end-to-end training   100%            100%                100%
pose features         88.9%           87.5%               83.3%
pose prediction       55.6%           58.3%               66.7%

shape sorting cube    training (27)   spatial test (36)   visual test (40)
end-to-end training   96.3%           91.7%               87.5%
pose features         70.4%           83.3%               40%
pose prediction       0%              0%                  n/a

toy claw hammer       training (45)   spatial test (60)   visual test (60)
end-to-end training   91.1%           86.7%               78.3%
pose features         62.2%           75.0%               53.3%
pose prediction       8.9%            18.3%               n/a

bottle cap            training (27)   spatial test (12)   visual test (40)
end-to-end training   88.9%           83.3%               62.5%
pose features         55.6%           58.3%               27.5%
TABLE II: Average pose estimation accuracy and standard deviation with various architectures, measured as average Euclidean error for the three target points in 3D, with ground truth determined by forward kinematics from the left arm.

APPENDIX

A. Guided Policy Search Algorithm Details

In this appendix, we describe a number of implementation details of our BADMM-based guided policy search algorithm.

1) BADMM step size and weight adjustment: Recall the inner loop alternating optimization of Section IV. We use a step size of α = 0.1 in all of our experiments, which we found to be more stable than α = 1.0. The weights ν_t are initialized to 0.01 and incremented based on the following schedule: at every iteration, we compute the average KL-divergence between p(u_t|x_t) and π_θ(u_t|x_t) at each time step, as well as its standard deviation over time steps. The weights ν_t corresponding to time steps where the KL-divergence is higher than the average are increased by a factor of 2, and the weights corresponding to time steps where the KL-divergence is two standard deviations or more below the average are decreased by a factor of 2. The rationale behind this schedule is to adjust the KL-divergence penalty to keep the policy and trajectory in agreement by roughly the same amount at all time steps. Increasing ν_t too quickly can lead to the policy and trajectory becoming "locked" together, which makes it difficult for the trajectory to decrease its cost, while leaving it too low requires more iterations for convergence. We found this schedule to work well across all tasks, both during trajectory pre-training and while training the visuomotor policy.

2) Policy variance optimization: As discussed in the paper, the variance of the Gaussian policy π_θ(u_t|o_t) does not depend on the observation, though this dependence would be straightforward to add. Analyzing the objective L_θ(θ, p), we can write out only the terms that depend on Σ^π; differentiating and setting the derivative to zero then yields the closed-form solution for Σ^π given earlier.
REFERENCES

[1] J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[2] M. Deisenroth, C. Rasmussen, and D. Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. In Robotics: Science and Systems (RSS), 2011.
[3] M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.
[4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009.
[5] G. Endo, J. Morimoto, T. Matsubara, J. Nakanishi, and G. Cheng. Learning CPG-based biped locomotion with a policy gradient method: Application to a humanoid robot. International Journal of Robotic Research, 27(2):213-228, 2008.
[6] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 1992.
[7] T. Geng, B. Porr, and F. Wörgötter. Fast biped walking with a reflexive controller and realtime policy searching. In Advances in Neural Information Processing Systems (NIPS), 2006.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[9] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (NIPS), 2014.
[10] K. J. Hunt, D. Sbarbaro, R. Żbikowski, and P. J. Gawthrop. Neural networks for control systems: A survey. Automatica, 28(6):1083-1112, November 1992.
[11] M. Jägersand, O. Fuentes, and R. C. Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In International Conference on Robotics and Automation (ICRA), 1997.
[12] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[13] J. Kober, E. Oztop, and J. Peters. Reinforcement learning to adjust robot movements to new situations. In Robotics: Science and Systems (RSS), 2010.
[14] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238-1274, 2013.
[15] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In International Conference on Robotics and Automation (ICRA), 2004.
[16] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.
[17] T. Lampe and M. Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In International Joint Conference on Neural Networks (IJCNN), 2013.
[18] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning (ICML), 2009.
[19] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014.
[20] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013.
[21] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[22] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.
[23] S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In International Conference on Robotics and Automation (ICRA), 2015.
[24] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222-229, 2004.
[25] W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, V. Eruhimov, T. Foote, J. Hsu, R. B. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger. Autonomous door opening and plugging in with a personal robot. In International Conference on Robotics and Automation (ICRA), 2010.
[26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. NIPS '13 Workshop on Deep Learning, 2013.
[27] K. Mohta, V. Kumar, and K. Daniilidis. Vision based control of a quadrotor for perching on planes and lines. In International Conference on Robotics and Automation (ICRA), 2014.
[28] I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014.
[29] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal. Learning and generalization of motor skills by learning from demonstration. In International Conference on Robotics and Automation (ICRA), 2009.
[30] B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching 3D geometry to deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2012.
[31] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682-697, 2008.
[32] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.
[33] D. Pomerleau. ALVINN: an autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems (NIPS), 1989.
[34] M. Riedmiller, S. Lange, and A. Voigtlaender. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks (IJCNN), 2012.
[35] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627-635, 2011.
[36] S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In International Conference on Computer Vision (ICCV), 2007.
[37] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, 2003.
[38] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[39] R. Tedrake, T. Zhang, and H. Seung. Stochastic policy gradient reinforcement learning on a simple 3D biped. In International Conference on Intelligent Robots and Systems (IROS), 2004.
[40] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems (NIPS), 2014.
[41] H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems (NIPS), 2014.
[42] W. J. Wilson, C. W. Williams Hulls, and G. S. Bell. Relative end-effector control using cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5), 1996.
[43] B. H. Yoshimi and P. K. Allen. Active, uncalibrated visual servoing. In International Conference on Robotics and Automation (ICRA), 1994.
arXiv:2209.02171
ARITHMETIC GEOMETRY OF CHARACTER VARIETIES WITH REGULAR MONODROMY

MASOUD KAMGARPOUR, GYEONGHYEON NAM, AND ANNA PUSKÁS

16 Feb 2023

Abstract. We study character varieties arising as moduli of representations of an orientable surface group into a reductive group G. We first show that if G/Z acts freely on the representation variety, then both the representation variety and the character variety are smooth and equidimensional. Next, we count points on a family of smooth character varieties; namely, those involving both regular semisimple and regular unipotent monodromy. In particular, we show that these varieties are polynomial count and obtain an explicit expression for their E-polynomials. Finally, by analysing the E-polynomial, we determine certain topological invariants of these varieties, such as the Euler characteristic and the number of connected components.

1. Introduction and the main results

Character varieties of surface groups play a central role in diverse areas of mathematics such as non-abelian Hodge theory [Sim92, Sim94] and the geometric Langlands program [BD97, BZN18]. The study of the topology and geometry of character varieties has been a subject of active research for decades. In their groundbreaking work [HRV08], Hausel and Rodriguez-Villegas counted points on character varieties associated to once-punctured surface groups and GL_n, assuming the loop around the puncture is mapped to a primitive n-th root of unity. This work set in motion much further progress in understanding the arithmetic geometry of character varieties, cf. [HLRV11, dCHM12, Let15, BH17, Sch16, Mel20, LRV20, Bal22, BK22]. As far as we are aware, aside from [Cam17] and [BK22], all the previous papers regarding point count on character varieties concern the case when G is of type A. However, in many applications, e.g. the Langlands program, it is crucial to study character varieties associated to general reductive groups.
In a recent preprint [BK22], the first author and Bridger counted points on character stacks associated to general reductive groups and compact surfaces. In this paper, we initiate the study of the arithmetic geometry of character varieties associated to non-compact surfaces. We thus make progress in generalising the program of [HLRV11] from type A to arbitrary types. A key feature of our approach is that it works uniformly for all G; i.e., we avoid a type-by-type analysis.

1.1. Definitions. Fix non-negative integers g and n. Let Γ = Γ_{g,n} be the fundamental group of a compact orientable surface with genus g and n punctures; i.e.,

Γ = ⟨a_1, b_1, ..., a_g, b_g, c_1, ..., c_n | [a_1, b_1] ··· [a_g, b_g] c_1 ··· c_n⟩.

Let G be a connected split reductive group over a field k and (C_1, ..., C_n) an n-tuple of conjugacy classes of G. The associated representation variety R = R(C_1, ..., C_n) is

R := { (A_1, B_1, ..., A_g, B_g, S_1, ..., S_n) ∈ G^{2g} × ∏_{i=1}^n C_i | [A_1, B_1] ··· [A_g, B_g] S_1 ··· S_n = 1 }.

This is an affine scheme of finite type over k. The group G acts on R by conjugation, and the corresponding GIT quotient X := R//G = R//(G/Z) (where Z is the centre of G) is called the character variety associated to Γ, G, and (C_1, ..., C_n).

1.1.1. Non-emptiness. It is easy to see that R(k) (and therefore X(k)) is empty unless there exist S_i ∈ C_i(k) such that the product S_1 ··· S_n is in [G(k), G(k)]. Henceforth, we assume that this condition is satisfied. If g > 0, then this condition is also sufficient for R to be non-empty, because the commutator map G × G → [G, G] is surjective, cf. [Ree64]. In contrast, when g = 0, deciding if R is non-empty is subtle and is closely related to the Deligne-Simpson problem, cf. [Kos04]. See Theorem 8 for some new results regarding non-emptiness.

1.2. Smoothness. Character varieties are often singular. Our first main result is a criterion for smoothness.

Theorem 1.
Suppose R is non-empty and G/Z acts freely on it. Assume char(k) is admissible in the sense of Definition 10. Then:

(i) R is smooth of pure dimension

dim(R) = 2g dim(G) − dim([G, G]) + Σ_{i=1}^n dim(C_i).

(ii) The canonical map [R/(G/Z)] → X from the quotient stack into the GIT quotient is an isomorphism. Thus, R is a principal G/Z-bundle over X in the étale topology. Hence, X is also smooth of pure dimension

dim(X) = 2g dim(G) − 2 dim([G, G]) + Σ_{i=1}^n dim(C_i).

This theorem is a reductive generalisation of [HLRV11, Theorem 2.1.5]. Proof of Part (i) is an application of the Regular Value Theorem. Part (ii) follows from Part (i) and general facts about actions of reductive groups on affine schemes. See §2 for details of the proof.

Remark 2. (i) Suppose G/Z acts with finite stabilisers on the representation variety R. Then the proof of Theorem 1.(i) shows that R is smooth. In this case, the character variety X has orbifold singularities.
(ii) If k = C and all C_i's are semisimple, an analytic proof of the above theorem is given in [Boa14, §9.3].

1.2.1. Free action. We now consider a case where it is easy to see that G/Z acts freely on R. Let T be a maximal split torus of G. Following Steinberg [Ste65], we call a semisimple element S ∈ T strongly regular if C_G(S) = T.

Lemma 3. Assume one of the C_i's is strongly regular semisimple and another one is regular unipotent. Then the action of G/Z on R is free.

Proof. The centraliser C_G(N) of a regular unipotent element is a product ZA of the centre and a (possibly disconnected) unipotent group A, cf. Theorems 4.11 and 4.12 of [Spr66]. Thus, if S ∈ T is a strongly regular element, then C_G(S) ∩ C_G(N) = T ∩ ZA = Z. Here, the last equality follows from the fact that the only semisimple elements of ZA are the central ones.

Corollary 4. Under the assumptions of the lemma, R and X are smooth.

1.3. Character varieties with regular monodromy.

Definition 5. Let m and n be integers satisfying 1 ≤ m < n.
Let (C_1, ..., C_n) be conjugacy classes satisfying the following properties:
• C_1, ..., C_m are conjugacy classes of strongly regular elements S_1, ..., S_m ∈ T(k);
• C_{m+1}, ..., C_n are regular unipotent classes;
• the product S_1 ··· S_n is in [G(k), G(k)].

We call the resulting character variety X = X(C_1, ..., C_n) a character variety with regular monodromy.

In considering these character varieties, we were motivated by the work of Deligne and Flicker [DF13], who counted local systems with regular unipotent monodromy in the ℓ-adic setting. In our case, the inclusion of regular semisimple monodromy is convenient because it implies that the action of G/Z is free. In addition, it makes the counting formula simpler, because a remarkable theorem of Deligne and Lusztig [DL76, Corollary 7.6.2] implies that we only need to consider principal series representations in the Frobenius Mass Formula (see Equation (1) and §3.2.2).

1.3.1. Polynomial count. For the rest of the introduction, we assume that k = F_q. A variety Y over k is said to be polynomial count [HRV08, LRV20] if there exists a polynomial ||Y|| ∈ C[t] such that

|Y(F_{q^n})| = ||Y||(q^n) for all n ≥ 1.

In this case, the counting polynomial ||Y|| equals the E-polynomial of Y. In particular, ||Y|| has integer coefficients. Moreover, the degree (resp. leading coefficient) of ||Y|| equals the dimension (resp. the number of irreducible components of maximal dimension) of Y.

1.3.2. Main result. Let (X, Φ, X^∨, Φ^∨) denote the root datum of G and let G^∨ be the Langlands dual group.

Theorem 6. Suppose X/⟨Φ⟩ is torsion free (equivalently, Z(G) is smooth and connected). Then there exists a polynomial ||X|| with integer coefficients (given explicitly in Theorem 37) such that the following holds: if char(k) is admissible (Definition 10) and q ≡ 1 mod d(G^∨), where d(G^∨) is the modulus of G^∨ (Definition 26), then X is polynomial count with counting polynomial ||X||.

1.3.3. Outline of the proof.
The first observation is that since G/Z acts freely on R, the character variety X agrees with the quotient stack [R/(G/Z)]. Next, the Frobenius Mass Formula states that

(1)  |[R/(G/Z)](k)| = |Z(k)| Σ_{χ ∈ Irr(G(k))} (|G(k)|/χ(1))^{2g−2} ∏_{i=1}^n (χ(C_i(k))/χ(1)) |C_i(k)|.

Here, Irr(G(k)) denotes the set of irreducible complex characters of the finite reductive group G(k). Using Deligne-Lusztig theory, we reduce the evaluation of |X(k)| to computing a certain sum of characters of the finite abelian group T(k). This is Part I of the proof and is carried out in §3. Part II concerns evaluating these character sums using Pontryagin duality and Möbius inversion on the partially ordered set of closed subsystems of the root system of G^∨. See §5 for details of the proof.

Example 7. (i) Suppose G = GL_2, n = 2, and the regular semisimple class is generic, in the sense of [HLRV11, §2]. Then

||X|| = (q − 1)^{4g−1} q^{2g−1} ((q + 1)^{2g} − 1).

(ii) Suppose G = GL_3, n = 2, and the regular semisimple class is generic. Then

||X|| = q^{6g−2} (q − 1)^{6g−2} ((q^2 + q + 1)^{2g} (q + 1)^{2g} − 3(q + 1)^{2g} + 2).

1.3.4. Remarks. (1) The above theorem is new even for G = GL_n, though we were informed by Letellier that in this case it can be extracted from results of [Let15].

(2) The condition on q is mild: if q is co-prime to d(G^∨), then this condition is satisfied after a finite base change. For instance, if G = E_8, then d(G^∨) = 60; thus, after a finite base change, the conclusion of the theorem holds whenever the ground characteristic is bigger than 5.

(3) If the condition on q is dropped, then X becomes PORC (polynomial on residue classes) count, cf. [BK22], where a similar phenomenon is proved for character stacks of compact surface groups.

(4) We expect the theorem continues to hold if the condition on the centre is dropped; however, computing the counting polynomials becomes complicated because the representation theory of finite reductive groups with disconnected centre is intricate.
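The shape of the mass formula (1) can be sanity-checked in the classical unpunctured, g = 1 setting, where it reduces to Frobenius' count of commutator pairs. The sketch below is our own illustration (not code from the paper): it brute-forces N(g) = #{(A, B) ∈ G² : [A, B] = g} for G = S₃ and compares with Frobenius' formula N(g) = |G| Σ_χ χ(g)/χ(1), using the hard-coded character table of S₃.

```python
from itertools import product, permutations
from collections import Counter

# Work with G = S_3, realised as permutations of {0, 1, 2}.
def compose(p, q):          # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def commutator(a, b):       # [a, b] = a b a^{-1} b^{-1}
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

G = list(permutations(range(3)))

# Brute force: N(g) = #{(A, B) in G^2 : [A, B] = g}.
N = Counter(commutator(a, b) for a, b in product(G, G))

# Character table of S_3 (trivial, sign, 2-dim standard),
# with the conjugacy class read off from the number of fixed points.
def chi(g):
    fixed = sum(1 for i in range(3) if g[i] == i)
    return {3: (1, 1, 2), 1: (1, -1, 0), 0: (1, 1, -1)}[fixed]

dims = (1, 1, 2)

# Frobenius: N(g) = |G| * sum_chi chi(g)/chi(1).
for g in G:
    predicted = len(G) * sum(c / d for c, d in zip(chi(g), dims))
    assert N[g] == round(predicted)

print(N[(0, 1, 2)], N[(1, 2, 0)], N[(1, 0, 2)])  # identity, 3-cycle, transposition
```

The point of formulas such as (1) is precisely this trade: a count over tuples on the left becomes a sum over Irr(G(k)) on the right, which can then be organised representation-theoretically.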
(5) The isomorphism class of the variety X depends on the semisimple tuple (S_1, ..., S_m). However, one can show that in the "generic case" the counting polynomial is independent of the tuple. Details will be given elsewhere.

(6) The above theorem implies that the complex character variety with regular monodromy is also polynomial count (in the sense of the appendix of [HRV08]) with the counting polynomial ||X||. Details are left to the reader.

(7) One of the main achievements of [HRV08, HLRV11, Let15, LRV20] is to give a conjectural formula for the mixed Hodge polynomial of the complex character variety for GL_n. It would be interesting to understand the mixed Hodge structure of complex character varieties with regular monodromy.

(8) As the above examples illustrate, the counting polynomial for X is not necessarily palindromic. It would be interesting to understand this phenomenon in the context of the P = W program [dCHM12].

1.4. Topological properties. We now discuss the implications of Theorem 6 for the topology of character varieties. Let Tor(X^∨/⟨Φ^∨⟩) denote the torsion part of the finitely generated abelian group X^∨/⟨Φ^∨⟩.

Theorem 8. Suppose we are in the situation of Theorem 6 and G is non-commutative.
(i) Suppose either g > 0 or n > 3. Then |π_0(X)| = |Tor(X^∨/⟨Φ^∨⟩)|; in particular, X is non-empty.
(ii) Suppose either g > 0 or n > m + 2. Then the Euler characteristic of X is 0.

We prove this theorem by analysing the counting polynomial ||X||; see §6 for details.

1.4.1. Remarks. (1) It is easy to see that if (g, n) = (0, 2) then X is empty. The case (g, n) = (0, 3) exhibits interesting properties. For instance, if G = GL_2 and if we have two regular semisimple conjugacy classes, then X can either be a single point or two points, depending on whether the eigenvalues are generic or not. See §7.1 for details.

(2) In §7.2 we consider the case where G equals GL_2 or GL_3 and (g, n, m) equal to (0, 4, 3) or (0, 4, 2).
We show that in these cases, the Euler characteristic may be non-zero. The fact that the Euler characteristic behaves differently in genus 0 is also observed in [HLRV11, Remark 5.3.4].

(3) When g = 0 and n > 3, the theorem implies that X(k) is non-empty. It would be interesting to give a direct proof of this result.

1.5. Notation. Throughout the paper, k denotes a field of characteristic p, G a connected split reductive group over k, Z = Z(G) the centre of G, T a maximal split torus of G, B a Borel subgroup containing T, U the unipotent radical of B, W the Weyl group, r := dim(T ∩ [G, G]) the semisimple rank, (X, Φ, X^∨, Φ^∨) the root datum, ⟨Φ⟩ the root lattice, and ⟨Φ^∨⟩ the coroot lattice.

Footnote 5: As far as we know, these conjectures are not yet proven.
Footnote 6: If G is commutative, then it is easy to see that X = G^{2g}; see §5.4.1.

1.5.1. For each root subsystem Ψ ⊆ Φ^∨ (including the empty one), we have a reflection subgroup W(Ψ) ⊆ W generated by the reflections associated to roots α ∈ Ψ. Let S(Φ^∨) denote the set of closed subsystems of Φ^∨. This is a partially ordered set (poset) with the ordering given by inclusion. The Weyl groups W(Ψ) associated to closed subsystems will play a central role in our formulas.

1.5.2. Let T^∨ := Spec k[X] be the dual torus and G^∨ the Langlands dual group of G over k. In other words, G^∨ is a connected split reductive group over k with maximal split torus T^∨ and root datum (X^∨, X, Φ^∨, Φ). For a finite abelian group A, we let A^∨ := Hom(A, C^×) denote its Pontryagin dual. When k is a finite field, we identify the Pontryagin dual T(k)^∨ with the Langlands dual T^∨(k) as follows.

1.5.3. Let μ_{∞,p′}(C) denote the set of roots of unity in C^× whose order is prime to p. We choose, once and for all, two isomorphisms:

(2)  F̄_p^× ≃ (Q/Z)_{p′} and (Q/Z)_{p′} ≃ μ_{∞,p′}(C).
As noted in [DL76, §5], this induces an isomorphism between the Pontryagin dual and the Langlands dual: T(k)^∨ ≃ T^∨(k).

2. Smoothness of character varieties

The goal of this section is to prove Theorem 1. Our proof is modelled on the proof of [HLRV11, Theorem 2.1.5]. The novelty here is that we work in the reductive setting and avoid using matrices.

2.1. Proof of Part (i) in characteristic 0. We start with some notation.

2.1.1. For each h ∈ G, let l_h : G → G denote the left multiplication map. This is an isomorphism of varieties; thus, it induces an isomorphism of vector spaces dl_h : T_g G → T_{hg} G. Similarly, we have the right multiplication map r_h and its derivative dr_h : T_g G → T_{gh} G. Thus, dr_{h^{−1}} ∘ dl_h is a linear map T_g G → T_{hgh^{−1}} G. If g is the identity of G, then this is a linear automorphism g → g which equals the adjoint map Ad_h.

Let F : G^{2g} × ∏_{i=1}^n C_i → [G, G] be the morphism of varieties defined by

F(A_1, B_1, ..., A_g, B_g, S_1, ..., S_n) := [A_1, B_1] ··· [A_g, B_g] S_1 ··· S_n.

Note that our assumption in §1.1.1 implies that the image of F is indeed in [G, G]. By definition, the representation variety is R = F^{−1}(1).

Now consider a point r = (A_1, B_1, ..., A_g, B_g, S_1, ..., S_n) ∈ R = F^{−1}(1). It follows from the algebraic avatar of the Regular Value Theorem that R is smooth and equidimensional if the differential

dF_r : T_r(G^{2g} × ∏_{i=1}^n C_i) → T_1([G, G]) = [g, g]

is surjective. To establish surjectivity, we construct an auxiliary surjective map φ = φ_r : g^{2g+n} → [g, g] and prove that the image of dF_r equals the image of φ. To this end, we need yet another auxiliary map Ψ = Ψ_r.

Consider the morphism of varieties Ψ = Ψ_r : G^{2g+n} → G^{2g} × ∏_{i=1}^n C_i defined by

Ψ(x_1, y_1, ..., x_g, y_g, z_1, ..., z_n) := (A_1 x_1, B_1 y_1, ..., A_g x_g, B_g y_g, z_1^{−1} S_1 z_1, ..., z_n^{−1} S_n z_n).

In other words, Ψ = (l_{A_1}, l_{B_1}, ..., l_{A_g}, l_{B_g}, η_1, ...
, η_n), where η_j : G → C_j is given by η_j(g) = g^{−1} S_j g.

2.1.5. Observe that Ψ(1) = r. Thus, taking the differential at 1, we obtain a linear map

dΨ_1 : g^{2g+n} → T_r(G^{2g} × ∏_{i=1}^n C_i) = ∏_{i=1}^g (T_{A_i} G × T_{B_i} G) × ∏_{j=1}^n T_{S_j} C_j.

The auxiliary function φ is defined as follows:

φ : g^{2g+n} → [g, g],   φ := dF_r ∘ dΨ_1.

To prove that φ has the desired properties, we first need some notation. Let X_i ∈ T_{A_i} G, Y_i ∈ T_{B_i} G, and Z_j ∈ T_{S_j} C_j ⊆ T_{S_j} G. Let X′_i, Y′_i, and Z′_j be elements of g satisfying the following:

X′_i = (dl_{A_i})^{−1}(X_i),   Y′_i = (dl_{B_i})^{−1}(Y_i),   −Z′_j ∈ (1 − Ad_{S_j})^{−1}((dr_{S_j})^{−1}(Z_j)).

In addition, set A_i♯B_i := [A_1, B_1] ··· [A_i, B_i] and S̄_j := ∏_{t=j}^n S_t^{−1}. For convenience, set S̄_{n+1} := 1.

Proposition 9. We have

dF_r(X_1, Y_1, ..., X_g, Y_g, Z_1, ..., Z_n) = φ(X′_1, Y′_1, ..., X′_g, Y′_g, Z′_1, ..., Z′_n).

Moreover, this equals

Σ_{i=1}^g Ad_{(A_{i−1}♯B_{i−1})A_i}((1 − Ad_{B_i}) X′_i) + Σ_{i=1}^g Ad_{(A_{i−1}♯B_{i−1})A_i B_i}((1 − Ad_{A_i^{−1}}) Y′_i) + Σ_{j=1}^n Ad_{S̄_{j+1}}((1 − Ad_{S_j}) Z′_j).

Proof. This follows from repeated application of the chain rule and is left as an exercise.

The first statement of the proposition implies that Im(dF_r) = Im(φ). We now use the second statement to show that φ is surjective.

2.1.7. Let H be the closed subgroup of G generated by the elements A_1, B_1, ..., A_g, B_g, S_1, ..., S_n. We then have

(3)  Z(G) = C_G(H) ⟹ Lie(Z(G)) = Lie(C_G(H)) = C_g(H) ⟹ [g, g] ∩ C_g(H) = 0.

Here C_g(H) is defined as follows: C_g(H) := {x ∈ g | Ad_h(x) = x, ∀h ∈ H}. As the Killing form K = K_{[g,g]} is invariant and non-degenerate, we conclude that for every h ∈ G,

(4)  K(t, (1 − Ad_h)v) = 0 ∀v ∈ [g, g] ⟹ Ad_h t = t.

To see this, observe that

K(t, (1 − Ad_h)v) = K(t, v) − K(t, Ad_h v) = K(t, v) − K(Ad_{h^{−1}} t, v) = K(t − Ad_{h^{−1}} t, v).
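The displayed formula in Proposition 9 can be tested numerically in the simplest case. The sketch below is our own illustration (all helper names are ours, not from the paper): take g = 1, n = 0 and commuting A, B ∈ GL_2(R), so that F(A, B) = [A, B] = 1 and the formula predicts that the derivative of t ↦ [A e^{tX}, B e^{tY}] at t = 0 equals Ad_A((1 − Ad_B)X) + Ad_{AB}((1 − Ad_{A^{−1}})Y).

```python
# Numerical check of the g = 1, n = 0 case of Proposition 9 for G = GL_2(R).
# Our illustration; 2x2 real matrices, no external libraries.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, P):
    return [[c * P[i][j] for j in range(2)] for i in range(2)]

def mat_inv(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det], [-P[1][0] / det, P[0][0] / det]]

def expm(P, terms=25):          # matrix exponential via its power series
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(1.0 / k, mat_mul(term, P))
        result = mat_add(result, term)
    return result

def Ad(g, X):                   # Ad_g X = g X g^{-1}
    return mat_mul(mat_mul(g, X), mat_inv(g))

A = [[2.0, 0.0], [0.0, 3.0]]    # commuting (diagonal) group elements
B = [[5.0, 0.0], [0.0, 7.0]]
X = [[0.3, -1.1], [0.7, 0.2]]   # arbitrary tangent directions in gl_2
Y = [[-0.4, 0.9], [1.3, -0.6]]

def F(t):                       # F(t) = [A e^{tX}, B e^{tY}]
    a = mat_mul(A, expm(mat_scale(t, X)))
    b = mat_mul(B, expm(mat_scale(t, Y)))
    return mat_mul(mat_mul(a, b), mat_mul(mat_inv(a), mat_inv(b)))

h = 1e-5                        # central finite difference for dF/dt at t = 0
numeric = mat_scale(1.0 / (2 * h), mat_add(F(h), mat_scale(-1.0, F(-h))))

claimed = mat_add(Ad(A, mat_add(X, mat_scale(-1.0, Ad(B, X)))),
                  Ad(mat_mul(A, B), mat_add(Y, mat_scale(-1.0, Ad(mat_inv(A), Y)))))

err = max(abs(numeric[i][j] - claimed[i][j]) for i in range(2) for j in range(2))
assert err < 1e-6
```

Choosing A and B commuting is only for convenience: it forces F(0) = 1, so the finite-difference derivative lands directly in the Lie algebra, matching the codomain of dF_r in the proposition.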
Returning to the map φ, setting all but one of X′_i, Y′_i, and Z′_j to zero, and repeatedly applying observation (4), one can check that if t ∈ [g, g] is such that K(t, Im φ) = 0, then t is fixed by every one of Ad_{A_i}, Ad_{B_i}, and Ad_{S_j}. This implies t ∈ C_g(H) and thus t = 0. It follows that Im(φ) = [g, g], as required.

Footnote 8: Note that dl_{A_i} and dl_{B_i} are invertible and dη_j = dr_{S_j} ∘ (Ad_{S_j} − 1) = dl_{S_j} − dr_{S_j} : g → T_{S_j} C_j is surjective [Ric67, §2]. Thus, the elements X′_i, Y′_i, Z′_j exist.
Footnote 9: Alternatively, this follows from the definition of φ and the surjectivity of dΨ_1.

2.2. Proof of Part (i) in positive characteristics. In positive characteristics, the notion of Lie algebra can be ambiguous, so let us explain what we mean by this. The group G, being connected split reductive, has a canonical Z-model G. Let g := Lie(G) denote the Lie ring scheme of G over Z. Then the Lie algebra g is, by definition, the base change of g to k.

2.2.1. There are three parts of the above proof that require care in positive characteristic:

(i) The equality C_g(H) = Lie(C_G(H)) does not always hold. This is the issue of "separability" in positive characteristics. However, if we assume that p is a very good prime for G, then this equality does hold, cf. [BMRT10].

(iii) The map dη_j : g → T_{S_j} C_j is not necessarily surjective [Ric67, Lemma 2.1]. However, this is actually not needed. The first statement of Proposition 9 holds as long as Z_j ∈ Im(dη_j). This implies Im(dF_r) ⊇ Im(φ), which is sufficient for the proof.

2.2.2. The upshot is that the theorem and its proof are valid, provided we assume that p is admissible in the following sense:

Definition 10. We say p is admissible for G if it is either 0 or is a very good prime for G and does not divide 2h^∨.
In terms of the root system Φ of G, the requirements on p are as follows:
• p ≠ 2;
• if Φ has a component of type A_r or C_r, then p ∤ r + 1;
• if Φ has a component of type B_r, then p ∤ 2r − 1;
• if Φ has a component of type D_r, then p ∤ r − 1;
• if Φ has a component of exceptional type, then p ≠ 3;
• if Φ has a component of type E_8, then p ≠ 5.

2.3. General facts about actions of group schemes. To prove part (ii) of the theorem, we need some facts about actions of group schemes. We momentarily use a more general notation than the rest of the paper. So let G be a group scheme acting on a scheme X. Let ϕ : G × X → X × X be the morphism ϕ(g, x) := (g.x, x). Following [MFK94], we say that the action is
• proper if ϕ is proper;
• free if for every scheme S, the action of G(S) on X(S) is free; equivalently, ϕ is a monomorphism;
• scheme-theoretically free if ϕ is a closed immersion, i.e. a proper monomorphism.

Example 11. The action of G_m on A^2 − {0} given by g.(x, y) = (gx, g^{−1}y) is free but not scheme-theoretically free. The GIT quotient is the affine line with a double point, which is not separated.

Footnote 10: We thank Paul Levy for his help in this regard. See also https://mathoverflow.net/questions/46813/.
Footnote 11: We thank Jack Hall for explaining to us the following facts about group actions.
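The bullet conditions spelling out Definition 10 are purely combinatorial, so they are easy to mechanise. The helper below is our own encoding (the function name and input format are ours), with a root system given as a list of simple components:

```python
# Encoding of the admissibility conditions of Definition 10 (our helper).
# A root system is a list of components, e.g. [("A", 6), ("E", 8)].

def is_admissible(p, components):
    if p == 0:
        return True
    if p == 2:
        return False
    for typ, r in components:
        if typ in ("A", "C") and (r + 1) % p == 0:
            return False
        if typ == "B" and (2 * r - 1) % p == 0:
            return False
        if typ == "D" and (r - 1) % p == 0:
            return False
        if typ in ("E", "F", "G") and p == 3:   # any exceptional component
            return False
        if (typ, r) == ("E", 8) and p == 5:
            return False
    return True

assert not is_admissible(7, [("A", 6)])   # p divides r + 1
assert not is_admissible(5, [("E", 8)])
assert is_admissible(7, [("E", 8)])
assert is_admissible(5, [("A", 3)])
```

For instance, this reproduces Remark (2) after Theorem 6: for E_8 every prime bigger than 5 is admissible.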
Let G be a group scheme of finite type over a field k acting on an affine scheme X of finite type over k. 2.4. Proof of part (ii). By Lemma 3, the action of G/Z on the representation variety R is free. Since R is affine and of finite type over k and G is connected reductive, the above discussions imply that the action is scheme-theoretically free and the canonical map [R/(G/Z)] → R/ /(G/Z) = X is an isomorphism. This proves the first statement of (ii). The remaining statements follow immediately. Character varieties over finite fields In this section, we work over a finite field k = F q and assume that the reductive group G has connected centre and that p = char(k) is a good prime for G. Let X be a character variety with regular monodromy over k (Definition 5). In what follows, we use the Frobenius Mass Formula (1) and the character theory of finite reductive groups (à la Deligne-Lusztig) to obtain an expression for |X(k)| involving certain character sums. The evaluation of these character sums will be carried out in subsequent sections. 3.1. Recollections on characters of G(k). Let Irr(G(k)) denote the set of irreducible complex characters of G(k). In what follows, we identify a representation with its character and denote them by the same letter. Proof. This is well-known, cf. [DL76, Corollary 6.3]. Let . , . denote the standard invariant inner product on the class functions on G(F q ). Theorem 15 (Deligne-Lusztig). Let S ∈ T (k) be a strongly regular element and χ ∈ Irr(G(k)). Then χ(S) = θ∈T (k) ∨ χ, B(θ) θ(S). In particular, χ(S) is non-zero only if χ is an irreducible constituent of a principal series representation B(θ) for some θ ∈ T (k) ∨ . Proof. This is a corollary of Deligne and Lusztig's construction of representations of G(k), see [DL76, Corollary 7.6.2]. 3.1.2. Our assumptions on G and p imply that regular unipotent elements of G(k) form a single conjugacy class, cf. [Spr66,Theorem 4.14]. Let N ∈ B(k) be a regular unipotent element. 
Theorem 16 (Green-Lehrer-Lusztig). (i) For each θ ∈ T(k)^∨, the principal series representation B(θ) has a unique irreducible constituent that does not vanish on N.
(ii) Every irreducible character of G takes value 0 or ±1 on N.

Proof. See [GLL76] or [DL76, Corollary 10.8].

Notation 17. We denote by χ_θ the constituent of B(θ) which does not vanish on N.

3.1.3. To study the representation χ_θ, we need some more information about the stabiliser of θ. For each θ ∈ T(k)^∨, let

W_θ := {w ∈ W | w.θ = θ}.

In view of the identification in §1.5.3, we can think of θ as an element of the dual torus T^∨(k); thus, we can define

Φ^∨_θ := {α ∈ Φ^∨ | α(θ) = 1}.

Theorem 18. (i) Φ^∨_θ is a closed subsystem of Φ^∨.
(ii) W_θ = W(Φ^∨_θ); in particular, W_θ is a reflection subgroup.

Proof. For (i), see [Hum95, §2.2], where it is explained that Φ^∨_θ is the root system of the centraliser C_{G^∨}(θ). For (ii), see [DL76, Theorem 5.13].

Remark 19. As noted in Remark 5.14 of [DL76], the closed subsystems Φ^∨_θ are of a special kind; namely, if Φ^∨ is irreducible, then Φ^∨_θ is generated by a subset of simple coroots together with the negative of the highest coroot. We will not use this fact.

Footnote 13: Since we have assumed G has connected centre, Steinberg's theorem implies that C_{G^∨}(θ) is connected.

3.1.4. We now fix a character θ and study the representation χ_θ. To this end, let ℓ and ℓ_θ denote the length functions on W and W_θ, respectively. Let

P(t) := Σ_{w∈W} t^{ℓ(w)} and P_θ(t) := Σ_{w∈W_θ} t^{ℓ_θ(w)}

denote the corresponding Poincaré polynomials.

Proposition 20. (i) χ_θ(1) = P(q)/P_θ(q). Thus

|G(k)|/χ_θ(1) = P_θ(q) · |G(k)|/P(q) = P_θ(q) · |B(k)|.

(ii) χ_θ(N) = 1.
(iii) χ_θ = χ_θ′ if and only if θ and θ′ are W-conjugate.
(iv) If S ∈ T(k) is strongly regular, then χ_θ(S) = (1/|W_θ|) Σ_{w∈W} θ(w.S).

Proof. (ii) By Theorem 16, χ_θ is the unique constituent of B(θ) whose character does not vanish at N, and its value at N is ±1. We conclude χ_θ(N) = 1.
(iii) This follows from Lemma 14, because if χ_θ = χ_θ′ then B(θ) and B(θ′) have a common irreducible constituent.

(iv) Theorem 15 implies χ_θ(S) = Σ_{θ′∈T(F_q)^∨} ⟨χ_θ, B(θ′)⟩ θ′(S). By the above discussion,

⟨χ_θ, B(θ′)⟩ = 1 if θ′ = w.θ for some w ∈ W; 0 otherwise.

Thus

χ_θ(S) = Σ_{θ′∈W.θ} θ′(S) = (1/|W_θ|) Σ_{w∈W} (w.θ)(S) = (1/|W_θ|) Σ_{w∈W} θ(w.S).

3.2. Point count. In this subsection, we begin counting points on the character variety with regular monodromy X = X(C_1, ..., C_n) (Definition 5). Recall that C_1, ..., C_m are strongly regular semisimple and C_{m+1}, ..., C_n are regular unipotent; thus,

|C_i(k)| = |G(k)| · |T(k)|^{−1} for i = 1, 2, ..., m, and |C_i(k)| = |G(k)| · |Z(k)|^{−1} q^{−r} for i = m + 1, ..., n.

By the Frobenius Mass Formula (1), we have

|X(k)| = Z Σ_{χ∈Irr(G(k))} (|G(k)|/χ(1))^{2g+n−2} ∏_{i=1}^n χ(C_i(k)),

where

(5)  Z := |Z(k)| · ∏_{i=1}^n |C_i(k)|/|G(k)| = |Z(k)| · |T(k)|^{−m} · (|Z(k)| q^r)^{m−n}.

Note that Z is a rational function in q.

3.2.2. Since at least one of the C_i's is strongly regular, Theorem 15 implies that in the above sum we only need to consider those χ ∈ Irr(G(k)) that are constituents of principal series representations B(θ) for some θ ∈ T(k)^∨. On the other hand, since at least one of the C_i's is regular unipotent, Theorem 16 implies that it suffices to consider the unique constituent χ = χ_θ. Finally, in view of Proposition 20.(iii), two characters in the same Weyl orbit yield isomorphic constituents. Therefore the Frobenius sum reduces to a sum over T(k)^∨/W, which, by Proposition 20.(ii), equals:

|X(k)| = Z Σ_{θ∈T(k)^∨/W} (|G(k)|/χ_θ(1))^{2g+n−2} ∏_{i=1}^m χ_θ(S_i).

For convenience, we re-write this as a sum over T(k)^∨:

|X(k)| = Z Σ_{θ∈T(k)^∨} (|W_θ|/|W|) (|G(k)|/χ_θ(1))^{2g+n−2} ∏_{i=1}^m χ_θ(S_i).

3.2.3. Next, Proposition 20.(i) implies that χ_θ(1) depends only on the stabiliser W_θ. By Theorem 18, we have W_θ = W(Φ^∨_θ).
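The identity |G(k)| = |B(k)| · P(q) used in Proposition 20.(i) comes from the Bruhat decomposition. A brute-force check for G = GL_3 over F_2 (our sketch, not from the paper): count invertible matrices directly and compare with |B(F_q)| · P(q), where P is the Poincaré polynomial of W = S_3.

```python
from itertools import product

q, n = 2, 3

# Determinant mod 2 of a 3x3 matrix (signs are irrelevant mod 2).
def det_mod2(M):
    return (M[0][0] * (M[1][1] * M[2][2] + M[1][2] * M[2][1])
          + M[0][1] * (M[1][0] * M[2][2] + M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] + M[1][1] * M[2][0])) % 2

# Brute force |GL_3(F_2)| by enumerating all 2^9 matrices.
count = 0
for entries in product(range(2), repeat=9):
    M = [list(entries[0:3]), list(entries[3:6]), list(entries[6:9])]
    if det_mod2(M) == 1:
        count += 1

# Poincare polynomial of W = S_n at q: P(q) = prod_{i=1}^{n} (q^i - 1)/(q - 1).
P = 1
for i in range(1, n + 1):
    P *= (q**i - 1) // (q - 1)

borel = (q - 1)**n * q**(n * (n - 1) // 2)   # |B(F_q)| = (q-1)^n q^{n(n-1)/2}

assert count == borel * P                    # |G(k)| = |B(k)| * P(q)
print(count)  # 168
```

This is exactly the factorisation that makes |G(k)|/χ_θ(1) = P_θ(q)·|B(k)| a polynomial in q, which is what the manipulations below exploit.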
Collecting the terms corresponding to the same closed subsystem Ψ = Φ^∨_θ ⊆ Φ^∨ and writing P_Ψ = P_θ for the Poincaré polynomial of W(Φ^∨_θ) yields

|X(k)| = Z Σ_{Ψ∈S(Φ^∨)} (|W(Ψ)|/|W|) · (P_Ψ(q) · |B(k)|)^{2g+n−2} · ( Σ_{θ∈T(k)^∨, W_θ=W(Ψ)} ∏_{i=1}^m χ_θ(S_i) ).

3.2.4. Using Proposition 20.(iv), we can rewrite the inner sum as follows:

Σ_{θ∈T(k)^∨, W_θ=W(Ψ)} ∏_{i=1}^m χ_θ(S_i) = (1/|W(Ψ)|^m) Σ_{θ∈T(k)^∨, W_θ=W(Ψ)} ∏_{i=1}^m ( Σ_{w∈W} θ(w.S_i) ).

3.2.5. Let S denote the tuple (S_1, ..., S_m). For each w = (w_1, ..., w_m) ∈ W^m, let w.S denote the product (w_1.S_1) ··· (w_m.S_m) ∈ T(k). Then we can rewrite the above sum as:

Σ_{θ: W_θ=W(Ψ)} ∏_{i=1}^m Σ_{w∈W} θ(w.S_i) = Σ_{θ: W_θ=W(Ψ)} Σ_{w∈W^m} θ(w_1.S_1) ··· θ(w_m.S_m) = Σ_{w∈W^m} Σ_{θ: W_θ=W(Ψ)} θ(w.S).

3.2.6. For ease of notation, let

α_{Ψ,S} := Σ_{θ∈T^∨(k), W_θ=W(Ψ)} θ(S).

Then we can summarise the results of this subsection in the following proposition:

Proposition 21. We have

|X(k)| = (Z · |B(k)|^{2g+n−2} / |W|) Σ_{Ψ∈S(Φ^∨)} |W(Ψ)|^{1−m} · P_Ψ(q)^{2g+n−2} · ( Σ_{w∈W^m} α_{Ψ,w.S} ).

To evaluate the character sum α_{Ψ,S}, we first need a detour on fixed points of the Weyl group W acting on the maximal torus T^∨.

4. Invariants of Weyl groups on tori

We work in the general setting of §1.5. Let T^W be the functor which associates to every k-algebra R the set of fixed points

T(R)^W := {t ∈ T(R) | w.t = t, ∀w ∈ W}.

In this section, we recall some properties of T^W. The results of this section will be applied in the subsequent sections for counting points on the character variety, but there we need to consider the action of W on T^∨.

4.1. The structure of T^W. Observe that T^W is represented by a closed (but not necessarily connected) algebraic subgroup of T. Indeed, T^W = Spec k[X_W], where X_W is the group of coinvariants of W on X; i.e. X_W := X/D_W, where D_W := ⟨x − w.x | x ∈ X, w ∈ W⟩. Note that T^W is smooth if and only if p is not a torsion prime for X/D_W.

Example 22.
If G = SL p , then T W ≃ µ p which is not smooth in characteristic p. 4.1.1. Next, let Tor(X W ) denote the torsion subgroup of X W and F (X W ) its maximal free quotient. Then we have a short exact sequence of finitely generated abelian groups 0 → Tor(X W ) → X W → F (X W ) → 0. Applying the Spec functor, we obtain a short exact sequence of groups of multiplicative type 1 ← π 0 (T W ) ← T W ← (T W ) • ← 1. 14 Here, we benefited from https://mathoverflow.net/questions/307868. Neutral component. One readily checks that 2 Φ ⊆ D W ⊆ Φ . Thus, X/D W and X/ Φ differ only in 2-torsion; in particular, the ranks of these groups are equal. Recall that the centre of G is given by Z := Spec k[X/ Φ ]. Thus, we conclude (T W ) • = Z • . Remark 23. In fact, one can show that T W ≃ (Z/2) r × Z, where r is the number of direct factors of G isomorphic to SO 2n+1 , for some n ≥ 1, cf. [JMO95, Proposition 3.2]. We shall not use this fact. 4.1.3. The group of components. For ease of notation, let T := Tor(X W ) and π 0 := π 0 (T W ) = Spec k[T]. Then π 0 (k) = Hom(T, k × ). If p does not divide |T|, then π 0 isétale and π 0 (k) = π 0 (k) Gal(k/k) = Hom(T, k × ) Gal(k/k) . 4.1.4. Group of components over finite fields. Now suppose k = F q and gcd(q, |T|) = 1. Then, we have π 0 (k) = Hom(T, k × ) = Hom(T, µ p ′ ) = Hom(T, C × ) = T ∨ . Here, the second equality uses the identification (2) while the third one follows from the fact that gcd(q, |T|) = 1. Taking the Galois fixed points and noting that the Galois group is generated by the Frobenius Fr which raises elements to power of q, we obtain π 0 (k) = π 0 (k) Gal(k/k) = π 0 (k) Fr = π 0 (k) q = (T ∨ ) q . Polynomial property. Lemma 24. Suppose q ≡ 1 mod |Tor(X W )|. Then T W is polynomial count with counting polynomial ||T W || = |Tor(X W )|(t − 1) rank(X/ Φ ) . Proof. Indeed, if q ≡ 1 mod |T|, then (T ∨ ) q = T ∨ . Thus, the above discussions imply |T W (F q )| = |T|(q − 1) rank(X/ Φ ) . 
The argument goes through if q is replaced with q n , establishing the lemma. 4.2. Invariants for subgroups of W . Lemma 25. Suppose X ∨ / Φ ∨ is free. Let Ψ be a root subsystem of Φ with Weyl group W (Ψ). Then D W (Ψ) = Ψ . Thus, T W (Ψ) = Spec k[X/ Ψ ]. Proof. This is stated without proof in [Der85, p 1035]. For the sake of completeness, we sketch a proof. The inclusion D W (Ψ) ⊆ Ψ is easy and follows from the fact that if x ∈ X and α ∈ Ψ, then x − s α x = x, α ∨ α ∈ Ψ . For the reverse inclusion, we need show that every β ∈ Ψ is in D W (Ψ) . Let β ∨ be the coroot corresponding to β (under a fixed W -invariant bijection Φ → Φ ∨ , cf. [GM20, §1.2]). Let w ∈ W be such that α := w.β is a simple root of Φ (cf. [Ser00, Theorem V.10.2.(c)]). Let µ α ∈ X ⊗ Q be the corresponding fundamental weight. The assumption that X ∨ / Φ ∨ is free implies that µ α ∈ X. Now, we have w −1 µ α , β ∨ = µ α , w.β ∨ = µ α , α ∨ = 1. Thus, β = w −1 µ α , β ∨ β = w −1 µ α − s β (w −1 µ α ) ∈ D W (Ψ) . Thus, D W (Ψ) = Ψ which implies that X W = X/ Ψ , establishing the last statement of the lemma. Modulus of a reductive group. Definition 26. The modulus of G, denoted by d(G), is defined to be the least common multiple of |Tor(X/ Ψ )|, where Ψ ranges over all closed subsystems of Φ. One checks that d(GL n ) = 1. On the other hand, suppose G is (almost) simple and simply connected. Then one can show that d(G) equals the least common multiple of coefficients of the highest root and the order of Z(G), cf. [Der85]. Thus, we have: Type A n B n C n D n E 6 E 7 E 8 F 4 G 2 d(G) n + 1 2 2 4 6 12 60 12 6 Note that for types B n , C n , E 6 , G 2 (resp. D n , E 7 , E 8 ), d(G) is the product of bad primes (resp. twice the product of bad primes) of G. Proposition 27. Suppose k = F q , q ≡ 1 mod d(G), and X ∨ / Φ ∨ is free. Then for every closed subsystem Ψ ⊆ Φ, the variety T W (Ψ) is polynomial count with counting polynomial ||T W (Ψ) || = |Tor(X/ Ψ )|(t − 1) rank(X/ Ψ ) . Proof. 
By Lemma 25, the assumption on X^∨/⟨Φ^∨⟩ implies that X_{W(Ψ)} = X/⟨Ψ⟩. Next, the fact that q ≡ 1 mod d(G) implies that q ≡ 1 mod |Tor(X/⟨Ψ⟩)|. The result then follows from Lemma 24. Remark 28. The proposition remains true if we consider all subsystems instead of closed ones, provided that we modify the definition of the modulus. Details are left to the reader. Example 29. Suppose G = GL_n and Ψ is the root system of the Levi subgroup L = GL_{λ_1} × · · · × GL_{λ_r}, where λ_1 ≥ λ_2 ≥ · · · ≥ λ_r is a partition of n. Then T^{W(Ψ)} = Z(L) ≃ G_m^r. Thus, for all q, |T^{W(Ψ)}(F_q)| = (q − 1)^r. 5. Evaluation of character sums. Throughout this section, we assume that X/⟨Φ⟩ is free. This implies that for each θ ∈ T(k)^∨, W_θ is a subgroup of W of the form W(Ψ) for some closed subsystem Ψ ⊆ Φ^∨. Let us recall the definition of the character sum α_{Ψ,S} which was introduced in §3.2.6. Definition 30. For a closed subsystem Ψ ⊆ Φ^∨ and an element S ∈ T(k), let α_{Ψ,S} := Σ_{θ∈T^∨(k), W_θ=W(Ψ)} θ(S). Our goal is to evaluate this sum and establish that it is a polynomial in q, thus completing the proof of Theorem 6. Our method is a generalisation of the approach of Deriziotis [Der85] who considered 15 the case S = 1. Note that in our application to point count (§5.4), we take S to be a strongly regular element; however, the rest of the discussions of this section are valid for arbitrary S. 5.1. An auxiliary sum. To evaluate α_{Ψ,S} it is convenient to consider an auxiliary sum: Definition 31. For a closed subsystem Ψ ⊆ Φ^∨ and an element S ∈ T(k), define ∆_{Ψ,S} := Σ_{θ∈T^∨(k), W_θ⊇W(Ψ)} θ(S). 5.1.1. Observe that W (and therefore W(Ψ)) acts on T^∨(k). Moreover, W_θ ⊇ W(Ψ) if and only if θ is in the set of fixed points (T^∨(k))^{W(Ψ)}. Thus, we can rewrite the sum as (6) ∆_{Ψ,S} = Σ_{θ∈(T^∨(k))^{W(Ψ)}} θ(S). To evaluate this sum, we need to recall a basic fact about Pontryagin duality.
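The fixed-point count in Example 29 above can be checked by brute force for small prime q: the fixed points of W(Ψ) = S_{λ_1} × ⋯ × S_{λ_r} permuting the coordinates of T(F_q) = (F_q^×)^n are exactly the tuples constant on the blocks of the partition, giving (q − 1)^r. A small sketch (names illustrative):

```python
# Brute-force check of Example 29: for G = GL_n and the Levi GL_{l1} x ... x GL_{lr},
# the W(Psi)-fixed points of the diagonal torus (F_q^x)^n are the tuples constant
# on the blocks of the partition, so there are (q - 1)^r of them.
import itertools

def fixed_points(q, blocks):
    """Count tuples in (F_q^x)^n fixed by every permutation inside each block (prime q)."""
    n = sum(len(b) for b in blocks)
    units = range(1, q)                       # F_q^x
    count = 0
    for t in itertools.product(units, repeat=n):
        if all(t[i] == t[j] for b in blocks for i in b for j in b):
            count += 1
    return count

q = 5
# n = 4, partition (2, 1, 1): blocks {0,1}, {2}, {3}
assert fixed_points(q, [[0, 1], [2], [3]]) == (q - 1) ** 3
# n = 4, partition (3, 1)
assert fixed_points(q, [[0, 1, 2], [3]]) == (q - 1) ** 2
# n = 3, the full Levi GL_3: the fixed points are the q - 1 scalar tuples, i.e. Z(L)
assert fixed_points(q, [[0, 1, 2]]) == q - 1
```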
Given a homomorphism of finite abelian groups f : A → B, let f ∨ : B ∨ → A ∨ denote the map f ∨ (τ ) = τ • f . Note that if f is surjective, then f ∨ is injective. Lemma 32. Let f : A ։ B be a surjective group homomorphism with B abelian. Then, for every a ∈ A, we have θ∈f ∨ (B ∨ ) θ(a) = |B| if f (a) = 1 0 otherwise. Proof. Indeed, θ∈f ∨ (B ∨ ) θ(a) = τ ∈B ∨ f ∨ (τ )(a) = τ ∈B ∨ τ (f (a)) = |B| if f (a) = 1 0 otherwise. Here, the first equality follows from the injectivity of f ∨ , the second one is by definition of f ∨ , and the third one is the elementary fact that sum of characters of B evaluated an element b ∈ B vanishes unless b = 1, in which case the sum equals |B|. 5.1.3. We now return to evaluating ∆ Ψ,S . By definition, we have an injective homomorphism of abelian groups f ∨ Ψ : (T ∨ (k)) W (Ψ) ֒→ T ∨ (k). By Pontryagin duality, we obtain a surjective homomorphism f Ψ : T (k) ։ (T ∨ (k)) W (Ψ) ∨ . Lemma 32 then implies: 15 The sum α Ψ,1 is closely related to the genus number attached to Ψ, cf. [Der85]. Corollary 33. We have ∆ Ψ,S = |(T ∨ (k)) W (Ψ) | if f Ψ (S) = 1 0 otherwise. Thus, if q ≡ 1 mod d(G ∨ ), then ∆ is either 0 or it equals |Tor(X/ Ψ )|(q − 1) rank(X/ Ψ ) . In particular, it is a polynomial in q. Proof. The first statement follows from the previous lemma and the expression for ∆ given in (6). The second statement follows from Proposition 27. 5.1.4. Example. Suppose G = GL n and that S ∈ T (F q ) ∩ [G(F q ), G(F q )] is generic in the sense of [HLRV11]. This is equivalent to the statement that S / ∈ [L(F q ), L(F q )] for every proper Levi subgroup L ⊂ G. It follows that ∆ Ψ,S = 0, unless Ψ = Φ ∨ , in which case, we have ∆ Φ ∨ ,S = (q − 1). Thus, in the generic case, ∆ Ψ,S and α Ψ,S are independent of S. 5.2. On the map f Ψ . The purpose of this subsection, which is independent of the rest of the article, is to give a more straightforward description of the map f Ψ . To this end, it would be convenient to have some notation. 
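Lemma 32 above can be verified numerically for a concrete surjection, say reduction f : Z/12 → Z/4 (a choice made purely for illustration): summing the pulled-back characters at a should give |B| = 4 when f(a) = 0 and 0 otherwise.

```python
# Numerical check of Lemma 32 for the surjection f : Z/12 ->> Z/4 (reduction mod 4).
# The pulled-back characters are a |-> exp(2*pi*i*m*f(a)/4) for m = 0, 1, 2, 3.
import cmath

A, B = 12, 4

def pulled_back_sum(a):
    """Sum over f^vee(B^vee) of theta(a), for a in Z/A."""
    return sum(cmath.exp(2j * cmath.pi * m * (a % B) / B) for m in range(B))

for a in range(A):
    expected = B if a % B == 0 else 0
    assert abs(pulled_back_sum(a) - expected) < 1e-9
```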
Recall that Ψ is a closed subsystem of Φ ∨ . Let H ∨ be the connected reductive subgroup of G ∨ with maximal torus T ∨ and root system Ψ. Let H be the Langlands dual group over k. Thus, H is a connected split reductive group over k with maximal split torus T and root datum (X, Ψ ∨ , X ∨ , Ψ). 16 Let H(k) ′ denote the derived subgroup [H(k), H(k)]. Proposition 34. The homomorphism f Ψ coincides with the canonical projection T (k) → T (k)/ T (k) ∩ H(k) ′ . Proof. Let E ⊆ T (k) ∨ be the set of homomorphisms T (k) → C × which extend to homomorphisms H(k) → C × . It is easy to see (cf. [KS13,Lemma 12]) that E = T (k)/(T (k) ∩ H(k) ′ ) ∨ Thus, to prove the proposition, we need to show that E equals the set of W (Ψ)-invariant characters of T ∨ (k); i.e. E = (T ∨ (k)) W (Ψ) . Let θ ∈ T (k) ∨ . Suppose first that θ is extendable; i.e., there exists θ : H(k) → C × such that θ| T (k) = θ. Let w ∈ W (Ψ) and choose a liftẇ ∈ N H(k) (T (k)). Then for every t ∈ T (k), we have θ(w.t) = θ(ẇtẇ −1 ) = θ(t) = θ(t). Thus, θ is W (Ψ)-invariant. Conversely, suppose θ is W (Ψ)-invariant. As X/ ΦT(k) ∩ [G(k), G(k)] = α ∨ (k) α∈Φ , In another direction, observe that we have a natural inclusion [G(k), G(k)] ⊆ [G, G](k) which is not necessarily an equality if k is not algebraically closed. For instance, the commutator subgroup of PGL n (F q ) is a proper subgroup equal to the kernel of the determinant map det : Proposition 36. PGL n (F q ) → F × q /(F × q ) n .(i) We have α Ψ,S = Ψ ′ ∈S(Φ ∨ ) Ψ ′ ⊇Ψ µ(Ψ, Ψ ′ )∆ Ψ ′ ,S . (ii) If X/ Φ is free and q ≡ 1 mod d(G ∨ ), then α Ψ,S is a polynomial in q. Proof. The first part follows immediately from the Möbius inversion formula, while the second one follows from Corollary 33. .., n}, ordered by refinement. The Möbius function of the latter poset is given explicitly in, e.g., [Sta11,§3]. So suppose Φ 1 ⊆ Φ 2 are two closed subsystems of Φ ∨ corresponding to partitions π 1 ≤ π 2 . Let t and s be the number of blocks of π 1 and π 2 respectively. 
Then µ(Φ_1, Φ_2) = (−1)^{t−s} ∏_{i=1}^{s} (t_i − 1)!, where t_i is the number of blocks of π_1 whose union is the i-th block of π_2, for i = 1, 2, . . . , s. 5.3.2. Suppose S = 1 and Ψ = ∅. Then α_{Ψ,S} is the number of regular elements of T^∨(F_q). Thus, if G = GL_n, then α_{∅,1} = (q − 1)(q − 2) . . . (q − n). By the above discussions, we also have α_{∅,1} = Σ_{Ψ∈S(A_{n−1})} µ(∅, Ψ)|T(F_q)^{W(Ψ)}|. Using the description of µ given in the previous paragraph, one can check that the RHS does indeed equal (q − 1) . . . (q − n). 5.4. Conclusion of the Proof of Theorem 6. Combining the results of this section with those of §3, we obtain a more precise version of Theorem 6: Theorem 37. Under the assumptions of Theorem 6, we have |X(k)| = (Z · |B(k)|^{2g+n−2} / |W|) Σ_{Ψ∈S(Φ^∨)} |W(Ψ)|^{1−m} · P_Ψ(q)^{2g+n−2} · Σ_{w∈W^m} Σ_{Ψ′∈S(Φ^∨), Ψ′⊇Ψ} µ(Ψ, Ψ′) ∆_{Ψ′,w.S}. The above expression is a polynomial in q; moreover, if F_q is replaced with F_{q^n}, then in this polynomial q is replaced with q^n. Thus, X is polynomial count. Proof. We have already shown that |X(k)| is given by the above formula. Thus, it remains to establish the last statement of the theorem. Observe that the right-hand side is evidently a rational function in q; indeed, aside from Z, every term is a polynomial in q. Let us denote this rational function by ||X||. Then, we have |X(F_{q^n})| = |[R/(G/Z)(F_{q^n})]| = |[R(F_{q^n})/(G/Z)(F_{q^n})]| = ||X||(q^n), where the middle equality follows from the fact that taking quotient stacks commutes with base change. We conclude that X is rational count with counting function ||X||. Finally, to see that ||X|| is actually a polynomial, one can either prove this directly by using the explicit formula or one can appeal to the general observation that if a variety is rational count, then in fact, it is polynomial count; cf. [LRV20, Remark 2.7]. 5.4.1. Let us do a consistency check when G = T is a torus. In this case, every element g ∈ G is regular semisimple and regular unipotent.
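The consistency check in 5.3.2 above can also be carried out by machine: on the partition lattice the Möbius function from the bottom element is µ(0̂, π) = ∏_{blocks b} (−1)^{|b|−1}(|b|−1)!, and the inversion sum should agree with the direct count (q − 1)(q − 2)⋯(q − n) of regular elements of the diagonal torus of GL_n. A sketch for small n and prime q (all names my own):

```python
# Check of 5.3.2 for G = GL_n: count regular elements of the diagonal torus T(F_q)
# directly and via Mobius inversion over the partition lattice, where
# mu(0, pi) = prod over blocks b of (-1)^{|b|-1} * (|b|-1)!.
import itertools, math

def set_partitions(elts):
    # all set partitions of a list of distinct elements
    if not elts:
        yield []
        return
    first, rest = elts[0], elts[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                        # first in its own block
        for i in range(len(part)):                    # or joined to an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def mobius_sum(n, q):
    # sum over partitions pi of mu(0, pi) * |T^{W(pi)}(F_q)|, with
    # |T^{W(pi)}(F_q)| = (q - 1)^{#blocks} as in Example 29
    total = 0
    for part in set_partitions(list(range(n))):
        mu = math.prod((-1) ** (len(b) - 1) * math.factorial(len(b) - 1) for b in part)
        total += mu * (q - 1) ** len(part)
    return total

def regular_count(n, q):
    # direct count: tuples in (F_q^x)^n with pairwise distinct entries
    return sum(1 for t in itertools.product(range(1, q), repeat=n) if len(set(t)) == n)

for n, q in [(2, 5), (3, 5), (3, 7), (4, 7)]:
    expected = math.prod(q - i for i in range(1, n + 1))   # (q-1)(q-2)...(q-n)
    assert mobius_sum(n, q) == expected
    assert regular_count(n, q) == expected
```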
Moreover, every conjugacy class is a singleton. Thus, the representation variety and the character variety coincide R(C 1 , ..., C n ) = X(C 1 , ..., C n ) = T 2g . On the other hand, since Φ ∨ = ∅, the above formula gives |X(k)| = Z|T (k)| 2g+n−2 ∆ ∅,S = |T (k)| 1−n |T (k)| 2g+n−2 |T (k)| = |T (k)| 2g . Topological invariants of character varieties The goal of this section is to prove Theorem 8. 6.1. Proof of Part (i). We have already seen that X is equidimensional. (This is part (ii) of Theorem 1.) Thus, it is enough to show that the leading coefficient of the counting polynomial ||X|| is |Tor(X ∨ / Φ ∨ )|. 6.1.1. In view of Theorem 37, to understand the leading coefficient of ||X||, we need to study the polynomials Q Ψ,Ψ ′ ,S := P Ψ (q) 2g+n−2 ∆ Ψ ′ ,S . Here, Ψ ⊆ Ψ ′ are closed subsystems of Φ ∨ and S = S 1 · · · S m . By Corollary 33, we have deg(Q Ψ,Ψ ′ ,S ) ≤ (2g + n − 2)|Ψ + | + dim((T ∨ ) W (Ψ ′ ) ), To prove Theorem 8.(ii) it is enough to show: (1) the leading coefficient of Q Φ ∨ ,Φ ∨ ,S is |Tor(X ∨ /Q ∨ )| and (2) deg(Q Ψ,Ψ ′ ,S ) is maximal if and only if Ψ = Ψ ′ = Φ ∨ . These are established in the two propositions below. Proposition 38. (a) deg(Q Φ ∨ ,Φ ∨ ,S ) = (2g + n − 2)|Φ + | + dim((T ∨ ) W ). (b) The leading coefficient of Q Φ ∨ ,Φ ∨ ,S is |Tor(X ∨ /Q ∨ )|. Proof. For (a) it is enough to show that ∆ Φ ∨ ,S = θ∈T (k) ∨ W θ =W θ(S) = 0. Since X/ Φ is free, we can apply Theorem 3.(iii) of [KS13] (or Proposition 34) to conclude that every W -invariant character θ ∈ T (k) ∨ extends to a character of G(k). Thus θ T (k) ∩ [G(k), G(k)] = 1. On the other hand, we have assumed (see §1.1.1) that S ∈ [G(k), G(k)]. It follows that θ(S) = 1 for all W -invariant θ's. Thus, ∆ Φ ∨ ,S = |(T ∨ (k)) W |, establishing (a). For (b), note that the leading coefficient equals |π 0 ((T ∨ ) W )|. Proposition 27 (applied to G ∨ instead of G) implies that |π 0 ((T ∨ ) W )| = |Tor(X ∨ /Q ∨ )|. Proposition 39. deg(Q Ψ,Ψ ′ ,S ) is maximal if and only if Ψ = Ψ ′ = Φ ∨ . Proof. 
We need to show that if Ψ is not equal to Φ ∨ , then (2g + n − 2)|Φ + | + dim(T ∨ ) W > (2g + n − 2)|Ψ + | + dim(T ∨ ) W (Ψ ′ ) . This is equivalent to (2g + n − 2)(|Φ + | − |Ψ + |) > dim(T ∨ ) W (Ψ ′ ) − dim(T ∨ ) W . Note that by assumption, 2g + n − 2 ≥ 2, |Φ + | − |Ψ + | > 0, and (T ∨ ) W (Ψ) ⊇ (T ∨ ) W (Ψ ′ ) . Thus it is enough to show that 2(|Φ + | − |Ψ + |) = |Φ| − |Ψ| > dim(T ∨ ) W (Ψ) − dim(T ∨ ) W . We now reduce this to an inequality about irreducible root systems. Let Φ = Φ 1 ⊔ · · · ⊔ Φ t , where Φ i are irreducible root systems. Let Ψ i = Φ i ∩ Ψ, 1 ≤ i ≤ t. Since Ψ is a proper subsystem of Φ, we may assume without the loss of generality that there exists s ∈ {1, 2, ..., t − 1} such that we have Ψ j = Φ j , 1 ≤ j ≤ s, and, Ψ i Φ i , s + 1 ≤ i ≤ t. We thus obtain: |Φ| − |Ψ| = t i=s+1 (|Φ i | − |Ψ i |) and dim(T ∨ ) L − dim(T ∨ ) W = t i=s+1 (rank(Φ i ) − rank(Ψ i )) ≤ t i=s+1 rank(Φ i ). Therefore, to prove our result, it suffices to show that for each irreducible root system Φ i and proper subsystem Ψ i Φ i , we have rank(Φ i ) < |Φ i | − |Ψ i |. This follows from the next lemma. Lemma 40. Let Φ be an irreducible root system of rank r and Ψ a proper subsystem. Then |Φ| − |Ψ| ≥ 2r. Proof. We prove this by induction on r. The statement is obvious for r = 1. For r > 1 consider a base ∆ ⊂ Φ, and let α be an element of ∆ not in Ψ. Consider the root subsystem Φ ′ Φ with base ∆ \ {α}. It has rank r − 1. Now Φ ′ is the union of (at most three) irreducible components whose ranks add up to r − 1. Let Φ ′′ be one such component and s = rank(Φ ′′ ). We have two cases: (a) If Φ ′′ ∩ Ψ = Φ ′′ then by the inductive hypothesis |Φ ′′ \ (Φ ′′ ∩ Ψ)| ≥ 2s. (b) If Φ ′′ ∩ Ψ = Φ ′′ , then it is easy to see that Φ ′′ contains β 1 , . . . , β s so that ±s β j (α), 1 ≤ j ≤ s are 2s distinct elements of Φ \ Ψ \ {±α}. 
Indeed, we can take each β j to be the sum of a few elements of ∆ ∩ Φ ′′ , so that one of them is connected to α in the Dynkin diagram of Φ, and together they form a connected subset. Then α, β 1 , . . . , β s is a linearly independent set of roots, and the ±s β j (α) are distinct from each other and ±α. In either case, we find 2s separate roots of Φ \ Ψ corresponding to each irreducible component Φ ′′ of Φ ′ . Together with ±α these give 2r distinct elements of Φ \ Ψ. This completes the proof of the lemma. Remark 41. (1) The statement of Lemma 40 is sharp. For instance, consider B 2 ⊃ A 1 × A 1 , or A r ⊃ A r−1 . (2) The above discussions show that the degree of ||X|| equals 2g dim(G)−2 dim([G, G])+ n i=1 dim(C i ). This is in agreement with Theorem 1. 6.2. Proof of Part (ii). To prove that X has Euler characteristic 0, it is sufficient to show that (q − 1) divides the counting polynomial ||X|| of Theorem 37. First, observe that Z, defined in (5), is a rational function in q and ord q−1 (Z) = (m − n + 1) dim(Z) − m dim(T ). Next, B(k) is a polynomial in q with ord q−1 (B(k)) = dim(T ). Finally, by Corollary 33, ord q−1 (∆ Ψ,S ) ≥ dim(T ∨ ) W (Ψ) ≥ dim(T ∨ ) W = dim(Z(G ∨ )) = dim(Z(G)). Combining these observations with the expression for ||X|| given in Theorem 37, one readily verifies that ord q−1 (||X||) ≥ (2g + n − 2) dim(T ) + (m − n + 1) dim(Z) − m dim(T ) + dim(Z). Since, we have assumed G is non-commutative, dim(T ) > dim(Z) and so when g > 0 or n − m > 2, we obtain: ord q−1 (||X||) ≥ (2g + n − m − 2) dim(T ) − (n − m − 2) dim(Z) > 0. Examples We conclude the paper by giving a few explicit examples of cases not covered by Theorem 8. In all of these examples, the group is GL 2 or GL 3 and the genus g is zero. 7.1. The case (G, g, n) = (GL 2 , 0, 3). Let us consider character varieties with regular monodromy on P 1 minus three points. 
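The sharpness claims in Remark 41.(1) reduce to root-counting arithmetic: a root system of type A_n has n(n + 1) roots and one of type B_n has 2n², so both B_2 ⊃ A_1 × A_1 and A_r ⊃ A_{r−1} attain the bound |Φ| − |Ψ| ≥ 2·rank(Φ) of Lemma 40. A quick check:

```python
# Arithmetic behind Remark 41.(1): the bound |Phi| - |Psi| >= 2 * rank(Phi)
# of Lemma 40 is attained for B_2 > A_1 x A_1 and for A_r > A_{r-1}.
def size_A(n):
    # number of roots of type A_n
    return n * (n + 1)

def size_B(n):
    # number of roots of type B_n
    return 2 * n * n

# B_2 contains A_1 x A_1 (the long and the short root pairs): 8 - 4 = 4 = 2 * rank
assert size_B(2) - 2 * size_A(1) == 2 * 2

# A_r contains A_{r-1}: r(r+1) - (r-1)r = 2r = 2 * rank, for every r
for r in range(1, 10):
    assert size_A(r) - size_A(r - 1) == 2 * r
```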
Then we can have two types of monodromies: either we have two regular semisimple and one regular unipotent or we have two regular unipotent and one regular semisimple. 7.1.1. Let us first assume we have two regular semisimple classes. Specifically, let C 1 and C 2 be the conjugacy classes of diag(a, b) and diag(c, d), respectively, with a = b and c = d. Recall, by the assumption of §1.1.1, abcd = 1, for otherwise the character variety is empty. We then have two cases: (i) If 1 / ∈ {ac, ad, bc, bd} 17 , then one may check using our counting formula that X(C 1 , C 2 , C 3 ) is a singleton. We now give explicitly the unique element of X. To this end, we need to specify matrices X i ∈ C i , i = 1, 2, 3, satisfying X 1 X 2 X 3 = 1: X 1 = a 0 a + b + c −1 + d −1 b , X 2 = a −1 −a −1 −c − c + b −1 + a −1 . c + d − a −1 , X 3 = 1 1 0 1 . (ii) Suppose 1 ∈ {ac, ad, bc, bd}. In this case, our counting formula implies that X has two points. Let us assume that c = a −1 . Then abcd = 1 implies that d = b −1 . We can then write down explicit expressions for the two points (X 1 , X 2 , X 3 ) and (Y 1 , Y 2 , Y 3 ) as follows: 7.1.2. Next, suppose we have two regular unipotent and one regular semisimple monodromies. Again using our counting formula, one can check that X(C 1 , C 2 , C 3 ) is a singleton. The unique element can be expressed as follows: X 1 = a −a + b 0 b , X 2 = a −X 1 = a 2 −a+1 a + 1 a a 2 −2a+1 − a 2 −2a+1 a 0 , X 2 = 0 − a a 2 −2a+1 a 2 −2a+1 a 2 , X 3 = 1 1 0 1 . 7.2. Character varieties with positive Euler characteristic. In this subsection, we give some examples of character varieties with regular monodromy and non-zero Euler characteristic. It would be interesting to find a general expression for these Euler characteristics. 7.2.1. Let (G, g, m, n) = (GL 2 , 0, 3, 4), with C 1 , C 2 , C 3 the conjugacy classes of diag(a 1 , a 2 ), diag(b 1 , b 2 ), diag(c 1 , c 2 ), respectively and C 4 the regular unipotent conjugacy class. 
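The singleton claim in case (i) of 7.1.1 can be probed numerically. Over F_5, the illustrative choice (a, b, c, d) = (2, 4, 1, 2) (my own; not from the text) satisfies abcd = 1 and 1 ∉ {ac, ad, bc, bd}. Since PGL_2 acts freely on solution triples under these hypotheses, a one-point character variety should correspond to exactly |PGL_2(F_5)| = 120 triples (X_1, X_2, X_3) with X_i ∈ C_i and X_1 X_2 X_3 = 1. A brute-force sketch:

```python
# Brute-force probe of case (i) of 7.1.1 over F_5 with (a, b, c, d) = (2, 4, 1, 2):
# abcd = 1 and 1 is not in {ac, ad, bc, bd}, so the character variety should be a
# single point, i.e. a single free PGL_2(F_5)-orbit of 120 solution triples.
import itertools

p = 5
a, b, c, d = 2, 4, 1, 2
assert (a * b * c * d) % p == 1
assert all(x % p != 1 for x in (a * c, a * d, b * c, b * d))

def in_class(m, tr, det):
    # for distinct eigenvalues the characteristic polynomial determines the class
    x, y, z, w = m
    return (x + w) % p == tr % p and (x * w - y * z) % p == det % p

all_mats = list(itertools.product(range(p), repeat=4))
C1 = [m for m in all_mats if in_class(m, a + b, a * b)]   # class of diag(2, 4)
C2 = [m for m in all_mats if in_class(m, c + d, c * d)]   # class of diag(1, 2)
assert len(C1) == len(C2) == 30                            # |G|/|T| = 480/16

def mul(m1, m2):
    x, y, z, w = m1; e, f, g, h = m2
    return ((x*e + y*g) % p, (x*f + y*h) % p, (z*e + w*g) % p, (z*f + w*h) % p)

def is_reg_unipotent(m):
    x, y, z, w = m
    return (x + w) % p == 2 and (x * w - y * z) % p == 1 and m != (1, 0, 0, 1)

count = 0
for X1 in C1:
    for X2 in C2:
        # X3 = (X1 X2)^{-1} is regular unipotent iff X1 X2 is
        if is_reg_unipotent(mul(X1, X2)):
            count += 1

pgl_order = (p**2 - 1) * (p**2 - p) // (p - 1)   # |GL_2(F_5)| / |centre| = 480/4
assert pgl_order == 120
assert count == pgl_order                         # one free orbit: a single point
```

The irreducibility argument is what rules out smaller stabilizers here: a common eigenvector would force one of ac, ad, bc, bd to equal 1, which the hypotheses exclude.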
Let us assume that a 1 = a 2 , b 1 = b 2 and c 1 = c 2 and a i b j c k = 1 for all (i, j, k). 18 Then our counting formula implies |X(C 1 , C 2 , C 3 , C 4 )| = q(q + 3). Thus the Euler characteristic is 4. 7.2.2. Let (G, g, m, n) = (GL 3 , 0, 3, 4) with C 1 , C 2 , C 3 the conjugacy classes of diag (a 1 , a 2 , a 3 ), diag(b 1 , b 2 , b 3 ), diag(c 1 , c 2 , c 3 ), respectively and C 4 the regular unipotent conjugacy class. Let us assume that a i = a j , b i = b j and c i = c j for all i = j and a i b j c k = 1. Then our counting formula implies |X(C 1 , C 2 , C 3 , C 4 )| = q 4 (q 4 + 6q 3 + 19q 2 + 42q + 46). Thus the Euler characteristic is 114. 7.2.3. Let (G, g, m, n) = (GL 2 , 0, 2, 4) with C 1 , C 2 the conjugacy classes of diag(a 1 , a 2 ), diag(b 1 , b 2 ), respectively and C 3 , C 4 the regular unipotent class. Let us assume that a 1 = a 2 and b 1 = b 2 and a i b j = 1 for all (i, j). Then |X(C 1 , C 2 , C 3 , C 4 )| = q 2 + 2q − 1. Thus, the Euler characteristic is 2. 7.2.4. Finally, suppose (G, g, m, n) = (GL 3 , 0, 2, 4) with C 1 , C 2 the conjugacy classes of diag(a 1 , a 2 , a 3 ), diag(b 1 , b 2 , b 3 ), respectively and C 3 , C 4 the regular unipotent class. Let us assume that a i = a j and b i = b j for all i = j and a i b j = 1 for all (i, j). Then |X(C 1 , C 2 , C 3 , C 4 )| = q 2 (q + 1) 2 (q 2 + q + 1) 2 − 9q 2 (q + 1) 2 + 12q 2 . Thus, the Euler characteristic is 12. (i) The action of G on X is free if and only if the quotient stack [X/G] is represented by an algebraic space. This follows from the fact that an Artin stack is Deligne-Mumford if and only if it has an unramified diagonal. 12 (ii) If G is connected reductive and the action of G on X is free then the canonical map [X/G] → X/ /G is an isomorphism. This is because, under our assumptions, [X/G] is a separated algebraic space, and X/ /G is a categorical quotient in the category of separated algebraic spaces, cf. Theorem 7.2.1 of[Alp14]. 3. 1 . 1 . 
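Since these varieties are polynomial count, the Euler characteristics quoted in 7.2.1-7.2.4 are obtained by evaluating the counting polynomials at q = 1; a quick check (assuming nothing beyond polynomial evaluation):

```python
# The Euler characteristics of 7.2.1-7.2.4 are the counting polynomials at q = 1.
def chi(poly):
    return poly(1)

assert chi(lambda q: q * (q + 3)) == 4                                      # 7.2.1
assert chi(lambda q: q**4 * (q**4 + 6*q**3 + 19*q**2 + 42*q + 46)) == 114   # 7.2.2
assert chi(lambda q: q**2 + 2*q - 1) == 2                                   # 7.2.3
assert chi(lambda q: q**2 * (q + 1)**2 * (q**2 + q + 1)**2
                     - 9 * q**2 * (q + 1)**2 + 12 * q**2) == 12             # 7.2.4
```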
To every character θ ∈ T(k)^∨, one associates a principal series representation B(θ).
Lemma 14. If θ and θ′ are W-conjugate then B(θ) and B(θ′) are isomorphic. Otherwise, they have no isomorphic irreducible constituents.
12 Cf. Stacks Project, §100.21.
(i) This is proved in [Kil78, §5.11] for semisimple G of adjoint type. The proof for reductive G with connected centre is left as an exercise. (See [GM20, Theorem 2.6.11 (b)], [DM20, Proposition 3.5.1, Remark 11.2.2] and [GM20, Theorem 1.6.7].) (ii) The induced character formula implies that the character of B(θ) at N is positive.
5.3.3. Evaluating α_{Ψ,S} via Möbius inversion. Let µ : S(Φ^∨) × S(Φ^∨) → Z denote the Möbius function on the poset S(Φ^∨) of closed subsystems of Φ^∨. For a description of this Möbius function, cf. [DH93, FJ91, FJ93].
5.3.1. For G = GL_n, the poset S(Φ^∨) is isomorphic to the poset of set-partitions of {1, 2, . . . , n}, ordered by refinement.
(ii) The Killing form on [g, g] may be degenerate. However, if, in addition to being very good, we assume that p does not divide 2h^∨ (where h^∨ is the dual Coxeter number [Kac90, Chapter 6]), then the Killing form is non-degenerate, cf. [SS70, I. Theorem 4.8].
10 This is equivalent to requiring that for every S_i ∈ C_i(k) the product S_1 · · · S_n is in [G(k), G(k)]. Note that under these assumptions, every representation in R is stable under the action of G/Z (in the sense of geometric invariant theory). However, such a representation is not necessarily irreducible, cf. §7.1.1, Part (ii). This kind of phenomenon is rare. Irreducible representations are often stable.
3 This observation is also used in [Cam17] to count points on certain Sp_{2n}-character varieties.
"PORC" stands for Polynomial On Residue Classes.
16 Note that H is not necessarily a subgroup of G. If H^∨ is a centraliser of a semisimple element, then H is an endoscopy group for G.
17 I.e., the tuple (C_1, C_2, C_3) is generic in the sense of [HLRV11].
18 I.e., the tuple (C_1, C_2, C_3, C_4) is generic in the sense of [HLRV11].

References

[Alp14] J. Alper, Adequate moduli spaces and geometrically reductive group schemes, Journal of Algebraic Geometry 1 (2014), no. 4, 489-531.
[Bal22] M. Ballandras, Intersection cohomology of character varieties for punctured Riemann surfaces, arXiv:2201.08795 (2022).
[BH17] D. Baraglia and P. Hekmati, Arithmetic of singular character varieties and their E-polynomials, Proceedings of the London Mathematical Society (3) 114 (2017), no. 2, 293-332.
[BMRT10] M. Bate, B. Martin, G. Röhrle, and R. Tange, Complete reducibility and separability, Transactions of the American Mathematical Society 362 (2010), no. 8, 4283-4311.
[BD97] A. Beilinson and V. Drinfeld, Quantization of Hitchin's integrable system and Hecke eigensheaves (1997). http://math.uchicago.edu/~drinfeld/langlands/QuantizationHitchin.pdf
[BN18] D. Ben-Zvi and D. Nadler, Betti geometric Langlands, Algebraic Geometry: Salt Lake City 2015, 97 (2018), 3-41.
[Boa14] P. Boalch, Geometry and braiding of Stokes data: fission and wild character varieties, Annals of Mathematics 179 (2014), 301-365.
[BK22] N. Bridger and M. Kamgarpour, Character stacks are PORC count, arXiv:2203.04521 (2022).
[Cam17] V. Cambò, On the E-polynomial of parabolic Sp_{2n}-character varieties, arXiv:1708.00393 (2017).
[dCHM12] M. A. de Cataldo, T. Hausel, and L. Migliorini, Topology of Hitchin systems and Hodge theory of character varieties: the case A_1, Annals of Mathematics (2) 175 (2012), no. 3, 1329-1407.
[DF13] P. Deligne and Y. Z. Flicker, Counting local systems with principal unipotent local monodromy, Annals of Mathematics (2013), 921-982.
[DL76] P. Deligne and G. Lusztig, Representations of reductive groups over finite fields, Annals of Mathematics (2) 103 (1976), no. 1, 103-161.
[Der85] D. I. Deriziotis, On the number of conjugacy classes in finite groups of Lie type, Communications in Algebra 13 (1985), no. 5, 1019-1045.
[DH93] D. I. Deriziotis and D. F. Holt, The Möbius function of the lattice of closed subsystems of a root system, Communications in Algebra 21 (1993), no. 5, 1543-1570.
[DM20] F. Digne and J. Michel, Representations of finite groups of Lie type, Vol. 95, Cambridge University Press, 2020.
[FJ91] P. Fleischmann and I. Janiszczak, The lattices and Möbius functions of stable closed subrootsystems and hyperplane complements for classical Weyl groups, Manuscripta Mathematica 72 (1991), no. 1, 375-403.
[FJ93] P. Fleischmann and I. Janiszczak, Combinatorics and Poincaré polynomials of hyperplane complements for exceptional Weyl groups, Journal of Combinatorial Theory, Series A 63 (1993), no. 2, 257-274.
[GM20] M. Geck and G. Malle, The character theory of finite groups of Lie type: a guided tour, Vol. 187, Cambridge University Press, 2020.
[GLL76] J. A. Green, G. I. Lehrer, and G. Lusztig, On the degrees of certain group characters, The Quarterly Journal of Mathematics, Oxford (2) 27 (1976), no. 105, 1-4.
[HLRV11] T. Hausel, E. Letellier, and F. Rodriguez-Villegas, Arithmetic harmonic analysis on character and quiver varieties, Duke Mathematical Journal 160 (2011), no. 2, 323-400.
[HRV08] T. Hausel and F. Rodriguez-Villegas, Mixed Hodge polynomials of character varieties, Inventiones Mathematicae 174 (2008), no. 3, 555-624.
[Hum95] J. E. Humphreys, Conjugacy classes in semisimple algebraic groups, Mathematical Surveys and Monographs, vol. 43, American Mathematical Society, 1995.
[JMO95] S. Jackowski, J. McClure, and B. Oliver, Self homotopy equivalences of classifying spaces of compact connected Lie groups, Fundamenta Mathematicae 147 (1995), 99-126.
[Kac90] V. G. Kac, Infinite-dimensional Lie algebras, Cambridge University Press, 1990.
[KS13] M. Kamgarpour and T. Schedler, Ramified Satake isomorphisms for strongly parabolic characters, Documenta Mathematica 18 (2013), 1275-1300.
[Kil78] R. W. Kilmoyer, Principal series representations of finite Chevalley groups, Journal of Algebra 51 (1978), no. 1, 300-319.
[Kos04] V. P. Kostov, The Deligne-Simpson problem: a survey, Journal of Algebra 281 (2004), no. 1, 83-108.
[Let15] E. Letellier, Character varieties with Zariski closures of GL_n-conjugacy classes at punctures, Selecta Mathematica 21 (2015), no. 1, 293-344.
[LRV20] E. Letellier and F. Rodriguez-Villegas, E-series of character varieties of non-orientable surfaces, arXiv:2008.13435 (2020).
[Mel20] A. Mellit, Poincaré polynomials of moduli spaces of Higgs bundles and character varieties (no punctures), Inventiones Mathematicae 221 (2020), no. 1, 301-327.
[MFK94] D. Mumford, J. Fogarty, and F. Kirwan, Geometric invariant theory, Vol. 34, Springer Science & Business Media, 1994.
[Ree64] R. Ree, Commutators in semi-simple algebraic groups, Proceedings of the American Mathematical Society 15 (1964), no. 3, 457-460.
[Ric67] R. W. Richardson, Conjugacy classes in Lie algebras and algebraic groups, Annals of Mathematics (1967), 1-15.
[Sch16] O. Schiffmann, Indecomposable vector bundles and stable Higgs bundles over smooth projective curves, Annals of Mathematics (2016), 297-362.
[Ser00] J. P. Serre, Complex semisimple Lie algebras, Springer Science & Business Media, 2000.
[Sim92] C. T. Simpson, Higgs bundles and local systems, Publications Mathématiques de l'IHÉS 75 (1992), 5-95.
[Sim94] C. T. Simpson, Moduli of representations of the fundamental group of a smooth projective variety I, Publications Mathématiques de l'IHÉS 79 (1994), no. 1, 47-129.
[Spr66] T. A. Springer, Some arithmetical results on semi-simple Lie algebras, Publications Mathématiques de l'IHÉS 30 (1966), 115-141.
[SS70] T. A. Springer and R. Steinberg, Conjugacy classes, Lecture Notes in Mathematics 131 (1970).
[Sta11] R. P. Stanley, Enumerative Combinatorics I, Cambridge Studies in Advanced Mathematics (2011).
[Ste65] R. Steinberg, Regular elements of semi-simple algebraic groups, Publications Mathématiques de l'IHÉS 25 (1965), 49-80.
[Vak17] R. Vakil, The rising sea: foundations of algebraic geometry, preprint (2017). http://math.stanford.edu/~vakil/216blog/FOAGnov1817public.pdf
[]
[ "Principal eigenvector and spectral radius of uniform hypergraphs", "Principal eigenvector and spectral radius of uniform hypergraphs" ]
[ "Haifeng Li \nCollege of Automation\nHarbin Engineering University\n150001HarbinPR China\n", "Jiang Zhou \nCollege of Science\nHarbin Engineering University\n150001HarbinPR China\n", "Changjiang Bu \nCollege of Automation\nHarbin Engineering University\n150001HarbinPR China\n\nCollege of Science\nHarbin Engineering University\n150001HarbinPR China\n" ]
[ "College of Automation\nHarbin Engineering University\n150001HarbinPR China", "College of Science\nHarbin Engineering University\n150001HarbinPR China", "College of Automation\nHarbin Engineering University\n150001HarbinPR China", "College of Science\nHarbin Engineering University\n150001HarbinPR China" ]
[]
In this paper, we give some bounds for principal eigenvector and spectral radius of connected uniform hypergraphs in terms of vertex degrees, the diameter, and the number of vertices and edges.
null
[ "https://arxiv.org/pdf/1605.08688v1.pdf" ]
119,123,199
1605.08688
9f688d9f14d7abae57ec770c388769ae3aafd473
Principal eigenvector and spectral radius of uniform hypergraphs 27 May 2016 Haifeng Li (College of Automation, Harbin Engineering University, 150001 Harbin, PR China), Jiang Zhou (College of Science, Harbin Engineering University, 150001 Harbin, PR China), Changjiang Bu (College of Automation and College of Science, Harbin Engineering University, 150001 Harbin, PR China). Keywords: Hypergraph; Spectral radius; Principal eigenvector; Principal ratio. AMS classification: 05C65, 15A69, 15A18. In this paper, we give some bounds for the principal eigenvector and the spectral radius of connected uniform hypergraphs in terms of vertex degrees, the diameter, and the number of vertices and edges. Introduction. For a positive integer n, let [n] = {1, 2, . . . , n}. Let C^[m,n] be the set of order m, dimension n tensors over the complex field C. For A = (a_{i_1 i_2 ··· i_m}) ∈ C^[m,n], if all the entries satisfy a_{i_1 i_2 ··· i_m} ≥ 0 (or a_{i_1 i_2 ··· i_m} > 0), then A is called a nonnegative (or positive) tensor. When m = 2, A is an n × n matrix. Let I = (δ_{i_1 i_2 ··· i_m}) ∈ C^[m,n] be the unit tensor, where δ_{i_1 i_2 ··· i_m} is the Kronecker delta (equal to 1 if i_1 = i_2 = ··· = i_m and 0 otherwise). In 2005, Qi [1] and Lim [2] independently defined eigenvalues of tensors. For A = (a_{i_1 i_2 ··· i_m}) ∈ C^[m,n] and x = (x_1, x_2, . . . , x_n)^T ∈ C^n, Ax^{m−1} is a dimension n vector whose i-th component is (Ax^{m−1})_i = Σ_{i_2,...,i_m=1}^n a_{i i_2 ··· i_m} x_{i_2} ··· x_{i_m}. Chang et al. [11], Yang et al. [3,4] and Friedland et al. [12] gave Perron-Frobenius theorems for nonnegative tensors. Let A = (a_{i_1 i_2 ··· i_m}) be an order m, dimension n nonnegative tensor; if for any nonempty proper index subset α ⊂ {1, . . . , n} there is at least one entry a_{i_1 ··· i_m} > 0 with i_1 ∈ α and at least one i_j ∉ α, j = 2, . . . , m, then A is called a nonnegative weakly irreducible tensor (see [4]). Let V(G) = {1, 2, . . . , n} and E(G) = {e_1, e_2, . . . , e_m} denote the vertex set and edge set of a hypergraph G, respectively.
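As a concrete illustration of the tensor eigenvalue equation Ax^{m−1} = λx^{[m−1]} (with x^{[m−1]} the entrywise (m−1)-st power), the sketch below — our own illustration, with names not taken from the paper — builds a small order-3 symmetric nonnegative tensor and verifies an eigenpair numerically.

```python
import itertools
import numpy as np

# Order m = 3, dimension n = 3 symmetric tensor with a_{ijk} = 1/2 whenever
# (i, j, k) is a permutation of (0, 1, 2), and zero otherwise.
m, n = 3, 3
A = np.zeros((n, n, n))
for perm in itertools.permutations((0, 1, 2)):
    A[perm] = 0.5

def A_x_m_minus_1(A, x):
    # i-th component: sum over i2, i3 of a_{i i2 i3} * x_{i2} * x_{i3}
    return np.einsum('ijk,j,k->i', A, x, x)

# For x = (1, 1, 1): (A x^2)_i = x_j * x_k over the two other indices = 1,
# and x^{[2]} = (1, 1, 1), so (lambda, x) = (1, (1, 1, 1)) is an eigenpair.
x = np.ones(n)
assert np.allclose(A_x_m_minus_1(A, x), 1.0 * x ** (m - 1))
```

Note that the eigenvalue equation is invariant under rescaling of x, since both sides are homogeneous of degree m − 1.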
If each edge of G contains exactly k distinct vertices, then G is called a k-uniform hypergraph. In particular, 2-uniform hypergraphs are exactly the ordinary graphs. For a connected k-uniform hypergraph G, e i denotes an edge that contains vertex i, the degree of a vertex i of G is denoted by d i , ∆ = max{d i }, δ = min{d i }, for i = 1, . . . , n. If all vertices of G have the same degree, then G is regular. A path P of a k-uniform hypergraph is defined to be an alternating sequence of vertices and edges P = v 0 e 1 v 1 e 2 · · · v l−1 e l v l , where v 0 , . . . , v l are distinct vertices of G, e 1 , . . . , e l are distinct edges of G and v i−1 , v i ∈ e i , for i = 1, . . . , l. The number of edges in P is called the length of P . If there exists a path starting at u and terminating at v for all u, v ∈ V (G), then G is connected. Let u, v be two distinct vertices of G, the distance between u and v is defined to be the length of the shortest path connecting them, denoted by d(u, v). The diameter of a connected k-uniform hypergraph G is the maximum distance among all vertices of G, denoted by D. In 2012, Cooper and Dutle [6] gave the concept of adjacency tensor of a k-uniform hypergraph. The adjacency tensor of a k-uniform hypergraph G denoted by A G , is an order k dimension n nonnegative symmetric tensor with entries a i 1 i 2 ···i k =    1 (k − 1)! , if { i 1 , i 2 , . . . , i k } ∈ E (G) , 0, otherwise. Eigenvalues of A G are called eigenvalues of G, the spectral radius of A G is called the spectral radius of G, denoted by ρ(G). Let G be a connected k-uniform hypergraph, then the adjacency tensor A G of G is nonnegative weakly irreducible (see [8]), by Perron-Frobenius theorem of nonnegative tensors (see [4]), ρ (G) is an eigenvalue of A G , there exists a unique positive eigenvector x = (x 1 , . . . 
, x_n)^T corresponding to ρ(G) with Σ_{i=1}^n x_i^k = 1; x is called the principal eigenvector of A_G, and the maximum and minimum entries of x are denoted by x_max and x_min, respectively. γ = x_max/x_min is called the principal ratio of A_G (see [5]). In this paper, the principal eigenvector and the principal ratio of A_G are also called the principal eigenvector and the principal ratio of G. Let ρ(G) be the spectral radius of a k-uniform hypergraph G with eigenvector x = (x_1, . . . , x_n)^T. Since A_G x^{k−1} = ρ(G) x^{[k−1]}, we know that cx is also an eigenvector of ρ(G) for any nonzero constant c. When Σ_{i=1}^n |x_i|^k = 1, writing x_e = x_{i_1} x_{i_2} ··· x_{i_k} for e = {i_1, i_2, . . . , i_k} (see [6,10]), we have ρ(G) = x^T(A_G x^{k−1}) / x^T(I x^{k−1}) = x^T(A_G x^{k−1}) = Σ_{i_1,...,i_k=1}^n a_{i_1 ··· i_k} x_{i_1} ··· x_{i_k} = Σ_{e∈E(G)} k! · (1/(k−1)!) · x_e = k Σ_{e∈E(G)} x_e. In spectral graph theory, there is a body of work concerning relations among the spectral radius, the principal eigenvector and graph parameters [5,7]. The interest of this paper is to consider similar problems in spectral hypergraph theory. This paper is organized as follows. In Section 2, we give some bounds for the principal ratio and for the maximum and minimum entries of the principal eigenvector of connected uniform hypergraphs. In Section 3, we show some bounds for the spectral radius of connected uniform hypergraphs via degrees of vertices, the principal ratio and the diameter. The principal eigenvector of hypergraphs. Let G be a connected k-uniform hypergraph; G is regular if and only if γ = 1. Thus γ is an index which measures the irregularity of G. In 2005, Zhang [7] gave some bounds on the principal ratio of an irregular graph G; these results were used to obtain a bound on the spectral radius of G. Let G be a connected uniform hypergraph with maximum degree Δ and minimum degree δ. We give a lower bound for the principal ratio γ of G, which extends the result of Zhang [7, Theorem 2.3] to hypergraphs. Theorem 2.1.
Let G be a connected k-uniform hypergraph. Then γ ≥ (Δ/δ)^{1/(2(k−1))}. (2.1) If equality in (2.1) holds, then ρ(G) = √(Δδ). Proof. Let G be a connected k-uniform hypergraph, A_G the adjacency tensor of G, and x = (x_1, . . . , x_n)^T the principal eigenvector of G. Suppose that d_p = Δ and d_q = δ (p, q ∈ V(G)). Since A_G x^{k−1} = ρ(G) x^{[k−1]}, we have ρ(G) x_p^{k−1} = Σ_{e_p∈E(G)} x_{e_p\{p}} ≥ Δ x_min^{k−1} (2.2) and ρ(G) x_q^{k−1} = Σ_{e_q∈E(G)} x_{e_q\{q}} ≤ δ x_max^{k−1}, (2.3) where x_{e_i\{i}} = x_{i_2} x_{i_3} ··· x_{i_k} for e_i = {i, i_2, . . . , i_k}. By (2.2) and (2.3), we have (Δ/ρ(G)) · (ρ(G)/δ) ≤ (x_p/x_min)^{k−1} (x_max/x_q)^{k−1} ≤ (x_max/x_min)^{2(k−1)} = γ^{2(k−1)}, (2.4) i.e. γ ≥ (Δ/δ)^{1/(2(k−1))}. If equality in (2.1) holds, then the three relations (2.2), (2.3) and (2.4) hold with equality. If equality in (2.4) holds, we have x_p = x_max, x_q = x_min. (2.5) When equalities in (2.2) and (2.3) hold, by (2.5) we obtain ρ(G) = √(Δδ). Applying the bound on the principal ratio γ, we obtain the following result. Theorem 2.2. Let G be a connected k-uniform hypergraph and let x = (x_1, . . . , x_n)^T be its principal eigenvector. Then (1) x_max ≥ ((δ/Δ)^{k/(2(k−1))} + n − 1)^{−1/k}; (2) x_min ≤ ((Δ/δ)^{k/(2(k−1))} + n − 1)^{−1/k}. Proof. Let G be a connected k-uniform hypergraph and x = (x_1, . . . , x_n)^T its principal eigenvector. Then 1 = Σ_{i=1}^n x_i^k ≤ x_min^k + (n − 1) x_max^k. Let γ be the principal ratio of G; we obtain x_max^{−k} ≤ γ^{−k} + n − 1, i.e. x_max^k ≥ (γ^{−k} + n − 1)^{−1}. (2.6) By Theorem 2.1, we know that γ ≥ (Δ/δ)^{1/(2(k−1))}, so x_max^k ≥ ((δ/Δ)^{k/(2(k−1))} + n − 1)^{−1}, i.e. x_max ≥ ((δ/Δ)^{k/(2(k−1))} + n − 1)^{−1/k}. (2.7) Since 1 = Σ_{i=1}^n x_i^k ≥ (n − 1) x_min^k + x_max^k, we get x_min^{−k} ≥ γ^k + n − 1, i.e. x_min^k ≤ (γ^k + n − 1)^{−1}. By Theorem 2.1, we know that γ ≥ (Δ/δ)^{1/(2(k−1))}, so x_min^k ≤ ((Δ/δ)^{k/(2(k−1))} + n − 1)^{−1}, i.e. x_min ≤ ((Δ/δ)^{k/(2(k−1))} + n − 1)^{−1/k}. Proof. Let G be a connected k-uniform hypergraph with n vertices and m edges, A_G = (a_{i i_2 ··· i_k}) the adjacency tensor of G, and x = (x_1, . . .
, x_n)^T be the principal eigenvector of G. Then ρ(G) = x^T(A_G x^{k−1}) = k Σ_{e∈E(G)} x_e ≤ km x_max^k, (2.8) so x_max ≥ (ρ(G)/(km))^{1/k}. (2.9) Clearly, equality in (2.9) holds if and only if equality in (2.8) holds, i.e. x_1 = x_2 = ··· = x_n, and therefore G is regular. The spectral radius of hypergraphs. Let G be a connected uniform hypergraph with maximum degree Δ and minimum degree δ. We obtain some bounds for the spectral radius of G via degrees of vertices, the principal ratio and the diameter. We first give some auxiliary lemmas which will be used in the sequel. Lemma 3.1. [9] Let y_1, . . . , y_n be nonnegative real numbers (n ≥ 2). Then (y_1 + ··· + y_n)/n − (y_1 ··· y_n)^{1/n} ≥ (1/(n(n−1))) Σ_{1≤i<j≤n} (√y_i − √y_j)². Lemma 3.2. Let a, b, y_1, y_2 be positive numbers. Then a(y_1 − y_2)² + b y_2² ≥ (ab/(a+b)) y_1². Proof. By computation, we have a(y_1 − y_2)² + b y_2² = (a + b)(y_2 − a y_1/(a + b))² + (ab/(a + b)) y_1² ≥ (ab/(a + b)) y_1². Theorem 3.3. Let G be a connected k-uniform hypergraph. Then Δ/γ^{k−1} ≤ ρ(G) ≤ γ^{k−1} δ. Proof. Let x = (x_1, . . . , x_n)^T be the principal eigenvector of G. For all i ∈ V(G), we have ρ(G) x_i^{k−1} = Σ_{e_i∈E(G)} x_{e_i\{i}} ≥ d_i x_min^{k−1} > 0. Suppose that d_µ = δ for some µ ∈ V(G); we obtain ρ(G) = (Σ_{e_µ∈E(G)} x_{e_µ\{µ}}) / x_µ^{k−1} ≤ γ^{k−1} δ. Similarly, we have ρ(G) ≥ Δ/γ^{k−1}. Thus Δ/γ^{k−1} ≤ ρ(G) ≤ γ^{k−1} δ. ρ(G) < kmΔ / (km + (nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})²), where D is the diameter of G. Proof. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges and let x = (x_1, . . . , x_n)^T be the principal eigenvector of G. Then Δ − ρ(G) = Δ Σ_{i=1}^n x_i^k − k Σ_{e∈E(G)} x_e = Σ_{i=1}^n (Δ − d_i) x_i^k + Σ_{i=1}^n d_i x_i^k − k Σ_{e∈E(G)} x_e = Σ_{i=1}^n (Δ − d_i) x_i^k + Σ_{e={i_1,...,i_k}∈E(G)} (x_{i_1}^k + ··· + x_{i_k}^k − k x_e). Let x_u = max_i x_i and x_v = min_i x_i. Since x > 0 and G is an irregular connected k-uniform hypergraph, Lemma 3.1 yields Δ − ρ(G) > (nΔ − km) x_v^k + (1/(k−1)) Σ_{i,j∈e∈E(G)} (x_i^{k/2} − x_j^{k/2})².
(3.1) Let P = v_0 e_1 v_1 e_2 ··· v_{l−1} e_l v_l be the shortest path from vertex u to vertex v, where u = v_0, v = v_l and v_{i−1}, v_i ∈ e_i for i = 1, . . . , l. We have Σ_{i,j∈e∈E(P)} (x_i^{k/2} − x_j^{k/2})² ≥ Σ_{i=0}^{l−1} (x_{v_i}^{k/2} − x_{v_{i+1}}^{k/2})² + Σ_{i=0}^{l−1} Σ_{u'∈e_{i+1}\{v_i, v_{i+1}}} [(x_{v_i}^{k/2} − x_{u'}^{k/2})² + (x_{u'}^{k/2} − x_{v_{i+1}}^{k/2})²] ≥ Σ_{i=0}^{l−1} (x_{v_i}^{k/2} − x_{v_{i+1}}^{k/2})² + (1/2) Σ_{i=0}^{l−1} Σ_{u'∈e_{i+1}\{v_i, v_{i+1}}} (x_{v_i}^{k/2} − x_{v_{i+1}}^{k/2})² = Σ_{i=0}^{l−1} (x_{v_i}^{k/2} − x_{v_{i+1}}^{k/2})² + ((k−2)/2) Σ_{i=0}^{l−1} (x_{v_i}^{k/2} − x_{v_{i+1}}^{k/2})². By the Cauchy-Schwarz inequality, we obtain Σ_{i,j∈e∈E(P)} (x_i^{k/2} − x_j^{k/2})² ≥ (1/l)(x_u^{k/2} − x_v^{k/2})² + ((k−2)/(2l))(x_u^{k/2} − x_v^{k/2})² = (k/(2l))(x_u^{k/2} − x_v^{k/2})². Let D be the diameter of G; since l = d(u, v) ≤ D, Σ_{i,j∈e∈E(P)} (x_i^{k/2} − x_j^{k/2})² ≥ (k/(2D))(x_u^{k/2} − x_v^{k/2})². (3.2) By (3.1) and (3.2), it follows that Δ − ρ(G) > (nΔ − km) x_v^k + (k/(2(k−1)D))(x_u^{k/2} − x_v^{k/2})². (3.3) Let γ be the principal ratio of G; dividing by x_u^k, we have (Δ − ρ(G))/x_u^k > (nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})². (3.4) It follows from Theorem 2.3 that x_u^k ≥ ρ(G)/(km), so (Δ − ρ(G)) km/ρ(G) > (Δ − ρ(G))/x_u^k > (nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})², and hence ρ(G) < kmΔ / (km + (nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})²). Theorem 3.5. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges, and let D be the diameter of G. Then ρ(G) < Δ − (2(k−1)D(nΔ − km)γ^{−k} + k(1 − γ^{−k/2})²) / (2(γ^{−k} + n − 1)(k−1)D). Proof. Let x = (x_1, . . . , x_n)^T and γ be the principal eigenvector and the principal ratio of G, respectively, and x_u = max_i x_i. By (3.4), we know (Δ − ρ(G))/x_u^k > (nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})². By (2.6), we have x_u^k ≥ (γ^{−k} + n − 1)^{−1}. Thus Δ − ρ(G) > ((nΔ − km)γ^{−k} + (k/(2(k−1)D))(1 − γ^{−k/2})²) / (γ^{−k} + n − 1) = (2(k−1)D(nΔ − km)γ^{−k} + k(1 − γ^{−k/2})²) / (2(γ^{−k} + n − 1)(k−1)D).
ρ(G) < Δ − (2(k−1)D(nΔ − km)γ^{−k} + k(1 − γ^{−k/2})²) / (2(γ^{−k} + n − 1)(k−1)D). Theorem 3.6. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges, and let D be the diameter of G. Then ρ(G) < Δ − k(nΔ − km) / ([2(k−1)D(nΔ − km) + k]((δ/Δ)^{k/(2(k−1))} + n − 1)). Proof. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges, let x = (x_1, . . . , x_n)^T and γ be the principal eigenvector and the principal ratio of G, respectively, and set x_u = max_i x_i, x_v = min_i x_i. By (3.3), we know Δ − ρ(G) > (nΔ − km) x_v^k + (k/(2(k−1)D))(x_u^{k/2} − x_v^{k/2})². By Lemma 3.2, we get Δ − ρ(G) > (k(nΔ − km) / (2(k−1)D(nΔ − km) + k)) x_u^k. (3.5) By Theorem 2.2, we have x_u ≥ ((δ/Δ)^{k/(2(k−1))} + n − 1)^{−1/k}. Thus Δ − ρ(G) > k(nΔ − km) / ([2(k−1)D(nΔ − km) + k]((δ/Δ)^{k/(2(k−1))} + n − 1)), i.e. ρ(G) < Δ − k(nΔ − km) / ([2(k−1)D(nΔ − km) + k]((δ/Δ)^{k/(2(k−1))} + n − 1)). Theorem 3.7. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges. Then ρ(G) < (2mΔ(k−1)D(nΔ − km) + kmΔ) / (2m(k−1)D(nΔ − km) + nΔ), where D is the diameter of G. Proof. Let x = (x_1, . . . , x_n)^T and γ be the principal eigenvector and the principal ratio of G, respectively, and x_u = max_i x_i. For an irregular connected k-uniform hypergraph G with n vertices and m edges, when k = 2, Theorem 3.6 and Theorem 3.7 yield the following results. Remark: For a connected irregular graph G with n vertices and m edges, Cioabă, Gregory and Nikiforov [13] obtain the bound Δ − ρ(G) > (nΔ − 2m) / (n(D(nΔ − 2m) + 1)). (3.6) Clearly, the results of Corollary 3.8 and Corollary 3.9 improve bound (3.6). λ ∈ C is called an eigenvalue of A and x an eigenvector of A corresponding to λ, where x^{[m−1]} = (x_1^{m−1}, . . . , x_n^{m−1})^T; σ(A) denotes the set of eigenvalues of A, and the spectral radius of A is ρ(A) = max{|λ| : λ ∈ σ(A)}. Theorem 2.3. Let G be a connected k-uniform hypergraph with n vertices and m edges and let x = (x_1, . . . , x_n)^T be the principal eigenvector of G. Then x_max ≥ (ρ(G)/(km))^{1/k}, with equality if and only if G is regular.
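Theorems 2.1 and 2.3 can be checked numerically. The sketch below — our own illustration, not code from the paper — runs a shifted power-type iteration in the spirit of the Ng-Qi-Zhou algorithm for nonnegative tensors on a small irregular connected 3-uniform hypergraph with edges {0,1,2} and {0,1,3}; for this example one can solve the eigen-equations by hand and obtain ρ(G) = 2^{2/3} and γ = 2^{1/3}.

```python
import itertools
import math
import numpy as np

def adjacency_tensor(n, edges):
    # a_{i1 i2 i3} = 1/(k-1)! whenever {i1, i2, i3} is an edge (k = 3 here).
    k = len(edges[0])
    A = np.zeros((n,) * k)
    for e in edges:
        for perm in itertools.permutations(e):
            A[perm] = 1.0 / math.factorial(k - 1)
    return A

def spectral_radius(A, iters=5000):
    # Shifted power-type iteration (Ng-Qi-Zhou style) for a nonnegative,
    # weakly irreducible order-3 tensor; the shift by the unit tensor aids
    # convergence. Returns rho and the principal eigenvector x normalised
    # so that sum_i x_i^k = 1.
    k, n = A.ndim, A.shape[0]
    x = np.ones(n)
    x /= np.linalg.norm(x, ord=k)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x) + x ** (k - 1)   # (A + I) x^{k-1}
        x = y ** (1.0 / (k - 1))
        x /= np.linalg.norm(x, ord=k)
    y = np.einsum('ijk,j,k->i', A, x, x)
    return float(np.max(y / x ** (k - 1))), x

# Irregular connected 3-uniform hypergraph: n = 4, m = 2, Delta = 2, delta = 1.
A = adjacency_tensor(4, [(0, 1, 2), (0, 1, 3)])
rho, x = spectral_radius(A)
gamma = x.max() / x.min()

assert abs(rho - 2 ** (2 / 3)) < 1e-5            # exact value for this example
assert gamma >= (2 / 1) ** (1 / (2 * (3 - 1)))   # Theorem 2.1 lower bound
assert x.max() >= (rho / (3 * 2)) ** (1 / 3)     # Theorem 2.3 (strict: irregular)
```

Here γ = 2^{1/3} ≈ 1.26 indeed exceeds the Theorem 2.1 bound (Δ/δ)^{1/4} = 2^{1/4} ≈ 1.19, and x_max ≈ 0.693 exceeds the Theorem 2.3 bound (ρ/(km))^{1/3} ≈ 0.642.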
Theorem 3.4. Let G be an irregular connected k-uniform hypergraph with n vertices and m edges. Then km(Δ − ρ(G)) > k(nΔ − km)ρ(G) / (2(k−1)D(nΔ − km) + k), equivalently ρ(G) < (2mΔ(k−1)D(nΔ − km) + kmΔ) / (2m(k−1)D(nΔ − km) + nΔ). Corollary 3.8. Let G be an irregular connected graph with n vertices and m edges, and let D be the diameter of G. Then Δ − ρ(G) > (nΔ − 2m) / ([D(nΔ − 2m) + 1](δ/Δ + n − 1)) > (nΔ − 2m) / (n(D(nΔ − 2m) + 1)). Corollary 3.9. Let G be an irregular connected graph with n vertices and m edges, and let D be the diameter of G. Then Δ − ρ(G) > Δ − (2mΔD(nΔ − 2m) + 2mΔ) / (2mD(nΔ − 2m) + nΔ) > (nΔ − 2m) / (n(D(nΔ − 2m) + 1)). Acknowledgements.
References:
[1] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput. 40 (2005) 1302-1324.
[2] L.H. Lim, Singular values and eigenvalues of tensors: a variational approach, in: Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, (2005) 129-132.
[3] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl. 31 (2010) 2517-2530.
[4] Y. Yang, Q. Yang, On some properties of nonnegative weakly irreducible tensors, arXiv:1111.0713.
[5] S.M. Cioabă, D.A. Gregory, Principal eigenvectors of irregular graphs, Electronic Journal of Linear Algebra 16 (2007) 366-379.
[6] J. Cooper, A. Dutle, Spectra of uniform hypergraphs, Linear Algebra Appl. 436 (2012) 3268-3292.
[7] X.D. Zhang, Eigenvectors and eigenvalues of non-regular graphs, Linear Algebra Appl. 409 (2005) 79-86.
[8] K. Pearson, T. Zhang, On spectral hypergraph theory of the adjacency tensor, Graphs and Combin. 30 (2014) 1233-1248.
[9] A. Chang, J. Cooper, W. Li, Analytic connectivity of k-uniform hypergraphs, arXiv:1507.02763v1.
[10] H. Li, J.Y. Shao, L. Qi, The extremal spectral radii of k-uniform supertrees, J. Comb. Optim., DOI:10.1007/s10878-015-9896-4.
[11] K.C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci. 6 (2008) 507-520.
[12] S. Friedland, S. Gaubert, L. Han, Perron-Frobenius theorem for nonnegative multilinear forms and extensions, Linear Algebra Appl. 438 (2013) 738-749.
[13] S.M. Cioabă, D.A. Gregory, V. Nikiforov, Extreme eigenvalues of nonregular graphs, J. Combin. Theory Ser. B 97 (2007) 483-486.
[]
[ "MODERATE DEVIATION PRINCIPLE FOR MULTISCALE SYSTEMS DRIVEN BY FRACTIONAL BROWNIAN MOTION", "MODERATE DEVIATION PRINCIPLE FOR MULTISCALE SYSTEMS DRIVEN BY FRACTIONAL BROWNIAN MOTION" ]
[ "Solesne Bourguin ", "Thanh Dang ", "Konstantinos Spiliopoulos " ]
[]
[]
In this paper we study the moderate deviations principle (MDP) for slow-fast stochastic dynamical systems where the slow motion is governed by small fractional Brownian motion (fBm) with Hurst parameter H ∈ (1/2, 1). We derive conditions on the moderate deviations scaling and on the Hurst parameter H under which the MDP holds. In addition, we show that in typical situations the resulting action functional is discontinuous in H at H = 1/2, suggesting that the tail behavior of stochastic dynamical systems perturbed by fBm can have different characteristics than the tail behavior of such systems that are perturbed by standard Brownian motion.
10.1007/s10959-023-01235-y
[ "https://export.arxiv.org/pdf/2206.06794v2.pdf" ]
249,642,567
2206.06794
2561adaaf863c078a554e277304ca73e76072471
MODERATE DEVIATION PRINCIPLE FOR MULTISCALE SYSTEMS DRIVEN BY FRACTIONAL BROWNIAN MOTION 7 Apr 2023 Solesne Bourguin, Thanh Dang and Konstantinos Spiliopoulos arXiv:2206.06794v2 [math.PR] In this paper we study the moderate deviations principle (MDP) for slow-fast stochastic dynamical systems where the slow motion is governed by small fractional Brownian motion (fBm) with Hurst parameter H ∈ (1/2, 1). We derive conditions on the moderate deviations scaling and on the Hurst parameter H under which the MDP holds. In addition, we show that in typical situations the resulting action functional is discontinuous in H at H = 1/2, suggesting that the tail behavior of stochastic dynamical systems perturbed by fBm can have different characteristics than the tail behavior of such systems that are perturbed by standard Brownian motion. Introduction. The goal of this paper is to study the asymptotic behavior, in the moderate deviations regime, of the following system of slow-fast dynamics dX^ǫ_t = g(X^ǫ_t, Y^ǫ_t) dt + √ǫ f(X^ǫ_t, Y^ǫ_t) dW^H_t, X^ǫ_0 = x_0, dY^ǫ_t = (1/ǫ) c(Y^ǫ_t) dt + (1/√ǫ) σ(Y^ǫ_t) dB_t, Y^ǫ_0 = y_0. (1) Here ǫ is a small parameter that goes to zero. We assume that t ∈ [0, 1] and (X^ǫ, Y^ǫ) ∈ R^n × R^d. Also, B is a standard m-dimensional Brownian motion, while W^H is a p-dimensional fractional Brownian motion (fBm) with Hurst parameter H ∈ (1/2, 1) independent of B. As is known, if H = 1/2 then W^{1/2} is a standard Brownian motion. Moreover, the integral with respect to W^H is a pathwise Riemann-Stieltjes integral and is commonly known as a Young integral (see Appendix A for a brief introduction).
Since 1/ǫ ↑ ∞ as ǫ ↓ 0, we expect that, under appropriate conditions, the distribution of Y^ǫ converges to its invariant distribution, while the equation that X^ǫ satisfies can be viewed as a perturbation of a dynamical system by small multiplicative noise of magnitude √ǫ. We can think of X^ǫ as the slow component and of Y^ǫ as the fast component. Model (1) is a prototypical dynamical system that exhibits multiple characteristic time scales and is perturbed by small noise to account for imperfect information and to capture random phenomena. Such systems arise naturally as models in a great variety of applied fields, including physics, chemistry, biology, neuroscience, meteorology, and mathematical finance, to name a few. The novelty of this paper lies in the consideration of the tail behavior of (1) in the case where H ≠ 1/2. In the case H = 1/2, i.e., when both the X^ǫ and Y^ǫ components are driven by Brownian motions, the asymptotic behavior of systems like (1) has been extensively studied in the literature. We refer the interested reader to [BF77, DS12, Fre78, FS99, FW12, Gui03, KL99, MS17, LS90, PV01, PV05, Spi14, Spi13], which contain results on related typical averaging dynamics, central limit theorems, and moderate and large deviations. Choosing the noise that perturbs the system to be standard Brownian motion embeds the Markov property and semimartingale structure of Brownian motion in the system. However, many physical dynamical systems exhibit long-range dependence or a particular sort of self-similarity that may not be amenable to accurate description by a model driven by standard Brownian noise. One way to account for this issue is to perturb the dynamical system by fractional Brownian motion. This practice has seen growing interest in the literature; see for example the references [BFG + 19, BS20, Che03, CR98a, Fuk17, FZ17a, GJRS18, HJL19, SV03b], to name a few.
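Since everything here hinges on the driving noise W^H with H ∈ (1/2, 1), it may help to recall that fBm is the centered Gaussian process with covariance R(s, t) = (t^{2H} + s^{2H} − |t − s|^{2H})/2, which can be sampled exactly on a fixed grid by a Cholesky factorization of that covariance. The sketch below is our own illustration (all parameter values are illustrative); it checks the marginal variance t^{2H} and the positive increment correlation that encodes long-range dependence for H > 1/2.

```python
import numpy as np

def fbm_paths(H, times, n_paths, rng):
    # Exact sampling of fBm on a grid via a Cholesky factorisation of
    #   R(s, t) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2.
    t = np.asarray(times)
    R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(R)          # R is positive definite on distinct t > 0
    return rng.standard_normal((n_paths, len(t))) @ L.T

rng = np.random.default_rng(0)
H = 0.75                               # long-range dependent regime H > 1/2
times = np.linspace(0.05, 1.0, 20)     # grid starts away from 0 (R singular at 0)
W = fbm_paths(H, times, 20000, rng)

# Marginal variance: Var W_H(t) = t^{2H}, so Var W_H(1) = 1.
assert abs(W[:, -1].var() - 1.0) < 0.05
# Persistence: increments over disjoint intervals are positively correlated.
inc = np.diff(W, axis=1)
assert np.mean(inc[:, 0] * inc[:, -1]) > 0
```

For H = 1/2 the increment covariance term vanishes and the same construction reduces to standard Brownian motion on the grid.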
However, the corresponding literature for multiscale systems like (1) in the case of perturbation by fractional Brownian motion is quite sparse and still in its infancy. We refer the interested reader to the very recent papers [BGS21, HL20, PIX20] for results concerning typical averaging behavior, homogenization, and fluctuation corrections for multiscale models like (1) under different sets of assumptions on the model coefficients. As discussed in these papers, replacing Brownian motion by fractional Brownian motion creates a number of issues that need to be overcome. These issues are mainly related to the partial loss of the Markovian structure, as well as to the proper averaging of the integral with respect to the fractional Brownian motion W^H, which originates from the interaction of ergodicity and fBm. The intent of this paper is to study the tail behavior of X^ǫ in (1) as ǫ ↓ 0 in the moderate deviations setting. To be more precise, letting h(ǫ) → ∞ be such that √ǫ h(ǫ) → 0 and defining X̄_t = lim_{ǫ→0} X^ǫ_t (the limit in the appropriate sense), we define the moderate deviations process η^ǫ_t = (X^ǫ_t − X̄_t)/(√ǫ h(ǫ)). (2) Moderate deviations for X^ǫ refer to large deviations for η^ǫ. In fact, the scaling by √ǫ h(ǫ) implies that moderate deviations lie in the regime between the central limit theorem (corresponding to the choice h(ǫ) = 1) and large deviations (corresponding to the choice h(ǫ) = 1/√ǫ). Moderate deviations for systems like (1) with H = 1/2, i.e., when both slow and fast components are driven by standard Brownian motions, have been considered in [Gui03, MS17]. An interesting conclusion of our results for the case H ≠ 1/2, which will be discussed in Remark 7, is that the resulting action functional is not continuous in H at H = 1/2. At this point we also mention the recent work [GG22] that considers the large deviations counterpart for stochastic dynamical systems similar to (1).
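The admissible moderate deviations scalings can be sanity-checked directly: any h(ǫ) with h(ǫ) → ∞ and √ǫ h(ǫ) → 0 sits strictly between the CLT and LDP regimes. A minimal numerical check (our own illustration), with h(ǫ) = ǫ^{−1/4} as an example of a valid choice:

```python
import numpy as np

# MDP requires h(eps) -> infinity and sqrt(eps) * h(eps) -> 0 as eps -> 0.
# h(eps) = eps^(-1/4) lies strictly between the CLT scaling h = 1 and the
# LDP scaling h = eps^(-1/2).
eps = 10.0 ** -np.arange(1, 9)          # eps = 1e-1, ..., 1e-8 (decreasing)
h = eps ** -0.25
assert np.all(np.diff(h) > 0)            # h(eps) grows as eps shrinks
scaled = np.sqrt(eps) * h                # equals eps^(1/4)
assert np.all(np.diff(scaled) < 0) and scaled[-1] < 0.1
```

Any h(ǫ) = ǫ^{−β/2} with 0 < β < 1 would pass the same check, while β = 0 and β = 1 recover the CLT and LDP endpoints, respectively.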
In order to study the moderate deviations principle for X^ǫ, we shall follow the weak convergence method of [DE11]. The core of this approach lies in the use of a variational representation of exponential functionals of the driving noise (W^H, B), see [DE11, Zha09]. In our case, such a representation leads to a representation of the exponential functional of the moderate deviations process η^ǫ that appears in the Laplace principle (which is equivalent to the moderate deviations principle) as a variational infimum, over a suitable family of stochastic controls w^ǫ, of a family of controlled moderate deviations processes η^{ǫ,w^ǫ} together with a quadratic cost. To be more precise, letting a be a bounded Borel function on C([0, 1]; R^n), we have the representation −(1/h²(ǫ)) ln E[exp(−h²(ǫ) a(η^ǫ))] = inf_{w^ǫ∈S} E[(1/2)‖w^ǫ‖²_S + a(η^{ǫ,w^ǫ})], (3) where S denotes the Cameron-Martin space associated with the process {(W^H_t, B_t) : t ∈ [0, 1]} (see (42)) and the controlled deviations process η^{ǫ,w^ǫ} is defined by η^{ǫ,w^ǫ}_t = (X^{ǫ,w^ǫ}_t − X̄_t)/(√ǫ h(ǫ)) (4) with the controlled processes (X^{ǫ,w^ǫ}, Y^{ǫ,w^ǫ}) defined by (14). Essentially, proving the moderate deviations principle for X^ǫ amounts to finding the limit as ǫ → 0 of (3). When H ≠ 1/2, i.e., when the standard Brownian motion in the slow component is replaced by fBm, a number of additional technical issues come up and the standard methodology needs to be modified. After we introduce proper notation, we explain in Remark 9 of Section 3 one of the core ideas that allow us to study the H ≠ 1/2 case in a way that naturally extends the H = 1/2 setting.
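To make the Laplace-principle limit on the left-hand side of (3) concrete, consider the toy case X^ǫ = √ǫ Z with Z a standard Gaussian and no fast component: then η^ǫ = Z/h(ǫ), the MDP rate function is S(Φ) = Φ²/2 for every ǫ, and the tail version of the limit can be checked directly. The sketch below is our own illustration, not taken from the paper.

```python
import math

# Toy model: X_eps = sqrt(eps) * Z with Z ~ N(0, 1), so eta_eps = Z / h(eps)
# and the moderate deviations rate function is S(phi) = phi^2 / 2.
# Tail check: -(1/h^2) * log P(eta_eps >= 1) -> S(1) = 1/2 as h -> infinity.
def mdp_rate(h):
    tail = 0.5 * math.erfc(h / math.sqrt(2.0))   # P(Z >= h)
    return -math.log(tail) / h ** 2

vals = [mdp_rate(h) for h in (4.0, 6.0, 10.0)]
assert vals[0] > vals[1] > vals[2] > 0.5         # decreasing toward the limit
assert abs(vals[2] - 0.5) < 0.05                 # close to S(1) = 1/2 already
```

The convergence is slow (the correction is of order (log h)/h²), which is one reason the variational representation (3), rather than direct tail estimates, is the workhorse of the weak convergence method.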
Section 3 contains the details of the weak convergence approach for the problem at hand, introduces the appropriate controlled processes and presents Theorem 2 which has a variational representation of the moderate deviations action functional. Theorem 1 can be viewed as a direct consequence of Theorem 2. In Section 3 we also go over one of the main ideas that essentially unlock the computation for the case H = 1/2, in a way that naturally extends the standard H = 1/2 framework, see Remark 9. Section 4 contains examples that demonstrate our theoretical results. Section 5 contains the proof of Theorem 2 and consequently of Theorem 1 as well. In particular, in Section 5 we prove tightness of the appropriate controlled processes and occupational measures introduced in Section 3, we identify their weak limit which then allows to prove the limit Laplace principle lower and the upper bound of (3). The proof of the Laplace upper bound leads to the exact representation of Theorem 1. Next, Section 6 provides the proof of Corollary 1. Section 7 discusses avenues for future work on this topic. Appendix A recalls several aspects of fBm and necessary results on pathwise stochastic integration with respect to fBm used in this paper. In Appendix A, we also discuss the Cameron-Martin space of fBm and prove associated results that are potentially of independent interest as well. Appendix B recalls regularity results of [PV01,PV05] on Poisson equations, proves necessary a-priori bounds of the slow-fast process (X ǫ , Y ǫ ) in (1) as well as necessary a-priori estimates on η ǫ,w ǫ from (4) that allows to establish the necessary tightness results in Section 3. Notation, conditions and main results In this section, we introduce some notation, present the main assumptions we make, and state our main results. 
We work with a canonical probability space (Ω, F, P) equipped with a filtration {F_t}_{0≤t≤T} satisfying the usual conditions (namely, {F_t}_{0≤t≤T} is right continuous and F_0 contains all P-negligible sets). We will denote by A : B the Frobenius inner product Σ_{i,j} a_{i,j} b_{i,j} of matrices A = (a_{i,j}) and B = (b_{i,j}). We will use single bars |·| to denote the Frobenius (or Euclidean) norm of a matrix and double bars ‖·‖ to denote the operator norm. For α ∈ (0, 1), |·|_α is the standard Hölder semi-norm, i.e. |h|_α = sup_{0≤s≠t≤1} |h(s) − h(t)| / |s − t|^α. For a set A and α ∈ (0, 1), C^α(A) is the space of α-Hölder continuous functions on A. Meanwhile, for k ∈ N, C^k(A) denotes the usual space of k-times continuously differentiable functions on A. In addition, for given sets A, B, i, j ∈ N and ζ ∈ (0, 1), C^{i,j+ζ}(A × B) is the space of functions on A × B with i bounded derivatives in x and j derivatives in y, all partial derivatives being ζ-Hölder continuous with respect to y, uniformly in x. 2.1. Conditions. We start by stating the assumptions we make on the coefficients of Y^ǫ ensuring its ergodicity. Note that these assumptions are satisfied in the context of the multi-scale models studied in [BGS21, Theorem 1] and [HL20, Theorem A]. Condition H1. - c(y) = −Γy + ζ(y), where Γ is a d × d positive matrix with bounded entries and ζ(y) is a uniformly Lipschitz function with Lipschitz coefficient L_ζ. Moreover, |ζ(y)| ≤ C|y| and ⟨(Γ − L_ζ I)ξ, ξ⟩ ≥ γ_0 |ξ|² for some γ_0 > 0. - c(y), σ(y) have first and second derivatives that are α-Hölder continuous for some α > 0. - σ(y)σ^⊤(y) is uniformly continuous, bounded and non-degenerate. - There are positive constants β_1, β_2 such that 0 < β_1 ≤ ⟨σ(y)σ^⊤(y)y, y⟩ / |y|² ≤ β_2 for all y ∈ R^d \ {0}. Remark 1. Condition H1 guarantees that the fast process Y^ǫ has a unique invariant measure, which we denote by µ(dy) in the sequel.
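Condition H1 is modeled on fast Ornstein-Uhlenbeck-type dynamics. As a minimal illustration (our own sketch, with ζ = 0, scalar coefficients, and illustrative parameter values), the Euler-Maruyama scheme below checks that the fast process dY = −(γ₀/ǫ) Y dt + (σ/√ǫ) dB equilibrates to its invariant measure N(0, σ²/(2γ₀)) independently of ǫ.

```python
import numpy as np

# Scalar fast process from (1) with c(y) = -gamma0 * y and constant sigma:
#     dY = -(gamma0/eps) * Y dt + (sigma/sqrt(eps)) dB.
# Its invariant measure is N(0, sigma^2 / (2*gamma0)), for every eps > 0.
rng = np.random.default_rng(1)
gamma0, sigma, eps = 1.0, 1.0, 1e-2
dt, T = 1e-4, 10.0                      # dt << eps so the fast scale is resolved
n = int(T / dt)
y = np.empty(n)
y[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    y[i + 1] = (y[i] - (gamma0 / eps) * y[i] * dt
                + (sigma / np.sqrt(eps)) * np.sqrt(dt) * noise[i])

burn = n // 10                           # discard the O(eps) relaxation phase
assert abs(np.var(y[burn:]) - sigma ** 2 / (2 * gamma0)) < 0.1
```

The mixing time of Y^ǫ is O(ǫ/γ₀), which is exactly the time-scale separation that drives the averaging and moderate deviations analysis.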
Denote by L the normalized infinitesimal generator of the fast motion Y^ǫ (with respect to which averaging is being performed). It is given by LF(y) = ∇_y F(y)^⊤ c(y) + (1/2) σ(y)σ^⊤(y) : ∇²_y F(y), (5) where F ∈ C²(R^d). Set Y = R^d. For any function G(x, y), define the averaged function Ḡ by Ḡ(x) = ∫_Y G(x, y) µ(dy). In particular, the average of the drift term g in the slow motion X^ǫ with respect to µ will be given by ḡ(x) = ∫_Y g(x, y) µ(dy). Remark 2. Under the growth assumption on g and its derivatives in either the upcoming Condition H2-A or H2-B, Theorem 4 implies that the partial differential equation Lφ(x, y) = g(x, y) − ḡ(x), ∫_Y φ(x, y) µ(dy) = 0 (6) has a unique, twice differentiable solution (that we denote by φ(x, y) in the sequel) in the class of functions that grow at most polynomially in |y|. Finally, we provide two different sets of assumptions on the coefficients of X^ǫ, each of which is based on the available averaging results for X^ǫ appearing in [BGS21] and [HL20], respectively. Depending on the specific multi-scale model at hand, one may choose to work with one set of assumptions or the other. Condition H2-A. The assumptions below relate to the setting of [HL20]. - f(x, y) and g(x, y) are uniformly bounded with bounded first and second partial derivatives. - There exists β in [0, 1] such that β + H > 1 and h(ǫ)^{−1} ǫ^{−β/2} → 0 as ǫ → 0. Condition H2-B. The assumptions below relate to the setting of [BGS21]. We assume that there are constants D_f, D_g, M_f, M_k in [0, 1] and α in (0, 1] such that - g ∈ C^{2,α}(R^n, Y). - f = f(y) and g satisfy the growth assumptions |f(y)| ≤ C(1 + |y|^{D_f}) and |g(x, y)| + |∇_x g(x, y)| + |∇²_x g(x, y)| ≤ C(1 + |y|^{D_g}). - D_f and D_g are related via 0 ≤ D_f + D_g < 1. - f(y) and ∇_x φ(x, y) f(y) are respectively M_f- and M_k-Hölder continuous, where φ(x, y) is defined in (6). Moreover, we have min{M_f/2 + H, M_k/2 + H} > 1. - h(ǫ)^{−1} ǫ^{−M_f/2} → 0 as ǫ → 0. Remark 3.
Conditions H2-A and H2-B relate to the averaging results in [HL20] and [BGS21], respectively, which state that the slow motion X^ǫ converges in probability, as ǫ goes to 0, to a deterministic limit X̄ defined to be the solution of the ordinary differential equation dX̄_t = ḡ(X̄_t)dt, X̄_0 = x_0. In addition, we need to assume uniqueness of a strong solution. Without having to refer to it again, this assumption is always in effect in this paper.

Condition H3. The stochastic differential equation (1) has a unique strong solution.

Remark 4. We direct readers to [GN08, MS11, dSEE18] for existence and uniqueness of solutions to stochastic differential equations like (1).

Finally, define the operator Q^H_X̄ by

Q^H_X̄ = (f̄_X̄ K̄_H)(f̄_X̄ K̄_H)^* + ∫_Y (∇_y φ(X̄, y)σ(y))(∇_y φ(X̄, y)σ(y))^⊤ µ(dy), (7)

where µ is the invariant measure defined in Remark 1 and K̄_H is the operator (related to the fractional Brownian motion) defined in (41) (see Appendix A.3). Per the explanation in Section 5.5, both the domain and range of Q^H_X̄ can be taken to be L²([0, 1]; R^n). In fact, let h ∈ L²([0, 1]; R^n). Then the operator (f̄_X̄ K̄_H)(f̄_X̄ K̄_H)^* admits the explicit representation

((f̄_X̄ K̄_H)(f̄_X̄ K̄_H)^* h)(t) = c²_H f̄(X̄_t) t^{H−1/2} ∫_0^t (t − z)^{H−3/2} z^{1−2H} (∫_z^1 (s − z)^{H−3/2} s^{H−1/2} f̄(X̄_s)^⊤ h(s) ds) dz,

where the constant c_H equals (H(2H − 1)/β(2 − 2H, H − 1/2))^{1/2} and β(x, y) = Γ(x)Γ(y)/Γ(x + y) is the standard beta function.

Remark 5. In both this paper and [BGS21], the latter of which provides averaging results for multiscale models like (1), one needs to bound terms that are Young integrals. However, each paper uses a different bounding technique, which leads to different assumptions on (1). For instance, the authors of [BGS21] use the maximal inequality in their Lemma 1 to bound

∫_0^t f(Y^ǫ_s) dW^H_s, (8)

an integral term which appears in (1). Having the kernel f(Y^ǫ) independent of the driving process W^H simplifies the application of the maximal inequality and yields the necessary bound on the integral (8).
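The constant c_H above involves only Gamma functions through the beta identity β(x, y) = Γ(x)Γ(y)/Γ(x + y), so it can be evaluated with the standard library alone; a quick sketch for a few values of H ∈ (1/2, 1):

```python
import math

def c_H(H):
    """Kernel constant c_H = (H(2H-1) / beta(2-2H, H-1/2))^(1/2) for H in (1/2, 1),

    with beta(x, y) = Gamma(x)Gamma(y)/Gamma(x+y), exactly as in the text.
    """
    beta = math.gamma(2 - 2 * H) * math.gamma(H - 0.5) / math.gamma(1.5 - H)
    return math.sqrt(H * (2 * H - 1) / beta)

for H in (0.55, 0.75, 0.95):
    print(H, c_H(H))
```

The constant is strictly positive on (1/2, 1) and degenerates as H ↓ 1/2, consistent with the kernel representation losing its singular weight in the Brownian case.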
For this paper, we instead have to bound

∫_0^t f(Y^{ǫ,wǫ}_s) dW^H_s, (9)

an integral term which appears in (14). In this case, the control process w^ǫ depends on W^H, so the kernel f(Y^{ǫ,wǫ}) depends on W^H as well. This lack of independence between the kernel and the driving process leads us to substitute the Young-Loève inequality for the maximal inequality in order to bound (9). The Young-Loève inequality for Young integrals requires some kind of uniform Hölder continuity of the kernel, which explains why we impose a certain uniform Hölder continuity condition on the coefficients of (1), an assumption not made in [BGS21].

2.2. Main results. The weak convergence approach to large deviations developed in [DE11] states that the large deviations principle for η^ǫ is equivalent to the Laplace principle, which states that for any bounded continuous function a : C([0, 1]; R^n) → R, there exists a rate function (also called action functional) S_H : C([0, 1]; R^n) → R that satisfies

lim_{ǫ→0} −(1/h²(ǫ)) ln E[exp(−h²(ǫ) a(η^ǫ))] = inf_{Φ ∈ C([0,1];R^n)} {S_H(Φ) + a(Φ)}.

In this paper, we prove that the above Laplace principle holds, and our main result, Theorem 1, identifies the rate function S_H(Φ) explicitly. The statement of this theorem is given below.

Theorem 1. Let Conditions H1 and either H2-A or H2-B be satisfied. Moreover, assume that the operator Q^H_X̄ defined in (7) is invertible on L²([0, 1]; R^n). Then, the process {X^ǫ : ǫ > 0} satisfies the moderate deviations principle, with the action functional S_H(Φ) given by

S_H(Φ) = ∫_0^1 (Φ̇_s − ∇_x ḡ(X̄_s)Φ_s)^⊤ (Q^H_{X̄_s})^{−1} (Φ̇_s − ∇_x ḡ(X̄_s)Φ_s) ds.

… + (1/√ǫ) σ(Y^{ǫ,wǫ}_s) dB_s. (14)

Note that, based on (13) and (14), we can rewrite η^{ǫ,wǫ} in the form

η^{ǫ,wǫ}_t = ∫_0^t (1/(√ǫ h(ǫ))) (g(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) − ḡ(X̄_s)) ds + ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds + (1/h(ǫ)) ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) dW^H_s. (15)

Let U = R^m and V = R^p.
These are the spaces in which the control processes u^ǫ and v^ǫ take values, respectively. Define θ : R^n × R^n × Y × Y × U × U × V × V × [0, 1] × [0, 1] → R^n by

θ(x, η, y^(1), y^(2), u^(1), u^(2), v^(1), v^(2), s, r) = ∇_y φ(x, y^(1)) σ(y^(1)) v^(1) + ∇_x ḡ(x) η + c_H f(x, y^(1)) (s − r)^{H−3/2} s^{H−1/2} r^{1/2−H} u^(2) 1_{[0,s]}(r). (16)

Condition H1 and Theorem 4 guarantee that the function θ is bounded in x, affine in η, u^(2) and v^(1), and bounded polynomially in |y|. Next, we introduce the occupation measure P^ǫ. Let A_1, A_2, B and Γ be Borel sets of U, V, Y = R^d and [0, 1], respectively. Let (X^{ǫ,wǫ}, Y^{ǫ,wǫ}) solve (14). Associate with (Y^{ǫ,wǫ}, û^ǫ, v̂^ǫ) a family of occupation measures P^ǫ defined by

P^ǫ(A_1 × A_2 × B × Γ) = ∫_Γ 1_{A_1}(û^ǫ_s) 1_{A_2}(v̂^ǫ_s) 1_B(Y^{ǫ,wǫ}_s) ds. (17)

Definition 1. Let F : R^n × R^n × Y × Y × U × U × V × V × [0, 1] × [0, 1] → R^n be a function that has at most polynomial growth in |y|. Let L be a second order elliptic partial differential operator and denote its domain by D(L). A pair (ψ, P) ∈ C([0, 1]; R^n) × P(U × V × Y × [0, 1]) is called a viable pair with respect to (F, L) if
- The function ψ ∈ C([0, 1]; R^n) is absolutely continuous.
- The measure P is integrable in the sense that ∫_{U×V×Y×[0,1]} (|u|² + |v|² + |y|²) P(du dv dy ds) < ∞.
- For all t ∈ [0, 1],

ψ_t = ∫_{U²×V²×Y²×[0,t]²} F(X̄_s, ψ_s, y^(1), y^(2), u^(1), u^(2), v^(1), v^(2), s, r) P ⊗ P(du^(1) du^(2) dv^(1) dv^(2) dy^(1) dy^(2) ds dr).

- For all t ∈ [0, 1], it holds that

P(du dv dy dt) = ν_{y,t}(du dv) µ(dy) dt, (18)

where ν_{y,t} is a kernel on U × V depending on y ∈ Y and t ∈ [0, 1], while µ is the unique invariant measure associated with the operator L.

In order to indicate that the pair (ψ, P) is viable with respect to (F, L), we write (ψ, P) ∈ V(F, L). The controlled process (13) and the definition of viable pairs (Definition 1) will be used to prove the theorem below.
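The decomposition (18) required of a viable pair can be previewed numerically: for a fast Ornstein–Uhlenbeck process with the controls set to zero (a deliberately simplified stand-in for Y^{ǫ,wǫ}; all parameters are illustrative choices), the y-marginal of the empirical occupation measure P^ǫ approaches the invariant Gaussian N(0, σ²/(2γ)), while the time marginal is Lebesgue by construction:

```python
import numpy as np

def fast_ou_path(eps=1e-3, gamma=1.0, sigma=1.0, dt=2e-5, seed=1):
    """Euler scheme for the fast OU process
    dY = -(gamma/eps) Y dt + (sigma/sqrt(eps)) dB on [0, 1].
    The pairs (Y_s, s) define the empirical occupation measure P^eps
    of (17), here with the controls u-hat, v-hat set to zero."""
    rng = np.random.default_rng(seed)
    n = int(1.0 / dt)
    y = np.zeros(n)
    a, b = gamma / eps, sigma / np.sqrt(eps)
    noise = b * np.sqrt(dt) * rng.standard_normal(n - 1)
    for k in range(n - 1):
        y[k + 1] = y[k] - a * y[k] * dt + noise[k]
    return y, np.linspace(0.0, 1.0, n)

y, t = fast_ou_path()
# the y-marginal of P^eps approximates the invariant law N(0, sigma^2/(2*gamma)),
# while the t-marginal is Lebesgue on [0, 1] by construction (cf. (18))
print(y.mean(), y.var())  # mean near 0, variance near 0.5
```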
Theorem 2. Let Conditions H1 and either H2-A or H2-B be satisfied. Then, the process {X^ǫ : ǫ > 0} from (1) satisfies the moderate deviations principle, with the action functional S_H(Φ) given by

S_H(Φ) = inf_{(Φ,P) ∈ V(θ,L)} (1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) P(du dv dy ds), (19)

with the convention that the infimum over the empty set is ∞.

Remark 8. As will be shown in its proof, Theorem 1 follows directly from Theorem 2.

Remark 9. In this remark we discuss one of the key ideas that allows us to generalize the computations naturally from the H = 1/2 case to the H ≠ 1/2 case. In the course of the proof, we will need to handle terms of the form ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds, where u^ǫ is the control process introduced in the beginning of this section. Roughly speaking, if H = 1/2 and P^ǫ is the occupation measure defined as in (17), then one has u^ǫ = û^ǫ and thus

∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds = ∫_{U×V×Y×[0,t]} f(X^{ǫ,wǫ}_s, y) u P^ǫ(du dv dy ds),

and then, after establishing tightness of (X^{ǫ,wǫ}, P^ǫ), one can study its limit. This approach does not carry over verbatim to the case where the Hurst parameter H ≠ 1/2. In order to generalize the idea to H ≠ 1/2, we first notice that one can write (d/ds) u^ǫ_s = (d/ds)[K_H û^ǫ]_s, where K_H is the operator associated to fBm, see Appendix A.3. With this observation at hand we then write

∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds = ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) (d/ds)[K_H û^ǫ]_s ds
= ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) c_H s^{H−1/2} (∫_0^s (s − r)^{H−3/2} r^{1/2−H} û^ǫ_r dr) ds
= c_H ∫_{U²×V²×Y²×[0,t]²} f(X^{ǫ,wǫ}_s, y^(1)) (s − r)^{H−3/2} s^{H−1/2} r^{1/2−H} u^(2) 1_{[0,s]}(r) P^ǫ ⊗ P^ǫ(du^(1) du^(2) dv^(1) dv^(2) dy^(1) dy^(2) ds dr), (20)

which is what allows us to take limits. The details are in Section 5.

4. Examples

4.1. Fractional financial model.
In [Shi99,Chapter 4], the author collects various empirical studies which observe persistence or long memory phenomena in financial data such as financial indexes and currency cross rates, among others. This motivates us, for the first example of this paper, to consider the multiscale volatility model dX ǫ t = Y ǫ t dt + √ ǫτ dW H t , dY ǫ t = β(θ − Y ǫ t )dt + v Y t dB t .(21) We assume τ > 0 and β, θ, v are real constants such that 2βθ ≥ v 2 . W H , B is an independent pair of one-dimensional fractional Brownian motion of Hurst parameter H > 1/2 and one-dimensional Brownian motion. X ǫ is a financial instrument with a perturbed fractional Brownian noise (in order to account for the long memory effect). The fast volatility process Y ǫ follows the Cox-Ingersoll-Ross model of interest rate and the assumption 2βθ ≥ v 2 ensures that Y ǫ is strictly positive. It is also worth noting that adding fractional Brownian noise to financial models to simulate long memory has been an increasingly common practice in literature, see [Che03, CR98b, FZ17b, HJL19, SV03a, Shi99]. The stochastic differential equation of Y ǫ in (21) has the Cox-Ingersoll-Ross process as its unique solution, which implies the process X ǫ as an integral function of Y ǫ plus a fractional Brownian noise term is well-defined. Moreover, in the context of the previous section, the invariant measure µ has the Gamma density ([FPSS11, Section 33.4]) µ(y) = 2β/v 2 2βθ/v 2 Γ(2βθ/v 2 ) y 2βθ/v 2 −1 e −2βy/v 2 , for y ≥ 0. Then according to [BGS21, Theorem 1], X ǫ converges in probability in C([0, 1]) tō X t = R yµ(dy) = θ. The Poisson equation at (6) has an unique solution φ(y) due to Theorem 4 and this solution satisfies φ ′ (y) = − 1 2β . This implies the operator Q H defined at (36) is Q H = τ 2K HK * H + τ 2β 2 , which is invertible since τ > 0 (see Lemma 9). HereK HK * H is the operator K HK * H h (t) = c 2 H t H−1/2 t 0 (t − z) H−3/2 z 1−2H 1 z (s − z) H−3/2 s H−1/2 h(s)dsdz. 
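A simulation sketch of the model (21) illustrates the averaging statement: since the averaged drift is ḡ ≡ θ, one expects X^ǫ_1 ≈ x_0 + θ for small ǫ. The fast scaling (1/ǫ drift, 1/√ǫ noise) is restored explicitly here, the fBm path is generated by a Cholesky factorisation of its exact covariance, and all numerical parameters are illustrative choices, not taken from the paper:

```python
import numpy as np

def fbm_path(n, H, rng):
    """fBm on the grid t_i = i/n via Cholesky of the exact covariance
    R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.arange(1, n + 1) / n
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + tt ** (2 * H) - np.abs(s - tt) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

def multiscale_cir(eps=0.01, H=0.7, tau=0.5, beta=1.0, theta=1.0, v=1.0,
                   x0=0.0, n=1000, seed=2):
    """Euler scheme for (21) with the fast scaling written out:
    dX = Y dt + sqrt(eps)*tau dW^H,
    dY = (beta/eps)(theta - Y) dt + (v/sqrt(eps)) sqrt(Y) dB."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    wh = fbm_path(n, H, rng)
    x, y = x0, theta  # start the fast CIR process at its mean
    for k in range(n):
        x += y * dt + np.sqrt(eps) * tau * (wh[k + 1] - wh[k])
        yp = max(y, 0.0)  # full truncation keeps the CIR step well-defined
        y += (beta / eps) * (theta - yp) * dt \
            + (v / np.sqrt(eps)) * np.sqrt(yp) * np.sqrt(dt) * rng.standard_normal()
    return x

x1 = multiscale_cir()
print(x1)  # close to x0 + theta = 1, the averaged value at t = 1
```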
Let us now discuss moderate deviations of X ǫ . We have already established Y ǫ has an invariant measure µ, which makes Condition H1 redundant for the model (21). Moreover, if we use the notation of Section 1 then the equation of X ǫ at (21) has f = τ and d dx φ(y) f = 0. Therefore, based on the proofs of Lemma 12 and Lemma 15, Condition H2-B in this setting simplifies to |f (y)| ≤ C 1 + |y| D f and 0 ≤ D f < 1, which is clearly satisfied for f = τ . Then, as long as there exists β ∈ [0, 1] such that β + H > 1 and h(ǫ) −1 ǫ − β 2 → 0 as ǫ → 0, Theorem 1 asserts that for the moderate deviations process X ǫ −X /h(ǫ) √ ǫ, its action functional, when finite, takes the form S H (Φ) = 1 0Φ s Q H −1Φ s ds. Fractional Langevin equation. For the second example, we consider the multiscale model dX ǫ t = (−Q ′ (Y ǫ t ) − V ′ (X ǫ t ))dt + √ ǫ √ 2DdW H t , dY ǫ t = − 1 ǫ Q ′ (Y ǫ t )dt + 1 √ ǫ √ 2DdB t .(22) The equation of X ǫ can be viewed as a rescaled Langevin equation with a fractional Brownian noise. A simpler version of this fractional Langevin equation that does not contain a fast process Y ǫ was studied in [AMP21, CKM03, GJR18] among others. We assume that -Y is the one-dimensional unit torus. -There is a constant C such that |Q ′ (y)| ≤ C(1 + |y|) and sup x∈R 3 k=1 V (k) (x) ≤ C. -Q ′ (y) is Lipschitz. -V ′′′ (x), Q ′′′ (y) are continuous. -D is a real-valued non-zero constant. Our assumption implies that Q ′ (y), V ′ (x) are Lipschitz and that |Q ′ (y)| + |V ′ (x)| ≤ C(1 + |y|), so that there is a unique strong solution to (22) based on [GN08, Theorem 2.2]. Next, we consider averaging of X ǫ . Since Y is the unit torus, Condition 3 in [BGS21, Theorem 1] is not needed for ergodicity of Y ǫ . Condition 1 in [BGS21, Theorem 1] is met by our second and fourth assumptions for (22) above. 
Thus, we conclude X ǫ converges in probability on C([0, 1]) tō X t = t 0 − Y Q ′ (y)µ(dy) − V ′ X s ds where, according to [PS07], the invariant measure µ is the Gibbs measure µ(dy) = 1 Z e −Q(y)/D , Z = Y e −Q(y)/D dy. The Poisson equation at (6) becomes −Q ′ (y)φ ′ (y) + Dφ ′′ (y) =Q ′ − Q ′ (y), Y φ(y)µ(dy) = 0 such thatQ ′ = Y Q ′ (y)µ(dy). Its solution satisfies φ ′ (y) =Q ′ D e Q(y)/D y 0 e −Q(ξ)/D dξ + M e Q(y)/D + 1 where the constant M is M = − Q′ D Y e −Q(y)/D y 0 e Q(ρ)/D ξ 0 e −Q(ξ)/D dξdρdy + Y ye −Q(y)/D dy Y e −Q(y)/D y 0 e Q(ξ)/D dξdy −1 . At this point, we can consider moderate deviations of X ǫ . In the notation of the previous section, we have g(x, y) = −Q ′ (y) − V ′ (x), c(y) = −Q ′ (y), f = σ = √ 2D. Since Y is the unit torus, the first recurrence assumption in Condition H1 and D f + D g < 1 in Condition H2-B are no longer needed. In addition, the fact that d dx φ(y) = 0 makes redundant the assumption M k /2 + H > 1 in Condition H2-B. Then the rest of Conditions H1 and H2-B are satisfied by (22). In particular, we have g(x, y) ∈ C 2,α (R d × Y) since V ′′ (x), V ′′′ (x) are bounded. Next, the operator Q H in (7) becomes Q H = 2DK HK * H + 2D Z Y e −Q(y)/D Q′ D e Q(y)/D y 0 e −Q(ξ)/D dξ + M e Q(y)/D + 1 2 dy whereK HK * H is K HK * H h (t) = c 2 H t H−1/2 t 0 (t − z) H−3/2 z 1−2H 1 z (s − z) H−3/2 s H−1/2 h(s)dsdz. Thus, under the condition that Q H is invertible and there exists β ∈ [0, 1] such that β + H > 1 and h(ǫ) −1 ǫ − β 2 → 0 as ǫ → 0, Theorem 1 says for the moderate deviations process X ǫ −X /h(ǫ) √ ǫ, its action functional, when finite, is S H (Φ) = 1 0 Φ s + V ′′ X s Φ s Q H −1 Φ s + V ′′ X s Φ s ds. Remark 10. Here we compare the result above to the moderate deviations of X ǫ in dX ǫ t = (−Q ′ (Y ǫ t ) − V ′ (X ǫ t ))dt + √ ǫ √ 2DdW t , dY ǫ t = − 1 ǫ Q ′ (Y ǫ t )dt + 1 √ ǫ √ 2DdB t .(23) We assume W is a Brownian motion independent from B and Y is the one-dimensional unit torus. 
Under appropriate conditions, [MS17, Theorem 2.1] says the moderate deviations action functional of (23), when finite, is S 1/2 (Φ) = 1 0 Φ s + V ′′ X s Φ s Q 1/2 −1 Φ s + V ′′ X s Φ s ds. such that Q 1/2 = 2D + 2D Z Y e −Q(y)/D Q′ D e Q(y)/D y 0 e −Q(ξ)/D dξ + M e Q(y)/D + 1 2 dy. Notice in this particular situation f (x, y) = √ 2D (i.e. independent of y) and thus we have continuity of the mapping H → S H at H = 1/2 (see Remark 7). Proof of Theorem 2 The proof of Theorem 2 will be divided into five subsections. In Subsections 5.1 and 5.2, we prove tightness and convergence of the pair (η ǫ,w ǫ , P ǫ ), respectively. In Subsection 5.3, we prove the Laplace principle lower bound. In Subsection 5.4, we prove that the level sets of S(·) are compact. Finally, in Subsection 5.5, we prove the Laplace principle upper bound and the representation formula of Theorem 1. The main additional work that needs to be done due to the effect of the fBm is seen in the bounds that we need in order to prove tightness (see also Appendix B) and in the proof of the upper bound in Subsection 5.5. 5.1. Proof of tightness. The main result of this section is the following proposition on tightness. Proposition 1. Let Conditions H1 and either H2-A or H2-B be satisfied. Consider any family {w ǫ : ǫ > 0} of controls in S satisfying, for some N < ∞, sup ǫ>0 w ǫ 2 S = sup ǫ>0 1 0 |û ǫ s | 2 + |v ǫ s | 2 ds < N almost surely. Then, the family (η ǫ,w ǫ , P ǫ ) : ǫ > 0 is tight. The proof of Proposition 1 will be divided into two parts which are the subject of Subsections 5.1.1 and 5.1.2. 5.1.1. Tightness of {P ǫ : ǫ > 0} in P(U × V × Y × [0, 1]) . The argument for tightness is similar to the argument for tightness in [HSS19]. As a first step, we claim that Λ(P ) = U ×V×Y×[0,1] |u| 2 + |v| 2 + |y| 2 P (dudvdydt). is a tightness function from P(U × V × Y × [0, 1]) to R ∪ {∞}. 
Since Λ is bounded from below, it is sufficient to show that for every k ∈ N, the level sets L_k = {P ∈ P(U × V × Y × [0, 1]) : Λ(P) ≤ k} are relatively compact. For ǫ > 0, let M be a positive constant large enough so that k/M < ǫ, define λ(u, v, y, t) = |u|² + |v|² + |y|² and A_ǫ = {(u, v, y, t) ∈ U × V × Y × [0, 1] : |λ(u, v, y, t)| > M}. By Chebyshev's inequality,

sup_{P ∈ L_k} P(A_ǫ) ≤ (1/M) ∫_{{(u,v,y,t) : |λ(u,v,y,t)| ≥ M}} |λ(u, v, y, t)| P(du dv dy dt) ≤ Λ(P)/M ≤ k/M < ǫ.

Therefore, we get sup_{P ∈ L_k} P((U × V × Y × [0, 1]) \ A_ǫ) > 1 − ǫ. Since (U × V × Y × [0, 1]) \ A_ǫ is also compact, this implies that L_k is a tight set of measures and Λ is a tightness function on P(U × V × Y × [0, 1]). For the second step, define G : P(P(U × V × Y × [0, 1])) → R ∪ {∞} by G(ν) = ∫_{P(U×V×Y×[0,1])} Λ(x) ν(dx). Then, according to [DE11, Theorem A.3.17], G is a tightness function on P(P(U × V × Y × [0, 1])). Moreover, the same theorem states that {P^ǫ : ǫ > 0} is a tight family in P(U × V × Y × [0, 1]) as long as sup_{ǫ>0} G(L(P^ǫ)) < ∞, which is equivalent to sup_{ǫ>0} E[Λ(P^ǫ)] < ∞. As the above holds by Lemma 10 and Lemma 11, we get that indeed {P^ǫ : ǫ > 0} is tight in P(U × V × Y × [0, 1]).

5.1.2. Tightness of {η^{ǫ,wǫ} : ǫ > 0} on C([0, 1]; R^n). Let ω_f(δ) = sup_{|s−t|<δ} |f(s) − f(t)| denote the modulus of continuity of a function f. Tightness of {η^{ǫ,wǫ} : ǫ > 0} on C([0, 1]; R^n) follows from the two conditions below.
- For every δ > 0, there exist a > 0 and δ_0 > 0 such that P(|η^{ǫ,wǫ}_0| ≥ a) ≤ δ for ǫ ≤ δ_0.
- For all a > 0, lim_{δ→0} lim sup_{ǫ→0} P(ω_{η^{ǫ,wǫ}}(δ) ≥ a) = 0.

We only need to check the second condition above since the first condition is automatically true as η^{ǫ,wǫ}_0 = 0. Recall from (15) that η^{ǫ,wǫ} is given by

η^{ǫ,wǫ}_t = (1/(√ǫ h(ǫ))) ∫_0^t (g(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) − ḡ(X̄_s)) ds + ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds + (1/h(ǫ)) ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) dW^H_s.
A combination of the Poisson equation stated in (6) and Itô's formula yields

∫_0^t (1/(√ǫ h(ǫ))) (g(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) − ḡ(X^{ǫ,wǫ}_s)) ds = ∫_0^t ∇_y φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) v^ǫ_s ds + R^ǫ_1(t),

where

R^ǫ_1(t) = −(√ǫ/h(ǫ)) (φ(X^{ǫ,wǫ}_t, Y^{ǫ,wǫ}_t) − φ(x_0, y_0)) + (√ǫ/h(ǫ)) ∫_0^t ∇_x φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) g(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) ds + ǫ ∫_0^t ∇_x φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds + (1/h(ǫ)) ∫_0^t ∇_y φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) dB_s + (ǫ/h(ǫ)) ∫_0^t ∇_x φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) dW^H_s. (24)

Therefore, we can rewrite η^{ǫ,wǫ} as

η^{ǫ,wǫ}_t = ∫_0^t ∇_y φ(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) v^ǫ_s ds + ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) u^ǫ_s ds + (1/h(ǫ)) ∫_0^t f(X^{ǫ,wǫ}_s, Y^{ǫ,wǫ}_s) dW^H_s + (1/(√ǫ h(ǫ))) ∫_0^t (ḡ(X^{ǫ,wǫ}_s) − ḡ(X̄_s)) ds + R^ǫ_1(t) = D^ǫ_1(t) + D^ǫ_2(t) + D^ǫ_3(t) + D^ǫ_4(t) + R^ǫ_1(t). (25)

The first term of R^ǫ_1 in (24) converges to zero in probability, which implies tightness on C([0, 1]; R^n). Furthermore, tightness of the remaining integral terms in (24) is implied by Markov's inequality, Lemma 12 and Lemma 15. This shows that {R^ǫ_1 : ǫ > 0} is tight and hence that {η^{ǫ,wǫ} : ǫ > 0} is indeed tight on C([0, 1]; R^n).

5.2. Proof of existence of a viable pair. In the previous subsection, we have proved that the family of processes {(η^{ǫ,wǫ}, P^ǫ) : ǫ > 0} is tight (see Proposition 1). It follows that for any subsequence of ǫ converging to 0, there exists a subsubsequence of {(η^{ǫ,wǫ}, P^ǫ) : ǫ > 0} which is convergent in distribution to some limit (η̄, P̄). The goal of this subsection is to show that (η̄, P̄) is a viable pair with respect to (θ, L) according to Definition 1 (where L is the generator defined in (5)). By the Skorokhod Representation Theorem, we may assume that η^{ǫ,wǫ} converges to η̄ almost surely along any subsequence. This will allow us to obtain an equation satisfied by η̄, since we can study the almost sure limits of each individual summand in the representation of η^{ǫ,wǫ} we had obtained in (25).
Recall that we had η ǫ,w ǫ t = t 0 ∇ y φ X ǫ,w ǫ s , Y ǫ,w ǫ s σ Y ǫ,w ǫ s v ǫ s ds + t 0 f X ǫ,w ǫ s , Y ǫ,w ǫ s u ǫ s ds + 1 h(ǫ) t 0 f X ǫ,w ǫ s , Y ǫ,w ǫ s dW H s + 1 √ ǫh(ǫ) t 0 ḡ X ǫ,w ǫ s −ḡ X s ds + R ǫ 1 (t).(26) Note that we can write the term before last as t 0 1 √ ǫh(ǫ) ḡ X ǫ,w ǫ s −ḡ X s ds = t 0 ∇ xḡ (X s )η ǫ,w ǫ s ds + R ǫ 2 (t),(27) where the remainder term R ǫ 2 (t) is given by R ǫ 2 (t) = 1 2 t 0 ∇ 2 xḡ (ζ s )η ǫ,w ǫ s X ǫ,w ǫ s −X s 2 ds with ζ s being a point in between X ǫ,w ǫ s andX s . Under either Condition H2-A or Condition H2-B, ∇ 2 xḡ (x) is bounded, so that we can write R ǫ 2 (t) ≤ 1 0 η ǫ,w ǫ s X ǫ,w ǫ s −X s ds.(28) Lemma 17 assesses the convergence to zero of R ǫ 2 (t), and [DS12, Lemma 3.2] addresses the convergence of all the other terms at play in (26) and (27) except for one, namely A ǫ,w ǫ (t) = t 0 f X ǫ,w ǫ s , Y ǫ,w ǫ s u ǫ s ds. In order to deal with this last term, let us introduce the term B ǫ,w ǫ (t) = t 0 f X s , Y ǫ,w ǫ s u ǫ s ds. and prove that it has the same limit as A ǫ,w ǫ (t). First, note that if we assume that Condition H2-B holds, the assumption that f does not depend on x implies that A ǫ,w ǫ (t) = B ǫ,w ǫ (t) even before taking limits. If instead we assume that Condition H2-A holds, we can use the Lipschitz continuity of f (x, y) to write A ǫ,w ǫ (t) − B ǫ,w ǫ (t) ≤ t 0 X ǫ,w ǫ s −X s |u ǫ s | ds ≤ sup 0≤s≤1 X ǫ,w ǫ s −X s t 0 |u ǫ s | ds. Proposition 10 and the fact that X ǫ,w ǫ converges toX in probability then imply that A ǫ,w ǫ (t) − B ǫ,w ǫ (t) → 0 (29) almost surely as ǫ goes to 0. Therefore identifying the limit of A ǫ,w ǫ (t) is the same as identifying the weak limit of B ǫ,w ǫ (t). 
Using the definition of our occupation measures given by (17) and Lemma 6, we can rewrite B ǫ,w ǫ (t) as B ǫ,w ǫ (t) = t 0 f X s , Y ǫ,w ǫ s c H s H−1/2 s 0 (s − r) H−3/2 r 1/2−Hû r dr ds = c H U 2 ×V 2 ×Y 2 ×[0,t] 2 f (X s , y)(s − r) H−3/2 s H−1/2 r 1/2−H u (2) 1 [0,s] (r) P ⊗ P (du (1) du (2) dv (1) dv (2) dy (1) dy (2) dsdr). In order to somewhat compactify notation, let us introduce the function k y (1) , y (2) , u (1) , u (2) , v (1) , v (2) , s, r = c H f (X s , y)(s − r) H−3/2 s H−1/2 r 1/2−H u (2) 1 [0,s] (r) as well as, for 0 < ζ < s, the sequence k ζ y (1) , y (2) , u (1) , u (2) , v (1) , v (2) , s, r = c H k y (1) , y (2) , u (1) , u (2) , v (1) , v (2) , s, r 1 [ζ,s−ζ] (r). With these definitions at hand, we can state the following convergence lemma. Lemma 1. Assume Conditions H1 and either H2-A or H2-B hold. Then, one has that (i) U 2 ×V 2 ×Y 2 ×[0,t] 2 k ζ dP ǫ ⊗ dP ǫ − U 2 ×V 2 ×Y 2 ×[0,t] 2 kdP ǫ ⊗ dP ǫ → 0 a.s. as ζ → 0; (ii) 2 U 2 ×V 2 ×Y 2 ×[0,t] k ζ dP ǫ ⊗ dP ǫ − 2 U 2 ×V 2 ×Y 2 ×[0,t] k ζ dP ⊗ dP → 0 a.s. as ǫ → 0; (iii) U 2 ×V 2 ×Y 2 ×[0,t] 2 k ζ dP ⊗ dP − U 2 ×V 2 ×Y 2 ×[0,t] 2 kdP ⊗ dP → 0 a.s. as ζ → 0; Proof. We first prove part (i). Since k ζ → k pointwise as ζ → 0, all we need to prove is that the function k is integrable with respect to P ǫ ⊗ P ǫ on U 2 × V 2 × Y 2 × [0, 1] 2 as then, the Dominated Convergence Theorem applies and yields the desired limit. We have for some constant C < ∞ U 2 ×V 2 ×Y 2 ×[0,1] 2 kdP ǫ ⊗ dP ǫ ≤ c H 1 0 f (X s , Y ǫ,w ǫ s ) s H−1/2 s 0 (s − r) H−3/2 r 1/2−H |û ǫ r | drds ≤ 1 0 f (X s , Y ǫ,w ǫ s ) K H |û ǫ | s ds ≤ C 1 0 f (X s , Y ǫ,w ǫ s ) 2 ds, where the second inequality follows by Lemma 6 and the last inequality is a consequence of Hölder's inequality and Proposition 10. Now, the boundedness of f under Condition H2-A or the sublinear growth of f under Condition H2-B together with Lemma 11 yield U 2 ×V 2 ×Y 2 ×[0,1] 2 kdP ǫ ⊗ dP ǫ ≤ C 1 0 f (X s , Y ǫ,w ǫ s ) 2 ds < ∞. 
Part (iii) is proven in the exact same way. Part (ii) is a consequence of the weak convergence of P ǫ tō P and the uniform integrability of {P ǫ ⊗ P ǫ : ǫ > 0} (implied by the second point of Definition 1). The above results allow us to obtain an explicit representation of the limit points (η,P ), which is the object of the following proposition. Proposition 2. Let (η,P ) be a limit point of {(η ǫ,w ǫ , P ǫ ) : ǫ > 0}. Under Conditions H1 and either H2-A or H2-B, it holds that η t = U ×V×Y×[0,t] ∇ y φ X s , y σ(y)v + ∇ xḡ (X s )η s dP + c H U 2 ×V 2 ×Y 2 ×[0,t] 2 f (X s , y (1) )(s − r) H−3/2 s H−1/2 r 1/2−H u (2) 1 [0,s] (r)dP ⊗ dP . Proof. As pointed out earlier, we can consider the limiting behavior of each individual summands in the representation (26) of η ǫ,w ǫ . First, under Conditions H1 and either H2-A or H2-B, Lemma 1 and (29) guarantee that lim ǫ→0 t 0 f X ǫ,w ǫ s , Y ǫ,w ǫ s u ǫ s ds = c H U 2 ×V 2 ×Y 2 ×[0,t] 2 f (X s , y)(s − r) H−3/2 s H−1/2 r 1/2−H u (2) 1 [0,s] (r)dP ⊗ dP . Next, we consider the Young integral terms. Under Conditions H1 and H2-A, part (i) of Lemma 15 implies that, as ǫ → 0, E sup 0≤t≤1 1 h(ǫ) t 0 f X ǫ,w ǫ s , Y ǫ,w ǫ s dW H s ≤ Ch(ǫ) −1 ǫ − β 2 → 0, and E sup t∈[0,1] ǫ h(ǫ) t 0 ∇ x φ X ǫ,w ǫ s , Y ǫ,w ǫ s f X ǫ,w ǫ s , Y ǫ,w ǫ s dW H s ≤ Ch(ǫ) −1 ǫ 1 2 → 0. Likewise, under Conditions H1 and H2-B, part (ii) of Lemma 15 implies that as ǫ → 0, E sup t∈[0,1] 1 h(ǫ) t 0 f Y ǫ,w ǫ s dW H s ≤ Ch(ǫ) −1 ǫ − M f 2 → 0, and E sup t∈[0,1] ǫ h(ǫ) t 0 ∇ x φ(X ǫ,w ǫ s , Y ǫ,w ǫ s )f (Y ǫ,w ǫ s )dW H s ≤ Ch(ǫ) −1 ǫ 1− M k 2 → 0. Consequently, under Conditions H1 and either H2-A or H2-B, we get lim ǫ→0 1 h(ǫ) t 0 f Y ǫ,w ǫ s dW H s = 0 and lim ǫ→0 ǫ h(ǫ) t 0 ∇ x φ(X ǫ,w ǫ s , Y ǫ,w ǫ s )f (Y ǫ,w ǫ s )dW H s = 0. For the limits of the remaining terms in the representation (26) of η ǫ,w ǫ , we refer to [DS12, Lemma 3.2] in which these terms have already been addressed. 
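The Young integral bounds used in the proof above (via Lemma 15) ultimately rest on Young–Loève-type estimates. The inequality itself is easy to check numerically; the sketch below uses smooth test functions, discrete Hölder seminorms, and one common form of the constant, 1/(1 − 2^{1−α−β}) — all of these choices are illustrative rather than taken from the paper:

```python
import numpy as np

def holder_seminorm(vals, t, alpha):
    """Discrete Hölder seminorm sup |h(s) - h(t)| / |s - t|^alpha on a grid."""
    dv = np.abs(vals[:, None] - vals[None, :])
    ds = np.abs(t[:, None] - t[None, :])
    mask = ds > 0
    return (dv[mask] / ds[mask] ** alpha).max()

n = 2000
t = np.linspace(0.0, 1.0, n + 1)
f = np.sin(2 * np.pi * t)
g = np.cos(2 * np.pi * t)

# left-point Riemann-Stieltjes sum approximating the Young integral of f against g
young = np.sum(f[:-1] * np.diff(g))  # ≈ -2*pi*∫ sin^2 = -pi

alpha = beta = 0.9  # alpha + beta > 1, as Young's condition requires
coarse = t[::5]     # coarser grid keeps the O(n^2) seminorm computation cheap
fa = holder_seminorm(np.sin(2 * np.pi * coarse), coarse, alpha)
gb = holder_seminorm(np.cos(2 * np.pi * coarse), coarse, beta)
C = 1.0 / (1.0 - 2.0 ** (1 - alpha - beta))
lhs = abs(young - f[0] * (g[-1] - g[0]))
rhs = C * fa * gb  # |t - s|^{alpha+beta} = 1 on the whole interval [0, 1]
print(lhs, rhs)    # lhs ≈ pi, comfortably below rhs
```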
The following proposition asserts that the invariant measure of Y^ǫ and the Lebesgue measure are among the marginals of P̄.

Proposition 3. Recall that µ denotes the unique invariant measure associated with the generator L defined in (5). Under Condition H1, we have the decomposition P̄(du dv dy dt) = ν_{y,t}(du dv) µ(dy) dt, where ν_{y,t} is a kernel on U × V depending on y ∈ Y and t ∈ [0, 1].

Proof. Let F be an element of a dense subset of C²(Y) consisting of bounded functions with bounded first and second derivatives. By Itô's formula, we have

∫_{U×V×Y×[0,t]} LF(y) dP^ǫ = ǫ (F(Y^{ǫ,wǫ}_t) − F(y_0)) − √ǫ ∫_0^t (∇_y F)^⊤(Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) dB_s − √ǫ h(ǫ) ∫_0^t (∇_y F)^⊤(Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) v^ǫ_s ds. (30)

Let us consider each individual term on the right-hand side of the above equation. The first term converges to 0 given that F is bounded. For the second term, an application of the Burkholder-Davis-Gundy inequality yields

√ǫ E|∫_0^t (∇_y F)^⊤(Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) dB_s| ≤ C√ǫ (E ∫_0^1 |σ(Y^{ǫ,wǫ}_s)|² ds)^{1/2},

which converges to zero due to the boundedness of σ(y)σ^⊤(y) in Condition H1. Similarly, by Lemma 10, we have

√ǫ h(ǫ) |∫_0^t (∇_y F)^⊤(Y^{ǫ,wǫ}_s) σ(Y^{ǫ,wǫ}_s) v^ǫ_s ds| ≤ √ǫ h(ǫ) (∫_0^t (∇_y F)^⊤ σ σ^⊤ ∇_y F (Y^{ǫ,wǫ}_s) ds)^{1/2} (∫_0^t |v^ǫ_s|² ds)^{1/2} ≤ C√ǫ h(ǫ).

Hence, letting ǫ → 0, (30) becomes

∫_{U×V×Y×[0,t]} LF(y) dP̄ = 0. (31)

Moreover, it is immediate to see that P^ǫ(U × V × Y × [0, t]) = t, which implies that the last marginal of P̄ is the Lebesgue measure. In other words, P̄ is of the form P̄(du dv dy dt) = ν_{t,y}(du dv) m(dy) dt. Moreover, since L is independent of the control (u, v), (31) implies that ∫_Y LF(y) m(dy) = 0, which implies that m(dy) is the unique invariant measure µ(dy) associated with L.

The next proposition asserts that the pair (η̄, P̄) is indeed a viable pair with respect to (θ, L), which was what this subsection was aimed at proving.

Proposition 4.
The pair η,P is a viable pair with respect to (θ, L), where θ is the function defined in (16) and L is the generator defined in (5). Proof. Lemmas 10 and 11 together with Fatou's lemma ensure thatP satisfies the first property in Definition 1. The following two properties in Definition 1 have been established in Proposition 2 and Proposition 3. 5.3. Proof of the Laplace principle lower bound. The Laplace principle lower bound can be immediately derived from Fatou's lemma and Proposition 2, which is shown in the following proposition. Proposition 5. Assume Conditions H1 and either H2-A or H2-B are satisfied. Then, for all bounded and continuous mappings a : C([0, 1]; R n ) → R, the following Laplace principle lower bound holds. lim inf ǫ→0 − 1 h 2 (ǫ) ln E exp −h 2 (ǫ)a(η ǫ ) ≥ inf Φ∈C([0,1];R n ) S H (Φ) + a(Φ), where the rate function S H is defined at (19). Proof. We can write lim inf ǫ→0 − 1 h 2 (ǫ) ln E exp −h 2 (ǫ)a(η ǫ ) ≥ lim inf ǫ→0 E 1 2 1 0 |û ǫ s | 2 + |v ǫ s | 2 ds + a η ǫ,w ǫ − δ = lim inf ǫ→0 E 1 2 U ×V×Y×[0,1] |u| 2 + |v| 2 dP ǫ + a η ǫ,w ǫ ≥ E 1 2 U ×V×Y×[0,1] |u| 2 + |v| 2 dP + a(η) ≥ inf Φ∈C([0,1];R n ) S H (Φ) + a(Φ). The first inequality comes from the variational formula (12). The second line is a direct application of the definition of the occupation measure P ǫ . The third line follows from Fatou's lemma and the convergence result in Proposition 2. Finally, the last line is a consequence of Proposition 4. 5.4. Proof of the compactness of the level sets of S H (·). We need to show that, for each k ∈ R, the level sets of S H given by L k = {Φ ∈ C([0, 1]; R n ) : S H (Φ) ≤ k}, k < ∞. are compact subsets of C([0, 1]; R n ), which indicates that S H is a good rate function. We will actually show that, for any k ∈ R, L k is relatively compact and closed. We start with relative compactness, which the following lemma addresses. Lemma 2. Let {(Φ n , P n ) : n ∈ N} be a sequence such that for every n ∈ N, (Φ n , P n ) ∈ V(θ, L) and Φ n ∈ L k . 
Assuming Conditions H1 and either H2-A or H2-B are satisfied, this sequence is relatively compact on C([0, 1]; R n ). Proof. We can show that the family {P n : n ∈ N} is relatively compact in the same way as in Subsection 5.1.1 where we proved the tightness of {P ǫ : ǫ > 0}. To show the relative compactness of {Φ n : n ∈ N}, it is sufficient to verify that lim δ→0 sup Φ∈L k ω Φ (δ) = lim δ→0 sup Φ∈L k sup |t−r|≤δ |Φ(t) − Φ(r)| = 0. By Proposition 7, the fact that (Φ n , P n ) ∈ V(θ, L) implies there exists a pair of ordinary controls (u, v) ∈ L 2 Y 2 × [0, 1] 2 ; R m × R p such that Φ t = Y×[0,t] ∇ y φ X s , y σ(y)v(s, y)µ(dy)ds + t 0 ∇ xḡ (X s )Φ s ds + Y×[0,t] f (X s , y) K H u (s, y)µ(dy)ds. Then, Φ t − Φ r = Y×[r,t] ∇ y φ X s , y σ(y)v(s, y)µ(dy)ds + t 0 ∇ xḡ (X s )Φ s ds + Y×[r,t] f (X s , y) K H u (s, y)µ(dy)ds = A 1 + A 2 + A 3 . The term A 1 can be estimated by |A 1 | ≤ Y×[r,t] ∇ y φ X s , y σ(y) 2 µ(dy)ds Y×[0,1] |v(s, y)| 2 µ(dy)ds ≤ C Y×[r,t] |y| 2Dg µ(dy)ds ≤ C |t − r|. Similarly, we have |A 2 | ≤ C t r |Φ s | ds ≤ C |t − r| , which is immediate as Φ ∈ C([0, 1]; R n ) is bounded. For the final term A 3 , we apply Proposition 10 to get |A 3 | ≤ Y×[r,t] |y| 2D f µ(dy)ds Y×[0,1] K H u (s, y) 2 µ(dy)ds ≤ C |t − r|. Combining the previous estimates leads to |Φ t − Φ r | ≤ C |t − r| + |t − r| , which completes the proof. The next step is to prove that the limit of a sequence of viable pairs is a viable pair. This is the object of the next lemma. Lemma 3. Let {(Φ n , P n ) : n ∈ N} be a sequence such that for every n ∈ N, (Φ n , P n ) ∈ V(θ, L) and Φ n ∈ L k . Furthermore, assume that the sequence {(Φ n , P n ) : n ∈ N} converges to a limit (Φ, P ). Assuming Conditions H1 and either H2-A or H2-B are satisfied, we have (Φ, P ) ∈ V(θ, L). |u| 2 + |v| 2 dP n ≤ k, so that the second criterion in Definition 1 is satisfied. The third and fourth criteria can be proved in a similar but simpler manner as in the proofs of Propositions 2 and 3. 
The final step is to prove that the map S_H is lower semicontinuous, which is done in the lemma below.

Lemma 4. Assume Conditions H1 and either H2-A or H2-B are satisfied. Then the map S_H : C([0, 1]; R^n) → [0, ∞] is lower semicontinuous.

Proof. Let Φ_n converge to Φ in C([0, 1]; R^n). We will show lim inf_{n→∞} S_H(Φ_n) ≥ S_H(Φ). When S_H(Φ_n) < ∞, there exists P_n such that (Φ_n, P_n) ∈ V(θ, L) and

S_H(Φ_n) ≥ (1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP_n − 1/n.

By Lemma 2, we can consider a subsequence along which (Φ_n, P_n) converges to (Φ, P). Moreover, Lemma 3 guarantees (Φ, P) ∈ V(θ, L). Consequently,

lim inf_{n→∞} S_H(Φ_n) ≥ lim inf_{n→∞} ((1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP_n − 1/n) ≥ (1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP ≥ (1/2) inf_{(Φ,P)∈V(θ,L)} ∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP = S_H(Φ),

which concludes the proof.

We can now combine the preceding results in order to state the following proposition, which was the object of this subsection.

Proposition 6. Assume Conditions H1 and either H2-A or H2-B are satisfied. Then, for every k ∈ R, L_k is a compact subset of C([0, 1]; R^n).

5.5. Proof of the Laplace principle upper bound and representation formula. We begin by introducing an alternate representation of S_H(Φ). By the definition of viable pairs (see Definition 1) and that of S_H(Φ) (see (19)), we can write, for any Φ ∈ C([0, 1]; R^n),

S_H(Φ) = inf_{(Φ,P)∈V(θ,L)} (1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) P(du dv dy ds) = L^r_{Φ,X̄},

where

L^r_{Φ,X̄} = inf_{P ∈ A^r_Φ} (1/2) ∫_{U×V×Y×[0,1]} (|u|² + |v|²) P(du dv dy dt).

The set A^r_Φ consists of elements P ∈ P(U × V × Y × [0, 1]) for which the decomposition (18) holds and such that ∫_{U×V×Y×[0,1]} (|u|² + |v|² + |y|²) P(du dv dy ds) < ∞ and (recalling the definition of the function θ given in (16))

∫_{U²×V²×Y²×[0,t]²} θ(X̄_s, Φ_s, y^(1), y^(2), u^(1), u^(2), v^(1), v^(2), s, r) dP ⊗ dP = Φ_t.
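The relaxed formulation above is resolved in Proposition 8 below through the operator Q^H_X̄ = ΣΣ*. The mechanism is the classical minimum-norm solution of a linear constraint: minimising (1/2)(|u|² + |v|²) subject to πu + ρv = b yields the cost (1/2)⟨b, (ππ* + ρρ*)^{−1}b⟩ at the control (π*Q^{−1}b, ρ*Q^{−1}b). A finite-dimensional numpy sketch (all matrices are random placeholders, not objects from the paper) confirms this:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 3, 5, 4
pi = rng.standard_normal((n, m))   # stands in for the operator pi_x
rho = rng.standard_normal((n, p))  # stands in for rho_x
b = rng.standard_normal(n)         # stands in for dPhi/dt - grad(gbar) Phi

# Q = Sigma Sigma^* with Sigma(u, v) = pi u + rho v
Q = pi @ pi.T + rho @ rho.T
lam = np.linalg.solve(Q, b)
u_opt, v_opt = pi.T @ lam, rho.T @ lam  # (pi^*, rho^*) applied to Q^{-1} b
cost = 0.5 * b @ lam                    # = (1/2) b^T Q^{-1} b

# cross-check: the minimum-norm solution of [pi rho] w = b returned by lstsq
w, *_ = np.linalg.lstsq(np.hstack([pi, rho]), b, rcond=None)
print(np.allclose(np.concatenate([u_opt, v_opt]), w))  # True
print(abs(cost - 0.5 * w @ w) < 1e-10)                 # True
```

For an underdetermined full-row-rank system, `lstsq` returns exactly the minimum-norm solution Σ*(ΣΣ*)^{−1}b, which is why the two computations agree.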
Now, for any Φ ∈ C([0, 1]; R n ), let us define L o Φ,X = inf w∈A o Φ 1 2 Y×[0,1] |w(t, y)| 2 µ(dy)dt,(32)where the set A o Φ consists of elements w = (u, v) : Y 2 × [0, 1] 2 → R m+p of S such that, for any t ∈ [0, 1], Y×[0,t] ∇ y φ X s , y σ(y)v(s, y) + ∇ xḡ (X s )Φ s +f (X s ) K H u (s, y) µ(dy)ds = Φ t(33) and Y×[0,1] |u(t, y)| 2 + |v(t, y)| 2 µ(dy)dt < ∞. Our claim is that one actually has that L r Φ,X = L o Φ,X , which will provide us with the representation of S H (Φ) we need to derive the upper bound of the Laplace principle. The equivalence between these two control systems is the object of the following proposition. Proposition 7. Under Conditions H1 and either H2-A or H2-B, it holds that L r Φ,X = L o Φ,X . Proof. Let us first show L r Φ,X ≥ L o Φ,X . Choose any P ∈ A r Φ . Then, by definition of A r Φ , the decomposition P (dudvdydt) = ν t,y (dudv)µ(dy)dt holds. This allows us to define an element w = (u, |u| 2 + |v| 2 P (dudvdydt) < ∞.(35) Hence, the last property in the definition of A o Φ is satisfied and based on (34), so is the first one. This shows that w (1) (t, y) ∈ A o Φ . Furthermore, (35) yields L r Φ,X = inf P ∈A r Φ 1 2 U ×V×Y×[0,1] |u| 2 + |v| 2 P (dudvdydt) ≥ inf P ∈A r Φ 1 2 Y×[0,1] |u(t, y)| 2 + |v(t, y)| 2 µ(dy)dt ≥ inf w∈A o Φ 1 2 Y×[0,1] |w(t, y)| 2 µ(dy)dt = L o Φ,X . It remains to prove that L r Φ,X ≤ L o Φ,X . To this end, choose any w = (u, v) ∈ A o Φ and construct a measure P ∈ A r Φ according to P (dudvdydt) = δ u(t,y) (du)δ v(t,y) (dv)µ(dy)dt. Checking that P satisfies all the needed properties to belong to A r Φ is similar to what was done above. We hence have a set B r ⊆ A r Φ that corresponds to the set A o Φ , from which we deduce that L o Φ,X = inf (u,v)∈A o Φ 1 2 Y×[0,1] |u(t, y)| 2 + |v(t, y)| 2 µ(dy)dt ≥ inf P ∈A r Φ 1 2 U ×V×Y×[0,1] |u| 2 + |v| 2 P (dudvdydt) = L r Φ,X , which concludes the proof. The next step is to derive an explicit expression of L o Φ,X . 
The statement of this expression requires us to introduce some linear maps. For a given x ∈ C([0, 1]; R n ), let π x : L 2 (Y × [0, 1]; R m ) → L 2 (Y × [0, 1]; R n ) and ρ x : L 2 (Y × [0, 1]; R p ) → L 2 (Y × [0, 1]; R n ) be two operators defined by π x u(t) = Yf (x)K H u(t, y)µ(dy), ρ x v(t) = Y ∇ y φ(x, y)σ(y)v(t, y)µ(dy). Under either Condition H2-A or H2-B,f (x) is bounded. This fact and Proposition 10 yield π x u L 2 ([0,1];R n ) ≤ C Y×[0,1] K H u(t, y) 2 µ(dy)dt ≤ C u L 2 (Y×[0,1];R m ) , so that π x is bounded. The operator ρ x is also bounded via Lemma 11 and the estimates (43), (44). Therefore, the Hilbert adjoints π * x and ρ * x are well-defined and given by π * ·)h, respectively. It follows from these facts that x h =K * H f ⊤ (x)h and ρ * x h = σ(·) ⊤ (∇ y φ) ⊤ (x,Σ x (u, v) = π x u + ρ x v is also a bounded operator. Thus, its Hilbert adjoint Σ * x exists and is given by Σ * x h = (π * x h, ρ * x h). With these definitions at hand, let us finally define the operator Q H x from L 2 ([0, 1]; R n ) to itself by Q H x = Σ x Σ * x =f (x)K H f (x)K H * + Y (∇ y φ(x, y)σ(y))(∇ y φ(x, y)σ(y)) ⊤ µ(dy).(36) such that for h ∈ L 2 ([0, 1]; R n ), f X K H f X K H * h (t) = c 2 Hf X t t H−1/2 t 0 (t − z) H−3/2 z 1−2H 1 z (s − z) H−3/2 s H−1/2f X s ⊤ h(s)dsdz. We are now ready to present the explicit expression of L o Φ,X , which is the object of the next proposition. Proposition 8. Assume Conditions H1 and either H2-A or H2-B, and further that the operator Q H X is invertible. Then the ordinary control problem (32) has a finite minimum cost if and only if Φ is absolutely continuous andΦ (defined a.e.) is square integrable. In this case, the solution is given by L o Φ,X = 1 0 Φ s − ∇ xḡ (X s )Φ s ⊤ (Q H Xs ) −1 Φ s − ∇ xḡ (X s )Φ s ds and is achieved for the optimal control (ū,v) = π * (Q H X ) −1 Φ − ∇ xḡ X Φ , ρ * (Q H X ) −1 Φ − ∇ xḡ X Φ . Proof. 
In one direction, let us assume (32) has a finite minimum cost then Equation (33) is satisfied for some (u, v) ∈ A o Φ . Hence, Φ is absolutely continuous. Furthermore, by Cauchy-Schwarz inequality, Φ L 2 ([0,1];R n ) = 1 0 Y ∇ y φ X s , y σ(y)v(s, y) + ∇ xḡ (X s )Φ s +f (X s ) K H u (s, y) µ(dy) 2 ds ≤ Y×[0,1] ∇ y φ X s , y σ(y)v(s, y) 2 + ∇ xḡ (X s )Φ s 2 + f (X s ) K H u (s, y) 2 µ(dy)ds (37) To see thatΦ is square integrable, we study the right hand side of this inequality. ∇ y φ X s , y σ(y)v(s, y) ∈ L 2 (Y × [0, 1]; R n ) due to Lemma 11, Remark 12 and boundedness of σ(y)σ(y) T in Condition H1. Next, we have u ∈ L 2 (Y × [0, 1]; R m ), v ∈ L 2 (Y × [0, 1]; R p ) and it follows from Proposition 10 thaṫ K H u ∈ L 2 (Y × [0, 1]; R m ). This along with the fact thatf (x), ∇ xḡ (x) is bounded under either Condition H2-A or H2-B, and the fact that Φ is bounded since its derivative exists a.e. on [0, 1], imply the remaining quantities on the right side of (37) are in L 2 (Y × [0, 1]; R n ). Therefore, we can concludeΦ is square integrable. In the other direction, let us assume that Φ is absolutely continuous and that the a.e. defined derivative is square integrable. ThenΦ − ∇ xḡ X Φ is also square integrable. Then, we can construct a control (ū,v) as in the statement of the proposition so that the set A o Φ associated with the ordinary control problem (32) is non-empty and therefore, the minimum cost L o is finite. This settles the first claim of this proposition. Next, let us derive an explicit formula for the minimum cost. When Φ is absolutely continuous and its a.e. 
defined derivative is square integrable, we have (ū,v) 2 L 2 (Y×[0,1];R m ×R p ) = ū 2 L 2 (Y×[0,1];R m ) + v 2 L 2 (Y×[0,1];R p ) = (Q H X ) −1 Φ − ∇ xḡ X Φ , ππ * (Q H X ) −1 Φ − ∇ xḡ X Φ L 2 ([0,1];R n ) + (Q H X ) −1 Φ − ∇ xḡ (X)Φ , ρρ * (Q H X ) −1 Φ − ∇ xḡ (X)Φ L 2 ([0,1];R n ) = (Q H X ) −1 Φ − ∇ xḡ (X)Φ , Q H X (Q H X ) −1 Φ − ∇ xḡ (X)Φ L 2 ([0,1];R n ) = Φ − ∇ xḡ (X)Φ, (Q H X ) −1 Φ − ∇ xḡ (X)Φ L 2 ([0,1];R n ) . Since we also know that (ū,v) ∈ A o Φ , this implies L o Φ,X, Φ ≤ Φ − ∇ xḡ X Φ, (Q H X ) −1 Φ − ∇ xḡ X Φ L 2 ([0,1];R n ) . Furthermore, by Lemma 8 and the fact thatΦ − ∇ xḡ (X)η = Σ X (u, v), we can write L o Φ,X ≥ inf (u,v)∈A o Φ Σ * X (Q H X ) −1 Σ(u, v) 2 L 2 (Y×[0,1];R m ×R p ) = Σ * X (Q H X ) −1 Φ − ∇ xḡ X Φ 2 L 2 (Y×[0,1];R m ×R p ) = (Q H X ) −1 Φ − ∇ xḡ X Φ , ΣX Σ * X (Q H X ) −1 Φ − ∇ xḡ X Φ L 2 ([0,1];R n ) = Φ − ∇ xḡ X Φ, (Q H X ) −1 Φ − ∇ xḡ X Φ L 2 ([0,1];R n ) . Thus, the minimum cost of the ordinary control problem (32) is the quantity in the last line. We are now ready to prove the Laplace principle upper bound, which is the object of the next proposition. Proposition 9. Assume Conditions H1 and either H2-A or H2-B, and further that the operator Q H X is invertible. Then the following Laplace principle upper bound holds. lim sup ǫ→0 − 1 h 2 (ǫ) ln E exp −h 2 (ǫ)a(η ǫ ) ≤ inf Φ∈C([0,1];R n ) S H (Φ) + a(Φ), where the function S is the one defined at (19). Proof. We can assume, without loss of generality, that inf Φ∈C([0,1];R n ) S H (Φ) < ∞, so that for any ζ > 0, there exists an element Φ 0 ∈ C([0, 1]; R n ) for which S H (Φ 0 ) + h(Φ 0 ) ≤ inf Φ∈C([0,1];R n ) S H (Φ) + h(Φ) + ζ. Let us also define w(φ, x, y, η) = ( u(φ, x, y, η), v(φ, x, y, η)) = π * x Q −1 x φ − ∇ xḡ (x)η , ρ * x Q −1 x φ − ∇ xḡ (x)η and w 0 = u Φ 0 , X ǫ,w ǫ , Y ǫ,w ǫ , η ǫ,w ǫ , v Φ 0 , X ǫ,w ǫ , Y ǫ,w ǫ , η ǫ,w ǫ . We can then substitute w 0 into the control variable of equation (25) and take the limit of η ǫ,w0 as ǫ → 0. 
This procedure is the same as the one that was carried out in Proposition 2 and after which we obtained η t = t 0 ρ ρ * (Q H X ) −1 Φ 0 (s) − ∇ xḡ (X s )η s + π π * (Q H X ) −1 Φ 0 (s) − ∇ xḡ (X s )η s ds + t 0 ∇ xḡ (X s )η s ds = t 0 Q H X (Q H X ) −1 Φ 0 (s) − ∇ xḡ (X s )η s ds + t 0 ∇ xḡ (X s )η s ds = Φ 0 (t).(38) In addition, we have lim ǫ→0 E 1 2 1 0 u(Φ 0 (s), X ǫ,w ǫ s , Y ǫ,w ǫ s , η ǫ,w ǫ s ) 2 + v(Φ 0 (s), X ǫ,w ǫ s , Y ǫ,w ǫ s , η ǫ,w ǫ s ) 2 ds = E 1 2 Y×[0,1] u(Φ 0 (s),X s , y,η s ) 2 + v(Φ 0 (s),X s , y,η s ) 2 µ(dy)ds = S H (Φ 0 ),(39) where the last equality is a consequence of Propositions 7, 8 and (38). Therefore, lim sup ǫ→0 − 1 h 2 (ǫ) ln E exp −h 2 (ǫ)a(η ǫ ) = lim sup ǫ→0 inf w ǫ ∈S E 1 2 1 0 |û ǫ s | 2 + |v ǫ s | 2 ds + a η ǫ,w ǫ ≤ lim sup ǫ→0 E 1 2 1 0 u Φ 0 (s), X ǫ,w ǫ s , Y ǫ,w ǫ s , η ǫ,w ǫ s 2 + v Φ 0 (s), X ǫ,w ǫ s , Y ǫ,w ǫ s , η ǫ,w ǫ s 2 ds + a(η ǫ,w0 s ) = E 1 2 Y×[0,1] u(Φ 0 (s),X s , y,η s ) 2 + v(Φ 0 (s),X s , y,η s ) 2 µ(dy)ds + a(η s ) = S H (Φ 0 ) + a(Φ 0 ) ≤ inf Φ∈C([0,1];R n ) S H (Φ) + h(Φ) + ζ, where the first equality is due to the variational formula (12), the second line is due to the choice of a particular control, and the last two equalities are consequences of (38) and (39). Finally, the fact that ζ can be chosen arbitrarily yields the desired Laplace principle upper bound. Proof of Corollary 1 First, observe that given any Ψ ∈ L 2 ([0, 1]; R n ), the quantities D 1 = t H−1/2f X ⊤ Ψ, D 2 = t 1−2H I H−1/2 1 − t H−1/2f X ⊤ Ψ are in L 2 ([0, 1]; R n ) . D 1 is square-integrable because H > 1/2 andf is bounded under either Condition H2-A or H2-B. Regarding D 2 , notice that the assumptions of Lemma 5 are satisfied with p = 2, α = H − 1/2, β = 1 − 2H and γ = H − 1/2 for values of H in the range (1/2, 3/4). Then the operator t 1−2H I H−1/2 1 − t H−1/2 is bounded on L 2 ([0, 1]; R n ), which implies D 2 is square-integrable. Next, g = g(x) implies ∇ y φ(x, y) = 0, where φ(x, y) is defined in (6). 
Thus, Q H X =f X K HK * Hf X ⊤ . We also knoẇ K H = c H Γ(H − 1/2)t H−1/2 I H−1/2 0 + t 1/2−H ,K * H = c H Γ(H − 1/2)t 1/2−H I H−1/2 1 − t H−1/2 so that Q H X = c 2 H Γ(H − 1/2) 2f X t H−1/2 I H−1/2 0 + t 1−2H I H−1/2 1 − t H−1/2f X ⊤ . Recalling that Lf X = f X L = I, at this point we want to show W = c −2 H Γ(H − 1/2) −2 L ⊤ t 1/2−H D H−1/2 1 − t 2H−1 D H−1/2 0 + t 1/2−H L (40) is the left inverse of Q H X . [KST06, Lemma 2.4] says given any h ∈ L 2 ([0, 1]; R n ), we have D H−1/2 0 + t H−1/2f X ⊤ Ψ = Ψ. Therefore, W is the left inverse of Q H X and ker Q H X = {0}. Moreover, we know Q H X is self-adjoint, hence Im Q H X = ker Q H X * ⊥ = ker Q H X ⊥ = {0} ⊥ = L 2 ([0, 1]; R n ). It follows that Q H X is bijective. It is also bounded on L 2 ([0, 1]; R n ) via Proposition 10, so we can conclude it has a bounded inverse by the inverse mapping theorem. Finally, the inverse of Q H X must coincide with the left inverse W at (40) and by using the formula for fractional derivatives in Appendix A, we get the second equation for Q H X −1 in the statement of this lemma. Conclusions and future work In this paper, we established the moderate deviations principle for slow-fast systems of the form (1) where the slow component is driven by fractional Brownian motion. There are many interesting potential directions for future work on this topic. In this paper, the fast motion is driven by standard Brownian motion and is independent of the slow component. This was done in order to focus on the effect of fBm on the tail behavior of the slow component. If the fast component was driven by fBm as well, then one would first need to understand the proper ergodic behavior of the fast process, an issue still not fully resolved, see though [LS22] for some preliminary results in special cases. 
Feedback from the slow process into the fast process would also mean interaction of the ergodic behavior of the fast process with the fBm driving the slow process; see [HL20] for partial preliminary results in this direction. Another interesting direction would be to include "unbounded homogenization" terms in the slow component, as has been done for similar systems driven by standard Brownian motion, see [Spi14]. Lastly, establishing the MDP opens the door to the construction of provably-efficient accelerated Monte Carlo methods, like importance sampling, for the estimation of rare event probabilities. See [SM20] for related work in the case H = 1/2. We plan to explore these avenues in future works on this topic.

Funding. Solesne Bourguin was partially supported by the Simons Foundation Award 635136. Konstantinos Spiliopoulos was partially supported by the National Science Foundation (DMS 1550918, DMS 2107856) and Simons Foundation Award 672441.

Data Availability. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Declarations. Conflict of interest: The authors declare that they have no conflict of interest.

Appendix A. Fractional Brownian motion and pathwise stochastic integration

A.1. Fractional Brownian motion: definition and main properties. A one-dimensional fractional Brownian motion (fBm) is a centered Gaussian process $W^H = \{W^H_t : t \in [0,1]\} \subset L^2(\Omega)$, characterized by its covariance function
$$R_H(t,s) = E\big(W^H_t W^H_s\big) = \tfrac{1}{2}\big(s^{2H} + t^{2H} - |t-s|^{2H}\big).$$
It is straightforward to verify that the increments of fBm are stationary. The parameter $H \in (0,1)$ is usually referred to as the Hurst exponent, Hurst parameter, or Hurst index. By Kolmogorov's continuity criterion, such a process admits a modification with continuous sample paths, and we always choose to work with such a modification. In this case one may show that almost every sample path is locally Hölder continuous of any order strictly less than H.
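As a quick numerical aside (ours, not part of the paper): on a finite grid the covariance $R_H$ fully determines the law of fBm, so sample paths can be generated by factoring the covariance matrix. The sketch below uses only the stated covariance formula; the grid size, seed, and function names are our own illustrative choices.

```python
import math
import random


def fbm_cov(t, s, H):
    """Covariance R_H(t, s) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))


def cholesky(A):
    """Plain Cholesky factorization A = L L^T of a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L


def sample_fbm(H, n=50, T=1.0, seed=0):
    """One fBm path on an n-point grid, via the Cholesky factor of R_H."""
    rng = random.Random(seed)
    grid = [T * (k + 1) / n for k in range(n)]  # skip t = 0 (degenerate row)
    C = [[fbm_cov(t, s, H) for s in grid] for t in grid]
    L = cholesky(C)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return grid, [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
```

For H = 1/2 the covariance reduces to $t \wedge s$, recovering standard Brownian motion, and the variance at time t is $t^{2H}$, consistent with the self-similarity discussed next.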
It is in this sense that one often says that the value of H determines the regularity of the sample paths. Note that when H = 1/2, the covariance function is $R_{1/2}(t,s) = t \wedge s$. Thus, $W^{1/2}$ is a standard Brownian motion, and in particular its disjoint increments are independent. In contrast, when H ≠ 1/2, nontrivial increments are not independent; in particular, when H > 1/2, the process exhibits long-range dependence. Note moreover that when H ≠ 1/2, fractional Brownian motion is not a semimartingale, so the usual Itô calculus does not apply. Another noteworthy property of fractional Brownian motion is self-similarity: for any constant a > 0, the processes $\{W^H_t : t \in [0,1]\}$ and $\{a^{-H} W^H_{at} : t \in [0,1]\}$ have the same distribution. Finally, an n-dimensional fractional Brownian motion is a random vector whose components are independent one-dimensional fractional Brownian motions with the same Hurst parameter H ∈ (0, 1). The self-similarity and long-memory properties of fractional Brownian motion make it an interesting and suitable input noise in many models across various fields, such as the analysis of financial time series, hydrology, and telecommunications. However, in order to develop interesting models based on fractional Brownian motion, one needs an integration theory with respect to it, which we present in the next subsection.

A.2. Pathwise stochastic integration with respect to fractional Brownian motion. Stochastic integrals with respect to fractional Brownian motion can be understood, when H ≥ 1/2, as generalized Stieltjes integrals, as introduced in the work of Zähle [Zäh98]. Let $f \in L^1([a,b])$ and $\alpha > 0$. The left-sided and right-sided fractional Riemann-Liouville integrals of f of order α are defined for almost all $x \in [a,b]$ by
$$I^{\alpha}_{a+} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x-y)^{\alpha-1} f(y)\,dy \quad\text{and}\quad I^{\alpha}_{b-} f(x) = \frac{1}{\Gamma(\alpha)} \int_x^b (y-x)^{\alpha-1} f(y)\,dy,$$
respectively, where Γ(α) is the Euler gamma function.
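As a sanity check (our own, not from the paper), $I^\alpha_{a+}$ has simple closed forms on power functions: $I^\alpha_{a+} 1(x) = (x-a)^\alpha/\Gamma(\alpha+1)$ and $I^\alpha_{0+}\, t^\beta\,(x) = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)} x^{\beta+\alpha}$. The quadrature sketch below verifies these numerically, with the weak singularity removed by the substitution $v = (x-y)^\alpha$:

```python
import math


def frac_integral_left(f, a, x, alpha, n=4000):
    """Approximate the left-sided Riemann-Liouville integral I^alpha_{a+} f(x).

    Substituting v = (x - y)**alpha removes the (x - y)**(alpha - 1)
    singularity: the integral becomes
        (1 / (alpha * Gamma(alpha))) * int_0^{(x - a)^alpha} f(x - v**(1/alpha)) dv,
    which is then evaluated by the composite midpoint rule.
    """
    upper = (x - a) ** alpha
    h = upper / n
    total = sum(f(x - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n)) * h
    return total / (alpha * math.gamma(alpha))
```

For example, with α = 1/2, a = 0 and f ≡ 1, the value at x = 1 is close to $1/\Gamma(3/2) = 2/\sqrt{\pi}$.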
This naturally leads to the definition of the function spaces I α a + (L p ([a, b])) = {g = I α a + (f ) : f ∈ L p ([a, b])} and I α b − (L p ([a, b])) = {g = I α b − (f ) : f ∈ L p ([a, b])}. The following integration by parts formula holds b a I α a + f (x)g(x)dx = b a f (x)I α b − g(x)dx for f ∈ L p ([a, b]), g ∈ L q ([a, b]) such that 1/p + 1/q ≤ 1 + α. For 0 < α < 1, we can define the fractional derivatives D α a + f (x) = d dx I 1−α a + f (x) = 1 Γ(1 − α) d dx x a (x − t) −α f (t)dt and D α b − f (x) = d dx I 1−α b − f (x) = 1 Γ(1 − α) d dx b x (t − x) −α f (t)dt as long as the right hand sides are well-defined. Furthermore, if f ∈ I α a + (L p ([a, b])) (respectively f ∈ I α b − (L p ([a, b])) and 0 < α < 1 then the previous fractional derivatives admit the Weyl representation D α a + f (x) = 1 Γ(1 − α) f (x) (x − a) α + α x a f (x) − f (y) (x − y) α−1 dy 1 (a,b) (x) and D α b − f (x) = 1 Γ(1 − α) f (x) (b − x) α + α b x f (x) − f (y) (y − x) α−1 dy 1 (a,b) (x), respectively, for almost all x ∈ [a, b]. There is also the integration by parts formula ([a, b])) such that 1/p + 1/q ≤ 1 + α. We refer to [SKM + 93] for more detailed properties of fractional operators. b a D α a + f (x)g(x)dx = b a f (x)D α b − g(x)dx for f ∈ I α a + (L p ([a, b])), g ∈ I α b − (L q Next, let f (a+) = lim xց0 f (a + x) and g(b−) = lim xց0 g(b − x) and define f a + (x) = (f (x) − f (a+))1 (a,b) (x) g b − (x) = (g(x) − g(b−))1 (a,b) (x). We recall from [Zäh98] the definition of generalized Stieltjes fractional integrals with respect to irregular functions (in the sense of which we view the stochastic integrals with respect to fractional Brownian motion appearing in this paper). Definition 2 (Generalized Stieltjes integral). Suppose that f and g are functions such that f (a+), g(a+) and g(b−) exist, f a + ∈ I α a + (L p ([a, b])) and ([a, b])) for some p, q ≥ 1, 1/p + 1/q ≤ 1, 0 < α < 1. 
Then the integral of f with respect to g is defined by g b − ∈ I 1−α b − (L pb a f dg = (−1) α b a D α a + f (x)D 1−α b − g b − (x)dx + f (a+) (g(b−) − g(a+) ) . Remark 11. If αp < 1, under the assumptions of the preceding definition, we have that f ∈ I α a + (L p ([a, b])) and we can write b a f dg = (−1) α b a D α a + f a + (x)D 1−α b − g b − (x)dx. In [Zäh98], it was further shown that if f and g are respectively λ and µ-Hölder continuous such that λ + µ > 1, then the conditions for the generalized Stieltjes integral b a f dg are satisfied for p = q = ∞ and α < λ, 1 − α < µ. In particular, this class of generalized Stieltjes integrals with Hölder continuous f, g coincides with the class of Riemann-Stieltjes integrals studied in [You36] by Young. We note here that any Young integrals appearing in this paper are constructed from Hölder continuous paths of a fractional Brownian motion. 1/2 . Slightly abusing notation, we also write K H for the integral operator K H g(s) = s 0 K H (s, r)g(r)dr. For H ≥ 1/2, the operator K H can be represented as K H g = c H Γ(H − 1/2)I 1 0 + t H−1/2 I H−1/2 0 + t 1/2−H g. Additionally, we denote byK H the "derivation" of the operator K H , i.e., . Note that later on, we will alternate betweenĝ and K −1 H g, which are equivalent ways of writing the same quantity. In this paper, the noise process we consider in the slow-fast systems we study is of the form K H g = c H Γ(H − 1/2)t H−1/2 I H−1/2 0 + t 1/2−H g.W H (t), B(t) : t ∈ [0, 1] , where B is a m-dimensional standard Brownian motion, W H is a p-dimensional fractional Brownian motion of Hurst parameter H and they are independent. 
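To illustrate the generalized Stieltjes integral in the simplest setting (a sketch with functions of our choosing, not from the paper): when f and g are smooth, the Young integral coincides with the limit of ordinary Riemann-Stieltjes sums and with $\int_a^b f(t)\,g'(t)\,dt$; by Young's theorem [You36], the same sums still converge for λ- and μ-Hölder continuous f and g with λ + μ > 1, which is how the stochastic integrals against fBm paths (H > 1/2) in this paper are understood.

```python
import math


def young_integral(f, g, a, b, n=20000):
    """Left-point Riemann-Stieltjes sum approximating int_a^b f dg.

    By Young's theorem these sums converge whenever f is lambda-Hoelder
    and g is mu-Hoelder with lambda + mu > 1. Here f and g are smooth,
    so the limit also equals the classical integral of f(t) * g'(t).
    """
    h = (b - a) / n
    return sum(f(a + k * h) * (g(a + (k + 1) * h) - g(a + k * h))
               for k in range(n))
```

With f = sin and g(t) = t^2 on [0, 1], the sum approaches $\int_0^1 2t\sin t\,dt = 2\sin 1 - 2\cos 1$.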
We will hence need to work with the Cameron-Martin space associated with the proces (W H , B), which, based on the previous description, is defined to be the space S given by K Hĝ1 , K 1/2ĝ2 : (ĝ 1 ,ĝ 2 ) ∈ L 2 [0, 1]; R m+p .(42) As a Cameron-Martin space, S is a Hilbert space equipped with the inner product given by (g 1 , g 2 ), (f 1 , f 2 ) S = g 1 , f 1 HH + g 2 , f 2 H 1/2 . Let us now state an important fact regarding the differentiability of elements in H H when H > 1/2 which we will need throughout the paper. Lemma 6. If H > 1/2 and u ∈ H H such that u = K Hû ,û ∈ L 2 ([0, 1]; R n ), then we havė u(t) =K Hû (t) = c H Γ(H − 1/2)t H−1/2 I H−1/2 0 + t 1/2−Hû (t) = c H t H−1/2 t 0 (t − s) H−3/2 s 1/2−Hû s ds, such that c H = (H(2H − 1)/β(2 − 2H, H − 1/2)) 1/2 . Meanwhile, if H = 1/2 and u ∈ H 1/2 , thenu t =û t . Proof. This is a direct consequence of formula (41). The following is another important property of the operatorK H . = t H−1/2 I H−1/2 0 + t 1/2−H f L 2 ([0,1];R n ) ≤ I H−1/2 0 + t 1/2−H f L 2 ([0,1];R n ) ≤ C f L 2 ([0,1];R n ) . For more details about fractional Brownian motion, we refer the reader to the monographs [BHØZ08,Nua06]. A.4. Results related to Young integrals. The two results presented here provide us with a way of bounding Young integrals and with a version of change of variable formula for differential equations that contain Young integrals, respectively. Lemma 7 (Young-Loéve's inequality). Let f and g be respectively α and β-Hölder continuous, such that α + β > 1. Then a.s one has t r f s dg s − f r (g t − g r ) ≤ C |f | α |g| β |t − r| α+β . Moreover, assume f is bounded then t r f s dg s ≤ C |f | α |g| β |t − r| α+β + |f | ∞ |t − r| β ≤ C |f | α |g| β |t − r| β . Proof. Refer to [FV10, Proposition 6.4]. Theorem 3. For i = 1, . . . , m, let 0 < α i < 1/2, f i ∈ I αi 0+ (L 2 ([0, b])) be bounded and g i b− ∈ I 1−αi b− (L 2 ([0, b])), where the function g i b− is defined below Lemma 5. Moreover, assume h = (h 1 , . . . 
, h m ) such that h i t = h i 0 + t 0 f i s dg i s . Then for any C 1 mapping F : R m × R → R n such that ∂F ∂xi ∈ C 1 , i = 1, . . . , m and r ≤ t ≤ T , it holds that F (h t , t) − F (h r , r) = m i=1 t r ∂F ∂x i (h s , s)f i s dg i s + t r ∂F ∂s (h s , s)ds. In particular, this change of variable formula applies to the special case when f i and g i are respectively λ i and µ i -Hölder continuous such that λ i + µ i > 1, i = 1, . . . , m. Proof. For the change of variable formula in the general case, we refer to [Zäh99, Theorem 5.2]. Now, let us consider the special case and assume there is a constant C such that |f i | λi ,|g i | µi < C and λ i + µ i > 1 for 1 ≤ i ≤ m. Then one can choose α i in the interval (0, 1/2) such that λ i > α i and µ i > 1 − α i for 1 ≤ i ≤ m.f i ∈ I αi 0+ (L 2 ([0, b])), g i b− ∈ I 1−αi b− (L 2 ([0, b])) . Theorem 4. Recall C 2,ζ (R n × Y) for some ζ > 0 is the function space defined at the beginning of Section 2. Let h ∈ C 2,ζ (R n × Y) such that Y h(x, y)µ(dy) = 0 and that for some positive constants K and D h , |h(x, y)| + |∇ x h(x, y)| + ∇ 2 x h(x, y) ≤ K 1 + |y| D h uniformly with respect to x. Then, there is a unique solution to Lu(x, y) = −h(x, y), Y u(x, y)µ(dy) = 0. Moreover, u(·, y) ∈ C 2 , ∇ 2 x u ∈ C(R n × Y) and there exists a positive constant M such that |u(x, y)| + |∇ y u(x, y)| + |∇ x u(x, y)| + ∇ 2 x u(x, y) + |∇ y ∇ x u(x, y)| ≤ M (1 + |y| D h ). Remark 12. Consider the Poisson equation in (6). Under Conditions H1 and H2-A, Theorem 4 states that there exists a positive constant C such that, uniformly, |φ(x, y)| + |∇ y φ(x, y)| + |∇ x φ(x, y)| + ∇ 2 x φ(x, y) + |∇ y ∇ x φ(x, y)| < C. On the other hand, under Conditions H1 and H2-B, Theorem 4 states that there exists a positive constant C such that, uniformly with respect to x, |φ(x, y)| + |∇ y φ(x, y)| + |∇ x φ(x, y)| + ∇ 2 x φ(x, y) + |∇ y ∇ x φ(x, y)| < C 1 + |y| Dg . B.2. Ancillary results related to the control problems. 
This subsection gathers all technical results related to the study of the control problems appearing throughout the paper. Lemma 8 (Lemma 5.2 in [HSS19]). Let H, H ′ be Hilbert spaces and a : H → H ′ be a bounded linear operator. Moreover, let q = aa * and q −1 be the inverse of q. Then for any u ∈ H, a * q −1 au H ≤ u H . Lemma 9. Assume that for all x and non-zero z ∈ R n , Y ∇ y φ(x, y)σ(y)(∇ y φ(x, y)σ(y)) ⊤ µ(dy)z, z > 0. Then, the operator Q H x defined in (36) is invertible and its inverse (Q H x ) −1 is a bounded in L 2 ([0, 1]; R n ). Proof. Using the operators π, π * , ρ, ρ * defined in Section 5.5, we have Q H x h = (ππ * + ρρ * )h with ρρ * h(t) = Y ∇ y φ(x, y)σ(y)(∇ y φ(x, y)σ(y)) ⊤ h(t, y)µ(dy). Furthermore, ππ * , ρρ * are positive and self-adjoint operators, which means that Q H x is also positive and self-adjoint. In addition, the fact that Q H x ≥ ρρ * and Condition H2-A or H2-B imply that (Q H x ) 2 ≥ (ρρ * ) 2 > 0. This leads to inf h L 2 ([0,1];R n ) =1 Q H x h L 2 ([0,1];R n ) = inf h L 2 ([0,1];R n ) =1 (Q H x ) 2 h, h L 2 ([0,1];R n ) ≥ inf h L 2 ([0,1];R n ) =1 (ρρ * ) 2 h, h L 2 ([0,1];R n ) > 0, so that Q H x is bounded from below and ker Q H x = {0}. This combined with self-adjointness implies Im Q H x = ker Q H x * ⊥ = ker Q H x ⊥ = {0} ⊥ = L 2 ([0, 1]; R n ). It follows that Q H X is bijective. The operator Q H X is also bounded in L 2 ([0, 1]; R n ) via Proposition 10, so we can conclude it has a bounded inverse by the inverse mapping theorem. Lemma 10. It can be assumed that there exists a finite constant N such that, almost surely, the control process w ǫ appearing in the variational representation (12) satisfies sup ǫ>0 w ǫ 2 S ≤ N. Proof. This is an immediate consequence of [Zha09, Theorem 3.2]. Lemma 11. Assume w ǫ ∈ S is a control such that sup ǫ>0 w ǫ 2 S = sup for some constant C > 0, which further implies that E sup t∈[0,1] Y ǫ,w ǫ t ≤ C √ ǫ . Proof. The first estimate was proven in [SM20, Lemma 3.1]. 
For the second estimate, the dissipative property of the drift coefficient of Y ǫ,w ǫ and Itô's formula yield Y ǫ,w ǫ t = e − 1 ǫ Γt y 0 + t 0 1 ǫ e − 1 ǫ (t−s) ζ(Y ǫ,w ǫ )ds + t 0 h(ǫ) √ ǫ e − 1 ǫ (t−s) σ Y ǫ,w ǫ s v ǫ s ds + t 0 1 √ ǫ e − 1 ǫ (t−s) σ Y ǫ,w ǫ s dB s . We then apply the Burkhölder-Davis-Gundy inequality to the Itô integral term and Hölder's inequality to the Riemann integral terms to get E sup t∈[0,1] Y ǫ,w ǫ t ≤ sup t∈[0,1] e − 1 ǫ Γt y 0 + 1 ǫ t 0 e − 2 ǫ Γ(t−s) ds E 1 0 |ζ(Y ǫ,w ǫ )| 2 ds + h(ǫ) √ ǫ t 0 e − 2 ǫ Γ(t−s) ds 1 0 |v ǫ s | 2 ds + 1 √ ǫ E 1 0 e − 2 ǫ Γ(t−s) σ(Y ǫ,w ǫ s ) 2 ds . Since σ(y)σ T (y) is bounded and ζ(y) is sublinear, the first estimate of this lemma can be applied to the expression E 1 0 ζ(Y ǫ,w ǫ ) 2 ds . Then, the simple fact that t r e − 2 ǫ Γ(t−s) ds ≤ ǫ ∞ 0 e −2Γs ds = ǫ 2Γ implies that E sup t∈[0,1] Y ǫ,w ǫ t ≤ C 1 √ ǫ + h(ǫ) ≤ C √ ǫ . Lemma 12. Assume w ǫ ∈ S is a control such that sup ǫ>0 w ǫ 2 S = sup ǫ>0 1 0 |û ǫ s | 2 + |v ǫ s | 2 ds < N for some finite constant N . (i) Under Conditions H1 and H2-A, there exist constants C that change from line to line such that Proof. We start with part (i). The first estimate is straightforward due to the boundedness of ∇ x φ(x, y) stated in (43) and the boundedness of g(x, y) guaranteed by Condition H2-A. For the second estimate, we assume that 0 ≤ r ≤ t ≤ 1 and apply the Burkhölder-Davis-Gundy inequality to obtain The last inequality in part (i) is a consequence of the boundedness of σ(y)σ ⊤ (y) in Condition H1 and the boundedness of ∇ x φ(x, y) stated in (43) (requiring Condition H2-A). Finally, the two remaining estimates of part (i) are derived similarly to the previous one. E    sup 0≤r,t≤1 |r−t|<ρ t r ∇ x φ X ǫ,w ǫ s , Y ǫ,w ǫ s g X ǫ,w ǫ s , Y ǫ,w ǫ s ds    ≤ Cρ, E    sup 0≤r,t≤1 |r−t|<ρ t r ∇ y φ X ǫ,w ǫ s , Y ǫ,w ǫ s σ Y ǫ,w ǫ s dB s 2    ≤ Cρ E    sup 0≤r, We continue with part (ii). 
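The elementary kernel bound invoked in these estimates, $\int_r^t e^{-\frac{2}{\epsilon}\Gamma(t-s)}\,ds \le \epsilon \int_0^\infty e^{-2\Gamma s}\,ds = \frac{\epsilon}{2\Gamma}$, in fact holds with the explicit value $\frac{\epsilon}{2\Gamma}\big(1 - e^{-2\Gamma(t-r)/\epsilon}\big)$. A short numerical sketch of our own (parameter values arbitrary) confirming both the closed form and the uniform bound:

```python
import math


def ou_kernel_mass(t, r, eps, gamma, n=20000):
    """Midpoint-rule value of int_r^t exp(-(2/eps) * gamma * (t - s)) ds.

    Closed form: (eps / (2 * gamma)) * (1 - exp(-2 * gamma * (t - r) / eps)),
    which always sits below the uniform bound eps / (2 * gamma).
    """
    h = (t - r) / n
    return sum(math.exp(-2.0 * gamma * (t - (r + (k + 0.5) * h)) / eps)
               for k in range(n)) * h
```

As ε → 0 this mass is O(ε), which is precisely what makes the fast Ornstein-Uhlenbeck component average out in the estimates above.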
For the first inequality, the sublinear growth of ∇ x φ(x, y) in y stated at (44) (requiring Condition H2-B) and the sublinear growth of g(x, y) in y from Condition H2-B imply for any where the last inequality is due to Lemma 11. For the second estimate, assume that 0 ≤ r ≤ t ≤ 1. Then, the Burkhölder-Davis-Gundy inequality combined with the sublinear growth of ∇ y φ(x, y) in y (requiring Condition H2-B) and the boundedness of σ(y)σ ⊤ (y) in Condition H1 imply that for any q in 1, 1 Dg , E    sup 0≤r≤t≤1 |r−t|<ρ t r ∇ y φ X ǫ,w ǫ s , Y ǫ,w ǫ s σ Y ǫ,w ǫ s dB s 2q    ≤ E r+ρ r ∇ y φ X ǫ,w ǫ s , Y ǫ,w ǫ s σ Y ǫ,w ǫ s 2 ds q ≤ Cρ q−1 E    sup 0≤r,t≤1 |r−t|<ρ t r 1 + Y ǫ,w ǫ s 2qDg ds    ≤ Cρ q−1 . The arguments for the three remaining estimates of part (ii) are similar, so we will handle one case only. The sublinear growth of ∇ x φ(x, y) in y stated at (44) (requiring Condition H2-B) and sublinear growth of f (y) in y in Condition H2-B imply that for any q in 1, 1 D f +Dg , where the last inequality is once again obtained using Lemma 11. Lemma 13. Assume w ǫ ∈ S is a control such that sup ǫ>0 w ǫ 2 S = sup ǫ>0 1 0 |û ǫ s | 2 + |v ǫ s | 2 ds < N for some finite constant N . Under Condition H1, for 0 < α ≤ 1/2, we have the almost sure Hölder estimate Y ǫ,w ǫ α ≤ C √ ǫ . and hence that for 0 < γ ≤ 1, E f X ǫ,w ǫ , Y ǫ,w ǫ γ ≤ C E X ǫ,w ǫ γ + ǫ − γ as well as E t r g X ǫ,w ǫ s , Y ǫ,w ǫ s ds ≤ C |t − r| 1 2 . Consequently, we have E X ǫ,w ǫ t − X ǫ,w ǫ r ≤ C |t − r| K + √ ǫh(ǫ) |t − r| ∇ x φ X ǫ,w ǫ s , Y ǫ,w ǫ s f X ǫ,w ǫ s , Y ǫ,w ǫ s dW H s ≤ E ∇ x φ X ǫ,w ǫ , Y ǫ,w ǫ f (X ǫ,w ǫ , Y ǫ,w ǫ ) β ≤ |∇ x φ(x, y)f (x, y)| Lip E X ǫ,w ǫ β + Y ǫ,w ǫ β ≤ C 1 + ǫ − 1 2 , where the last inequality is a consequence of Lemmas 13 and 14. We now proceed to the proof of part (ii). 
For the first estimate, we perform a similar calculation to the one that was done at (47) (this requires Conditions H1 and H2-B) and get E sup t∈[0,1] t 0 f Y ǫ,w ǫ s dW H s ≤ C Y ǫ,w ǫ M f 2 + E (y 0 ) D f ≤ Cǫ − M f 2 . Next, under Conditions H1 and H2-B, the M k -Hölder continuity of ∇ x φ(x, y)f (x) together with the estimates in Lemmas 13 and 14 yield E ∇ x φ(X ǫ,w ǫ r , Y ǫ,w ǫ r )f (Y ǫ,w ǫ r ) − ∇ x φ(X ǫ,w ǫ t , Y ǫ,w ǫ t )f (Y ǫ,w ǫ t ) ≤ E X ǫ,w ǫ r − X ǫ,w ǫ t M k + E Y ǫ,w ǫ r − Y ǫ,w ǫ t M k ≤ C 1 + ǫ − M k 2 |r − t| M k 2 , so that E ∇ x φ(X ǫ,w ǫ , Y ǫ,w ǫ )f (Y ǫ,w ǫ ) M k 2 ≤ Cǫ − M k 2 . Therefore, as M k 2 + H > 1 in Condition H2-B, we can apply the Young-Loéve inequality to obtain E sup t∈[0,1] t 0 ∇ x φ(X ǫ,w ǫ s , Y ǫ,w ǫ s )f (Y ǫ,w ǫ s )dW H s ≤ C W H H E ∇ x φ(X ǫ,w ǫ , Y ǫ,w ǫ )f (Y ǫ,w ǫ ) M k 2 + E[|∇ x φ(x 0 , y 0 )f (y 0 )|] ≤ Cǫ − M k 2 . Lemma 16. Assume w ǫ ∈ S is a control such that sup ǫ>0 w ǫ 2 S = sup be the modulus of continuity of a function f on C([0, 1]; R n ). According to [Bil13, Theorem 7.3], the family {η ǫ,w ǫ : ǫ > 0} is tight on C([0, 1]; R n ) if and only if -For each positive δ, there exist an a, δ 0 Lemma 4 . 4Assume Conditions H1 and either H2-A or H2-B hold. Then, S H (Φ) is lower semicontinuous, which is equivalent to the statement that the level sets of S H are closed in C([0, 1]; R n ). that w ∈ A o Φ . Jensen's inequality and the decomposition of P imply The upcoming lemma contains a useful technical result in [SKM + 93]. Lemma 5. Let p ≥ 1 and b > 0. Then the operator t β I α 0 + t γ is bounded in L p ([0, b]) if α > 0, α+β +γ = 0 and (γ + 1)p > 1. Meanwhile, the operator t β I α 1 − t γ is bounded in L p ([0, b]) if α > 0, α + β + γ = 0 and (α + γ)p < 1.Proof. This is a consequence of [SKM + 93, (5.45') and (5.46')]. Further details are given in [Zäh98, Section 5.1]. A.3. The Cameron-Martin space of fractional Brownian motion. 
Consider the deterministic kernel K H (t, s) = c H s 1/2−H t s (u − s) H−3/2 u H−1/2 du 1 {t>s} for which c H = (H(2H − 1)/β(2 − 2H, H − 1/2)) -Martin space H H associated with W H is H H = {K Hĝ :ĝ ∈ L 2 ([0, 1]; R m )}, equipped with the inner product g, f HH = ĝ,f L 2 ([0,1];R m ) Proposition 10 . 10The mapK H as described in Lemma 6 is a bounded operator in L 2 ([0, 1]; R n ).Proof. The assumptions of Lemma 5 are satisfied for p = 2, α = H − 1/2, β = 0 and γ = 1/2 − H, hence the operator I is bounded in L 2 ([0, 1]; R n ). SinceK H = t Moreover, f i as Hölder continuous functions on [0, b] are necessarily bounded. Consequently, the general change of variable formula covers this particular case. Appendix B. Regularity results and other technical lemmas This appendix gathers results related to Poisson equations as well as the technical lemmas required for the analysis of the control problems. B.1. Results related to Poisson equations. The following theorem is a consequence of [PV01, Theorem 2] and [PV05, Theorem 3] for solutions of Poisson equations. Let L be the infinitesimal generator defined in (5). 2 + |v ǫ s | 2 ds < N for some finite constant N . Then, under Condition H1, it holds that for ǫ 0 > 0 small enough, finite constant N . Under Conditions H1 and either H2-A or H2-B, there exists a constant C Proof. Under Condition H2-A or H2-B, ∇ xḡ (x) is bounded. This fact, combined with equation(27)and the fact that X ǫ,w ǫ converges toX in probability, implies that there exists some constant C such that In combination with Markov's inequality, Lemma 12 implies tightness of {D ǫ 1 : ǫ > 0} and {D ǫ 2 : ǫ > 0}. Lemma 15 implies tightness of {D ǫ 3 : ǫ > 0} and Lemma 16 implies tightness of {D ǫ 4 : ǫ > 0}. It remains to prove the tightness of {R ǫ 1 : ǫ > 0}. The estimates at (43), (44) combined with Lemma 11 and the fact that √ ǫ h(ǫ) → 0 imply that the first term in equation Proof. 
Using Fatou's lemma, we can write

∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP ≤ lim inf_{n→∞} ∫_{U×V×Y×[0,1]} (|u|² + |v|²) dP_n ...

Based on the previous fact, Lemmas 13.2 and 13.2' in [SKM+93] imply respectively that ...

(ii) Under Conditions H1 and H2-B, there exist constants C, changing from line to line, such that for any q ∈ (1, 1/(D_f + D_g)) we have

E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_y φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) σ(Y^{ε,w_ε}_s) v^ε_s ds |² ] ≤ Cρ,
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_x φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) f(Y^{ε,w_ε}_s) u^ε_s ds |² ] ≤ Cρ,
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_0^t f(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) u^ε_s ds |² ] ≤ Cρ,
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_x φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) g(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) ds |^q ] ≤ Cρ^{q−1},
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_y φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) σ(Y^{ε,w_ε}_s) dB_s |^{2q} ] ≤ Cρ^{q−1},
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_y φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) σ(Y^{ε,w_ε}_s) v^ε_s ds |^{2q} ] ≤ Cρ^{q−1},
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_r^t ∇_x φ(X^{ε,w_ε}_s, Y^{ε,w_ε}_s) f(Y^{ε,w_ε}_s) u^ε_s ds |^{2q} ] ≤ Cρ^{q−1},
E[ sup_{0≤r,t≤1, |r−t|<ρ} | ∫_0^t f(Y^{ε,w_ε}_s) u^ε_s ds |^{2q} ] ≤ Cρ^{q−1}.

This, combined with the fact that D_1, D_2 ∈ L²([0,1]; R^n), implies that for any Ψ ∈ L²([0,1]; R^n) ...

Proof. Without loss of generality, let us assume t > r. The dissipative property of the drift coefficient of Y^{ε,w_ε} and Itô's formula yield the estimate (45). By subtracting Y^{ε,w_ε}_r from both sides and applying Hölder's inequality along with the Burkholder–Davis–Gundy inequality, we bound each term on the right-hand side. To bound the first term, we combine the second estimate in Lemma 11 and the fact that ... For the second term, note that ∫_r^t e^{−(2/ε)Γ(t−s)} ds ≤ C (ε ∧ |t − r|). Moreover, the sublinearity of ζ(y) and the first estimate in Lemma 11 yield a finite bound on E[ ∫_0^1 |ζ(Y^{ε,w_ε}_s)|² ds ]. The third term on the right-hand side of (45) can be treated similarly with the help of Lemma 10. Regarding the last term, recall that σ(y)σ^T(y) is bounded in Condition H1.
Thus, we have the desired moment bound, and the Kolmogorov Continuity Theorem then yields the almost sure Hölder continuity of Y^{ε,w_ε}.

Lemma 14. Assume w_ε ∈ S is a control such that sup_{ε>0} ‖w_ε‖²_S = sup_{ε>0} ∫_0^1 (|û^ε_s|² + |v^ε_s|²) ds < N for some finite constant N. Under Conditions H1 and H2-A or H2-B, there exists a constant C and ε_0 small enough such that, for 0 < β ≤ 1/2, ...

Proof. We begin by proving the result under Conditions H1 and H2-A. According to Condition H2-A, f(x, y) is Lipschitz-continuous and bounded, so that f(x, y) is also γ-Hölder continuous for 0 < γ ≤ 1. This further implies ...

References

G. Ascione, Y. Mishura, and E. Pirozzi, Fractional Ornstein-Uhlenbeck process with stochastic forcing, and its applications, Methodology and Computing in Applied Probability 23 (2021), no. 1, 53-84.
M. Boué and P. Dupuis, A variational representation for certain functionals of Brownian motion, The Annals of Probability 26 (1998), no. 4, 1641-1659.
D. Baĭer and M. I. Freĭdlin, Theorems on large deviations, and stability under random perturbations, Dokl. Akad. Nauk SSSR 235 (1977), no. 2, 253-256. MR 0451366
C. Bayer, P. K. Friz, A. Gulisashvili, B. Horvath, and B. Stemper, Short-time near-the-money skew in rough fractional volatility models, Quant. Finance 19 (2019), no. 5, 779-798. MR 3939657
S. Bourguin, S. Gailus, and K. Spiliopoulos, Typical dynamics and fluctuation analysis of slow-fast systems driven by fractional Brownian motion, Stochastics and Dynamics 21 (2021), no. 7, 2150030.
F. Biagini, Y. Hu, B. Øksendal, and T. Zhang, Stochastic calculus for fractional Brownian motion and applications, Springer Science & Business Media, 2008.
P. Billingsley, Convergence of probability measures, John Wiley & Sons, 2013.
A. Budhiraja and X. Song, Large deviation principles for stochastic dynamical systems with a fractional Brownian noise, arXiv preprint arXiv:2006.07683 (2020).
P. Cheridito, Arbitrage in fractional Brownian motion models, Finance Stoch. 7 (2003), no. 4, 533-553. MR 2014249
P. Cheridito, H. Kawaguchi, and M. Maejima, Fractional Ornstein-Uhlenbeck processes, Electronic Journal of Probability 8 (2003), 1-14.
F. Comte and E. Renault, Long memory in continuous-time stochastic volatility models, Math. Finance 8 (1998), no. 4, 291-323. MR 1645101
P. Dupuis and R. S. Ellis, A weak convergence approach to the theory of large deviations, John Wiley & Sons, 2011.
P. Dupuis and K. Spiliopoulos, Large deviations for multiscale diffusion via weak convergence methods, Stochastic Processes and their Applications 122 (2012), no. 4, 1947-1987.
J. L. da Silva, M. Erraoui, and E. H. Essaky, Mixed stochastic differential equations: existence and uniqueness result, Journal of Theoretical Probability 31 (2018), no. 2, 1119-1141.
J.-P. Fouque, G. Papanicolaou, R. Sircar, and K. Sølna, Multiscale stochastic volatility for equity, interest rate, and credit derivatives, Cambridge University Press, 2011.
M. I. Freidlin, The averaging principle and theorems on large deviations, Russian Mathematical Surveys 33 (1978), no. 5, 117.
M. I. Freidlin and R. B. Sowers, A comparison of homogenization and large deviations, with applications to wavefront propagation, Stochastic Processes and their Applications 82 (1999), no. 1, 23-52.
M. Fukasawa, Short-time at-the-money skew and rough fractional volatility, Quant. Finance 17 (2017), no. 2, 189-198. MR 3592946
P. K. Friz and N. B. Victoir, Multidimensional stochastic processes as rough paths: theory and applications, Cambridge Studies in Advanced Mathematics, vol. 120, Cambridge University Press, Cambridge, 2010. MR 2604669
M. I. Freidlin and A. D. Wentzell, Random perturbations of dynamical systems, 3rd ed., Grundlehren der mathematischen Wissenschaften, vol. 260, Springer, Heidelberg, 2012. Translated from the 1979 Russian original by Joseph Szücs. MR 2953753
M. Forde and H. Zhang, Asymptotics for rough stochastic volatility models, SIAM J. Financial Math. 8 (2017), no. 1, 114-145. MR 3608743
S. Gailus and I. Gasteratos, Large deviations of slow-fast systems driven by fractional Brownian motion, arXiv preprint arXiv:2210.03678 (2022).
J. Gatheral, T. Jaisson, and M. Rosenbaum, Volatility is rough, Quantitative Finance 18 (2018), no. 6, 933-949.
H. Guennoun, A. Jacquier, P. Roome, and F. Shi, Asymptotic behavior of the fractional Heston model, SIAM J. Financial Math. 9 (2018), no. 3, 1017-1045. MR 3836176
J. Guerra and D. Nualart, Stochastic differential equations driven by fractional Brownian motion and standard Brownian motion, Stochastic Analysis and Applications 26 (2008), no. 5, 1053-1075.
A. Guillin, Averaging principle of SDE with small diffusion: moderate deviations, The Annals of Probability 31 (2003), no. 1, 413-443.
B. Horvath, A. Jacquier, and C. Lacombe, Asymptotic behaviour of randomised fractional volatility models, J. Appl. Probab. 56 (2019), no. 2, 496-523. MR 3986948
M. Hairer and X.-M. Li, Averaging dynamics driven by fractional Brownian motion, The Annals of Probability 48 (2020), no. 4, 1826-1860.
W. Hu, M. Salins, and K. Spiliopoulos, Large deviations and averaging for systems of slow-fast stochastic reaction-diffusion equations, Stochastics and Partial Differential Equations: Analysis and Computations 7 (2019), no. 4, 808-874.
F. C. Klebaner and R. Liptser, Moderate deviations for randomly perturbed dynamical systems, Stochastic Processes and their Applications 80 (1999), no. 2, 157-176.
A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and applications of fractional differential equations, North-Holland Mathematics Studies, vol. 204, Elsevier Science B.V., Amsterdam, 2006. MR 2218073
R. Liptser and J. Stoyanov, Stochastic version of the averaging principle for diffusion type processes, Stochastics: An International Journal of Probability and Stochastic Processes 32 (1990), no. 3-4, 145-163.
X.-M. Li and J. Sieber, Slow-fast systems with fractional environment and dynamics, Ann. Appl. Probab. 32 (2022), no. 5, 3964-4003. MR 4498200
Y. S. Mishura and G. M. Shevchenko, Existence and uniqueness of the solution of stochastic differential equation involving Wiener process and fractional Brownian motion with Hurst index H > 1/2, Communications in Statistics - Theory and Methods 40 (2011), no. 19-20, 3492-3508.
M. R. Morse and K. Spiliopoulos, Moderate deviations for systems of slow-fast diffusions, Asymptotic Analysis 105 (2017), no. 3-4, 97-135.
D. Nualart, The Malliavin calculus and related topics, Springer, 2006.
B. Pei, Y. Inahama, and Y. Xu, Averaging principles for mixed fast-slow systems driven by fractional Brownian motion, arXiv preprint arXiv:2001.06945 (2020).
G. A. Pavliotis and A. M. Stuart, Parameter estimation for multiscale diffusions, Journal of Statistical Physics 127 (2007), no. 4, 741-781.
E. Pardoux and A. Yu. Veretennikov, On the Poisson equation and diffusion approximation. I, The Annals of Probability 29 (2001), no. 3, 1061-1085.
E. Pardoux and A. Yu. Veretennikov, On the Poisson equation and diffusion approximation 3, The Annals of Probability 33 (2005), no. 3, 1111-1133.
A. N. Shiryaev, Essentials of stochastic finance: facts, models, theory, vol. 3, World Scientific, 1999.
S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional integrals and derivatives, vol. 1, Gordon and Breach Science Publishers, Yverdon-les-Bains, Switzerland, 1993.
K. Spiliopoulos and M. R. Morse, Importance sampling for slow-fast diffusions based on moderate deviations, Multiscale Modeling & Simulation 18 (2020), no. 1, 315-350.
K. Spiliopoulos, Large deviations and importance sampling for systems of slow-fast motion, Applied Mathematics & Optimization 67 (2013), no. 1, 123-161.
K. Spiliopoulos, Fluctuation analysis and short time asymptotics for multiple scales diffusion processes, Stochastics and Dynamics 14 (2014), no. 3, 1350026.
T. Sottinen and E. Valkeila, On arbitrage and replication in the fractional Black-Scholes pricing model, Statistics & Decisions 21 (2003), no. 2, 93-108. MR 2000665
L. C. Young, An inequality of the Hölder type, connected with Stieltjes integration, Acta Math. 67 (1936), no. 1, 251-282. MR 1555421
M. Zähle, Integration with respect to fractal functions and stochastic calculus. I, Probability Theory and Related Fields 111 (1998), no. 3, 333-374.
M. Zähle, On the link between fractional and stochastic calculus, Stochastic Dynamics (1999), 305-325.
X. Zhang, A variational representation for random functionals on abstract Wiener spaces, Journal of Mathematics of Kyoto University 49 (2009), no. 3, 475-490.
[]
[]
[ "A M Badalian \nNRC \"Kurchatov Institute\" Moscow\nRussia\n", "Yu A Simonov \nNRC \"Kurchatov Institute\" Moscow\nRussia\n" ]
[ "NRC \"Kurchatov Institute\" Moscow\nRussia", "NRC \"Kurchatov Institute\" Moscow\nRussia" ]
[]
The scalar resonances X(3915), X(3960), X(4140) are considered as exotic four-quark states: cqcq, cscs, cscs, while the X(3863) is shown to be the cc 2 3P0 state. The masses and widths of these resonances are calculated in the framework of the Extended Recoupling Model, where a four-quark system is formed inside the bag and has a relatively small size (≲ 1.0 fm). The resonance X(3915) then appears due to the transitions J/ψω into D*+D*− (or D*0D̄*0) and back, while the X(3960) is created due to the transitions D+s D−s into J/ψφ and back, and the X0(4140) is formed in the transitions J/ψφ into D*+s D*−s and back. The characteristic feature of the recoupling mechanism is that this type of resonance appears predominantly in S-wave decay channels and has JP = 0+. In the two-channel case the resonance lies just near the lower threshold, while due to coupling to a third channel (like the cc channel) it is shifted up and lies (20-30) MeV above the lower threshold. The following masses and widths are calculated: M(X(3915)) = 3920 MeV, Γ(X(3915)) = 20 MeV; M(X(3960)) = 3970 MeV, Γ(X(3960)) = 45(5) MeV; M(X0(4140)) = 4120(20) MeV, Γ(X0(4140)) = 100 MeV, in good agreement with experiment.
10.1140/epjc/s10052-023-11590-z
[ "https://export.arxiv.org/pdf/2301.13597v2.pdf" ]
256,416,061
2301.13597
673cc6884cb4002f93106e30ee93e0a6d39e2090
17 Mar 2023 March 21, 2023

A. M. Badalian and Yu. A. Simonov
NRC "Kurchatov Institute", Moscow, Russia

The scalar exotic resonances X(3915), X(3960), X(4140)

The scalar resonances X(3915), X(3960), X(4140) are considered as exotic four-quark states: cqcq, cscs, cscs, while the X(3863) is shown to be the cc 2 3P0 state. The masses and widths of these resonances are calculated in the framework of the Extended Recoupling Model, where a four-quark system is formed inside the bag and has a relatively small size (≲ 1.0 fm). The resonance X(3915) then appears due to the transitions J/ψω into D*+D*− (or D*0D̄*0) and back, while the X(3960) is created due to the transitions D+s D−s into J/ψφ and back, and the X0(4140) is formed in the transitions J/ψφ into D*+s D*−s and back. The characteristic feature of the recoupling mechanism is that this type of resonance appears predominantly in S-wave decay channels and has JP = 0+. In the two-channel case the resonance lies just near the lower threshold, while due to coupling to a third channel (like the cc channel) it is shifted up and lies (20-30) MeV above the lower threshold. The following masses and widths are calculated: M(X(3915)) = 3920 MeV, Γ(X(3915)) = 20 MeV; M(X(3960)) = 3970 MeV, Γ(X(3960)) = 45(5) MeV; M(X0(4140)) = 4120(20) MeV, Γ(X0(4140)) = 100 MeV, in good agreement with experiment.

1 Introduction

In the region (3.9-4.2) GeV there are now three scalar resonances, and the X(3915) was the first, observed by Belle in the e+e− → J/ψωK process [1]. Later this resonance was confirmed by BaBar [2] and in several other experiments [3], in particular in two-photon collisions [4,5].
For some years this resonance was assumed to be the conventional cc meson χc0(2P), although this interpretation has raised some doubts [6,7] (see the discussion in the reviews [8,9]) and does not agree with predictions of different relativistic potential models (RPMs) [10]-[13]. The experimental masses of the X(3915) and χc2(2P) were found to be almost equal, while in the RPMs a smaller mass, M(2 3P0) ≅ 3870 ± 30 MeV, and a much larger mass difference, δ20(2P) = M(χc2(2P)) − M(χc0(2P)) ≅ (70-100) MeV, were predicted. Notice that the large mass difference δ20 is retained even if the coupling of the χc0(2P) to open channels is taken into account [14,15]. Such theoretical expectations were supported by the Belle observation of the wide scalar X(3860) resonance [16], both in e+e− → J/ψD+D− and e+e− → J/ψD0D̄0 decays, which has the mass M = 3862(+26/−32)(+40/−82) MeV and a large width Γ ≅ 200 MeV. The existence of the scalar X(3860) resonance is confirmed by the analysis of two-photon production, γγ → DD̄, in [17]. Very recently the LHCb [18] has observed two more scalar resonances, X(3960) and X0(4140), in the D+s D−s mass spectrum in B+ → D+s D−s K+ decays, with the parameters: M(X(3960)) = (3956 ± 5 ± 10) MeV, Γ(X(3960)) = (43 ± 13 ± 8) MeV, M(X0(4140)) = (4133 ± 6 ± 6) MeV, Γ(X0(4140)) = (67 ± 17 ± 7) MeV, both with JPC = 0++. These new scalar resonances evidently look like exotic states; the X(3960) was interpreted as a molecular D+s D−s state within the QCD sum rules approach [19,20] and in a coupled-channel model [21]; in [22] it appears due to a triangle singularity, while in [23] the parameters of the X(3960), as a diquark-antidiquark state, were obtained in good agreement with experiment using the QCD sum rules approach. Notice that the masses of the X(3960) and X(4140) resonances lie ∼20 MeV above the D+s D−s and J/ψφ thresholds, respectively.
In this paper we assume that the X(3915), X(3960) and X0(4140) belong to the exotic four-quark states cqcq and cscs, and to determine their parameters we will use the Extended Recoupling Model (ERM), recently suggested in [24], which develops the Recoupling Model presented earlier [25]. The ERM allows one to calculate the mass and width of scalar four-quark states; however, within the suggested mechanism such resonances cannot exist in systems of two identical mesons, like D+s D+s or D*+s D*+s. This theoretical prediction is supported by the Belle experiment [26]. In the ERM a system of two mesons, e.g. J/ψ + φ, can transfer into another pair of mesons, D+s D−s, by rearranging the confining strings, and back, in an infinite chain of transformations like J/ψφ → D+s D−s → J/ψφ → .... Note that such sequences can also be treated, for example, in the standard OBE approximation with meson exchanges, which, however, does not produce singularities near the thresholds. In the coupled-channel models (CCM) [27,28] the interaction between hadrons, like D+s D−s and J/ψφ, is usually neglected, while in the ERM such interaction is taken into account by introducing the four-quark bag. It is important that all hadrons involved have rather small sizes, ≅ (0.40-0.55) fm; only the ω(1S) has a slightly larger r.m.s. radius, ∼0.7 fm. We would like to underline the characteristic features of the ERM [24]: first, due to the string rearrangement of a four-quark system the singularity lies close to the lower threshold; second, this mechanism produces a resonance in the S-wave hadron-hadron system, so that the quantum numbers of these resonances are JPC = 0++, 1++, 2++; third, a resonance does not appear if the hadrons are identical. In the literature there is still a controversy concerning the X(3915), and different interpretations have been proposed.
This resonance was considered in the tetraquark model within the Born-Oppenheimer approach [29,30,31,32], due to a triangle singularity [22] and threshold effects [33], as a molecular DsD̄s bound state [34] or the lightest cscs state [35], and as a diquark-antidiquark state using the QCD sum rule method [23,36]. In contrast to a molecular structure of four-quark states, in the ERM these systems are assumed to be compact, similar to the diquark-antidiquark states studied in [37]. In such compact systems the wave functions at the origin are not small, and therefore they can be produced in γγ transitions. At this point one can assume the possible existence of at least two different but complementary mechanisms producing resonances in four-quark and multiquark systems. First, there are resonances formed inside a common multiquark bag and connected with independent external channels; as a result, these resonances could be seen in all external channels. The theory of this type of approach was suggested long ago in [38], and within the diquark-antidiquark model compact Q2Q2 resonances were already predicted in 1988 [37]. The second type of multiquark resonance refers to channel-coupling resonances, where the internal multiquark region is only needed to connect different external channels with sufficient probability; the Extended Recoupling Model considered here belongs to this second type. One can easily imagine the existence of mixed models and mechanisms where these two dynamics interfere with each other. In what follows we shall consider only the ERM mechanism. In this paper we will also briefly discuss the higher scalars X(4500), X(4700), observed by the LHCb [39], which admit different interpretations. The structure of the paper is as follows. In the next section we briefly recall the basic formulas in the two-channel case and give the values of the parameters needed to define the masses and widths of the recoupled four-quark resonances.
In Section 3 a more general matrix representation of the ERM is presented. In Section 4 we calculate the transition amplitudes, give the masses and widths of the scalar resonances, compare them with experimental data, and discuss the masses of the high X(4500), X(4700) resonances as cc states. Our conclusions are presented in Section 5.

2 The two-channel approach in the Extended Recoupling Model

We study an experimental process where, among other products, two hadrons are produced and one pair of hadrons (pair 1) can transfer into another pair of hadrons (pair 2). In [24] the probability amplitude of this transition was denoted as V12(p1, p2), with p1, p2 the relative momenta of the hadrons in pairs 1 and 2. Assuming an infinite set of such transformations, the total production amplitude A2 of pair 2 was written as the product of a slowly varying function F(E) and the singular factor f12(E) = 1/(1 − N), so that A2 = F(E) f12(E). This definition of the transition amplitude V12 = V21 differs from that in other approaches, where one or more OBE diagrams with meson exchanges are taken. In the ERM [24] the process occurs through the intermediate stage of the Quark Compound Bag (QCB) [38,40], where all quarks and antiquarks of the two hadrons participate in the string recoupling and, possibly, the spin recoupling. Denoting the QCB wave function as Φ(qi) (i = 1, 2, 3, 4) and the two-hadron wave functions as Ψi(h1, h2), the amplitude V12 can be written as

V12 = (Ψ1(ha1 hb1) Φ(qi)) (Φ(qi) Ψ2(ha2 hb2)) = V1(p1) V2(p2),   (1)

i.e.
the amplitude V12 = 1/(1 − N) acquires the factorized form V12(p1, p2) = v1(p1) v2(p2), with the factor N written as

N = z(E) I1(E) I2(E).   (2)

Here z = z(E) can be called the transition probability, while I1(E), I2(E) are the following integrals (see [24]):

Ii(E) = ⟨vi Gi vi⟩ = ∫ d³pi/(2π)³ · vi²(pi) / (E′(pi) + E′′(pi) − E),   (3)

where the hadron energies E′(pi), E′′(pi) in the i-th pair near threshold, E′(p) = m′ + p²/(2m′), include the corresponding thresholds E_th,i and the reduced masses μi, namely

E_th,i = m′(i) + m′′(i),   μi = m′(i) m′′(i) / (m′(i) + m′′(i)).   (4)

The result of the integration in Ii(E) can be approximated by the form

Ii = const_i · 1 / (νi − i √(2μi (E − E_th,i))),   (5)

with μi defined in (4), while νi is expressed via the parameters of the hadron wave functions, which were calculated explicitly in [24]. Here we would like to underline that the transition probability z(E) is the only fitting parameter of the ERM. The whole series of transitions from pair 1 to pair 2 and back is summed into the amplitude f12,

f12(E) = 1 / (1 − z I1 I2),   Ii = 1 / (νi − i √(2μi (E − E_th,i))),   (6)

where the νi are found from the four-quark wave functions, as in [37,40]. The form of Eq. (6) holds for energies E > E1, E2, while below the thresholds, E < E1, E2, the amplitude is f1 = 1/(ν1 + √(2μ1 |E − E1|)). It is important that in the ERM the process proceeds with zero relative angular momentum between the two mesons, L = 0; otherwise the transition probability z12(E) is much smaller and a resonance may not appear. Note also that if the recoupling mechanism is instantaneous, i.e. the transition from one pair of mesons to another proceeds instantaneously, then the transition amplitude V(12) does not factorize into V(1)V(2); such an assumption was used in the original Recoupling Model [25]. However, in this approximation, e.g.
for the Tcc resonance, agreement with experiment was not reached [25]. On the contrary, in the ERM [24] the recoupling mechanism proceeds in two stages: at the first stage the hadrons h1, h2 collapse into a common "compound bag" [38,40], where the four quarks are kept together by the confining interaction between all possible quark pairs. This compound bag has its own wave function Φi(q1, q2, q3, q4), and the probability amplitude of the h1, h2 → Φ transition defines the factor V1(p1) in Eq. (2). In a similar way the transition from the bag state to the final hadrons h3, h4 defines the factor V2(p2), and we obtain the relation

v1(p1) = ∫ d³q1 ... d³q4 ψ_h1 ψ_h2 Φi(q1, ..., q4),   (7)

and a similar equation for v2(p2), with h1, h2 replaced by h3, h4. From vi(pi) the function Ii of (3) is defined, and using (6) one obtains νi. Now we give the experimental data and the corresponding ERM parameters for the four-quark systems: cqcq for the X(3915) and cscs for the X(3960), X(4140); we give also the threshold energies E1, E2. Here q can be a u or d quark. To define the structure of the cross sections we start with the value of the recoupling probability z = 0.2 GeV² and the parameters from item 1) to obtain the distribution |f12(E)|²; the values of |f12(E)|² will be given in Section 4. In the amplitude f12(E) the resulting singularity has the form of (6), and for equal threshold masses it produces a pole near the thresholds; however, the real distance between the thresholds is large, ∼100 MeV, and the actual singularity structure can be more complicated.

3 The matrix approach in the ERM

In the previous section we presented the ERM equations in the two-channel case, which are convenient for defining the mass of a resonance. However, they do not allow one to study some details of the process, or to consider a larger number of channels, which can have an influence on the properties of a four-quark system.
Therefore here we present a more general representation of the amplitude using the unitarity relation; the standard form for the transition amplitudes $f_{ij}(E)$ (for $L = 0$) is
$$f_{ij} - f^*_{ji} = \sum_n 2 i k_n\, f_{in} f^*_{jn}, \qquad (8)$$
and the unitarity relation can be realized through the M-matrix representation,
$$\hat f_M = \frac{1}{\hat M - i \hat k}, \qquad (9)$$
where $\hat f$, $\hat M$, $\hat k$ are matrices in the channel numbers [28]. In some cases, instead of $\hat M$ it is more convenient to use the $\hat K$ matrix, $\hat M = -\hat K^{-1}$; the matrix elements (m.e.) $M_{ik}(E)$ are real analytic functions of $E$ with the dynamical cuts. For a two-channel system $\hat f_M$ can be written as
$$\hat f_M = \frac{1}{\hat M - i \hat k} = \frac{\hat N}{D(E)}, \qquad (10)$$
with
$$\hat N = \begin{pmatrix} M_{22} - i k_2 & -M_{21} \\ -M_{12} & M_{11} - i k_1 \end{pmatrix}. \qquad (11)$$
Here
$$D(E) = (M_{11} - i k_1)(M_{22} - i k_2) - M_{12} M_{21}. \qquad (12)$$
One can easily establish the relation between Eqs. (10)-(12) and the ERM amplitude $f_{12}$ (6) in the two-channel case, which is a partial case of these equations:
$$f_{12}({\rm ERM}) = \frac{N_{11} N_{22}}{D(E)}, \qquad D(E) = (\nu_1 - i k_1)(\nu_2 - i k_2) - z, \qquad (13)$$
with
$$z = M_{12} M_{21}, \qquad \nu_i \equiv M_{ii}(E). \qquad (14)$$
One can see that for $z > 0$ the values $\nu_i = M_{ii}$ are real analytic functions of $E$. In the ERM [24] the $\nu_i$ were positive constants (defined via the parameters of the compound bag model), while in the general case Eqs. (12)-(14) include other transition m.e.'s $f_{ik}$. Later in our analysis we will be interested only in the denominator $D(E)$ (12) and the factors in (13), (14), which fully define the position of a resonance. The value of $z$ can, in principle, be calculated within the ERM; however, it depends on many unknown parameters, and at the present stage we prefer to keep $z$ as a single fitting parameter. It can be shown that $z$ depends on the width of a resonance, but depends only weakly on the resonance position. Now we consider the three-channel case, to study a more realistic situation, and choose the configuration where a resonance lies above threshold 3. Here we do not need to specify channel 3, which, for example, may be a conventional $c\bar c$ state with $J^{PC} = 0^{++}$.
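The equivalence of the ERM form (6) and the matrix form (13)-(14) can be verified numerically: with constant $M_{ii} = \nu_i$ and $M_{12} M_{21} = z$, the ratio $N_{11} N_{22}/D$ reproduces $1/(1 - z I_1 I_2)$ identically. A minimal sketch (parameter values illustrative):

```python
import cmath

def momentum(E, mu, E_th):
    """Channel momentum k_i = sqrt(2*mu_i*(E - E_i))."""
    return cmath.sqrt(2 * mu * (E - E_th))

def f12_erm(E, z, ch1, ch2):
    """Eq. (6): 1/(1 - z*I1*I2) with I_i = 1/(nu_i - i*k_i)."""
    (nu1, mu1, E1), (nu2, mu2, E2) = ch1, ch2
    I1 = 1 / (nu1 - 1j * momentum(E, mu1, E1))
    I2 = 1 / (nu2 - 1j * momentum(E, mu2, E2))
    return 1 / (1 - z * I1 * I2)

def f12_matrix(E, z, ch1, ch2):
    """Eqs. (10)-(13): N11*N22/D with D = (nu1 - i*k1)(nu2 - i*k2) - z."""
    (nu1, mu1, E1), (nu2, mu2, E2) = ch1, ch2
    d1 = nu1 - 1j * momentum(E, mu1, E1)
    d2 = nu2 - 1j * momentum(E, mu2, E2)
    return d1 * d2 / (d1 * d2 - z)
```

Multiplying numerator and denominator of $1/(1 - z I_1 I_2)$ by $(\nu_1 - i k_1)(\nu_2 - i k_2)$ gives exactly $N_{11} N_{22}/D$, which is what the two functions confirm.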
We introduce the $3 \times 3$ amplitude $\hat f_M(E)$ with three thresholds $E_i$ ($i = 1, 2, 3$) and the momenta $k_i = \sqrt{2\mu_i (E - E_i)}$, $\mu_i = \frac{m_{1i} m_{2i}}{m_{1i} + m_{2i}}$, and $E_i = m_{1i} + m_{2i}$. Here $m_{1i}, m_{2i}$ are the masses of the two hadrons in channel $i$. In this case the form of Eq. (9) is kept, and one has
$$f_3(E) = \frac{\hat N_3}{D_3(E)}, \qquad D_3(E) = \big[(M_{11} - i k_1)(M_{22} - i k_2) - M_{12} M_{21}\big](M_{33} - i k_3) + \Delta M, \qquad (15)$$
where $\Delta M$ is
$$\Delta M = M_{31} M_{12} M_{23} + M_{32} M_{21} M_{13} - M_{13} M_{31} (M_{22} - i k_2) - M_{32} M_{23} (M_{11} - i k_1). \qquad (16)$$
For the energy $E$ below thresholds 1 and 2, $-i k_1 = |k_1|$, $-i k_2 = |k_2|$, and the factor $\Delta M$ is a real function of $E$. For threshold 3 lying below thresholds 1 and 2 one can define the poles of the amplitude $\hat f_3$, or the zeroes of $D_3(E)$, and rewrite Eq. (15) as
$$D_3 = (M_{11} - i k_1)(M_{22} - i k_2) - \tilde z(E), \qquad (17)$$
where the transition probability $\tilde z(E)$ is
$$\tilde z(E) = M_{12} M_{21} - \frac{\Delta M\, (M_{33} + i k_3)}{M_{33}^2 + k_3^2}. \qquad (18)$$
One can see that $\tilde z(E)$ acquires an imaginary part, which can be of both signs. Therefore the influence of a third (or more) open channel, lying below the thresholds $E_1, E_2$, on the $2 \times 2$ matrix $f_{12}(E)$ may be important in some cases. Channel 3 can be taken into account by introducing complex values of $z(E)$, which can depend on the energy as in Eq. (18).

The masses and widths of the scalar resonances

We start with the X(3915) resonance and consider the following recoupling process: $J/\psi\omega \to D^* \bar D^*$. At first we look at the two-channel situation and choose the recoupling parameter $z_2 = 0.18$ GeV$^2$. For the X(3915) structure $cq\bar c\bar q$ the parameters $\mu_i, \nu_i, E_i$ are given in item 1) of Section 2. Then, inserting all parameters into Eq. (13), one obtains the distribution $|f_{12}(E)|^2$ ($f_2 \equiv f_{12}$). Its values for different $E$ are given in Table 1, which shows that the maximum takes place at $E = 3880$ MeV, just near the lower threshold, with $\Gamma_2 = \Gamma(2\text{-channels}) \simeq 15$ MeV. In experiment, for this resonance, observed by the Belle group in the process $e^+ e^- \to e^+ e^- J/\psi\omega$ [1], the larger mass $M({\rm exp.}) = (3918.4 \pm 1.9)$ MeV and $\Gamma({\rm exp.}) = (20 \pm 5)$ MeV [3] were obtained.
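The mechanism of Eqs. (16)-(18), by which an open third channel turns the effective transition probability complex, can be demonstrated with a small script. All matrix elements below are arbitrary illustrative numbers (not fitted values from the text); the energy is assumed below thresholds 1 and 2, so $d_i = \nu_i + |k_i|$ is real, while channel 3 is open ($k_3$ real and positive):

```python
def delta_M(M, d1, d2):
    """Eq. (16) with d_i = M_ii - i*k_i for channels 1 and 2."""
    return (M[2][0] * M[0][1] * M[1][2] + M[2][1] * M[1][0] * M[0][2]
            - M[0][2] * M[2][0] * d2 - M[2][1] * M[1][2] * d1)

def z_tilde(M, d1, d2, k3):
    """Eq. (18): effective transition probability once channel 3 is open."""
    dM = delta_M(M, d1, d2)
    return M[0][1] * M[1][0] - dM * (M[2][2] + 1j * k3) / (M[2][2] ** 2 + k3 ** 2)

# illustrative symmetric M-matrix (nominal GeV units)
M = [[0.21, 0.30, 0.10],
     [0.30, 0.44, 0.15],
     [0.10, 0.15, 0.50]]

# below thresholds 1 and 2: -i*k_i -> +|k_i|, so d_i is real
d1, d2 = 0.21 + 0.12, 0.44 + 0.35
k3 = 0.40  # channel 3 open

z_eff = z_tilde(M, d1, d2, k3)  # acquires a nonzero imaginary part
```

If the couplings to channel 3 are switched off ($M_{13} = M_{23} = 0$), $\Delta M$ vanishes and $\tilde z$ collapses back to the real product $M_{12} M_{21}$ of Eq. (14).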
In the case of 3 channels, when e.g. the coupling to the $c\bar c$ channel is taken into account, the factor $z_3(E)$ acquires an imaginary part. In this case we calculate the amplitude $f_3(E)$, taking $z_3 = (0.18 - i\,0.20)$ GeV$^2$; the values of $|f_3(E)|^2$ are given in Table 1. From Table 1 one can see that in the 3-channel case the peak is shifted up by $\sim 35$ MeV and corresponds to the mass $E_R \simeq 3.915$ GeV and the width $\Gamma_3 \simeq 20$ MeV, which are in good agreement with the experimental mass and $\Gamma({\rm exp.}) = 20(5)$ MeV [3].

The scalar resonance X(3960) with $J^{PC} = 0^{++}$ was recently observed by LHCb in $B^+ \to J/\psi\phi K^+$ [18], and within the ERM it can be explained by the infinite chain of transitions $J/\psi\phi \to D_s^+ D_s^-$ and back. In the two-channel approximation the X(3960) parameters ($\nu_i, \mu_i, E_i$, $i = 1, 2$) are given in item 2) (Section 2) and are used to define the amplitude (13). First we choose $z_2 = 0.30$ GeV$^2$ and calculate the transition amplitudes $|f_{12}(E)|^2$; their values are given in Table 2. In the two-channel approximation the numbers from Table 2 show the peak at $E = 3940$ MeV, near the $D_s^+ D_s^-$ threshold, and $\Gamma(2\text{-ch.}) \simeq 15$ MeV. In the 3-channel case the mass of the X(3960) resonance is shifted up to $M(3\text{-ch.}) = 3970$ MeV and the width increases to $\Gamma({\rm th.}) \simeq 45(5)$ MeV; these values are in agreement with the experimental numbers $M(X(3960)) = 3956(15)$ MeV and $\Gamma(X(3960)) = (43 \pm 21)$ MeV [18].

In [18] LHCb reported another resonance, the X(4140) with $J^{PC} = 0^{++}$, in the $B^+ \to D_s^+ D_s^- K^+$ decay. Its mass $M(X(4140)) = 4133(12)$ MeV is close to the $J/\psi\phi$ threshold. We consider this resonance as a $cs\bar c\bar s$ system and first calculate the squared amplitudes $|f_{12}(E)|^2$ in the two-channel case, taking the parameters $\mu_i, \nu_i, E_i$ from item 3) of Section 2. In this 2-channel case, $J/\psi\phi$ and $D_s^{*+} D_s^{*-}$, the transition probability $z_2 = 0.35$ is taken and the calculated values of $|f_{12}|^2$ are given in Table 3. In the three-channel case the channel $D_s^+ D_s^-$ is added as the third one; then the values $|f_3|^2$ are calculated for $z_3 = 0.20 - i\,0.20$ and given in Table 3. From Table 3 one can see the peak at $E_R = (4.09 \pm 0.01)$ GeV, $\Gamma({\rm th.}) = 60$ MeV in the two-channel approximation, and the peak at $E_R = (4.12 \pm 0.02)$ GeV, $\Gamma({\rm th.}) \simeq 100$ MeV in the three-channel case, which are in good agreement with the experimental mass $M(X(4140)) = (4133 \pm 12)$ MeV and $\Gamma(X(4140)) = (67 \pm 24)$ MeV [18].

Our numbers in Tables 1-3 show that in the two-channel case the resonance always lies just near the lower threshold; however, if the coupling to the third channel is taken into account, it is shifted up and its position occurs to be close to the experimental number. The masses and widths of the exotic resonances X(3915), X(3960), X(4140), defined in the ERM, are given in Table 4 together with the experimental data. From Table 4 one can see that in the ERM the predicted masses and widths of the scalar four-quark resonances are in good agreement with experiment, if besides the two channels which create the resonance, the coupling of the resonance to a third channel is taken into account. Comparing our results with those in the literature, one can notice that our conclusions on the four-quark structure of the X(3915), X(3960), X(4140) also agree with the analysis in the paper [33], based on the coupled-channel model of the $c\bar c$ and meson-meson systems. Notice that the general structure of the channel-coupling matrix elements in both approaches is similar.

The scalar X(4500), X(4700) resonances

The high scalar resonances X(4500), X(4700), or $\chi_{c0}(4500)$, $\chi_{c0}(4700)$ [39], were studied in many papers, and two interpretations were suggested for them. First, the X(4500) and X(4700) are considered as the $c\bar c$ states $4\,^3P_0$ and $5\,^3P_0$, and their masses were calculated in relativistic quark models where coupling to open channels was taken into account [14, 15, 41]. In [41] the influence of open channels is studied using the so-called screened potential [11], while in [13] the spectrum was calculated using the relativistic string Hamiltonian [42] with the flattened confining potential [43]; this flattening effect arises due to the creation of virtual $q\bar q$ pairs.
Notice that the flattened confining potential appears to be universal for all types of mesons: it produces hadronic shifts down by $\sim (100-130)$ MeV for the 4P, 5P charmonium states and gives the masses of the $4\,^3P_0$, $5\,^3P_0$ states in reasonable agreement with experiment [13]. On the contrary, in [44], within the $^3P_0$ model, much smaller shifts due to the coupled-channel effects, $\lesssim 30$ MeV, were obtained for the $4\,^3P_0$, $5\,^3P_0$ states, while in [41] these states acquire too large mass shifts for the chosen screened potential. A model-independent analysis of the $c\bar c$ spectrum can also be done by means of the Regge trajectories, if they are defined not for the meson mass $M(nL)$ but for the excitation energy $E(nL) = M(nL) - 2\bar m_Q$ [45], where $\bar m_Q$ is the current heavy-quark mass [13]:
$$\big(M(n\,^3P_0) - 2\bar m_c\big)^2 = 1.06 + 1.08\, n_r \ \ ({\rm in~GeV^2}); \qquad n = n_r + 1, \quad \bar m_c = 1.20~{\rm GeV}. \qquad (19)$$
This Regge trajectory gives $M(4\,^3P_0) = 4.474$ GeV and $M(5\,^3P_0) = 4.719$ GeV, in good agreement with the LHCb data [39] (see Table 5). In Table 5 the masses $M(2\,^3P_0) = 3863$ MeV, $M(4\,^3P_0) = 4473$ MeV and $M(5\,^3P_0) = 4719$ MeV show very good agreement with those of the $\chi_{c0}(3862)$ [16], X(4500) and X(4700) [39]. At present other high excitations with $J^P = 1^+, 2^+$ ($n = 4, 5$) have not yet been found, and their observation would be very important to understand the fine-structure effects of high charmonium; in particular, the fine-structure splitting has to decrease for a screened GE potential. Notice that the resonance X(4700) lies very close to the $\psi(2S)\phi$ threshold, and this fact indicates a possible connection between the $c\bar c$ and the $cs\bar c\bar s$ states. The four-quark interpretation of the X(4500), X(4700) was discussed in [19], [46]-[49], where in the mass region (4.4-4.8) GeV the radial or orbital excitations of diquark-antidiquark systems can exist.
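Eq. (19) can be checked by direct arithmetic; the short script below (slope, intercept and $\bar m_c$ taken from the text) reproduces the Table 5 masses to within 1 MeV of rounding:

```python
def mass_n3P0(n, mbar_c=1.20):
    """Eq. (19): (M(n^3P_0) - 2*mbar_c)^2 = 1.06 + 1.08*n_r (GeV^2), n = n_r + 1."""
    n_r = n - 1
    return 2 * mbar_c + (1.06 + 1.08 * n_r) ** 0.5  # GeV

masses = {n: round(1000 * mass_n3P0(n)) for n in range(1, 7)}  # MeV, cf. Table 5
```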
Conclusions

In our paper the scalar resonances X(3915), X(3960), X(4140) are assumed to be four-quark states, produced due to the recoupling mechanism, when one pair of mesons can transform into another pair of mesons infinitely many times. These resonances do not exist in the $c\bar c$ spectrum. As four-quark states they have several specific features:

1. The resonance appears only in the S-wave decay channel.

2. Within the ERM it lies rather close to the lower threshold.

3. The scalar four-quark resonance can be created in the two-channel case due to transitions between the channels, but it can also be coupled to another channel 3, e.g. the $c\bar c$ channel.

4. These resonances do not have large sizes, being compact systems, and this fact may be important for their observation. In the case of the X(3915) this statement is confirmed by the Belle analysis of the $Q^2$ distribution of the X(3915) $\to J/\psi\omega$ decays in [50].

The parameters of the four-quark resonances:

1) X(3915), $J^P = 0^+$, $\Gamma({\rm exp.}) = 20(5)$ MeV [1, 3], $J/\psi\omega \to D^* \bar D^*$; $E_1 = 3.880$, $E_2 = 4.020$, $\mu_1 = \frac{M(J/\psi) M(\omega)}{M(J/\psi) + M(\omega)} = 0.624$, $\mu_2 = \frac{M(D^*) M(\bar D^*)}{M(D^*) + M(\bar D^*)} = 1.050$ (all in GeV). From [24], $\nu_1(J/\psi\omega) = 0.21$ GeV, $\nu_2(D^* \bar D^*) = 0.44$ GeV.

2) X(3960), $J^P = 0^+$, $\Gamma({\rm exp.}) = 43(21)$ MeV [18], $[J/\psi\phi] \to [D_s^- D_s^+]$; $E_1 = 3.936$, $E_2 = 4.116$, $\mu_1 = \frac{M_{J/\psi} M_\phi}{M_{J/\psi} + M_\phi} = 0.767$, $\mu_2 = \frac{M(D_s^+) M(D_s^-)}{M(D_s^+) + M(D_s^-)} = 0.984$; $\nu_1(J/\psi\phi) = 0.265$, $\nu_2 = 0.424$ (all in GeV).

3) X(4140), $J^P = 0^+$, $\Gamma({\rm exp.}) = 67(24)$ MeV [18], $[J/\psi\phi] \to [D_s^{*-} D_s^{*+}]$; $E_1 = 4.116$, $E_2 = 4.224$, $\mu_1 = 0.767$, $\mu_2 = 1.056$, $\nu_1 = 0.265$, $\nu_2 = 0.410$ (all in GeV).

Table 1: The values of $|f_{12}(E)|^2$ for the X(3915)

E (GeV)      | 3.85 | 3.86 | 3.88  | 3.89  | 3.90 | 3.91  | 3.915 | 3.93
|f_2(E)|^2   | 3.04 | 3.68 | 63.08 | 25.02 | 8.33 | 2.13  | 1.65  | 1.72
|f_3(E)|^2   | 1.82 | 1.79 | 1.03  | 1.50  | 3.30 | 348.4 | 360   | 243

Table 2: The transition probability $|f_{12}|^2$ as a function of the energy E for the X(3960) resonance

E (GeV)                    | 3.85 | 3.88 | 3.89 | 3.92 | 3.95 | 3.97 | 4.00  | 4.05
|f_12|^2 (z = 0.30)        | 3.93 | 28.6 | 7.89 | 3.20 | 2.28 | 2.00 | 1.38  | 1.50
|f_3|^2 (z = 0.30 - i0.30) | 2.0  | 1.43 | 4.02 | 23.7 | 198  | 500  | 142.3 | 42.2

Table 3: The values of $|f_{12}(E)|^2$ and $|f_3(E)|^2$ for the X(4140).

Table 4: The ERM predictions for the masses and widths (in MeV) of the exotic resonances with $J^{PC} = 0^{++}$

Resonance | M(th.)   | M(exp.)  | Γ(th.) | Γ(exp.)
X(3915)   | 3920     | 3918(2)  | 20     | 20(5) [3]
X(3960)   | 3970     | 3956(15) | 45(5)  | 43(21) [18]
X(4140)   | 4120(20) | 4133(12) | 100    | 67(24) [18]

Table 5: The Regge trajectory predictions for the masses of the charmonium $n\,^3P_0$ states (in MeV)

state  | M(nP) | exp. mass
1^3P_0 | 3429  | 3414.8(3)
2^3P_0 | 3863  | 3862(+26/-32) [16]
3^3P_0 | 4194  | abs.
4^3P_0 | 4473  | 4474 ± 6 [39]
5^3P_0 | 4719  | 4694 ± 4(+16/-3) [39]
6^3P_0 | 4941  | abs. in different models

The masses and widths of the X(3915), X(3960), X(4140), presented in Table 4, are obtained in good agreement with experiment.

The authors are grateful to N. P. Igumnova for collaboration.

References

- S. K. Choi et al. (Belle Collab.), Phys. Rev. Lett. 94, 182002 (2005); arXiv: hep-ex/0408126.
- B. Aubert et al. (BaBar Collab.), Phys. Rev. Lett. 101, 082001 (2008).
- P. A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083 (2020).
- S. Uehara et al. (Belle Collab.), Phys. Rev. Lett. 104, 092000 (2010); arXiv: 0912.4451 [hep-ex].
- J. P. Lees et al. (BaBar Collab.), Phys. Rev. D 86, 072002 (2012); arXiv: 1207.2651 [hep-ex].
- M. X. Duan et al., Phys. Rev. D 101, 054029 (2020); arXiv: 2002.03311 [hep-ph], and references therein.
- S. L. Olsen, arXiv: 1904.06130 [hep-ex], and references therein; Phys. Rev. D 91, 057501 (2015), arXiv: 1410.6534 [hep-ex].
- S. L. Olsen, T. Skwarnicki, and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018); arXiv: 1708.04012 [hep-ex], and references therein.
- N. Brambilla et al., Phys. Rept. 873, 1 (2020); arXiv: 1907.07583 [hep-ex].
- T. Barnes, S. Godfrey, and E. S. Swanson, Phys. Rev. D 72, 054026 (2005); arXiv: hep-ph/0505002.
- B. Q. Li, C. Meng, and K. T. Chao, Phys. Rev. D 80, 014012 (2009); arXiv: 0904.4068 [hep-ph].
- D. Ebert, R. N. Faustov, and V. O. Galkin, Eur. Phys. J. C 71, 1825 (2011); arXiv: 1111.0454 [hep-ph].
- A. M. Badalian and B. L. G. Bakker, Phys. Rev. D 100, 054036 (2019); arXiv: 1902.09174 [hep-ph].
- P. G. Ortega, J. Segovia, D. R. Entem, and F. Fernandez, Phys. Rev. D 94, 114018 (2016); arXiv: 1608.01325 [hep-ph].
- E. J. Eichten, K. Lane, and C. Quigg, Phys. Rev. D 73, 014014 (2006); arXiv: hep-ph/0511179.
- K. Chilikin et al. (Belle Collab.), Phys. Rev. D 95, 112003 (2017); arXiv: 1704.01872 [hep-ex].
- E. Wang, H. S. Li, W. H. Liang, and E. Oset, Phys. Rev. D 103, 054008 (2021); arXiv: 2010.15431 [hep-ph].
- Q. Xin, Z. G. Wang, and X. S. Yang, arXiv: 2207.09910 [hep-ph].
- H. Mutuk, arXiv: 2211.14836 [hep-ph].
- T. Ji et al., arXiv: 2212.00631 [hep-ph].
- J. M. Xie, M. Z. Liu, and L. S. Geng, arXiv: 2207.12178 [hep-ph].
- S. S. Agaev, K. Azizi, and H. Sundu, arXiv: 2211.14129 [hep-ph].
- Yu. A. Simonov, Eur. Phys. J. C 82, 1024 (2022); arXiv: 2209.03697 [hep-ph]; A. M. Badalian and Yu. A. Simonov, arXiv: 2205.02576 [hep-ph].
- Yu. A. Simonov, JHEP 04, 051 (2021); arXiv: 2011.12326 [hep-ph].
- X. Y. Gao et al. (Belle Collab.), Phys. Rev. D 105, 032002 (2022); arXiv: 2112.02497 [hep-ex].
- I. V. Danilkin and Yu. A. Simonov, Phys. Rev. Lett. 105, 102002 (2010), arXiv: 1006.0211 [hep-ph]; Phys. Rev. D 81, 074027 (2010), arXiv: 0907.1088 [hep-ph]; I. V. Danilkin, V. D. Orlovsky, and Yu. A. Simonov, Phys. Rev. D 85, 034012 (2012), arXiv: 1106.1552 [hep-ph].
- A. M. Badalian, L. P. Kok, M. I. Polikarpov, and Yu. A. Simonov, Phys. Rept. 82, 32 (1982).
- N. Brambilla, G. Krein, J. Tarrús Castellà, and A. Vairo, Phys. Rev. D 97, 016016 (2018), arXiv: 1707.09647 [hep-ph].
- L. Maiani, A. Pilloni, A. D. Polosa, and V. Riquer, arXiv: 2208.02730 [hep-ph].
- D. Ebert, R. N. Faustov, and V. O. Galkin, Eur. Phys. J. C 58, 399 (2008); arXiv: 0808.3912 [hep-ph].
- E. Braaten, C. Langmack, and D. H. Smith, Phys. Rev. D 90, 014040 (2014); arXiv: 1402.0438 [hep-ph].
- P. G. Ortega, J. Segovia, D. R. Entem, and F. Fernandez, Phys. Lett. B 778, 1 (2018), arXiv: 1706.02639 [hep-ph].
- X. Li and M. B. Voloshin, Phys. Rev. D 91, 114014 (2015), arXiv: 1503.04431 [hep-ph].
- R. F. Lebed and A. D. Polosa, Phys. Rev. D 93, 094024 (2016), arXiv: 1602.08421 [hep-ph].
- W. Chen et al., Phys. Rev. D 96, 114017 (2017), arXiv: 1706.09731 [hep-ph].
- A. M. Badalian, B. L. Ioffe, and A. V. Smilga, Nucl. Phys. B 281, 85 (1987).
- R. L. Jaffe, Phys. Rev. D 15, 267 (1977); ibid., 281 (1977).
- R. Aaij et al. (LHCb Collab.), Phys. Rev. Lett. 118, 022003 (2017), arXiv: 1606.07895 [hep-ex]; Phys. Rev. D 95, 012002 (2017), arXiv: 1606.07898 [hep-ex]; Phys. Rev. Lett. 127, 082001 (2021), arXiv: 2103.01803 [hep-ex].
- Yu. A. Simonov, Nucl. Phys. A 416, 103 (1984); Sov. J. Nucl. Phys. 36, 99 (1982) [Yad. Fiz. 36, 722 (1982)].
- M. X. Duan and X. Liu, Phys. Rev. D 104, 074010 (2021), arXiv: 2107.14438 [hep-ph].
- A. V. Dubin, A. B. Kaidalov, and Yu. A. Simonov, Phys. Lett. B 343, 310 (1995); Phys. Atom. Nucl. 56, 213 (1993); arXiv: hep-ph/9311344.
- A. M. Badalian, B. L. G. Bakker, and Yu. A. Simonov, Phys. Rev. D 66, 034026 (2002), arXiv: hep-ph/0204088.
- S. Ferretti and E. Santopinto, Front. in Phys. 9, 76 (2021); arXiv: 2104.00918 [hep-ph].
- S. S. Afonin, Mod. Phys. Lett. A 22, 1369 (2007); S. S. Afonin and I. V. Pusenkov, Phys. Rev. D 90, 094020 (2014); arXiv: 1308.6540 [hep-ph].
- D. Ebert, R. N. Faustov, and V. O. Galkin, Eur. Phys. J. C 58, 399 (2008); arXiv: 0808.3912 [hep-ph].
- J. Wu et al., Phys. Rev. D 94, 094031 (2016), arXiv: 1608.07900 [hep-ph].
- Q. F. Lü and Y. B. Dong, Phys. Rev. D 94, 074007 (2016); arXiv: 1607.05570 [hep-ph].
- Y. Xie, D. He, X. Luo, and H. Sun, arXiv: 2204.03924 [hep-ph].
- Y. Teramoto et al. (Belle Collab.), arXiv: 2301.09421 [hep-ex].
[]
[ "PROPER MOTIONS OF SUNSPOT'S UMBRAL DOTS AT HIGH TEMPORAL AND SPATIAL RESOLUTION" ]
[ "Hadis Goodarzi [email protected] \nSchool of Astronomy Institute for Research in Fundamental Sciences (IPM)\nP.O. Box 19395-5746TehranIran\n\nResearch Institute for Astronomy and Astrophysics of Maragha (RIAAM)\nMaraghaIran\n", "Serge Koutchmy \nInstitut d'Astrophysique de Paris\nUMR 7091 CNRS and UPMC (Sorbonne University\n98 Bis Bd Arago75014ParisFrance\n", "Ali Adjabshirizadeh \nDept of Astrophysics\nFaculty of Physics\nTabriz University\nTabrizIran\n" ]
[ "School of Astronomy Institute for Research in Fundamental Sciences (IPM)\nP.O. Box 19395-5746TehranIran", "Research Institute for Astronomy and Astrophysics of Maragha (RIAAM)\nMaraghaIran", "Institut d'Astrophysique de Paris\nUMR 7091 CNRS and UPMC (Sorbonne University\n98 Bis Bd Arago75014ParisFrance", "Dept of Astrophysics\nFaculty of Physics\nTabriz University\nTabrizIran" ]
[]
To deepen the analysis of the photometric properties of the umbra of a sunspot, we study proper motions of small features such as umbral dots (UDs) inside a single sunspot observed by SOT of Hinode close to the disk center. We consider horizontal flows with high precision and details to study transient motion behavior of UDs in short time intervals. Blue continuum images were first deconvolved with the PSF, such that the stray light is precisely removed and the original resolution is improved. Several images were co-added to improve the S/N ratio keeping a reasonable temporal resolution and checking that the results are reproducible. The Fourier local correlation tracking (FLCT) technique is applied to the new corrected time sequence of images and horizontal velocity maps were obtained both for the whole umbra (16 ×12 ) and for a high resolution small region of the umbra (3.5 ×3.5 ) to study the smallest details of the velocity fields. We used two different Gaussian tracking windows (0.8 and 0.2 arcsec) which reveals two types of horizontal motions for umbral features. First, a global inner penumbra and peripheral umbra inward motion directed to the central parts is revealed as an overall proper motion of bright peripheral fine structures. Second, motions matching small cells inside the darkest parts of the umbra with apparent sink and source areas suggesting possible upflows and downflows appearing in different bright and dark locations without a definite answer regarding their brightness identification with a convective or a buoyant cell.
10.3847/1538-4357/aac499
[ "https://arxiv.org/pdf/1807.05531v1.pdf" ]
119,375,064
1807.05531
3dfb077feb11fce4c32bfd8a5d3605ad2de5d386
PROPER MOTIONS OF SUNSPOT'S UMBRAL DOTS AT HIGH TEMPORAL AND SPATIAL RESOLUTION

Hadis Goodarzi ([email protected])
School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran
Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Maragha, Iran

Serge Koutchmy
Institut d'Astrophysique de Paris, UMR 7091 CNRS and UPMC (Sorbonne University), 98 Bis Bd Arago, 75014 Paris, France

Ali Adjabshirizadeh
Dept. of Astrophysics, Faculty of Physics, Tabriz University, Tabriz, Iran

(Submitted 15 Jul 2018; accepted September 3, 2018; submitted to ApJ. Corresponding author: Hadis Goodarzi.)

Keywords: photosphere, sunspot, umbral dots, proper motions

To deepen the analysis of the photometric properties of the umbra of a sunspot, we study proper motions of small features such as umbral dots (UDs) inside a single sunspot observed by SOT of Hinode close to the disk center. We consider horizontal flows with high precision and detail to study the transient motion behavior of UDs in short time intervals. Blue continuum images were first deconvolved with the PSF, such that the stray light is precisely removed and the original resolution is improved. Several images were co-added to improve the S/N ratio, keeping a reasonable temporal resolution and checking that the results are reproducible. The Fourier local correlation tracking (FLCT) technique is applied to the new corrected time sequence of images, and horizontal velocity maps were obtained both for the whole umbra (16″ × 12″) and for a high-resolution small region of the umbra (3.5″ × 3.5″) to study the smallest details of the velocity fields. We used two different Gaussian tracking windows (0.8 and 0.2 arcsec), which reveals two types of horizontal motions for umbral features.
First, a global inner penumbra and peripheral umbra inward motion directed to the central parts is revealed as an overall proper motion of bright peripheral fine structures. Second, motions matching small cells inside the darkest parts of the umbra show apparent sink and source areas, suggesting possible upflows and downflows appearing in different bright and dark locations, without a definite answer regarding their brightness identification with a convective or a buoyant cell.

INTRODUCTION

The study of sunspots and their fine structures is one of the most important aspects of solar activity physics (Adjabshirzadeh and Koutchmy 1983). The velocity field inside a sunspot presents a significant amount of details presumably related to the origin of the sunspot. Methods like the local correlation tracking technique applied to images obtained in the deep continuum allow one to determine the proper motions in horizontal directions. The analysis of spectral shifts of photospheric absorption lines permits one to determine vertical or line-of-sight motions in higher levels where the lines are formed (Riethmüller et al. 2013; Bharti et al. 2013). The existence of upflows and downflows in the umbra of a sunspot has been debated for several years (Riethmüller et al. 2013; Deinzer 1965; Denker and Verma 2011). For example, Deinzer (1965) concluded that convective motions should contribute to the radiation explaining the brightness of the umbra, and the question of the field-free or of the magnetized nature of these motions is open. The value of the plasma beta inside the umbra is close to unity, making both the mechanism of thermo-dynamical convection and the mechanism of buoyancy candidates for explaining up and down motions. Some evidence for upflows and downflows has been detected in sunspot structures such as umbral dots and light bridges (Rouppe van der Voort et al.; Riethmüller et al. 2013). For instance, Watanabe et al.
(2012) detected clear upflows for UDs using CRISP imaging spectropolarimeter data but could not detect systematic downflows associated with UDs. Also, Riethmüller et al. (2013) used the spectropolarimetric technique on Hinode/SP data by means of a 2D inversion method to deduce line-of-sight velocities and managed to detect upflows and downflows for peripheral UDs; note that this is above the deepest layers forming the continuum radiation that we study here. Because of the small size of the umbral dots, and their varying size and velocity within a few minutes, studying motions of UDs needs high spatial and temporal resolution. So we use the best available observations of the partial Sun, free of turbulent Earth-atmospheric effects, from the Solar Optical Telescope (SOT) onboard the Hinode spacecraft, after significantly enhancing the visibility and contrast by an optimum Max-likelihood deconvolution with the Point Spread Function (PSF) deduced in a preceding paper (Goodarzi et al. (2015), further designated paper I) and improving the S/N ratio by co-adding several images. The Fourier local correlation tracking (FLCT) technique has been used with different time intervals, fields of view and FWHM of the windowing Gaussian function for constructing velocity maps of the sunspot UDs at different resolutions, to see any faint velocity changes in time. This technique was also applied to a granulation field of the nearby photosphere to demonstrate the correlation between converging (c) and diverging (d) proper motions in the cell pattern with downflows of the intergranular parts and upflows of the bright granular parts, respectively. Based on these results, horizontal proper-motion maps can more convincingly also be used to consider vertical motion in the core of the sunspot by inferring sink (convergence) and source (divergence) areas (November 1994; Sobotka et al. 1999), making the same assumption of mass continuity in a stratified medium as for granulation.
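Under that mass-continuity assumption, the sink/source diagnostic reduces to the sign of the horizontal divergence of the measured velocity field. A minimal central-difference sketch (an illustration, not the authors' pipeline; velocities and grid spacing in arbitrary consistent units):

```python
def divergence(vx, vy, dx=1.0):
    """Central-difference div(v) = dvx/dx + dvy/dy on interior grid points.

    vx, vy: 2D lists [ny][nx]. Positive divergence marks a source (inferred
    upflow), negative a sink (inferred downflow), under mass continuity.
    """
    ny, nx = len(vx), len(vx[0])
    div = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            div[j][i] = ((vx[j][i + 1] - vx[j][i - 1])
                         + (vy[j + 1][i] - vy[j - 1][i])) / (2 * dx)
    return div
```

For a synthetic radially diverging flow, vx = x - x_c and vy = y - y_c, every interior point should return div = 2, i.e. a pure source.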
OBSERVATIONS

We used well-focused images recorded with the broadband filter imager (BFI) of the Solar Optical Telescope (SOT) onboard the Hinode spacecraft (Suematsu et al. 2008). Blue continuum images of the sunspot in NOAA AR 10944, observed close to the disk center (heliocentric angle of 4°) on March 1, 2007, were selected. The blue filter has a central wavelength at 4504.5 Å and a bandwidth of 4 Å. Each pixel corresponds to 0.05448″ on the surface of the Sun. The exposure time is 102 ms, the cadence is 6.4 seconds and the field of view is approximately 56″ × 28″. All of the images are first corrected for dark current and flat field, with bad pixels removed, using the fg_prep routine; they are then co-aligned by means of a two-dimensional cross-correlation algorithm involved in the fg_rigidalign routine (both codes are from the SolarSoft library) and then deconvolved with the PSF deduced in the previous paper (Goodarzi et al. 2015). 90 successive blue continuum images taken between 00:19:51 and 00:29:20 with a 6.4 s time interval have been used to consider proper motions of fine structures inside the sunspot umbra at high resolution, while adding images (at least 3 images averaged) to improve the signal-to-noise ratio.

Fourier local correlation tracking technique

In order to track proper motions of UDs, we used the Fourier local correlation tracking (FLCT) method (Fisher and Welsch 2008). This technique permits one to map a 2D velocity field that connects two images taken at two different times, such that this flow field, applied to the scalar field of the first image, produces the result most similar to the second image (Fisher and Welsch 2008). In this technique, it is possible to adjust a threshold value for the intensities used to compute the flow field, so the algorithm will skip pixels with intensities lower than the threshold value (Fisher and Welsch 2008). Note also that since the work on this paper, a more advanced algorithm has appeared, see Asensio Ramos et al.
(2017), where the possibility to compute the maps at 3 different levels is offered, using much more extended computations. Up to now, the velocity field inside and around sunspots was described in several papers where ground-based images were used, see e.g. Molowny-Horas (1994) and Ji et al. (2016). Space data have the great advantage of being free from turbulent Earth-atmosphere image effects, which is very important for a correlation-type analysis. In addition, effects due to scattered light should be corrected. This is especially important when fine and low-intensity structures such as umbral dots are concerned. We described in our previous paper (Goodarzi et al. 2015) how important it is to take into account the far wings of the point spread function (PSF); only when this is accounted for is it possible to remove stray light coming from far distances, indeed from the whole bright solar disk. We performed it by adding a Lorentzian function, which has extended wings, to the Gaussian function that describes the core of the PSF. We also used the limb of the Sun for modeling stray light quite far from the solar disk, instead of the Mercury or Venus disk of limited diameter; this permitted us to go further and compute the coefficients of the Lorentzian and Gaussian functions more accurately. After deducing the PSF precisely, a deconvolution procedure was performed using the iteration method (Goodarzi et al. 2015, 2016). Correction with this method leads to a higher contrast and a deeper evaluation of the images, which are now free of stray light. In order to improve the signal-to-noise ratio, several images (at least 3, up to 10) were co-added and then the FLCT technique was applied to successive averaged images to deduce the velocity fields. We performed our analysis of the sunspot umbra in 2 parts: an overall proper-motion mapping of the whole umbra (including the inward motion of peripheral umbral dots), see Fig.
1, and a mapping of a small selected region, which represents a part of the sunspot umbra, to see the ultimate details of the proper motions. OVERALL PROPER MOTION OF THE WHOLE UMBRA To see the overall proper motions of umbral features we applied the FLCT technique to the images of the sunspot observed on 1 March 2007, after deconvolution and co-adding of the images to improve the signal-to-noise ratio. This part is specially intended to demonstrate that the result of running the FLCT code is altogether in agreement with the results of other studies (for example Molowny-Horas (1994); Sobotka et al. (2008)), although in the details there are many new features. It is also in agreement with the impression given by watching the movie of our deconvolved images, see our paper I. In order to analyze the overall proper motions of umbral features and UDs, we selected the entire field of view for running the FLCT code with a rather large FWHM of the windowing Gaussian function (0.8 arcsec) used for the mapping. Fig. 1 shows the velocity map of the whole sunspot region obtained using deconvolved images, averaging 3 successive images taken with a 6.4 sec interval between frames (19 sec averaging), and then shifting the averaging by 6 minutes, after which another 3 successive images were averaged (without overlapping), in order to follow the motion of the umbral and penumbral features during this 6-minute interval between the two average images. We also imposed a threshold value equal to approximately 15 percent of the average photospheric intensity when running the FLCT code, to select only the behavior of the very bright features, so that the code skips pixels with lower intensities dominated by the noise. For the corrected intensity values that we used, see our paper II (Goodarzi et al. 2016). Inward motions of the peripheral UDs are clearly apparent in this velocity map. Note also, in the granular region of the image, the downflows in the dark intergranular lanes and the upflows in the bright parts, clearly seen all around the sunspot.
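The core idea behind local correlation tracking can be illustrated with a minimal sketch. This is not the actual FLCT code of Fisher and Welsch (2008), which uses Gaussian windowing and sub-pixel cross-correlation in the Fourier domain; here a brute-force integer-shift matcher stands in for the correlation step, and the synthetic images and dot positions are invented for illustration:

```python
# Minimal sketch of local correlation tracking: for a window, find the
# integer shift that best matches image 1 to image 2. FLCT proper uses
# Gaussian windowing and sub-pixel Fourier cross-correlation instead.

def best_shift(img1, img2, max_shift=3):
    """Return the integer (dy, dx) minimizing the mean squared
    difference between img1 shifted by (dy, dx) and img2."""
    ny, nx = len(img1), len(img1[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for y in range(ny):
                for x in range(nx):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < ny and 0 <= x2 < nx:
                        err += (img1[y][x] - img2[y2][x2]) ** 2
                        n += 1
            err /= n
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# A bright "umbral dot" on a dark background, displaced by (1, 2) pixels
# between the two frames; the tracker should recover that displacement.
def frame(cy, cx, size=12):
    return [[1.0 if (y - cy) ** 2 + (x - cx) ** 2 <= 4 else 0.1
             for x in range(size)] for y in range(size)]

img_a = frame(5, 5)
img_b = frame(6, 7)  # dot moved down 1 pixel, right 2 pixels
print(best_shift(img_a, img_b))  # (1, 2)
```

Applied window by window across an image pair, such displacement estimates build up the kind of velocity map shown in Fig. 1, with each shift converted to a velocity using the pixel scale and the time interval between the two images.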
As can be seen from Fig. 1 (best viewed magnified in the digital version of the paper), the result of running the FLCT code on deeply deconvolved images is in overall agreement with Molowny-Horas (1994), Ji et al. (2016) and Sobotka et al. (2008). As Fig. 1 suggests, peripheral UDs show higher horizontal velocities in comparison with central UDs. To compare this quantitatively, two histograms of the velocity values have been prepared: one including both peripheral and central UDs, and one including only the velocities in the central part of the umbra. This is performed using two different threshold values, corresponding to approximately 50 and 25 percent of the average photospheric intensity. As can be seen from the histograms in Fig. 2 and the velocity map of Fig. 1, when peripheral umbral dots are included, higher values of the horizontal velocity (between 400 and 700 m/s) are more likely, which means that features faster than 400 m/s are usually found in the peripheral part of the umbra, where the magnetic field is weaker than inside the core and more horizontal; this is consistent with the results of Sobotka and Puschmann (2009) and also Watanabe et al. (2009). INWARDS MOTION OF PERIPHERAL UDS In velocity maps derived with different methods, peripheral UDs show an inward motion toward the umbra (Hamedivafa 2015). These motions can be clearly seen in Fig. 1, but to examine them in more detail and at a higher resolution, we made a velocity map for a small part of the umbra-penumbra border (also called the periphery of the umbra), using a much smaller FWHM of the windowing Gaussian function (0.2 arcsec). This is made achievable by using accurately deconvolved images free of scattered light (Goodarzi et al. 2015) and by improving the S/N ratio to an optimum level that guarantees the reproducibility of the results, so that the most significant intensity distributions (see the next part) are used.
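The thresholded histogram comparison can be sketched as follows (an illustration with synthetic numbers, not the measured data): pixels below an intensity threshold are masked before the speed histogram is accumulated, mirroring the two threshold choices described above.

```python
import math

def speed_histogram(vx, vy, intensity, threshold, bin_width=0.2):
    """Histogram of |v| (km/s) over pixels brighter than `threshold`.
    Returns {bin_index: count}; bin i covers [i*bin_width, (i+1)*bin_width)."""
    hist = {}
    for row_vx, row_vy, row_i in zip(vx, vy, intensity):
        for ux, uy, inten in zip(row_vx, row_vy, row_i):
            if inten < threshold:
                continue  # skip dark pixels, as the FLCT threshold does
            speed = math.hypot(ux, uy)
            b = int(speed // bin_width)
            hist[b] = hist.get(b, 0) + 1
    return hist

# Synthetic 2x2 field: one bright fast pixel, one bright slow pixel,
# and two dark pixels that the threshold removes.
vx = [[0.6, 0.1], [0.5, 0.1]]
vy = [[0.3, 0.1], [0.0, 0.0]]
inten = [[0.9, 0.8], [0.1, 0.2]]
print(speed_histogram(vx, vy, inten, threshold=0.5))  # {3: 1, 0: 1}
```

Raising the threshold restricts the histogram to the brightest (typically peripheral) features, which is how the two histograms of Fig. 2 differ.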
The selected part is shown with a red arrow in the top part of Fig. 3, using a full deconvolved sunspot image observed on 1 March 2007; this field includes several bright peripheral UDs. The bottom part of Fig. 3 shows the result of applying the FLCT technique to this region (we have 6 consecutive images with 6.4 s cadence; averaging every 3 images leads to two average images, and Fig. 3 is the result of an FLCT run comparing these two average images). As can be seen, peripheral umbral dots move inward toward the umbra, but further details are also revealed, such as the small diverging and/or converging flows that accompany this inward motion. HIGH RESOLUTION VELOCITY MAPS We presented in Fig. 1 the global proper-motion map of the sunspot field including the umbra, the penumbra and also the granulation of the surrounding photosphere, imposing a threshold value in the FLCT code so as to follow only the very bright features, while using a large FWHM of the windowing Gaussian function (0.8 arcsec). Now, to see the details at full resolution, a smaller FWHM of the windowing Gaussian function has been used for the whole umbra without any threshold value. Because we take into account all pixels, including pixels corresponding to low intensities (especially in the darkest parts of the umbra), we have to significantly improve the S/N ratio by co-adding more images (10 images) to reduce the noise that would otherwise contribute to the derived velocity maps. Fig. 4 shows the proper-motion map of the sunspot umbra, obtained after deconvolution of 90 images (following evaluation tests to optimize the procedure), averaging of 10 successive images with a 6.4 sec time interval between them (1 min averaging), shifting the averaging window to the next 10 images with a 3-image overlap, and computing the velocity field from these average images. The averaging procedure continues through all 90 images, leading to 11 resulting velocity maps. Fig.
4 is the average of these 11 velocity maps, corresponding to a duration of 9.6 min. From Fig. 4 it can be seen that [i] peripheral umbral dots show inward motion toward the umbra, with additional details revealing converging and diverging flows; this converging and diverging behavior occurs more clearly at the endpoints of the penumbral filaments and in peripheral UDs, which have a higher S/N ratio than central UDs, consistent with Riethmüller et al. (2013); [ii] in the penumbral part of the image (top left) we see flows that are perpendicular to the bright filament and move across it toward the dark filament; [iii] in the umbra, there is evidence of both converging and diverging horizontal motions everywhere. INTERPRETING UPFLOWS AND DOWNFLOWS FROM CONVERGING AND DIVERGING HORIZONTAL MOTIONS The FLCT technique has been used to deduce the horizontal motions of different solar structures. However, some upflows and downflows can also be inferred from the distribution of converging and diverging arrows, as has been done for the granulation since the early works on SOUP data; see Simon et al. (1989). In order to test this technique for inferring vertical motions, we applied it to a granulation region of the adjacent photosphere surrounding the sunspot, indicated with a blue rectangle in the top part of Fig. 5. This field was subjected to exactly the same processing (cadence, corrections, deconvolution and summing) as the UD field (see below). We assume as usual that bright granules are convective cells. Accordingly, they correspond to upward motions and, conversely, the intergranular space corresponds to downward motions. We also assume that for the small scales corresponding to our analysis, the 5 min oscillations do not perturb the results.
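The continuity argument can be made concrete with a small sketch (an assumed illustration, not code from the FLCT package): the sign of the horizontal divergence of the measured flow field separates diverging "source" cells, candidate upflows, from converging "sink" lanes, candidate downflows.

```python
# Sketch of the continuity argument: the sign of the horizontal divergence
# div v = dvx/dx + dvy/dy flags sources (diverging cells, candidate
# upflows) and sinks (converging lanes, candidate downflows). Grid
# spacing is taken as 1 pixel; central differences in the interior.

def divergence(vx, vy):
    ny, nx = len(vx), len(vx[0])
    div = [[0.0] * nx for _ in range(ny)]
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            dvx_dx = (vx[y][x + 1] - vx[y][x - 1]) / 2.0
            dvy_dy = (vy[y + 1][x] - vy[y - 1][x]) / 2.0
            div[y][x] = dvx_dx + dvy_dy
    return div

# Synthetic flow diverging from the grid center (c, c): v = (x-c, y-c).
# Its analytic divergence is exactly 2, so the interior of the map should
# read 2 everywhere, i.e. a "source" cell in the terminology above.
n, c = 7, 3
vx = [[float(x - c) for x in range(n)] for y in range(n)]
vy = [[float(y - c) for x in range(n)] for y in range(n)]
div = divergence(vx, vy)
print(div[3][3])  # 2.0
```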
Sets of 10 images with 6.4 s cadence have been co-added with a 3-image overlap, and the proper motion between the first two average images (45 s time interval) has been computed and is depicted in Fig. 6. The Gaussian width used for weighting the images is 0.2 arcsec. In this velocity map, we see converging flows in the intergranular parts, while diverging flows appear at several bright points of the granular parts, corresponding to downflows and upflows respectively. We note that these apparent flows could also be due to brightness changes of a coherent but unknown nature; however, it would not be legitimate to ignore the contribution of downflows and upflows to these brightness changes, given the convective phenomena that are today well explained by impressive theoretical simulations, see e.g. Stein and Nordlund (1998). The good correlation of converging lanes with downflows in the intergranular parts, and of diverging cell-like patterns with upflows in the bright granular parts, shows that the FLCT technique permits a valuable evaluation of vertical motions with the parameters of our analysis. PROPER MOTIONS OF UMBRAL DOTS AT ULTIMATE RESOLUTION In the velocity maps of Fig. 3 and Fig. 4 it was clear that using the high quality deconvolved and averaged images reveals more details of the proper motions, and that some diverging and converging areas appear in both the bright and the dark regions. Also, some of the motions change their orientation over short time intervals (less than 1 min), which would be missed in velocity maps derived from images with a longer time interval (Fig. 4 is an average over 9.6 min). To see the maximum of details and the tiny variations of the proper motions at the ultimate resolution of this analysis, a set of new velocity maps was evaluated on a small field of view (3.5 × 3.5 arcsec) and with a short time interval (45 s), after corrections and improving the signal-to-noise ratio.
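The averaging bookkeeping described above can be checked in a few lines (a sketch; the frame counts and cadence are taken from the text): 90 frames at 6.4 s cadence, co-added in 10-frame windows that overlap by 3 frames, give 12 average images roughly 45 s apart, and hence 11 velocity maps from consecutive pairs.

```python
# Bookkeeping for the running averages: windows of 10 frames with a
# 3-frame overlap mean the window start advances by 10 - 3 = 7 frames.
n_frames, window, overlap, cadence = 90, 10, 3, 6.4

step = window - overlap  # 7 frames
starts = list(range(0, n_frames - window + 1, step))
n_averages = len(starts)       # 12 average images
n_maps = n_averages - 1        # 11 velocity maps from consecutive pairs
dt = step * cadence            # 44.8 s, i.e. ~45 s between averages
print(n_averages, n_maps, round(dt, 1))  # 12 11 44.8
```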
For this purpose, we used 90 successive blue filter images with 6.4 s cadence, after deconvolution, covering approximately 9.6 min. Each set of 10 images was co-added with a 3-image overlap in order to improve the signal-to-noise ratio and also to preserve the continuity between images, so 12 average images were deduced with approximately 45 s time interval between them. Then a small field of view that includes a part of the inner penumbra and some peripheral UDs was selected from the sunspot image to see the tiny velocity field attributed to umbral features. This selected part (shown with a red box in the top part of Fig. 5) is specially chosen because of the very typical case of a well observed isolated bright UD (see the red circle in part "e" of Fig. 5), which is approximately round and whose evolution is of particular interest. The Gaussian width used for weighting the images is 0.2 arcsec, which is almost the smallest reasonable width for a tracking window that keeps a reasonable signal-to-noise ratio. From the 12 average images, 11 velocity maps are prepared; a mosaic of them is depicted in Fig. 7, and they are also shown in a short online video (made from 11 frames) which is available in the electronic version of the paper. The bottom part of Fig. 5 shows the time brightness evolution of the selected small part of the umbra with a cadence of 45 s (red box in the top part of Fig. 5), including the interesting case of a well isolated umbral dot indicated with a red circle in "e". This umbral dot first appears faintly (a), then becomes brighter (f, g) and finally fades after approximately 9.6 minutes, while its direction of motion changes with time, which can be seen better at high resolution in the velocity maps of Fig. 7 (or in the movie). (Fig. 6) and a red box showing a selected field of view for computing the horizontal flow field during 9.6 minutes in a small part of the umbra, as illustrated in Fig.
8 Mosaic of the averaged images of a selected part of the sunspot (red box at the top of the image) evolving during 9.6 minutes, with the isolated bright UD of short lifetime shown in window "e". From the velocity maps (Fig. 7 and the movie) two kinds of motions can be distinguished at short time intervals (45 s) and for a small field of view with a much improved resolution: i/ Proper motions toward a preferred direction, for example at position (x,y) = (0.8, 1.4) in part "a" of Fig. 7. Some of these motions can change their trajectory slightly; by tracking the isolated umbral dot (position (x,y) = (1.5, 1.5) in part "a" of Fig. 7) it can be seen that the direction of its horizontal motion changes over a small time interval, but there is a dominant movement toward the umbra. As noted before, the most dominant horizontal motion of the bright points (like the one marked with a red circle in the bottom part of Fig. 5) is toward the umbra, but some small-scale turbulent behavior may also be seen. ii/ Some flows in the form of converging (c) and diverging (d) cells that could correspond to possible downward and upward motions, as deduced from continuity considerations. Outward motions (source areas, where the red arrows tend to point away from each other, see for example Fig. 8) are sometimes seen near some bright points, while there are also cases of outward motions in dark areas. The centers of several diverging velocity patterns ("d" cells) lie somewhat to the side of the UDs' maximum brightness. Since the preferential occurrence of vertical flows in the deep photospheric layers was considered in theoretical works, e.g. Riethmüller et al. (2013), the blue continuum images that we used, which correspond to the deepest layer of the photosphere, are suitable candidates for revealing these flows. The source fields associated with bright UDs do not always overlap the maximum brightness of the UDs, and some displacement can be seen, as also reported in other works.
There are also some sink areas (or downflows) that mostly appear at the periphery of UDs or in the border region between bright features; this is rather consistent with Riethmüller et al. (2013), who found downflows at 200 to 500 km from the UD centers, concentrated on one or two sides of the UDs. Since we could not detect any dark lane inside UDs, it is not possible to follow their endpoints in search of downflows, as Schüssler and Vögler (2006) predicted from their detailed magneto-convective simulations, but we can say that downflows mostly appear only at the sides of the bright features, see also our paper II. In addition, our observations show fast changes, on timescales of about 1 min, which are not seen in published magneto-convection simulations. CONCLUSION We detected the proper motions of umbral fine features from corrected (restored by deconvolution with the measured PSF) blue continuum images; velocity maps made with different time intervals and spatial scales show new features at small scales. The most dominant motion of the fine structures is toward the umbra, but when higher resolution is considered, more details can be seen, such as the complex cells of converging and diverging flows, which could be the sign of vertical motions of the plasma. These possible upflows usually appear near bright features, while evidence of downflows tends to be found at the periphery of bright features or at the borders between them. We could not see any dark lane inside UDs, as predicted by numerical simulations (Schüssler and Vögler 2006; Bharti et al. 2010), but their prediction of upflows and downflows in and around UDs can still be consistent with our observational findings, taking into account [i] the limited resolution of our observations, [ii] the limitation of interpreting our map of horizontal velocities in terms of upflows (sources) and downflows (sinks), and [iii] the approximations used in the simulations.
Our results are complementary to those obtained from the analysis of spectral line shifts in the regions above the continuum layers (e.g. Riethmüller et al. 2013). Furthermore, we provide evidence for systematic fast motions inside the core of the probed sunspot. When a time sequence is considered, the temporal variation of the velocity reveals a sort of turbulent behavior in the trajectory of the bright UDs that was analyzed in detail. Horizontal proper motions that are not reproduced in the recent numerical simulations known to us are evidenced. The example of a dark region of the core showing some diverging cells appearing temporarily is pointed out. Accordingly, it is worth recalling the umbral oscillatory convection studied theoretically by Weiss et al. (1990) in the framework of magneto-convection inside a sunspot. The analysis of a much longer time sequence taken at different formation heights is desirable to go in this direction. Given the improved S/N ratio (Fig. 4 and Fig. 7), a detailed and complete map of the entire umbra including the darkest regions has been obtained. We note that when we point to the systematic motions towards the umbra, this does not necessarily mean a direct movement; some turbulent behavior also exists, revealed by tracking a sequence of high-resolution velocity maps (see also the movie of paper I). Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, and NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). We thank Habib Khosroshahi for providing a critical reading of the paper and our reviewer for meaningful remarks.
This work has been supported financially by the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM) under research project No. 1/5237-69. Figure 1. Proper motion (horizontal velocity component) map of the sunspot observed on March 1, 2007, deduced using the FLCT technique (FWHM of the Gaussian windowing function is 0.8 arcsec). The scale for the velocity is inserted at the bottom left of the figure and corresponds to 2 km/s. Note the systematic inward proper motion of the inner penumbral features and dots (best viewed magnified in the digital version of the paper). Figure 2. Histogram of horizontal velocities (in km/s) inside the sunspot for two different thresholds, 0.25 and 0.52 of the average photospheric intensity, revealing velocity values for only the central region of the umbra (left) and for the whole umbra (right) including the peripheral UDs. Figure 3. Top: sunspot image observed on March 1, 2007 after correction with the deconvolution procedure. The red arrow shows the part of the umbra used to examine in detail the inward motion of the peripheral UDs. Bottom: velocity map for a partial frame of the umbra including peripheral UDs (indicated with the red arrow at the top of the image). FWHM of the Gaussian windowing function is 0.2 arcsec. Peripheral UDs clearly show inward motion toward the umbra, but more details are revealed. Figure 4. Detailed horizontal velocity map including the darkest umbral parts of the sunspot observed on March 1, after significantly improving the S/N ratio (see Section 5). FWHM of the Gaussian windowing function is 0.2 arcsec. The scale inserted at the bottom left corresponds to 200 m/s. Figure 5. Top: deconvolved sunspot image with a blue box showing the field of view selected for making proper-motion maps of the granulation part. Figure 6. Horizontal velocity field computed for the selected granulation part near the sunspot, shown in the blue box in the top part of Fig. 5. FWHM of the Gaussian windowing function is 0.2 arcsec. Figure 7.
Velocity field computed with the FLCT technique for a small field of view (red box in the top part of Fig. 5) and for short time intervals (45 s). FWHM of the Gaussian windowing function is 0.2 arcsec. Red arrows show the direction of the velocity at each point, and their length shows the amplitude of the horizontal component of the velocity relative to the scale inserted in part j (bottom left), which corresponds to 200 m/s. Figure 8. Velocity field computed with the FLCT technique for a small field of view and for short time intervals (45 s), with labels showing converging (C) and diverging (D) flows. FWHM of the Gaussian windowing function is 0.2 arcsec. Adjabshirzadeh, A., Koutchmy, S., 1983, A&A, 122, 1-8. Asensio Ramos, A., Requerey, I. S., Vitas, N., 2017, arXiv:1703.05128v2. Bharti, L., Beeck, B., Schüssler, M., 2010, A&A, 510, A12. Bharti, L., Hirzberger, J., Solanki, S. K., 2013, A&A, 552, L1. Deinzer, W., 1965, ApJ, 141, 548. Denker, C., Verma, M., 2011, Proceedings IAU Symposium No. 273. Fisher, G. H., Welsch, B. T., 2008, Subsurface and Atmospheric Influences on Solar Activity, ASP Conference Series, Vol. 383. Goodarzi, H., Koutchmy, S., Adjabshirizadeh, A., 2015, Ap&SS, 358, 25. Goodarzi, H., Koutchmy, S., Adjabshirizadeh, A., 2016, Ap&SS, 361, 366. Hamedivafa, H., 2015, Iranian Journal of Astronomy and Astrophysics, 2, 1. Ji, K., Jiang, X., Feng, S., Yang, Y.,
et al., 2016, SoPh, 291, 357-369. Molowny-Horas, R., 1994, SoPh, 154, 29-39. November, L. J., 1994, SoPh, 154, 1. Ortiz, A., Bellot Rubio, L. R., Rouppe van der Voort, L., 2010, ApJ, 713, 1282. Riethmüller, T. L., Solanki, S. K., van Noort, M., Tiwari, S. K., 2013, A&A, 554, A53. Simon, G. W., November, L. J., Ferguson, S. H., Shine, R. A., Tarbell, T. D., Title, A. M., Topka, K. P., Zirin, H., 1989, in Solar and Stellar Granulation, Proceedings of the 3rd International Workshop of the Astronomical Observatory of Capodimonte (OAC 3) and the NATO Advanced Research Workshop on Solar and Stellar Granulation, June 21-25, Capri, Italy, ed. R. J. Rutten and G. Severino, NATO ASI Series C, Vol. 263, Dordrecht: Kluwer, p. 371. Stein, R. F., Nordlund, A., 1998, ApJ, 499, 914. Tsuneta, S., Ichimoto, K., Katsukawa, Y., et al., 2008, SoPh, 249, 167-196. Rouppe van der Voort, L., Bellot Rubio, L. R., Ortiz, A., 2010, ApJ, 718, L78. Schüssler, M., Vögler, A., 2006, ApJ, 641, L73. Sobotka, M., Vázquez, M., Bonet, J., Hanslmeier, A., Hirzberger, J., 1999, ApJ, 511, 436.
Sobotka, M., Puschmann, K. G., Hamedivafa, H., 2008, Central European Astrophysical Bulletin, 32, 125. Sobotka, M., Puschmann, K. G., 2009, A&A, 504, 575. Suematsu, Y., Tsuneta, S., Ichimoto, K., Shimizu, T., Otsubo, M., Katsukawa, Y., Nakagiri, M., Noguchi, M., Tamura, T., Kato, Y., et al., 2008, SoPh, 249, 197-220. Watanabe, H., Kitai, R., Ichimoto, K., 2009, ApJ, 702, 1048. Watanabe, H., Bellot Rubio, L. R., de la Cruz Rodríguez, J., Rouppe van der Voort, L., 2012, ApJ, 757, 49. Weiss, N. O., Brownjohn, D. P., Hurlburt, N. E., Proctor, M. R. E., 1990, MNRAS, 245, 434-452.
[]
[ "Cosmology of Nonlinear Oscillations", "Cosmology of Nonlinear Oscillations" ]
[ "Stephen D H Hsu *[email protected] \nDepartment of Physics\nUniversity of Oregon\n97403-5203EugeneOR\n" ]
[ "Department of Physics\nUniversity of Oregon\n97403-5203EugeneOR" ]
[]
The nonlinear oscillations of a scalar field are shown to have cosmological equations of state with w = p/ρ ranging from −1 < w < 1. We investigate the possibility that the dark energy is due to such oscillations.
10.1016/j.physletb.2003.05.001
[ "https://arxiv.org/pdf/astro-ph/0305096v2.pdf" ]
14,116,608
astro-ph/0305096
31fdbe46f786f89d61a012c0ff83f0514f216b9f
Cosmology of Nonlinear Oscillations. May 2003. Stephen D. H. Hsu ([email protected]), Department of Physics, University of Oregon, Eugene, OR 97403-5203 (arXiv:astro-ph/0305096v2). The nonlinear oscillations of a scalar field are shown to have cosmological equations of state with w = p/ρ ranging from −1 < w < 1. We investigate the possibility that the dark energy is due to such oscillations. Astrophysical data support the existence of dark energy [1,2]. Since many proposed solutions of the cosmological constant problem lead to exactly zero vacuum energy for empty space, it is natural to consider so-called quintessence models in which the dark energy is comprised of some scalar field which is slowly evolving towards its minimum [2]. The main objections to these models are their typically unnatural potentials, and that they require the suppression of higher dimension operators likely to be induced by quantum gravity [3]. In this letter we investigate a qualitatively different idea: that the dark energy is due to (possibly rapid) nonlinear oscillations rather than slow evolution on cosmological timescales. We consider oscillations in scalar models L = (1/2)(∂_µ z)² − V(z) , (1) where the potential V(z) = a|z|^l near its minimum. Potentials with l < 2 are particularly interesting, as we will see below that they yield an equation of state w = p/ρ < 0. This form of V(z) may appear odd, but a change of variables to, e.g., φ = |z|^{l/2} yields L = (1/2) K(φ)(∂_µ φ)² − aφ² , (2) with K(φ) = (2/l)² φ^{(4−2l)/l}. A kinetic term of this type can be obtained from the Kähler potential in supersymmetric models. We focus here on the classical behavior of z, but its quantization when l < 2 warrants further investigation. It has the unusual property that there are no perturbative degrees of freedom; that is, small oscillations about z = 0 have infinite frequency, since V″(z = 0) does not exist.
Only large (non-infinitesimal) z oscillations can have finite energy density. This may lead to a number of interesting features: one might expect that z decay and production rates, as well as radiative corrections, only arise from nonperturbative effects and are exponentially small. The redshift of a field undergoing nonlinear oscillations can be calculated through its average equation of state and depends on the ratio w = p/ρ. From the scalar field equations in a Robertson-Walker universe, one obtains ρ(t) = ρ_0 (R_0/R(t))^{3(1+w)} , (3) where R is the Robertson-Walker scale factor. The scalar field energy redshifts like radiation when w = 1/3, like matter when w = 0, etc. It remains to calculate the equation of state for nonlinear oscillations. We note that p = T − V and ρ = T + V, where T is the kinetic energy density and V is the potential energy density. We calculate the relation between T and V averaged over an oscillation period, which is much smaller than the cosmological timescale except during the very first oscillations, which begin when the age of the universe is of order V″(z)^{−1/2}. We define ⟨T⟩ = (1/2) ∫ dt ż² (4) and ⟨V⟩ = ∫ dt z^l , (5) where each integral is taken over the same period with boundary conditions ż(0) = ż(τ) = 0 (or equivalently z = 0 at the endpoints) and we have adopted units in which the overall scale of the potential is unity. Since d/dt (z ż) = ż² + z z̈ , (6) and the equation of motion is z̈ = −l z^{l−1}, we can rewrite the average potential energy as ⟨V⟩ = (1/l) ∫ dt ż² = (2/l) ⟨T⟩ . (7) This yields w = (l − 2)/(l + 2) , (8) with −1 < w < 1. In potentials with l < 2, the average potential energy dominates over the kinetic part, and the pressure is negative. In the limit l → 0 these oscillations behave like a cosmological constant. In higher order potentials (l > 2), the situation is reversed, leading asymptotically to w = 1 as l → ∞, or ρ ∼ 1/R^6.
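The period-averaged relation ⟨V⟩ = (2/l)⟨T⟩ behind Eq. (7) can be verified numerically (an independent sanity check, not part of the original letter): integrating z̈ = −l |z|^{l−1} sign(z) from a turning point down to z = 0 and accumulating the time-integrated kinetic and potential energies gives the full-period averages, since the four quarter-period segments of the symmetric potential are identical up to time reversal.

```python
# Numerical sanity check of <V> = (2/l) <T> for z'' = -l |z|^(l-1) sign(z)
# (units with a = 1). Starting from a turning point (z = 1, zdot = 0),
# time averages over the quarter period down to z = 0 equal the
# full-period averages by symmetry of the potential.

def averages(l, dt=1e-4):
    z, v, t_sum, v_sum = 1.0, 0.0, 0.0, 0.0

    def acc(z):
        return -l * abs(z) ** (l - 1) * (1.0 if z >= 0 else -1.0)

    while z > 0.0:
        t_sum += 0.5 * v * v * dt   # kinetic energy T = zdot^2 / 2
        v_sum += abs(z) ** l * dt   # potential energy V = |z|^l
        # one RK4 step for the pair (z, v)
        k1z, k1v = v, acc(z)
        k2z, k2v = v + 0.5 * dt * k1v, acc(z + 0.5 * dt * k1z)
        k3z, k3v = v + 0.5 * dt * k2v, acc(z + 0.5 * dt * k2z)
        k4z, k4v = v + dt * k3v, acc(z + dt * k3z)
        z += dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return t_sum, v_sum

l = 4
T, V = averages(l)
w = (T - V) / (T + V)
print(round(T / V, 3), round(w, 3))  # expect T/V ≈ l/2 = 2 and w ≈ 1/3
```

For the quartic case l = 4 this recovers w = (l − 2)/(l + 2) = 1/3, i.e. the oscillations redshift like radiation, as stated in the text.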
These oscillations redshift away rapidly, although it was noted in [4] that the large l behavior of w = 1 can never be achieved, due to an instability to a non-oscillatory scaling solution. Given a periodic solution to the z equations of motion, one can obtain a rescaled solution via z(t) → a^{2/(l−2)} z(at) . (9) For l < 2, the frequency of oscillation goes to infinity as the amplitude goes to zero. Note, however, that the average energy density goes to zero in this limit. An advantage of nonlinear oscillation models of quintessence is that the potential V(z) need not be characterized by the size of the current dark energy density ∼ (10^{−3} eV)^4, nor be fine-tuned to be flat (i.e., have curvature of order the inverse horizon size squared ∼ (10^{−33} eV)^2). Rather, the potential can be characterized by a larger energy scale more familiar to particle physics, with no small dimensionless parameters. The smallness of the energy density today relative to this scale could be explained by a small z oscillation amplitude. One scenario that would result in a small oscillation amplitude is if the original energy density in the z field were diluted away by inflation, and the subsequent reheat temperature insufficient to repopulate it. This is quite plausible if the couplings of the inflaton and ordinary matter fields to the z field are small, for example suppressed by the Planck scale if z is a hidden sector field¹. The current z energy density depends on the number of e-foldings N during inflation: ρ_today ∼ ρ_i [2 · 10^{−5} e^{−N} T_md/T_rh]^ν , (10) where ν = 3(1 + w), T_md ∼ 5 eV is the temperature at which the universe becomes matter dominated and T_rh is the reheat temperature after inflation. For example, using ν = 1/2 (or w = −5/6, consistent with the WMAP limit of w < −0.78 [1]), T_rh = 5 · 10^{10} GeV and ρ_i the Planck energy density, we find that N ∼ 510 in order that ρ_today ∼ (10^{−3} eV)^4.
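The quoted e-folding numbers can be checked with a short calculation. This is a sketch under our reading of the (garbled) dilution relation, taken here as ρ_today ∼ ρ_i (2·10^{−5} e^{−N} T_md/T_rh)^ν, and the "Planck energy density" is assumed to mean (1.2·10^{19} GeV)^4; with those assumptions, solving for N reproduces the text's N ∼ 510 and N ∼ 370 to within a couple of percent.

```python
# Solving the assumed dilution relation
#   rho_today ~ rho_i * (2e-5 * exp(-N) * T_md / T_rh)**nu
# for the number of e-foldings N:
#   N = ln( 2e-5 * (T_md / T_rh) * (rho_i / rho_today)**(1/nu) )
import math

def efolds(rho_i, rho_today, nu, T_md, T_rh):
    return math.log(2e-5 * (T_md / T_rh) * (rho_i / rho_today) ** (1.0 / nu))

nu = 0.5                   # i.e. w = -5/6, since nu = 3(1 + w)
T_md, T_rh = 5e-9, 5e10    # GeV: matter-domination and reheat temperatures
rho_today = (1e-12) ** 4   # GeV^4, i.e. (1e-3 eV)^4

# rho_i at the (assumed) Planck density and at an intermediate scale:
N_planck = efolds((1.2e19) ** 4, rho_today, nu, T_md, T_rh)  # ~510
N_interm = efolds((1e11) ** 4, rho_today, nu, T_md, T_rh)    # ~370
print(round(N_planck), round(N_interm))
```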
For smaller ρ_i ∼ (10^{11} GeV)^4, appropriate for intermediate scale inflation, we find N ∼ 370. Note that, while the scenario described here explains the small dark energy density today, it does not address the question of coincidence: why is ρ_z of order ρ_critical today? If the energy scale characterizing V(z) is small (within several orders of magnitude of an electron volt), little dilution is necessary, and it could be provided by the expansion of the universe after inflation. If z originates from the scalar component of a chiral superfield Φ, the energy scale of V(z) (i.e., the parameter a in (2)) is protected by supersymmetry from radiative corrections. A Yukawa coupling between z and its superpartner can be excluded by imposing Φ → −Φ symmetry in the superpotential, thereby stabilizing z. To conclude, we find that realistic quintessence models based on nonlinear oscillations can be constructed without any fine-tuning of fundamental parameters. These models will be disfavored if future data show that w = −1. Note added: After this work was completed, we learned that the result (8) for the equation of state was previously derived by M. Turner in [5]. Also, models based on a hyperbolic cosine potential raised to a fractional power have been considered in [6], which at late times exhibit oscillations of the type considered here. ¹ Some reheating of the z field is inevitable, even if its couplings to the inflaton are very small. However, the thermal z bosons produced do not necessarily contribute to the coherent oscillations studied here; their energy redshifts away more rapidly. Acknowledgements: We would like to thank D. K. Hong and A. Zee for useful comments. This work was supported in part under DOE contract DE-FG06-85ER40224. [1] For a determination of cosmological parameters from WMAP data, see D. N. Spergel, et al., arXiv:astro-ph/0302209.
[2] For a review of dark energy and quintessence, see, e.g., P. J. Peebles and B. Ratra, Rev. Mod. Phys. 75, 599 (2003) [arXiv:astro-ph/0207347]. [3] S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998) [arXiv:astro-ph/9806099]. [4] A. R. Liddle and R. J. Scherrer, Phys. Rev. D 59, 023509 (1999) [arXiv:astro-ph/9809272]. [5] M. S. Turner, Phys. Rev. D 28, 1243 (1983). [6] V. Sahni and L. M. Wang, Phys. Rev. D 62, 103517 (2000) [arXiv:astro-ph/9910097]; V. Sahni, M. Sami and T. Souradeep, Phys. Rev. D 65, 023518 (2002) [arXiv:gr-qc/0105121]; M. Sami and T. Padmanabhan, Phys. Rev. D 67, 083509 (2003) [arXiv:hep-th/0212317]; M. Sami and N. Dadhich, arXiv:hep-th/0304187.
[]
[ "ENTITY-ASSISTED LANGUAGE MODELS FOR IDENTIFYING CHECK-WORTHY SENTENCES", "ENTITY-ASSISTED LANGUAGE MODELS FOR IDENTIFYING CHECK-WORTHY SENTENCES" ]
[ "Ting Su \nUniversity of Glasgow Glasgow\nUK\n", "Craig Macdonald \nUniversity of Glasgow Glasgow\nUK\n", "Iadh Ounis [email protected] \nUniversity of Glasgow Glasgow\nUK\n" ]
[ "University of Glasgow Glasgow\nUK", "University of Glasgow Glasgow\nUK", "University of Glasgow Glasgow\nUK" ]
[]
We propose a new uniform framework for text classification and ranking that can automate the process of identifying check-worthy sentences in political debates and speech transcripts. Our framework combines the semantic analysis of the sentences with additional entity embeddings obtained through the identified entities within the sentences. In particular, we analyse the semantic meaning of each sentence using state-of-the-art neural language models such as BERT, ALBERT, and RoBERTa, while embeddings for entities are obtained from knowledge graph (KG) embedding models. Specifically, we instantiate our framework using five different language models, entity embeddings obtained from six different KG embedding models, as well as two combination methods, leading to several Entity-Assisted neural language models. We extensively evaluate the effectiveness of our framework using two publicly available datasets from the CLEF' 2019 & 2020 CheckThat! Labs. Our results show that the neural language models significantly outperform traditional TF.IDF and LSTM methods. In addition, we show that the ALBERT model is consistently the most effective model among all the tested neural language models. Our entity embeddings significantly outperform other existing approaches from the literature that are based on similarity and relatedness scores between the entities in a sentence, when used alongside a KG embedding.

Keywords: Knowledge graph embeddings · Language models · Entity embeddings · fake news detection
10.48550/arxiv.2211.10678
[ "https://export.arxiv.org/pdf/2211.10678v1.pdf" ]
253,734,980
2211.10678
acf0ee88f83b5b5c4da2e029bf9d41c612395dff
ENTITY-ASSISTED LANGUAGE MODELS FOR IDENTIFYING CHECK-WORTHY SENTENCES

Ting Su, University of Glasgow, Glasgow, UK; Craig Macdonald, University of Glasgow, Glasgow, UK; Iadh Ounis ([email protected]), University of Glasgow, Glasgow, UK

Keywords: Knowledge graph embeddings · Language models · Entity embeddings · fake news detection

We propose a new uniform framework for text classification and ranking that can automate the process of identifying check-worthy sentences in political debates and speech transcripts. Our framework combines the semantic analysis of the sentences with additional entity embeddings obtained through the identified entities within the sentences. In particular, we analyse the semantic meaning of each sentence using state-of-the-art neural language models such as BERT, ALBERT, and RoBERTa, while embeddings for entities are obtained from knowledge graph (KG) embedding models. Specifically, we instantiate our framework using five different language models, entity embeddings obtained from six different KG embedding models, as well as two combination methods, leading to several Entity-Assisted neural language models. We extensively evaluate the effectiveness of our framework using two publicly available datasets from the CLEF' 2019 & 2020 CheckThat! Labs. Our results show that the neural language models significantly outperform traditional TF.IDF and LSTM methods. In addition, we show that the ALBERT model is consistently the most effective model among all the tested neural language models. Our entity embeddings significantly outperform other existing approaches from the literature that are based on similarity and relatedness scores between the entities in a sentence, when used alongside a KG embedding.

Recently, the identification of these so-called most check-worthy sentences has gained increased attention.
For example, the ClaimBuster system [1] was trained to label sentences in a news article as "non-factual", "unimportant factual", or "check-worthy factual". Moreover, the recent CLEF' 2019 & 2020 CheckThat! Labs [2, 3] were introduced as shared evaluation forums where participants were tasked to rank sentences based on their estimated check-worthiness. While the CLEF' 2020 CheckThat! Lab also includes a task that aims to identify suspicious tweets on the Twitter platform, in this work we focus on debates and speeches. The reason is that tweets have distinct text features compared to speeches and debates, as well as social network features that speeches and debates do not have. Thus, we consider identifying check-worthy tweets to be outside the scope of our task. As is common in recent years, the top-performing participants applied neural language models (LMs), including long short-term memory (LSTM) models using pre-trained word embedding approaches [4, 5, 6], to represent sentences. (Figure 1: An example showing that entities can be informative in identifying check-worthy sentences.) On the other hand, to make a claim is to assert that something is true, and an assertion commonly has the form X verb Y [7]; we therefore define a claim as conveying that X verb Y is true, where the object of the claim is often an entity [8]. Moreover, claims made by politicians during debates often contain information about established entities (for instance, entities that are documented in Wikipedia) [9]. Thus, we focus on established entities in the claim in our check-worthiness identification task, as established entities can be verified against documented information, such as knowledge graphs. Knowledge graphs (KGs) are a useful source of information about entities, particularly how they relate to each other. Typically, in a knowledge graph, entities and their relationships are represented using a triplet structure ⟨entity_1, relation, entity_2⟩. An example of such a triplet is ⟨Arizona, a_state_of, the_United_States⟩. Su et al. [10] demonstrated that enriching the sentence representation with the similarity and relatedness scores of the entities in that sentence can significantly improve the identification of check-worthy sentences, compared to using the text representation alone.
Su et al.[10]demonstrated that enriching the sentence representation with the similarities and relatedness scores of the entities in that sentence, can significantly improve the identification of the check-worthy sentences than using text representation alone. Recent works[11,12,13]have shown that learned embeddings can be derived from KGs, allowing for the advantages of word embeddings to be applied to the entities found in the KGs.1. We represent sentences using language models that go beyond the bag of words and LSTM methods, by leveraging the latest developments in deep neural language models (BERT[15], ALBERT [16], or RoBERTa [17]); 2. We extend the method of incorporating entity information within sentences, from using the simple similarity and relatedness scores between the entities in a sentence [10] to a more sophisticated entity representation obtained from KG embeddings.3. Our proposed framework does not require the joint training of the language model and the entity representations (for example as done in ERNIE[14]), thereby providing greater flexibility for instantiating and deploying the framework in fact-checking tasks.Entity-Assisted Language Models for Identifying Check-worthy SentencesWe instantiate our proposed framework into several Entity-Assisted neural language models, by concatenating the language representation obtained from a state-of-the-art pre-trained language model (e.g. BERT [15], ALBERT [16], or RoBERTa [17]), with embedded representations for each pair of entities present in a sentence, to represent the sentence's semantic information as well as its entity-related information. Thus, the contributions of this paper are four-fold:1. We propose a simple yet powerful framework to represent sentences with rich entity information, by concatenating together a text model representation with entity pair representations.2. Using the CLEF 2019 and 2020 CheckThat! 
Lab datasets, we show that our generated Entity-Assisted neural language models significantly outperform the existing state-of-the-art approaches in the classification task, as well as outperform the participating groups on the CLEF CheckThat! leader board in the ranking task.
3. We show that representing entity pairs with embeddings is significantly more effective than an existing recent technique from the literature that leverages the similarities and relatedness of the entities.
4. Finally, our findings show that among the various knowledge graph embedding models, ComplEx [13] leads to the best results, for instance achieving results as good as the best performing system submitted to the CLEF 2019 CheckThat! Lab, without the need for labour-intensive feature engineering.

The rest of the paper is structured as follows: We review the literature related to language representation models, knowledge graph embeddings, as well as the claim check-worthiness task in Section 2. In Section 3, we state the task problem, along with our proposed model to address the task. We present our experimental setup in Section 4, and show the results of the experiments in Section 5. Finally, we provide concluding remarks in Section 6.

Introduction

Politics is an important part of our daily lives. The general public has the right to hold politicians accountable when they provide information through debates with their opponents, or when giving speeches. However, general citizens usually do not have enough time to fact-check every claim a politician makes, and the vast number of possible claims made by politicians during their debates and speeches may overwhelm even professional journalists or the fact-checkers of fact-checking organisations (such as Snopes.com and Politifact.org).
Indeed, it is common for political debates and speeches to include a mix of factual and non-factual information, making it possible for the fact-checkers and targeted audiences to be deceived, while making it harder for systems to identify the fake information within the entire debate transcript. To reduce the time and effort required for fact-checking, it is common for fact-checkers and systems to focus their efforts on those dubious claims most likely to matter to the targeted audience. This paper describes an automatic system that analyses a debate or a speech by extracting and ranking all sentences that are worth checking, before flagging them to the users in order of their likelihood of being suspicious or fake. Motivated by advances in both language modelling and KG modelling, a recent model, ERNIE [14], demonstrated that by jointly training the language representation and the entity representation, it is possible to leverage the information from both the language representation and the entities, thus benefiting a wide range of tasks that require both text and entity information. Building on this work, we hypothesise that the embedded entity vectors obtained from KG embeddings (which we call entity embeddings) can improve the identification and ranking of check-worthy sentences. For example, Figure 1 shows an example where the two entities highlight the important components of the sentence, thus helping to identify the sentence as check-worthy. We propose a novel framework, which combines recent neural language models with an entity pair representation for each pair of entities in the sentence, where the entity pair representation is obtained by adequately combining two entity embeddings extracted from an embedded Knowledge Graph (KG). Our proposed framework allows us to capture rich information about both the language and the entities present in each sentence, thereby allowing us to better predict their likelihood of being check-worthy.
Our framework can be uniformly instantiated to tackle the check-worthiness task either as a text classification or a ranking task, thereby providing a general and flexible solution for identifying sentences of interest to users. Compared to previous work, our proposed framework has three salient aspects, as listed above.

Related Work

In the following, we provide an overview of the key related approaches in language models (Section 2.1), knowledge graph embeddings (Section 2.2) and check-worthiness identification (Section 2.3).

Language Models

There are several commonly used methods for representing text, including bag-of-words (BoW) (e.g. TF.IDF) [18], parts-of-speech (POS) [19], and word embedding (e.g. Word2Vec) [20] representations. Building upon word embeddings, deeper long short-term memory (LSTM) neural networks (NNs) are now commonly used to represent text in a sequential manner, capturing the semantic meaning of the text based on previous tokens. LSTMs encapsulate information about previous tokens, whereas Word2Vec representations are usually based on small skip-grams or windows of tokens. Such models do not take the future context into account when learning to predict language. To address this disadvantage, researchers have developed bidirectional LSTMs (BiLSTMs) and attention-based neural network models (such as BERT [15], ALBERT [16], and RoBERTa [17]) that not only capture both previous and future tokens in a sentence, but also use the attention mechanism to identify relevant contexts within or between sentences. Moreover, the aforementioned models combine the advantages of a large complex neural network with pre-training on a large corpus, to create pre-trained neural language models (e.g. BERT is trained on Wikipedia, BookCorpus, and Common Crawl [15]), where the subjective bias from any small training dataset is minimised.
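As a concrete reference point for the BoW baseline mentioned above, here is a minimal TF.IDF sketch in pure Python. The exact weighting variant (raw term frequency, idf = log(N/df) + 1) and the function name are assumptions of this sketch; library implementations such as scikit-learn differ in details such as smoothing and normalisation.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: tf.idf weight} dict per document.

    Uses raw term frequency and idf = log(N / df) + 1 (an assumed variant)."""
    n = len(docs)
    # Document frequency: in how many documents each term occurs.
    df = Counter(t for doc in docs for t in set(doc.lower().split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.lower().split())
        vectors.append({t: c * (math.log(n / df[t]) + 1) for t, c in tf.items()})
    return vectors

sentences = [
    "Arizona is a state of the United States",
    "The senator made a claim about Arizona",
]
vecs = tfidf(sentences)
# "arizona" occurs in both sentences, so it is down-weighted relative to "senator".
assert vecs[1]["senator"] > vecs[1]["arizona"]
```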
Since they are based on a state-of-the-art pre-trained language model architecture that can be modified, and have been shown to consistently outperform other models in many tasks [21, 22], we use BERT-related language models (i.e., BERT, ALBERT, and RoBERTa) as the base language models in our experiments.

Knowledge Graph Embedding Models

A knowledge graph (KG) usually contains entities (nodes) and finitely many types of relationships (different types of edges) between two entities, and can be viewed as a multi-relational graph. Each edge in the KG is represented by a triplet e = ⟨e_h, r, e_t⟩, indicating that the head entity e_h and tail entity e_t are connected by relation r, e.g., ⟨Donald_Trump, NomineeOf, United_States_presidential_election_2016⟩. Such a representation of the structured data is effective at representing factual and trackable relationships, and thus can facilitate fact-checking processes (e.g., [23]). However, a KG is relatively hard to embed into a lower-dimensional vector space. There are many existing approaches [12, 24, 25, 26] that learn embeddings from KGs, by training an unsupervised model based on the co-occurrence of entity pairs and relations. Generally, two types of models are widely used to train KG embeddings: distance-based "facts alone" models [12, 26, 27] are trained on a triplet graph alone (such as FB15k [12]), while semantic-based entity embeddings [24, 25] also use the information contained in the corresponding entity descriptions (e.g. Wikipedia pages). We describe these two types of models in turn in the next two subsections.

Facts Alone KG Embeddings

There are two facts-alone knowledge bases that are widely used for training KG embeddings, namely FB15k and WN18 [12]. The structure of a pure triplet (i.e., a triplet of the form e = ⟨e_h, r, e_t⟩, without any additional descriptions for e_h, r, e_t) in such knowledge bases enables a KG to represent information in a hierarchical and graphical manner.
Such triplets can be embedded in a lower-dimensional space using graph embeddings, where the scoring functions are generally learned from distances between entities and relationships. Specifically, the Euclidean distance between entities is used to project the entities based on their relationship with one another, whether the entities and relationships are translated into the same vector space (e.g., TransE [12]) or into different spaces (e.g., TransR [28]), or projected into different vector spaces with tensor factorisation (e.g., RESCAL [25], DistMult [29]). The advances in deep neural networks have also encouraged researchers to deploy deep neural networks on graph-structured data, such as data encapsulated in a KG. For example, Li and Madden [30] combined the graph embedding method node2vec [31] with the cascade embedding method, achieving better performance at predicting triplets than using TransE alone. Recently, complex space embeddings have also been applied to KG embeddings (e.g., ComplEx [13], RotatE [32], QuatE [33]), where the complex-valued embedding allows the binary relationship embeddings to represent both symmetric and asymmetric relationships (e.g., ⟨Stanley Kubrick, directed, Dr. Strangelove⟩ (asymmetric) cannot be represented as ⟨Dr. Strangelove, directed, Stanley Kubrick⟩, while ⟨Barack Obama, married to, Michelle Obama⟩ (symmetric) can also be represented as ⟨Michelle Obama, married to, Barack Obama⟩). Finally, the hyperbolic space, with its ability to represent discrete trees in a continuous analogue, has been used for modelling a KG (e.g., MuRP, RotE and RotH [27]), where the multiple possible hierarchical relations of one entity can be modelled simultaneously, resulting in fewer dimensions for the hyperbolic embeddings and thereby achieving better performance than the Euclidean distance methods.
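To make the distance-based and complex-valued scoring functions concrete, here is an illustrative sketch of the TransE and ComplEx scoring functions over toy embeddings. The vectors below are invented for illustration (real models learn them from a KG such as FB15k), and the function names are this sketch's own.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE: a plausible triplet <h, r, t> satisfies h + r ≈ t,
    so it is scored by the negated Euclidean distance."""
    return -np.linalg.norm(h + r - t)

def complex_score(h, r, t):
    """ComplEx: Re(sum(h * r * conj(t))) over complex-valued embeddings;
    a single relation embedding can then model asymmetric relations."""
    return float(np.real(np.sum(h * r * np.conj(t))))

# Toy 2-d real embeddings: t_good sits exactly at h + r, t_bad does not.
h, r = np.array([1.0, 0.0]), np.array([0.0, 1.0])
t_good, t_bad = np.array([1.0, 1.0]), np.array([-1.0, 0.5])
assert transe_score(h, r, t_good) > transe_score(h, r, t_bad)

# Toy 1-d complex embeddings: swapping head and tail changes the score,
# so <h, directed, t> and <t, directed, h> need not be equally plausible.
hc, rc, tc = np.array([1 + 1j]), np.array([1j]), np.array([2 + 0j])
assert complex_score(hc, rc, tc) != complex_score(tc, rc, hc)
```

The second assertion illustrates exactly the symmetry point made above: with a purely real relation embedding the ComplEx score would be symmetric in h and t, while an imaginary component makes it order-sensitive.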
Aside from the general knowledge bases, specific knowledge bases have been developed to facilitate the KG embedding of specialised knowledge. For example, museum information and other unregimented data can be converted into e = ⟨e_h, r, e_t⟩ triplets [34], while crime-related information can be extracted from newspapers to build a speciality knowledge base for criminology [35].

Semantic-based KG Embeddings

Some knowledge bases (e.g., DBpedia) contain more information than just triplets of entities and relationships (e.g. text descriptions for entities, relationships, and their possible features, such as ⟨Stanley Kubrick, directed, Dr. Strangelove, a comedy/war movie⟩). Hence, a semantic analysis of the available descriptive texts allows algorithms to better capture each entity and its meaning, where a hyperlink between entities serves as a relationship between the two linked entities. To this end, jointly training an entity embedding with semantic embeddings can benefit both. Researchers have explored traditional machine learning methods on jointly trained embeddings, such as random walks [36]. He et al. [37] used deep neural networks to compute representations of entities and contexts of mentions from the KB, while Yamada et al. [11] used a skip-gram method trained on Wikipedia data to obtain the entity embeddings and the associated word embeddings. More recently, some researchers (e.g., ERNIE [14], KnowBERT [38]) have explored jointly training knowledge graph embeddings along with a BERT language model, showing promising results in several downstream tasks. Bosselut et al. [39] explored whether using the attention mechanism (similar to that used for training the BERT model) to enrich a knowledge base embedding with "common sense knowledge" embedded in text content is beneficial for more complete KG embeddings.
The resulting model, named Comet, puts more emphasis on general information represented as entities (e.g., ⟨nap, having sub-event, dozing off⟩). The question of whether the embedded entities are beneficial to suspicious claim and/or fake news detection has not yet been studied. For example, the aforementioned Comet model focuses on representing common knowledge instead of entities, which is not suitable for our task. Hence, we do not experiment with this KG embedding model in our present study. Moreover, deploying a joint training of the KB embeddings with the language model may result in less accurate entity embedding information, as well as increase the needed training effort. Hence, in this paper, we propose to combine KG embeddings with both neural and BoW language representations, to ascertain whether using additional entity information from text enhances the identification of suspicious claims for fact-checking. In the following, we review existing related work about fact-checking and suspicious claim identification.

Check-Worthiness

Identifying check-worthy sentences is a task that identifies the most suspicious and potentially damaging sentences from a given news article or a political debate that has been divided into sentences. The identified check-worthy sentences should then be prioritised in the fact-checking process. The ClaimBuster system [1] was the first work to target the assessment of the check-worthiness of sentences. It was trained on data manually labelled as "non-factual", "unimportant factual", or "check-worthy factual", and deployed SVM classifiers with features such as sentiment, TF.IDF, POS, and named entity linking (NEL). Focusing on debates from the US 2016 Presidential Campaign, Gencheva et al. [40] found that if a sentence is an interruption by one participant in the middle of a long speech by another participant, it was more likely to be selected as check-worthy by at least one news organisation.
There are many follow-up works [41, 42, 43] that have focused on deploying different learning strategies (e.g. neural networks, SVMs with various features) to reproduce the check-worthy sentence selection process of a news organisation. In the 2019 and 2020 editions of the CLEF CheckThat! Lab, datasets of check-worthy sentences from the 2016 US presidential debates were provided; these were used for 2019 Task 1 [2] and 2020 Task 5 [3]. The top 5 performing groups in the official leader board of the CLEF' 2019 CheckThat! Lab are Copenhagen [4], TheEarthIsFlat [6], IPIAN [44], Terrier [10], and UAICS [45]. The top 3 performing groups in the official leader board of the CLEF' 2020 CheckThat! Lab Task 5 are NLP&IR@UNED [46], UAICS [47], and TOBB ETU [48]. Table 1 provides an overview of the approaches and techniques used by these top performing groups. Among the groups and systems mentioned in Table 1, the approach deployed by the Terrier group, as reported by Su et al. [10], is the closest work to our proposed framework in that they also addressed named entity linking, albeit their approach made use of a KG only for the similarity and relatedness between two entities, which we will also be adopting in our experiments. However, Su et al. [10] calculated the similarity and relatedness (in terms of KG structures) between pairs of entities in the sentences. In contrast, our work here uses recent advances in dense entity embeddings [11, 12, 28, 25, 29, 13] to provide richer information for suspicious claim identification. We hypothesise that by integrating entity pair representations, we can improve the performance of pure neural language models such as BERT or ALBERT when identifying check-worthy sentences. It is of note that the best 2019 performing team (Copenhagen) achieved only a mean average precision (MAP) of 0.1660 using the 2019 dataset, while the top 2020 performing team (NLP&IR@UNED) achieved a MAP of 0.0867, indicating the difficulty of the task.
For a fair comparison with existing models on this challenging task, our present work also uses the CLEF' 2019 & 2020 CheckThat! datasets. In the following section, we describe the task of check-worthiness prediction, and our proposed entity-assisted language models to address it.

Check-Worthiness Prediction using Entity-Assisted Language Models

Given a document d, we aim to estimate the check-worthiness of each sentence s_i ∈ d. This can be formulated as a classification task, aiming to predict (denoted ŷ_i) for each sentence whether a human would label that sentence as check-worthy or not (c.f. y_i). The task can also be formulated as a ranking task, such that the predicted most check-worthy sentences are ranked highest; indeed, this is the task formulation taken by the CLEF' 2019 and 2020 CheckThat! Labs [2, 49]. In our present study, we propose a uniform framework, which addresses the estimation of check-worthiness both as a classification and as a ranking task, when measuring the effectiveness of our models. Our proposed uniform framework for tackling the identification of the check-worthiness of each sentence consists of two components: a text representation obtained through the use of language models, and an entity pair representation obtained from entity embeddings, discussed further in Section 3.1. Each sentence is represented by a language model (denoted by l_rep), which is discussed further in Section 3.2. (Footnote: we also experimented with the sentence representation combined with a single entity, and combined with three entities, and neither performs well in this task; for ease of reading, we do not present the equations and experiments for such structures. Table footnotes: ¹ Standard Universal Sentence Encoder; ² Readability, sentence context, subjectivity, sentiment.) There are three steps involved in representing a pair of entities appearing in a sentence: 1. Resolving all entities that appear in each sentence to the corresponding entity using entity linking [50]; 2.
Transforming the resolved entities into dense entity embeddings through the application of KG embeddings (denoted by m_ent); we discuss the choice of KG embeddings in Section 3.3; 3. Each pair of entity embeddings is combined through a combination method (denoted by e_com) to form a single representation for the entity pair. Note that, for sentences that contain more than two entities, every two entities form an entity pair.

Overall Structure of Proposed Framework

In order to leverage the semantic representation of various language models, as well as the entities in sentences, we propose, for each sentence, to combine its language representation with an entity pair representation for each pair of entities in the sentence using a neural network framework. Firstly, consider a sentence s_i in which a set of entities E(s_i) has been identified through the application of an entity linker. Our model is based on pairs of entities; thus we consider input instances x_i, based on pairs of distinct entities:

x_i ∈ { ⟨s_i, e_h, e_t⟩ ∀ (e_h, e_t) ∈ E(s_i) × E(s_i) }  (1)

where e_h and e_t are the head and tail entities. For ease of notation, let x_i ∈ s_i denote a particular instance x_i obtained from s_i using Equation (1). Then, given an input instance x_i, we develop two separate models: f_cls(x_i) for sentence classification and f_rank(x_i) for ranking. Furthermore, for combining the embeddings of a given entity pair, we use two different methods, as explained below. In particular, Figure 2 shows the architecture of our proposed framework, including the two different methods of entity evidence combination. In the input stage, we use the sentence as input to a language model, so as to obtain the text representation of the input sentence at a semantic level, i.e.:

l_rep = LanguageModel(s_i)  (2)

For each entity pair in an input instance, we represent the relationship (e_rep) between e_h and e_t in a high-dimensional space.
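The instance construction of Equation (1), one input instance per pair of distinct linked entities, can be sketched as follows. The entity linker itself is abstracted away here (in practice a linking tool would supply E(s_i)), the function name is this sketch's own, and treating the pairs as ordered, with self-pairs excluded, follows the paper's "pairs of distinct entities" wording and should be read as an assumption.

```python
from itertools import permutations

def build_instances(sentence, entities):
    """Equation (1): one input instance per ordered pair of distinct entities."""
    return [(sentence, e_h, e_t) for e_h, e_t in permutations(entities, 2)]

s = "Arizona is a state of the United States"
instances = build_instances(s, ["Arizona", "United_States"])
assert instances == [(s, "Arizona", "United_States"), (s, "United_States", "Arizona")]
# A sentence with three linked entities yields 3 * 2 = 6 instances.
assert len(build_instances(s, ["A", "B", "C"])) == 6
```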
Thus, we firstly use an existing KG embedding model m_ent to extract entity embeddings e⃗_h and e⃗_t for entities e_h and e_t. Next, we use a combination method e_com to obtain the entity pair representation (e_rep). Specifically, as combination methods, we use the element-wise product operation (denoted by emb_prod) and the concatenation operation (denoted by emb_concat). This process can be represented as follows:

e⃗_h = m_ent(e_h),  e⃗_t = m_ent(e_t)    (3)
e_com ∈ {emb_prod, emb_concat}    (4)
e_rep = e_com(e⃗_h, e⃗_t)    (5)

It is of note that we select emb_prod and emb_concat because of their wide use as neural operators for combining two vectors (e.g., [51,15], resp.). We combine the text representation and the entity pair representation together, to form the input instance representation x_i, by concatenating the language representation (l_rep) with the entity pair representation (e_rep):

x_i = l_rep ⊕ e_rep    (6)

Next, x_i can be used both as part of a classification f_cls() and a ranking f_rank() task. In our experiments, f_cls() is a fully connected layer with a softmax activation function that estimates the likelihood of each class, while f_rank() is a fully connected layer with a sigmoid activation function to obtain the check-worthiness score ∈ (0, 1) for ranking the sentences in descending order:

f_cls(x_i) = e^((x_i ⊗ k) + b) / Σ_j e^((x_j ⊗ k) + b)    (7)
f_rank(x_i) = 1 / (1 + e^(-((x_i ⊗ k) + b)))    (8)

where k denotes a fully connected layer kernel and b denotes a bias. The objective of our experiments is to identify the most effective f_cls() and f_rank() models, for classifying and ranking check-worthy sentences, respectively. Sentences may contain more than one pair of entities, with correspondingly different levels of check-worthiness. For these cases, we assume that as long as at least one pair of entities is check-worthy within a sentence, the sentence is check-worthy.
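A minimal numpy sketch of the combination step and the ranking head (Eqs. 3-8); the dimensions and the random vectors are toy values for illustration, and in the actual model the kernel k and bias b are learned by training:

```python
import numpy as np

def entity_pair_rep(e_h, e_t, e_com="emb_concat"):
    # Eqs. (3)-(5): combine the two entity embeddings, either by
    # element-wise product (emb_prod) or concatenation (emb_concat).
    return e_h * e_t if e_com == "emb_prod" else np.concatenate([e_h, e_t])

def instance_rep(l_rep, e_h, e_t, e_com="emb_concat"):
    # Eq. (6): concatenate the language representation with the pair representation.
    return np.concatenate([l_rep, entity_pair_rep(e_h, e_t, e_com)])

def f_rank(x, k, b=0.0):
    # Eq. (8): a sigmoid over a single linear unit gives a score in (0, 1).
    return 1.0 / (1.0 + np.exp(-(x @ k + b)))

rng = np.random.default_rng(42)
l_rep = rng.normal(size=8)                  # toy sentence representation
e_h, e_t = rng.normal(size=4), rng.normal(size=4)
x = instance_rep(l_rep, e_h, e_t)           # 8 + 4 + 4 = 16 dimensions
score = f_rank(x, rng.normal(size=16))
```

Note that emb_concat doubles the entity part of the input dimension relative to emb_prod, which is the only architectural difference between the two combination methods.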
Thus, the obtained f_cls() and f_rank() models are applied for each pair of entities in a sentence. Hence, to obtain the final check-worthiness of a given sentence, we take the maximum check-worthiness label/score across all pairs, as follows:

y^cls_i = max(f_cls(x_i)) ∀ x_i ∈ s_i    (9)
y^rank_i = max(f_rank(x_i)) ∀ x_i ∈ s_i    (10)

where x_i ∈ s_i denotes an input instance x_i occurring in sentence s_i.

Language Models

We aim to understand the robustness of using entity embeddings across a number of language models. In particular, we use a BiLSTM model with an attention mechanism as a representative of non-pretrained language models. We also use several BERT-related neural language models (BERT, ALBERT, RoBERTa) to represent the current state-of-the-art pre-trained language models. Finally, TF.IDF vectors are used as a representative of traditional BoW models.

Obtaining Entity Embeddings from KG Embedding Models

There are multiple ways to analyse the entities appearing in a sentence that can benefit the identification of check-worthy material. For example, Ciampaglia et al. [52] showed that the graph distance between two entities within a KG (i.e. the number of steps on the graph to reach one entity from another) could be used to improve fake news detection accuracy when applying an entity linking method on news articles. On the other hand, Su et al. [10] showed that by using additional entity features, such as the similarity and relatedness of entity pairs computed using graph distance, an SVM classifier is able to identify check-worthy sentences more accurately than using TF.IDF features alone. However, using graph distance considers only the number of hops between two entities within the knowledge graph, and therefore does not address other possible relationships between the entities (e.g. a person (an entity) being the president of a country (another entity)).
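The per-sentence aggregation of Equations (9) and (10) can be sketched as follows, using invented per-pair outputs:

```python
def sentence_checkworthiness(pair_outputs):
    # Eqs. (9)-(10): a sentence inherits the maximum predicted label and
    # the maximum ranking score over all of its entity-pair instances,
    # so one check-worthy pair makes the whole sentence check-worthy.
    labels, scores = zip(*pair_outputs)
    return max(labels), max(scores)

# Toy per-pair predictions (label, score) for one sentence:
label, score = sentence_checkworthiness([(0, 0.20), (1, 0.85), (0, 0.10)])
# -> label 1, score 0.85
```

The max operator reflects the paper's assumption that a single check-worthy entity pair suffices to make the sentence check-worthy.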
This means that using only the KG's ontology structure yields less information than using embeddings that may capture more entity relationships. Thus, we instead propose to obtain the entity representations using KG embedding models, such as those introduced in Section 2.2 (e.g. [11,12,13,28,25,29]). This allows us to acquire the implicit and hidden KG-based relationships between two entities that are encoded in the embedding vectors learned by a particular model. Indeed, we focus on using pairs of entities, following [52,10] (as we later show in Section 4, sentences containing 2-4 entities are the most frequent in this dataset). Different KG embedding models can return varying results when given the same entity and task. For example, Table 2 shows the four most similar entities to the President of the United States Barack Obama obtained using the six different KG embedding models that we use in this study. Specifically, Wikipedia2Vec returns entities that appear close to the entity Barack Obama in text, while the other five models show a variety of very specific entities that Barack Obama has a relationship with (e.g., a law he passed, an article he wrote, a person he attended the same school as). Such differences in the output provided by the KG embedding models are due to the varying datasets and data structures used to train the models. Therefore, the key argument of this paper is that by including the entity embeddings e⃗ for each entity e (appearing in the sentence) into our models, we are able to consider the KG-based network relationships of the entities in a sentence when making predictions about the check-worthiness of that sentence. Indeed, entities that are far apart in a simpler word embedding space may be closer in the entity embedding space, and combining the word embeddings and entity embeddings may be able to bring these two types of information together.
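As an illustration of how KG embeddings encode relations geometrically, here is a toy sketch of the TransE scoring idea (triples are trained so that head + relation ≈ tail); the 2-d vectors and entity names are invented for illustration, whereas the paper's models are trained on FB15K:

```python
import numpy as np

def transe_score(h, r, t):
    # TransE plausibility: a smaller L2 distance between (h + r) and t
    # means the triple (h, r, t) is more plausible in the KG.
    return float(np.linalg.norm(h + r - t))

# Toy 2-d embeddings, with "president_of" acting as a translation vector.
obama = np.array([1.0, 0.0])
president_of = np.array([0.0, 1.0])
usa = np.array([1.0, 1.0])        # obama + president_of lands exactly here
berlin = np.array([-2.0, 3.0])    # an implausible tail for this relation
```

Under this geometry, a plausible triple such as (obama, president_of, usa) scores lower (better) than an implausible one, which is the kind of relational signal the entity embeddings carry into the check-worthiness models.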
Overall, this provides more evidence about the expected co-occurrence of different types of entities within a sentence for identifying those sentences requiring fact-checking. Experimental Setup Our experiments address the following research questions: • RQ1: Do BERT-related language models outperform the TF.IDF and BiLSTM baselines in identifying check-worthy sentences? • RQ2: Does the use of entity embeddings improve the language models' accuracy in identifying check-worthy sentences? • RQ3: Which combination method e com performs the best in improving the performance of text representations at identifying check-worthy sentences? • RQ4: Which KG embedding model m ent provides entity embeddings that best assists the language models? Moreover, from Section 3, the identification of check-worthy sentences can be considered either as a classification task, or instead as a ranking task (as defined by the CLEF' CheckThat! Lab organisers). Hence, in the following experiments, we provide conclusions for all RQs from both the classification and ranking perspectives. In the remainder of this section, we describe the experimental setup used to address our four research questions. Dataset All our experiments use both the CLEF'2019 & 2020 CheckThat! datasets. The CLEF'2019 & 2020 datasets consist of transcripts of US political debates and speeches in the time period 2016-2019, collected from various news outlets 4 . Each sentence has been manually compared with factcheck.org by the organisers. If the sentence appeared in factcheck.org and is fact checked, it is labelled as a check-worthy claim. Table 3 shows a dataset extracted from a speech by Senator Ted Cruz. The CLEF' 2019 & 2020 CheckThat! Labs provided data split for training and testing purposes, which we also use in this paper. Table 4 shows the statistics of the training and testing sets. 
In particular, we observe that the prevalence of check-worthy sentences is reduced in the 2020 dataset compared to the 2019 dataset. Next, as our approach makes use of entity occurrences, in Figure 3a we show the proportion of each entity type appearing in the 2019 dataset (similar distributions were observed for the 2020 dataset, and hence are omitted). In particular, it can be seen that the Person and Location types are the most commonly identified in the dataset, and together they account for 90% of all the entities detected. Figure 3b shows the number of entities appearing in each sentence. We observe that sentences with 0-2 entities account for more than 40% of the sentences, while sentences with 3 entities account for ∼15% of sentences. The observed distribution of the number of entities present in each sentence further strengthens the reasons for using entity pairs (described in Section 3.3).

Models and Baselines

In this section, we describe the tools and methods we use in our experiments, along with the baseline approaches. (The transcripts were collected from news outlets such as ABC, the Washington Post, and CSPAN [49], in English only.)

Pre-processing: To allow a fair comparison with previous methods, we follow the pre-processing procedure of Su et al. [10], which includes first-person resolution and co-reference resolution (the co-reference resolution considers two sentences at a time, aligning with that of [10]). In doing so, we aim to ensure that any implied entities in the text are explicitly available for analysis by the later stages of our framework (e.g. the language models and entity linking). In particular, we replace the first-person pronouns with the speaker's name, and use the co-reference resolution package implemented by Lee et al. [53].

Named entity linking: To explicitly address the entities that occur in each sentence, we deploy a named entity linking method to extract entities from each sentence.
In our experiments, we use DBpedia Spotlight 8 to extract entities from each pre-processed sentence, with the confidence threshold set to 0.35, following Su et al. [10] 9. Entity embeddings: We use six sources of entity embeddings to represent the embedded entity pairs 10, as follows:
• Wikipedia2Vec uses the extended skip-gram method with a link-based measure [54] and an anchor context model to learn the embeddings of entities. We use the pre-trained Wikipedia embeddings (window size=10, iteration=10, negative=15, dimensions=300) [11].
• TransE [12] aims to embed a triple ⟨e_h, r, e_t⟩ into the same lower dimensional space, where e⃗_h + r⃗ ≈ e⃗_t.
• TransR [28] is built upon TransE, where the relation embedding is projected into a separate relation space, in order to more accurately represent the rich and diverse information between entities and relations.
• RESCAL [25] uses a three-way tensor learning method to model the triple ⟨e_h, r, e_t⟩, for a more flexible representation of the relationship and entities.
• DISTMult [29] uses a single vector to represent both entities and the relation by simplifying the bi-linear interaction between the entity and the relation, where the relation vector is represented using the diagonal matrix of the interaction.
• ComplEx [13] uses complex embeddings and the Hermitian dot product to represent the relation between two entities, and achieved a better performance than its predecessors on the entity-linking task.
For the TransE, TransR, RESCAL, DistMult and ComplEx models, we use triples extracted from Freebase (FB15K) [12] as training data. These models are trained using the code provided by Zheng et al. [55]. Language representations: We use five different text representation models to represent each sentence, as follows:
• TF.IDF is a commonly used BoW model to represent the text based on word frequencies. We include TF.IDF as a baseline.
• BiLSTM+attention (denoted as BiLSTM+att) is widely used in the literature to learn a language model from the training data. It appeared in several solutions [4] deployed in the CLEF'2019 CheckThat! Lab, which demonstrated that an LSTM-based language model can effectively represent the sentences in the check-worthiness identification task. Thus, in this paper, we use BiLSTM+att as a non-pretrained language model baseline, in order to obtain a fair comparison among all the language representation methods.
• BERT [15] is a state-of-the-art pre-trained language model that has been shown to be effective in many information retrieval and natural language processing tasks [56,57,22]. In this study, we are also interested in determining whether the BERT model performs well on the very specific task of check-worthy sentence identification, or whether it can be enhanced by supplementary information such as entity embeddings (as discussed in the next section).
• ALBERT [16] is a derivative of the original BERT model that aims to reduce the number of parameters. Specifically, ALBERT uses a factorised embedding parameterisation method to decompose the vocabulary size and the hidden layer size, by projecting the vocabulary twice rather than once. Moreover, cross-layer parameter sharing and an inter-sentence coherence loss are used to further reduce the need for parameter updates. ALBERT achieved a new state-of-the-art performance with fewer parameters and a shorter training time, compared to the original BERT model.
• RoBERTa [17] aims to improve over BERT by training the model for more iterations, using longer sentence sequences, with bigger batches over more data. RoBERTa also removes the next sentence prediction objective in training. Similar to the ALBERT model, RoBERTa results in improved performance over the standard BERT model.
We use the HuggingFace language model implementations [58].
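Returning to the TF.IDF baseline listed first among the text representations, a minimal sketch of how such vectors are computed follows; this is a raw-count variant with an unsmoothed idf, whereas production vectorisers (e.g. scikit-learn's) differ in smoothing and normalisation details:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Minimal TF.IDF: weight(term, doc) = tf(term, doc) * log(N / df(term)).
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc.split()))
    return [{term: tf * math.log(n / df[term])
             for term, tf in Counter(doc.split()).items()}
            for doc in docs]

vecs = tfidf_vectors(["arizona immigration law", "arizona welfare dollars"])
# "arizona" appears in both documents, so its idf (and hence weight) is 0.
```

This illustrates why pure TF.IDF struggles on this task: terms shared across many transcript sentences carry little weight, and no semantic or entity-level information is captured.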
Specifically, we use the BERT-Cased English model (12-layer, 768-hidden, 12-heads, 110M parameters); the ALBERT-base-v2 English model (12-layer, 128-hidden, 12-heads, 11M parameters); and the RoBERTa-base English model (12-layer, 768-hidden, 12-heads, 125M parameters). We fine-tune all the BERT-related language models on the training datasets. All other parameters remain at their recommended settings.

Footnotes: (8) https://www.dbpedia-spotlight.org/; (9) we note that there are better performing entity linking models, however, to maintain compatibility with the previous research we only use DBpedia Spotlight as the entity linking method in this study; (10) we acknowledge that these entity pair representations can also be used to calculate their similarities, however our preliminary experiments showed that such methods produce less than satisfactory results, thus we omit that setup.

Baselines: We compare our proposed Entity-Assisted models to the following baselines:
• TF.IDF + SVM (denoted by SVM(TF.IDF)): We apply an SVM text classifier using TF.IDF feature vectors. We select our hyperparameters by applying cross-validation on the training data. Specifically, we use the scikit-learn SVM implementation with an RBF kernel, a C penalty of 10, and a γ of 0.1 in our trained SVM classifier. We use class weights based on the training data to prevent the imbalanced data from compromising our experimental results. For the classification task we obtain the predicted class label for each sentence from f_cls() (as per Equation (7)), while for the ranking task we obtain a score for each sentence in the range (0, 1) from f_rank() (as per Equation (8)). We use the same functions for SVM(TF.IDF) similarity (introduced below).
• SVM(TF.IDF) + Entity Similarity and Relatedness (denoted by similarity): Following Su et al.
[10], we append two graph-based entity similarity and relatedness scores -obtained using Sematch [59] -as well as a feature denoting the number of entities, to the TF.IDF feature vectors of the SVM model. • BiLSTM + Att: We deploy a BiLSTM + Att model (100 hidden units) with an attention mechanism, implemented using Tensorflow. We initialise the embedding layer of BiLSTM using the pre-trained GloVe embeddings (300 dimensions). • CLEF'2019 & 2020 CheckThat! Lab leaderboards: For the ranking task, we additionally compare with the runs of the top three groups on the official CLEF'2019 & 2020 leader boards 14 . Evaluation Metrics For evaluating the classification accuracy, we use the standard classification metrics (Precision, Recall, F1). Significant differences are measured using the McNemar's test. 15 On the other hand, for evaluating the ranking effectiveness, we apply the ranking metrics used by the CheckThat! Lab organisers, namely Mean Average Precision (MAP), Reciprocal Rank (RR), and Precision at rank k (P@k, k={1,5,10,20,50}). Means are calculated over the seven and twenty debates and speeches in the CheckThat! 2019 and 2020 test sets, respectively -therefore, due to the small number of rankings being evaluated, significance testing is not meaningful. Finally, note that it is not possible to evaluate the CLEF'2019 & 2020 CheckThat! Lab participants' approaches using the classification metrics -this is because the participants' runs have scores rather than predicted labels, and do not contain predictions for all sentences in the dataset. Further, it is also not possible to combine our approach with the participants' runs, since we do not have the predicted scores of the participants' runs on the training sets. Experimental Results In this section, we present the results of the experiments that address RQs 1 -4. 
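Before presenting the results, the ranking metrics defined above can be sketched as follows; each function scores one debate's ranked list of binary relevance labels, and MAP and mean P@k are then averaged over the test debates:

```python
def average_precision(rels):
    # rels: binary relevance labels of sentences, in ranked order.
    hits, ap = 0, 0.0
    for k, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            ap += hits / k          # precision at each relevant rank
    return ap / hits if hits else 0.0

def reciprocal_rank(rels):
    # 1 / rank of the first relevant sentence (0 if none is relevant).
    return next((1.0 / k for k, rel in enumerate(rels, start=1) if rel), 0.0)

def precision_at(rels, k):
    return sum(rels[:k]) / k

ranked = [1, 0, 0, 1, 0]            # toy ranking for a single debate
# AP = (1/1 + 2/4) / 2 = 0.75; RR = 1.0; P@5 = 0.4
```

These high-precision metrics (MRR, P@1) reward placing a single check-worthy sentence at the top, which explains why models can score well on MRR while exhibiting low MAP on the harder, lower-ranked check-worthy sentences.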
In particular, for both the check-worthy sentence classification and ranking tasks, Sections 5.1-5.4 respectively address: the effectiveness of the BERT-related language models; the usefulness of entity embeddings; the most effective combination method for representing entity pairs; and the most effective KG embedding model from which to obtain the entity embeddings. Tables 5, 7, and 9 present the attained classification results obtained on the CLEF CheckThat! 2019 dataset, to address RQ1, RQs 2-3, and RQ4, respectively. Similarly, Tables 6, 8, and 10 present the attained ranking performances on both the CLEF CheckThat! 2019 & 2020 datasets, for RQ1, RQs 2-3, and RQ4, respectively. Furthermore, Tables 12 & 13 summarise the performance of a salient subset of approaches on the classification and ranking tasks, respectively. In order to obtain further insights, we conduct a failure analysis and case studies in Section 5.5. Finally, in Section 5.6 we summarise and discuss our main findings.

RQ1: BERT-related Language Models vs. Baselines

We firstly consider Table 5, which reports the attained accuracies when treating check-worthy sentence identification as a classification task, on the CLEF CheckThat! 2019 dataset. Firstly, in terms of F1, we note the relatively weak performance of a classical SVM classifier with TF.IDF features (row 2), which performs equivalently to a random classifier. Indeed, while the SVM classifier has been trained using class weights to alleviate the issue of class imbalance, the low performance of SVM illustrates the difficulty of this task, and underlines that simply matching on what is being said by the speakers is insufficient to attain high accuracies on this task. Next, the BiLSTM+att classifier (row 3) markedly outperforms the random classifier, demonstrating that the deployment of pre-trained (i.e., GloVe) word embeddings allows a more flexible classifier not tied to the exact matching of tokens.
Moreover, the use of the attention mechanism in BiLSTM also emphasises the importance of the context of each word. Finally, the state-of-the-art BERT-related models (BERT, row 4; ALBERT, row 5; and RoBERTa, row 6) significantly outperform the random classifier, the SVM classifier, and the BiLSTM+att classifier. Thus, we conclude that, when treating the task as a classification task, all of the BERT-related language models can significantly outperform the SVM and BiLSTM+att classifiers. Among all the BERT-related models, ALBERT exhibits the highest performance. Moving next to the ranking task on the 2019 dataset, Table 6 shows that the BERT model's (row 3) performance is lower than that of a classical SVM classifier using TF.IDF features (row 1). Both ALBERT (row 4) and RoBERTa (row 5) outperform SVM(TF.IDF) and BiLSTM+att (rows 1, 2) in terms of MRR. However, in terms of MAP, both ALBERT and RoBERTa only outperform SVM(TF.IDF) (row 1), and still underperform compared to BiLSTM+att (row 2). Next, when considering the results of the ranking task on the 2020 dataset, the BERT (row 15), ALBERT (row 16) and RoBERTa (row 17) models all outperform BiLSTM+att (row 14) and SVM(TF.IDF) (row 13) on both MAP and MRR. While the contrast between the F1 classification and the ranking results on the 2019 dataset is notable, the low classification recall for all models suggests that BERT, ALBERT, and RoBERTa (c.f. rows 4, 5, 6 in Table 5) cannot retrieve the most difficult check-worthy sentences, and hence also exhibit low MAP performances in the ranking task. From Table 13 we observe inconsistent performances for the same language model across the 2019 and 2020 ranking datasets (e.g., row 1 vs. row 16), which can be explained by the markedly different proportion of positive examples in the two test sets (as illustrated by the percentage of check-worthy sentences in Table 4). Overall, in answer to RQ1, we conclude that while the BERT, ALBERT, and RoBERTa models perform well at classifying check-worthy sentences, for ranking they are most effective at the higher ranks.
On both tasks, ALBERT performs the best among the BERT-related language models.

RQ2: Using Entity Embeddings

We now examine the impact of entities at improving the classification accuracy. Firstly, from Table 7, we note that the F1 performance of the SVM classifier is improved by adding the entity similarity scores (row 2 vs row 1), echoing our earlier observations [10] for ranking, on the 2019 dataset. Similarly, adding entity embeddings with emb_prod (row 4; the element-wise product of the language representation and the entity pair representation) and with emb_concat (row 3; concatenating the language representation and the entity pair representation) also improves the SVM classifier's performance on the 2019 dataset, in terms of precision, recall and F1, compared to SVM(TF.IDF) without entity information. Next, we observe that all of the neural language models (i.e., BiLSTM+att, BERT, ALBERT, RoBERTa) also exhibit a significantly improved accuracy when combined with entity embeddings (rows 7 & 8 vs. 5; rows 11 & 12 vs. 9; rows 15 & 16 vs. 13; rows 19 & 20 vs. 17). On the contrary, even though we do observe an improvement when the neural models are combined with entity similarities (row 6 vs. 5; row 10 vs. 9; row 14 vs. 13; row 18 vs. 17), the improvement is not significant. Moreover, when combining the neural models with the entity embedding information, both entity combination methods significantly outperform the corresponding language model combined with the simpler entity similarities. Indeed, our proposed entity-assisted ALBERT classifier using the emb_concat method (row 16) attains the highest overall classification performance (an F1 score of 0.18). Table 7 further shows that almost all neural models with all types of entity embeddings outperform the corresponding language models alone, in terms of F1.
Thus, we conclude that the entity information (the entity similarity and embedding for the SVM classifier, the entity embeddings in the neural language models) can indeed improve the classification accuracy in the identification of check-worthy sentences. Turning to the ranking task, in Table 8, we observe that the use of entities (i.e., entity similarities, emb_concat, and emb_prod) enhances most of the approaches: the effectiveness of the SVM classifier is enhanced on MAP, P@1, and P@10. On the other hand, while BiLSTM+att is enhanced for MRR and P@1, when combined with any type of entity information, the MAP performances are damaged by the entity embeddings (rows 6-8 vs. 5). Finally, the BERT-related models (i.e., BERT, ALBERT, RoBERTa) are enhanced by all three types of entity information, regardless of the entity embedding combination model used, in terms of MAP, MRR, and P@1 (rows 10-12 vs. 9; rows 14-16 vs. 13; rows 18-20 vs. 17). When tested on the 2020 dataset, Table 13 shows that the ComplEx KG embedding model together with the emb_concat method, consistently improves all the neural networks' performance on all metrics. Thus, we conclude that entity embeddings can consistently enhance the BiLSTM+att models for ranking on high precision metrics such as MRR and P@1, as well as enhance the SVM(TF.IDF) and neural language models (i.e., BERT, ALBERT, RoBERTa) across the evaluation metrics. Therefore, in response to RQ2, we conclude that using entity embeddings -regardless of the KG embedding model from which the entity embeddings are obtained -does help to improve the BERT-related language models' performance, on both precision and recall for the classification task, and on MAP, MRR and P@1 for the ranking tasks. 
RQ3: Entity Representation

When considering identifying check-worthy sentences as a classification task, Table 7 shows that all of the SVM(TF.IDF) and BERT-related language models are significantly improved when combined with entity embeddings, over the language models alone or with entity similarities. Meanwhile, we observe that using emb_concat only marginally outperforms emb_prod, without significant differences (row 3 vs. 4; row 7 vs. 8; row 11 vs. 12; row 15 vs. 16; row 19 vs. 20). Moreover, the ALBERT model with the emb_concat method using the Wikipedia2Vec KG embedding model (row 15) achieves the highest F1 score among all of the tested models shown in Table 7. Thus, we conclude that using entity embeddings is more effective on the classification task than using the entity similarity method suggested by Su et al. [10], which uses graph distance to estimate the similarity between entities. Next, when considering the ranking task, Table 8 shows that the BiLSTM+att and BERT-related language models all exhibit improved MRR and P@1 when combined with entity embeddings using the concatenation method, outperforming the entity similarity method. Thus, we conclude that for the ranking task, the emb_concat method is more effective than emb_prod, and both embedding methods are more effective than the entity similarity baseline (rows 2, 6, 10, 14, 18). Overall, in answer to RQ3, we conclude that using entity embeddings obtained from KG embedding models, regardless of the representation method, improves all three BERT-based language representations more than the entity similarity information, with emb_concat exhibiting the highest effectiveness both for the classification task (using the 2019 dataset) and the ranking task (using the 2019 & 2020 datasets).
RQ4: KG Embedding Models

For the ranking task, Table 10 shows that ALBERT + ComplEx achieves the best performance among our experiments, obtaining a MAP of 0.1821 on the 2019 dataset, a tie with the best performing run on the official leaderboard. Moreover, it also shows that ALBERT + ComplEx obtains the highest MAP on the 2020 dataset among all the models we tested, as well as the models on the leaderboard. Under further investigation, we found that ALBERT + ComplEx successfully identified the single check-worthy sentence within one debate of the test set, and therefore obtained the highest improvement on MAP. We further conducted a case study in order to understand why ComplEx can achieve the best performance consistently. Table 11 presents two cases where the ComplEx model, together with the ALBERT language model, successfully identified the check-worthy sentences, while the other entity embedding models did not. Upon further investigation, we found that in these two cases, entity 1 and entity 2 are not closely related to each other, while Wikipedia did not mention entity 2 in the articles about entity 1. We therefore postulate that ComplEx is able to identify hidden relations between two weakly associated entities better than the other entity embedding models. Overall, we conclude that among all six KG embedding models we tested, ComplEx consistently produces the highest performance.

Table 10: Ranking performances on the CheckThat! 2019 dataset, using emb_concat as the entity representation combination method, while alternating language models l_rep and entity embedding models m_ent. Bold indicates the best performance.

Example sentence from the case study (entities: Wisconsin, Ronald Reagan): "And one state said - you know, it was interesting, one of the states we won, Wisconsin - I didn't even realize this until fairly recently - that was the one state Ronald Reagan didn't win when he ran the board his second time."
Failure Analysis

Due to the overall low performance of our models, we aim to identify the bottleneck of our framework on the task of check-worthy sentence identification. Table 15 shows 6 sentences, with the number of identified entities, and whether the classifier correctly identified each sentence as check-worthy. We observe that in the three false negative cases (rows 2, 4, and 6), the number of entities is less than or equal to 2. We therefore postulate that the number of entities per sentence can indeed affect the performance of our proposed framework.

Recap of Main Findings

In this section, we recap our main findings for RQs 1-4 and indicate the implications of our study, with reference to Tables 12 & 13. For each language model, the summarising tables present results obtained under three conditions: the language model only; with an entity pair representation using the Wikipedia2Vec KG embedding model and the emb_concat method; and with an entity pair representation using ComplEx embeddings and the emb_concat method. We do not include the emb_prod method in our summarising tables, as our results for RQ3 showed that emb_concat consistently outperforms emb_prod across both the 2019 & 2020 datasets and both the classification and ranking tasks (see Section 5.3). For the classification task (Table 12, on the 2019 dataset), we highlight our conclusion from RQ1 (see Section 5.1) that the ALBERT language model (rows 11-13) significantly outperforms all other language models. We also confirm our conclusion from RQ2 (see Section 5.2).
In short, the findings of our study can be summarised as follows:
• Deep neural language models help identify the sentences that require further manual fact-checking;
• Embedded entities within a sentence help identify the sentences that require further manual fact-checking;
• The most effective way to combine the entity pair representation with the text representation is to concatenate the two vectors together;
• The tested facts-only KG embedding models perform better than the tested semantic KG embedding method (i.e., Wikipedia2Vec). The best performing KG embedding model in our study is the ComplEx model;
• The performance of our framework is affected by the number of entities present in the sentence. Since Donald Trump favours sentences with fewer entities, it is more difficult to identify his sentences as check-worthy than those of the other candidates.

Conclusions

In this paper, we proposed a uniform framework for the task of check-worthy sentence identification, formulated as either a classification or a ranking task. We proposed to use BERT-related pre-trained language representations, and, in a novel manner, integrated entity embeddings obtained from knowledge graphs into the classifier and ranker. When considering check-worthy sentence identification as a classification task, our experiments - conducted using the CLEF'2019 and 2020 CheckThat! datasets - showed that the ALBERT model outperforms the SVM(TF.IDF), BiLSTM+att, BERT, and RoBERTa text representation models. Moreover, the application of our proposed Entity-Assisted language models further improved the performance of the SVM(TF.IDF), BiLSTM+att, and all the BERT-related language models, over the models that combine language models with entity similarities and relatedness. When considered as a ranking task, we found that all types of entity embeddings improve all the language models in identifying the top-ranked check-worthy sentences, but do not perform as well at the lower ranks.
Furthermore, on both the classification and ranking tasks, using ALBERT for text representation consistently performs the best among all the tested text representation models, with or without entity features. In addition, we found that the DistMult and ComplEx KG embedding models improve all the language models the most, while ALBERT + ComplEx achieved the best F1 on the classification task on the 2019 dataset, and the best MAP performance on both the 2019 (a tie with the best performing submission) and 2020 datasets. Thus, we conclude that our framework, which combines deep learning language models with embedded entity representations in a novel manner, can achieve state-of-the-art performance in identifying check-worthy sentences. Moreover, we argue that, given the flexibility of our framework, achieved by concatenating the sentence and entity representations instead of jointly training them, it can be applied to a number of sentence classification tasks, using an appropriate language model and an appropriate KG embedding model. Our work also gives support to future workflows where journalists and representatives of other fact-checking organisations could benefit from accurate assistive classifiers, to focus their efforts only on a subset of suspicious claims that are worth checking, thereby ensuring a faster and wider dissemination of news. Our future research plans regarding the check-worthiness task are twofold: first, we plan to enrich the information of the identified entities (e.g., by adding the type of the entity, and the place and time of the entity, if applicable); second, we aim to research new approaches leveraging social media in check-worthy sentence classification and retrieval.

Figure 2: Our proposed Entity-Assisted Language Model framework.

Figure 3: Distribution of (a) the entity types and (b) the number of entities per sentence, in the CLEF CheckThat! 2019 dataset.
Entities are detected using DBpedia Spotlight. Note that we omit the figures for the 2020 dataset, since we observe similar distributions.

... the markedly different proportion of positive examples in the two test sets (as illustrated by the percentage of check-worthy sentences in Table 4).

Table 1: Methods used by the top five performing groups in the CLEF'2019 & 2020 CheckThat! lab (learning models: NN (LSTM, feedforward), SVM, logistic regression, Naive Bayes; features: word embeddings, syntactic dependency embeddings, SUSE, bag of (NE, POS, words, n-grams), hand-crafted features; groups in 2019: Copenhagen, TheEarthIsFlat, IPIAN, Terrier, UAICS; groups in 2020: NLP&IR@UNED, UAICS, TOBB ETU).

Table 2: Examples of the most similar entities to Barack Obama, using each of the KG embedding models (in descending order of similarity, from left to right).
  Wikipedia2Vec: Michelle Obama; John McCain; US presidential election; George W. Bush
  TransE: Women's History Month; A Child's History of ...; Thickness network ...; Benito Pérez Galdós
  TransR: Executive Order 13654; BODY SIZES OF ...; ynisca kigomensis; hypothetical protein ...
  RESCAL: Neural representation ...; Natalie Grinczer; Octavia E. Butler; Giuseppe Pozzobonelli
  DistMult: live preview; Neonatal peripherally ...; KSC - STS-3 Rollout ...; A large sex difference on ...
  ComplEx: Peter B. Olney; James Willard Hurst; Robert H. McKercher; William Schwarzer

Table 3: A debate transcript from the CLEF'2019 CheckThat! dataset. Sentences are labelled check-worthy (1) or not (0).
  Speaker | Sentence | Label
  Cruz | You know, in the past couple of weeks the Wall Street Journal had a very interesting article about the state of Arizona. | 0
  Cruz | Arizona put in very tough laws on illegal immigration, and the result was illegal immigrants fled the state, and what's happened there - it was a very interesting article. | 1
  Cruz | Some of the business owners complained that the wages they had to pay workers went up, and from their perspective that was a bad thing. | 0
  Cruz | But, what the state of Arizona has seen is the dollars they're spending on welfare, on prisons, and education, all of those have dropped by hundreds of millions of dollars. | 1

Table 4: Statistics of the CLEF'2019 & 2020 CheckThat! datasets.
  (Training | Testing)
  2019: # of debates/speeches: 19 | 7; # of total sentences: 16,421 | 7,079; # of check-worthy sentences: 433 | 110; % of check-worthy sentences: 2.637% | 2.554%
  2020: # of debates/speeches: 50 | 20; # of total sentences: 42,776 | 21,514; # of check-worthy sentences: 487 | 136; % of check-worthy sentences: 1.138% | 0.632%

Table 5: Classification performances on the CheckThat! 2019 dataset, alternating language models l_rep only. Bold indicates the best performance; numbers in the Significance column indicate that the model is significantly better than the numbered model (McNemar's test, p<0.01).
  # | l_rep | P | R | F1 | Significance
  1 | Random Classifier | 0.01 | 0.01 | 0.01 | -
  2 | SVM(TF.IDF) | 0.01 | 0.01 | 0.01 | -
  3 | BiLSTM+att | 0.12 | 0.07 | 0.09 | 1,2
  4 | BERT | 0.12 | 0.09 | 0.10 | 1-3
  5 |

Table 6: Ranking performances on the CheckThat! 2019 and 2020 datasets, alternating language models l_rep only. Bold indicates the best performance in each group.
  # | l_rep | MAP | MRR | P@1 | P@5 | P@10 | P@20 | P@50
  CLEF'2019 CheckThat! experimental results:
  1 | SVM(TF.IDF) | 0.1193 | 0.3513 | 0.1429 | 0.2571 | 0.1571 | 0.1714 | 0.1086
  2 | BiLSTM+att |

Table 7: Classification performances on the CheckThat! 2019 dataset, alternating language models l_rep, entity embedding models m_ent, and entity representation combination models e_com.
Bold indicates the best performance; numbers in the Significance column indicate that the model is significantly better than the numbered model (McNemar's test, p<0.01).
  # | l_rep | m_ent | e_com | P | R | F1 | Significance
  1 | SVM(TF.IDF) | - | - | 0.01 | 0.01 | 0.01 | -
  2 | SVM(TF.IDF) | Wikipedia2Vec | similarity | 0.04 | 0.03 | 0.03 | 1
  3 | SVM(TF.IDF) | Wikipedia2Vec | emb_concat | 0.06 | 0.05 | 0.05 | 1,2
  4 | SVM(TF.IDF) | Wikipedia2Vec | emb_prod | 0.05 | 0.04 | 0.04 | 1,2
  5 | BiLSTM+att | - | - | 0.12 | 0.07 | 0.09 | 1-4
  6 | BiLSTM+att | Wikipedia2Vec | similarity | 0.12 | 0.08 | 0.10 | 1-5
  7 | BiLSTM+att | Wikipedia2Vec | emb_concat | 0.13 | 0.10 | 0.11 | 1-6
  8 | BiLSTM+att | Wikipedia2Vec | emb_prod | 0.12 | 0.09 | 0.10 | 1-5
  9 | BERT | - | - | 0.12 | 0.09 | 0.10 | 1-5
  10 | BERT | Wikipedia2Vec | similarity | 0.12 | 0.10 | 0.11 | 1-6
  11 | BERT | Wikipedia2Vec | emb_concat | 0.19 | 0.11 | 0.14 | 1-10, 13
  12 | BERT | Wikipedia2Vec | emb_prod | 0.18 | 0.11 | 0.13 | 1-10, 13
  13 | ALBERT | - | - | 0.14 | 0.11 | 0.12 | 1-10
  14 | ALBERT | Wikipedia2Vec | similarity | 0.14 | 0.14 | 0.14 | 1-10, 13
  15 | ALBERT | Wikipedia2Vec | emb_concat | 0.22 | 0.15 | 0.18 | 1-14, 17-20
  16 | ALBERT | Wikipedia2Vec | emb_prod | 0.20 | 0.14 | 0.16 | 1-14, 17, 18
  17 | RoBERTa | - | - | 0.14 | 0.11 | 0.12 | 1-10
  18 | RoBERTa | Wikipedia2Vec | similarity | 0.14 | 0.13 | 0.13 | 1-10
  19 | RoBERTa | Wikipedia2Vec | emb_concat | 0.21 | 0.15 | 0.17 | 1-14, 17, 18
  20 | RoBERTa | Wikipedia2Vec | emb_prod | 0.19 | 0.14 | 0.16 | 1-14, 17, 18

Table 8: Ranking performances on the CheckThat! 2019 dataset, alternating language models l_rep, entity embedding models m_ent, and entity representation combination models e_com. Bold indicates the best performance.

Table 9: Classification performances on the CheckThat!
2019 dataset, using emb_concat as the entity representation combination method, while alternating language models l_rep and entity embedding models m_ent. Bold indicates the best performance.

Table 11: Two sentences that are correctly identified as check-worthy using ALBERT, the ComplEx entity embedding model, and the emb_concat model, but are otherwise not identified.
  Speaker | Sentence | Entity 1 | Entity 2
  Donald Trump | They want to take away your good health care, and essentially use socialism to turn America into Venezuela and Democrats want to totally open the borders. | Venezuela | Democrat
  Donald Trump |

Table 14 shows that there are different numbers of transcripts and check-worthy sentences from the different parties that participated in the debates. Indeed, check-worthy sentences from interviews and speeches given by Trump alone make up as much as 60% of the total number of check-worthy sentences. Moreover, there is a noticeable difference

Table 12: Classification performances on the CheckThat! 2019 dataset. Bold indicates the best performance; numbers in the Significance column indicate that the model is significantly better than the numbered model (McNemar's test, p<0.01).
  # | l_rep | m_ent | e_com | P | R | F1 | Significance
  1 | Random Classifier | - | - | 0.01 | 0.01 | 0.01 | -
  2 | SVM(TF.IDF) | - | - | 0.01 | 0.01 | 0.01 | -
  3 | SVM(TF.IDF) | Wikipedia2Vec | emb_concat | 0.06 | 0.05 | 0.05 | 1,2
  4 | SVM(TF.IDF) | ComplEx | emb_concat | 0.07 | 0.05 | 0.06 | 1-2
  5 | BiLSTM+att | - | - | 0.12 | 0.07 | 0.09 | 1-4
  6 | BiLSTM+att | Wikipedia2Vec | emb_concat | 0.13 | 0.10 | 0.11 | 1-5
  7 | BiLSTM+att | ComplEx | emb_concat | 0.14 | 0.13 | 0.13 | 1-6
  8 | BERT | - | - | 0.12 | 0.09 | 0.10 | 1-5
  9 | BERT | Wikipedia2Vec | emb_concat | 0.19 | 0.11 | 0.14 | 1-8
  10 | BERT | ComplEx | emb_concat | 0.20 | 0.13 | 0.15 | 1-9
  11 | ALBERT | - | - | 0.14 | 0.11 | 0.12 | 1-6,8
  12 | ALBERT | Wikipedia2Vec | emb_concat | 0.22 | 0.15 | 0.18 | 1-10,13
  13 | ALBERT | ComplEx | emb_concat | 0.25 | 0.16 | 0.20 | 1-12,14-16
  14 | RoBERTa | - | - | 0.14 | 0.11 | 0.11 | 1-6,8
  15 | RoBERTa | Wikipedia2Vec | emb_concat | 0.21 | 0.15 | 0.17 | 1-11,14
  16 | RoBERTa | ComplEx | emb_concat | 0.24 | 0.14 | 0.18 | 1-12,14,15

Table 13: Performances on the ranking
metrics on both the CLEF'2019 & 2020 CheckThat! datasets. Bold denotes the best performance for a given measure in a given year.

between transcripts including Democratic candidates and those including Republican candidates. For example, check-worthy sentences from the Democratic debates have a much higher number of entities detected per check-worthy sentence than those from the Republican candidates. Further, we notice a strong correlation between the number of entities per check-worthy sentence and the recall on the classification task: the Democratic debates have an average of 2.62 entities per check-worthy sentence and a recall of 0.31, compared to the Republican debate, which has 2 entities per check-worthy sentence and a recall of only 0.15, while the transcripts considering Trump alone have an average number of entities as low as 1.16, with a recall of 0.11.

  # | l_rep | m_ent | e_com | MAP | MRR | P@1 | P@5 | P@10 | P@20 | P@50
  Experimental results using the CLEF'2019 CheckThat! dataset:
  1 | SVM(TF.IDF) | - | - | 0.1193 | 0.3513 | 0.1429 | 0.2571 | 0.1571 | 0.1714 | 0.1086
  2 | SVM(TF.IDF) | Wikipedia2Vec | emb_concat | 0.1332 | 0.3361 | 0.3254 | 0.2000 | 0.2000 | 0.1286 | 0.0915
  3 | SVM(TF.IDF) | ComplEx | emb_prod | 0.1332 | 0.3158 | 0.3098 | 0.2000 | 0.2571 | 0.1429 | 0.0929
  4 | BiLSTM+att | - | - | 0.1455 | 0.2432 | 0.1429 | 0.1429 | 0.1429 | 0.1857 | 0.1343
  5 | BiLSTM+att | Wikipedia2Vec | emb_concat | 0.0659 | 0.3361 | 0.2857 | 0.1429 | 0.1429 | 0.0714 | 0.0314
  6 | BiLSTM+att | ComplEx | emb_concat | 0.0715 | 0.2257 | 0.1286 | 0.1429 | 0.1429 | 0.1857 | 0.1343
  7 | BERT | - | - | 0.0715 | 0.2257 | 0.1429 | 0.2000 | 0.1286 | 0.0857 | 0.0600
  8 | BERT | Wikipedia2Vec | emb_concat | 0.1011 | 0.6196 | 0.3361 | 0.1714 | 0.1429 | 0.0929 | 0.0686
  9 | BERT | ComplEx | emb_concat | 0.1011 | 0.6196 | 0.3361 | 0.2857 | 0.1714 | 0.1286 | 0.0929
  10 | ALBERT | - | - | 0.1332 | 0.4176 | 0.3098 | 0.2000 | 0.1429 | 0.1286 | 0.0929
  11 | ALBERT | Wikipedia2Vec | emb_concat | 0.1580 | 0.6196 | 0.3098 | 0.2857 | 0.2571 | 0.2286 | 0.2286
  12 | ALBERT | ComplEx | emb_concat | 0.1821 | 0.6196 | 0.3361 | 0.3098 | 0.2857 | 0.2571 | 0.0929
  13 | RoBERTa | - | - | 0.1011 | 0.3158 | 0.2286 | 0.2000 | 0.1429 | 0.1286 | 0.0929
  14 | RoBERTa | Wikipedia2Vec | emb_concat | 0.1453 | 0.4176 | 0.3361 | 0.2857 | 0.2571 | 0.2000 | 0.2286
  15 | RoBERTa | ComplEx | emb_concat | 0.1660 | 0.5174 | 0.3361 | 0.3098 | 0.2000 | 0.2571 | 0.2286
  Experimental results using the CLEF'2020 CheckThat! dataset:
  16 | SVM(TF.IDF) | - | - | 0.0946 | 0.1531 | 0.0000 | 0.0600 | 0.0400 | 0.0450 | 0.0240
  17 | SVM(TF.IDF) | ComplEx | emb_concat | 0.0923 | 0.1170 | 0.0000 | 0.0200 | 0.0500 | 0.0675 | 0.0270
  18 | BiLSTM+att | - | - | 0.0151 | 0.0320 | 0.0000 | 0.0100 | 0.0150 | 0.0075 | 0.0090
  19 | BiLSTM+att | ComplEx | emb_concat | 0.0183 | 0.0320 | 0.0000 | 0.0200 | 0.0100 | 0.0100 | 0.0090
  20 | BERT | - | - | 0.0262 | 0.0819 | 0.0500 | 0.0300 | 0.0250 | 0.0125 | 0.0110
  21 | BERT | ComplEx | emb_concat | 0.0373 | 0.0819 | 0.0500 | 0.0500 | 0.0350 | 0.0175 | 0.0130
  22 | ALBERT | - | - | 0.0537 | 0.2145 | 0.2000 | 0.0800 | 0.0500 | 0.0250 | 0.1600
  23 | ALBERT | ComplEx | emb_concat | 0.1036 | 0.2644 | 0.2500 | 0.0900 | 0.0550 | 0.0275 | 0.0170
  24 | RoBERTa | - | - | 0.0424 | 0.1315 | 0.1000 | 0.6000 | 0.0400 | 0.0200 | 0.1400
  25 | RoBERTa | ComplEx | emb_concat | 0.0923 | 0.1814 | 0.1500 | 0.0700 | 0.0450 | 0.0225 | 0.0150

Table 14: Descriptive analysis of the test set of the 2020 dataset. Note, this table consists of only check-worthy sentences (denoted as CW).
The results we investigate here are obtained using the ALBERT language model, the ComplEx entity embedding method, and the emb_concat method.
  debate type | # of transcripts | # of CW | CW/transcript | # of entities/CW | Recall (classification)
  Democratic | 4 | 26 | 6.5 | 2.62 | 0.31
  Republican | 1 | 7 | 7 | 2 | 0.15
  Mixed | 2 | 23 | 11.5 | 2.57 | 0.17
  Trump alone | 13 | 83 | 6.38 | 1.16 | 0.11

Table 15: Selected cases of check-worthy sentences, the identified entities, and whether ALBERT + ComplEx successfully identified them as check-worthy. Bold denotes the identified entities.
  Speaker | Sentence | # of entities | Predicted correctly
  Trump | Trump was totally against the war in Iraq. | 2 | Y
  Trump | But when you make your car or when you make your air conditioner, and you think you're going to fire all of our workers and open up a new place in another country, and you're going to come through what will be a very strong border, which is already - you see what's happened; 61 percent down now in terms of illegal people coming in. | 1 | N
  Cruz | Bernie helped write Obamacare. | 2 | Y
  Cruz | There are many people in America struggling with exactly what you are, in the wreckage of Obamacare, with skyrocketing premiums, with deductibles that are unaffordable, and with really limited care. | 2 | N
  Clinton | Trump's on record extensively supporting intervention in Libya, when Gadhafi was threatening to massacre his population. | 3 | Y
  Clinton | And I do think there is an agenda out there, supported by my opponent, to do just that. | 0 | N

Moreover, 2) that entity embeddings improve the language models' performance at identifying check-worthy sentences (rows 3 & 4 vs. 2, rows 6 & 7 vs. 5, rows 9 & 10 vs. 8, rows 12 & 13 vs. 11, rows 15 & 16 vs. 14). Finally, we highlight our conclusion from RQ4 (see Section 5.4) that the ComplEx embedding method (rows 4, 7, 10, 13 & 16), which uses a facts-alone KG embedding, significantly outperforms the semantic KG embedding method (i.e., Wikipedia2Vec, rows 3, 6, 9, 12 & 15).
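The ComplEx model favoured by these results scores a knowledge-graph triple (s, r, o) as Re(⟨e_s, w_r, conj(e_o)⟩) over complex-valued embeddings, which lets it handle asymmetric relations. A minimal NumPy sketch of this scoring function (all embeddings and dimensions below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # toy embedding dimension (real models use hundreds)

# Invented complex-valued embeddings for a subject entity, a relation,
# and two candidate object entities.
e_s = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
w_r = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
e_o1 = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
e_o2 = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)

def complex_score(e_s, w_r, e_o):
    # ComplEx scores a triple (s, r, o) as Re(<e_s, w_r, conj(e_o)>);
    # higher scores mean the fact is judged more plausible.
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

# Rank the two candidate objects for the (s, r, ?) query.
scores = {"o1": complex_score(e_s, w_r, e_o1),
          "o2": complex_score(e_s, w_r, e_o2)}
print(max(scores, key=scores.get))
```

In our framework the pre-trained entity vectors themselves (rather than triple scores) are what gets concatenated with the sentence representation.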
For the ranking task (Table 13, on both the 2019 & 2020 datasets), we draw similar conclusions as for the classification task: the ALBERT language model (rows 10-12 for the 2019 dataset, rows 22 & 23 for the 2020 dataset) consistently outperforms all the other language models, while using the ComplEx embedding model (rows 3, 6, 9, 12 & 15 for the 2019 dataset, rows 17, 19, 21, 23 & 25 for the 2020 dataset) consistently outperforms all the other KG embedding models. Moreover, ALBERT + ComplEx + emb_concat (row 11 for the 2019 dataset, row 20 for the 2020 dataset) obtains the best performance among all the tested models. Thus, we conclude that ALBERT, ComplEx, and emb_concat can best identify and rank check-worthy sentences in a given speech or debate transcript.

We use a uniform [−1, . . . , −1] vector to represent any entity not having an embedding in the pre-trained KG embeddings.

From https://github.com/apepa/clef2019-factchecking-task1 for the 2019 results, and https://github.com/sshaar/clef2020-factchecking-task5 for the 2020 results.

We evaluate the classification task with only the CLEF'2019 CheckThat! Lab dataset, as our prior results found that it is not possible to derive meaningful results from the 2020 dataset, due to the small number of positive examples in the test set.
[ "https://github.com/apepa/clef2019-factchecking-task1", "https://github.com/sshaar/" ]
[ "Achievable information rates of ambient backscatter communications", "Achievable information rates of ambient backscatter communications" ]
[ "Donatella Darsena ", "Giacinto Gelli ", "Francesco Verde " ]
[]
[]
Ambient backscatter is an intriguing wireless communication paradigm that allows small devices to compute and communicate by using only the power they harvest from radio-frequency (RF) signals in the air. Ambient backscattering devices reflect existing RF signals emitted by legacy communications systems, such as digital TV broadcasting, cellular, or Wi-Fi ones, which would otherwise be treated as harmful sources of interference. This paper deals with the ultimate performance limits of ambient backscatter systems in broadband fading environments, by considering different amounts of network state information at the receivers. After introducing a detailed signal model of the relevant communication links, we study the influence of physical parameters on the capacity of both legacy and backscatter systems. We find that, under reasonable operating conditions, a legacy system employing multicarrier modulation can turn the RF interference arising from the backscatter process into a form of multipath diversity that can be suitably exploited to noticeably increase its performance. Moreover, we show that, even when employing simple single-carrier modulation techniques, the backscatter system can achieve significant data rates over relatively short distances, especially when the intended recipient of the backscatter signal is co-located with the legacy transmitter, i.e., they are on the same machine.

Index Terms: Ambient backscatter, ergodic and outage capacity, symbol variance and amplitude constraints, multicarrier systems, performance bounds.

The main principles of ambient backscatter were first introduced in [3], where a simple prototype is also developed, which harvests DTV energy to achieve D2D communications with rates of 1 kbps over a range of about 8 m outdoor and 5 m indoor. In [4], the same principles are exploited to allow a passive device or tag to connect directly to the Internet by leveraging an existing Wi-Fi infrastructure. In particular, in the scenario of [4], the tag can establish bidirectional communications with a Wi-Fi device by modulating the channel state information (CSI) or received signal strength indicator (RSSI) of the Wi-Fi channel (in the uplink) or by simple on-off modulation (in the downlink), achieving rates of 0.5 kbps in uplink over a range of 1 m and up to 20 kbps over 2.2 m in downlink. A significant improvement over this scheme is the BackFi system proposed in [5], wherein backscatter communications can achieve at least 1 Mbps over a 5 m range in uplink, by exploiting the signal cancellation principles of full-duplex systems [10]. In [6], [7], [8], the ambient backscatter approach is extended to systems where the backscatter receiver (called the reader) is equipped with multiple antennas; moreover, a detailed analysis of the system from a signal processing perspective is carried out, by assuming that the wireless channel obeys a frequency-flat block-fading model. Since the tag employs low-rate differentially-encoded on-off signaling, the reader can decode its information by employing simple noncoherent detection strategies. The performance analysis of the approach proposed in [6], [7], [8] is carried out in terms of bit-error rate (BER), both analytically and by Monte Carlo simulations.

Existing research on ambient backscatter has covered both experimental and theoretical aspects. However, to the best of our knowledge, an investigation of the ultimate performance limits of ambient backscatter, in terms of information-theoretic figures such as the ergodic or outage capacity, is still lacking. In this paper, we aim to fill this gap by evaluating the capacity (i.e., the maximum achievable transmission rate) of ambient backscatter communications systems. Our analysis assumes that the legacy system employs a multicarrier modulation, which is ubiquitous in modern communication systems, whereas the backscatter system transmits at lower bit-rates by adopting simple single-carrier techniques. We evaluate typical information-theoretic figures of merit for both the legacy and the backscatter systems, by assuming a symbol variance constraint for the legacy system and both symbol variance and amplitude constraints for the backscatter one. Our results allow one to assess the maximum data rate achievable by the backscatter system, and also show, somewhat surprisingly, that, since the backscatter transmitter acts as a relay towards the legacy receiver, the legacy system can even benefit from ambient backscatter, provided that some reasonable assumptions are met. In other words, ambient backscatter is not only a viable means of opportunistically capitalizing on the energy carried by RF signals, but also a way of turning EM interference into a form of diversity.
null
[ "https://arxiv.org/pdf/1605.04805v1.pdf" ]
1,537,551
1605.04805
738ba68a66eaa07b1d7dc849775e348f5cda6405
Achievable information rates of ambient backscatter communications
Donatella Darsena, Giacinto Gelli, Francesco Verde
16 May 2016

I. INTRODUCTION

Electromagnetic (EM) interference, also called radio-frequency (RF) interference, has been traditionally treated as a disturbance in the design of wireless communications systems. However, RF signals carry information as well as energy at the same time. Such a dual nature of EM interference is stimulating a significant interest in communications systems powered by harvested ambient energy.
In particular, ambient backscatter has emerged as a novel communication paradigm, where a small passive device can transmit its own data by backscattering the EM/RF wave deriving from existing or legacy communication systems, such as digital TV (DTV) broadcasting, cellular systems, or wireless local area networks (LANs), e.g., Wi-Fi. Unlike traditional backscatter systems, such as radio frequency identification (RFID) ones [1], [2], ambient backscatter does not require a dedicated reader, which allows for direct device-to-device (D2D) and even multi-hop communications. Recently, this new communication paradigm has been receiving much attention [3], [4], [5], [6], [7], [8], since it can be embedded into inexpensive objects in order to fulfil the ubiquitous and pervasive communication vision of the Internet-of-Things (IoT) [9].

The paper is organized as follows. The system model is introduced in Section II. General assumptions underlying the performance analysis are pointed out in Section III. The analytical performance analysis is carried out in Sections IV and V for the legacy and the backscatter system, respectively. Numerical results corroborating our analysis are reported in Section VI. Conclusions are drawn in Section VII.

II. AMBIENT BACKSCATTER SYSTEM MODEL

In this section, we introduce a model for ambient backscatter communications that harvest energy from legacy transmissions; our model builds on the previous works [3], [4], [5]. The considered wireless network is depicted in Fig. 1: it is composed of a legacy transmitter-receiver (LTx/LRx) pair and a backscatter transmitter (BTx) that wishes to transmit information-bearing symbols to an intended recipient (BRx). In the sequel, the devices LTx, BTx, LRx, and BRx will be labelled as nodes 1, 2, 3, and 4, respectively. Specifically, the LTx and LRx are active devices, i.e., they have internal power sources to modulate and demodulate, respectively, the relevant RF signals.
On the other hand, the BTx is a passive device, i.e., it does not include any active RF component, and communicates using only the power that it harvests from the RF signals transmitted by the LTx. Finally, the BRx may be either passive or might use typical active RF electronics to demodulate the signal backscattered by the BTx.

The LTx adopts a multicarrier modulation scheme with M subcarriers. The block of data to be transmitted by the LTx within the nth (n ∈ Z) frame of length T_s is denoted as s(n) ≜ [s^(0)(n), s^(1)(n), ..., s^(M−1)(n)]^T ∈ C^M, whose entries are independent and identically distributed (i.i.d.). It results that T_s = P T_c, with P ≜ M + L_cp, where L_cp denotes the cyclic prefix (CP) length and T_c the sampling period of the legacy system. The data block transmitted by the LTx can be compactly expressed [12] as u(n) = T_cp W_IDFT s(n), where T_cp ≜ [I_cp^T, I_M]^T ∈ R^{P×M}, with I_cp ∈ R^{L_cp×M} obtained from I_M by picking its last L_cp rows, and W_IDFT ∈ C^{M×M} is the unitary symmetric IDFT matrix [12]. The entries of u(n) are subject to D/A plus RF conversion for transmission over the wireless channel.

On the other hand, due to its power limitation, the BTx transmits in a narrower bandwidth with respect to the legacy system (higher data rates consume more power and energy). Specifically, the BTx has a Q-ary symbol sequence {b(n)}_{n∈Z} ∈ B ≜ {β_1, β_2, ..., β_Q} of i.i.d. zero-mean circularly symmetric complex symbols destined for the BRx, with variance σ_b^2 ≜ E[|b(n)|^2], for any n ∈ Z, and signaling interval T_s. Such a sequence is arranged in consecutive frames of B ∈ N symbols, whose duration is less than or equal to the coherence time T_coh ≜ B T_s of the channels. It is noteworthy that one symbol is transmitted by the BTx per each frame of the legacy system.

A. Signal backscattered by the BTx

Since the BTx is passive, it cannot initiate transmissions on its own. Once the LTx transmits the block u(n), the EM wave propagates toward the BTx.
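The LTx block construction above, u(n) = T_cp W_IDFT s(n) (a unitary IDFT followed by cyclic-prefix insertion), can be sketched in a few lines of pure Python; the symbol block and CP length below are arbitrary toy values, not parameters taken from the paper:

```python
import cmath

def idft(s):
    # Unitary inverse DFT (the action of W_IDFT) on the symbol block s(n).
    M = len(s)
    return [sum(s[m] * cmath.exp(2j * cmath.pi * m * p / M) for m in range(M)) / M ** 0.5
            for p in range(M)]

def multicarrier_block(s, L_cp):
    # u(n) = T_cp W_IDFT s(n): the last L_cp time-domain samples are
    # copied in front as a cyclic prefix, giving P = M + L_cp samples.
    x = idft(s)
    return x[-L_cp:] + x

s = [1 + 0j, -1 + 0j, 1j, -1j]       # toy block, M = 4 subcarriers
u = multicarrier_block(s, L_cp=2)    # P = 6 samples
assert len(u) == 6
assert all(abs(u[k] - u[len(s) + k]) < 1e-12 for k in range(2))   # CP property
```

Because the IDFT here is unitary (1/√M scaling, matching the unitary W_IDFT of the model), the symbol energy is preserved in the time domain.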
When the wave reaches the BTx, its antenna is excited and the RF power is converted to direct current (DC) power through a power harvester. This DC voltage is then able to power the control logic on the chip, whose task is to modulate the reflected EM wave.

Notation: matrices [vectors] are denoted with upper [lower] case boldface letters (e.g., A or a); the superscripts *, T, H, and −1 denote the conjugate, the transpose, the conjugate transpose, and the inverse of a matrix, respectively; log(·) is taken to the base 2; the operator E(·) denotes ensemble averaging; O_{m×n} ∈ R^{m×n} and I_m ∈ R^{m×m} denote the null and the identity matrices, respectively; the matrix A = diag(a_0, a_1, ..., a_{n−1}) is diagonal; F ∈ R^{n×n} and B ∈ R^{n×n} denote the Toeplitz "forward shift" and "backward shift" matrices [11], respectively, where the first column of F and the first row of B are given by [0, 1, 0, ..., 0]^T and [0, 1, 0, ..., 0], respectively; a circularly symmetric complex Gaussian random vector x ∈ C^n with mean µ ∈ C^n and covariance matrix K ∈ C^{n×n} is denoted as x ∼ CN(µ, K).

Regarding the 1 → 2 link, a frequency-selective and quasi-static channel model is assumed. Specifically, during an interval of duration T_coh, the channel impulse response spans L_12 ∈ N sampling periods T_c; hence, the resulting discrete-time channel c_12(ℓ) is a causal system of order L_12, i.e., c_12(ℓ) ≡ 0 for ℓ ∉ {0, 1, ..., L_12}. Moreover, the 1 → 2 link is characterized by the (integer) time offset (TO) θ_12 ∈ N, modeling the fact that the BTx does not know where the multicarrier blocks of the legacy system start. Finally, since the BTx simply remodulates the carrier of the LTx, we assume in the sequel that the carrier frequency offset (CFO) is negligible.
Under these assumptions and provided that L_12 + θ_12 ≤ P − 1, the baseband-equivalent block received by the BTx within the nth frame can be written as

r_2(n) = C_12^(0) u(n) + C_12^(1) u(n − 1)    (1)

where r_2(n) ≜ [r_2^(0)(n), r_2^(1)(n), ..., r_2^(P−1)(n)]^T ∈ C^P and

C_12^(0) ≜ Σ_{ℓ=0}^{L_12} c_12(ℓ) F^{ℓ+θ_12} ∈ C^{P×P}    (2)

C_12^(1) ≜ Σ_{ℓ=0}^{L_12} c_12(ℓ) B^{P−ℓ−θ_12} ∈ C^{P×P}    (3)

are Toeplitz lower- and upper-triangular matrices, respectively; we have neglected the noise introduced by the BTx [1], [13], since the latter employs only passive components and does not perform sophisticated signal processing operations. It is worth noticing that the last P − L_12 − θ_12 rows of the matrix C_12^(1) are identically zero, that is, the interblock interference (IBI) contribution is entirely contained in the first L_12 + θ_12 entries of the received vector r_2(n).

In our ambient backscatter framework, the BTx acts as a digital multilevel modulator, mapping each information symbol onto a set of Q waveforms by means of a proper variation of its chip impedance [14]. To elaborate upon this point, Fig. 2 reports the equivalent Thévenin circuit [15] of the BTx front-end, where the sine wave generator V_0 models the sinusoidal voltage induced by the power density of the incident EM field, Z_a = R_a + j X_a ∈ C is the antenna impedance, and Z_q^c = R_q^c + j X_q^c ∈ C are Q distinct values of the BTx chip impedance, for q ∈ Q ≜ {1, 2, ..., Q}. The maximum power available from the generator is given by P_max^c ≜ |V_0|^2/(8 R_a). At the reference plane denoted by the dashed line in Fig. 2, due to the impedance discontinuity, two power waves are generated: a (nonreflecting) forward wave propagating to the right and a (reflecting) backward wave giving rise to the backscattered field.
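Equations (1)-(3) state that the received block is a lower-triangular Toeplitz filtering of u(n) plus an IBI term from u(n − 1). Applying F^{ℓ+θ_12} and B^{P−ℓ−θ_12} directly as index shifts (instead of materializing the P × P matrices), a toy pure-Python example confirms that this split reproduces ordinary linear convolution, delayed by θ_12, across the block boundary; channel taps and sizes below are arbitrary illustrative values:

```python
import random
random.seed(0)

def shift(u, k, forward=True):
    # Apply F^k (shift down, zeros on top) or B^k (shift up, zeros at bottom).
    P = len(u)
    if forward:
        return [u[p - k] if 0 <= p - k else 0j for p in range(P)]
    return [u[p + k] if p + k < P else 0j for p in range(P)]

def received_block(u_cur, u_prev, c, theta):
    # r_2(n) = C_12^(0) u(n) + C_12^(1) u(n-1), eqs. (1)-(3).
    P = len(u_cur)
    r = [0j] * P
    for ell, tap in enumerate(c):
        a = shift(u_cur, ell + theta, forward=True)         # F^{ell+theta} u(n)
        bb = shift(u_prev, P - ell - theta, forward=False)  # B^{P-ell-theta} u(n-1)
        r = [ri + tap * (ai + bi) for ri, ai, bi in zip(r, a, bb)]
    return r

P, theta = 6, 1
c = [0.7, 0.2 + 0.1j]                 # toy channel, L_12 = 1
u_prev = [complex(random.random(), random.random()) for _ in range(P)]
u_cur = [complex(random.random(), random.random()) for _ in range(P)]

stream = u_prev + u_cur               # concatenated transmission
direct = [sum(tap * stream[P + p - ell - theta] for ell, tap in enumerate(c))
          for p in range(P)]          # plain convolution with delay theta
r = received_block(u_cur, u_prev, c, theta)
assert all(abs(x - y) < 1e-12 for x, y in zip(r, direct))
```

As stated in the text, the IBI part is nonzero only in the first L_12 + θ_12 = 2 entries of the block, reflecting the triangular structure of C_12^(1).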
When the switch S_q is closed, i.e., the chip impedance of the BTx takes on the value Z_q^c, the average power harvested by the BTx is given [16] by

P_q^c = (1 − |Γ_q|^2) P_max^c    (q ∈ Q)

where

Γ_q = [(Z_a)^* − Z_q^c] / (Z_a + Z_q^c)    (4)

is the power wave reflection coefficient Γ_q ∈ C. [Notes: the fractional TO is incorporated as part of {c_12(ℓ)}_{ℓ=0}^{L_12}. A CFO may occur as a result of the Doppler effect from a mobile BTx, which is an unimportant phenomenon in backscatter systems [3], [4], [5]. In general, the received block within the nth frame is affected not only by the IBI of the previous frame n − 1 but also by the IBI of the (n − 2)th frame; the assumption L_ik + θ_ik ≤ P − 1 ensures that the sum of the TO and the channel order turns out to be within one frame, such that the nth received block is impaired only by the IBI of the previous frame.]

The squared magnitude 0 ≤ |Γ_q|^2 ≤ 1 of the power wave reflection coefficient is referred to as the power reflection coefficient [16]: it measures the fraction of P_max^c that is not delivered to the chip of the BTx. It is worth noticing that, if (Z_a)^* = Z_q^c (impedance matching condition), then Γ_q = 0: in this case, the tag achieves maximum average power harvesting P_max^c and, in theory, there is no backscattered field. Hence, an impedance mismatch (Z_a)^* ≠ Z_q^c is necessary to reflect part of the energy from the BTx antenna back to the intended recipient BRx.

The symbol sequence {b(n)}_{n∈Z} can be embedded in the backscattered signal by carefully choosing the chip impedances Z_1^c, Z_2^c, ..., Z_Q^c. Each chip impedance in Fig. 2 corresponds to a point of the symbol constellation B. More precisely, to produce impedance values realizable with passive components, all the power wave reflection coefficients Γ_1, Γ_2, ..., Γ_Q are confined in the complex plane within a circle centered at the origin with radius smaller than or equal to one.
These coefficients are then scaled by a constant 0 ≤ α ≤ 1 such that

Γ_q = α β_q    (q ∈ Q)    (5)

with |β_q| ≤ 1. Eq. (5) establishes a one-to-one mapping between the information symbols of the BTx and the power wave reflection coefficients of its chip. Such a mapping is generally referred to as backscatter or load modulation [14]. The choice of α governs the harvesting-performance tradeoff of the backscatter communication process. Indeed, values of α closer to one allow the BTx to reflect increasing amounts of the incident field back to the BRx, resulting in greater backscatter signal strengths (i.e., for a target symbol error probability at the BRx, larger communication ranges). On the other hand, values of α much smaller than one allow a larger part of the incident field to be absorbed by the RF-to-DC conversion circuits of the BTx, hence improving power conversion (i.e., P_q^c) at the expense of backscatter signal strength. We note that α = 0 accounts for the case when the backscatter system is in sleep mode and, hence, only the legacy transmission is active. Once α and B have been chosen in accordance with certain criteria [1] and, thus, the power wave reflection coefficients are identified through (5), the chip impedances Z_1^c, Z_2^c, ..., Z_Q^c corresponding to the designed signal constellation can be obtained from (4) as follows:

Z_q^c = [(Z_a)^* − Z_a Γ_q] / (1 + Γ_q)    (q ∈ Q)    (6)

where Z_a is a given parameter. In practice, some constraints may be imposed on the chip impedances (6): for instance, to use high-quality electronic components and/or reduce the physical size of the BTx, it might be required to use resistors and capacitors, hence eliminating inductors [14].
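Relations (4)-(6) are algebraic inverses of one another, and (5) ties the constellation to the harvested power: for unit-modulus symbols, every constellation point leaves the same fraction 1 − α² of P_max^c available to the harvester. A quick numerical check, with arbitrary illustrative values (Z_a = 50 + 10j, α = 0.5, a QPSK-like alphabet; none of these are from the paper):

```python
def chip_impedance(Za, gamma):
    # Eq. (6): chip impedance realizing reflection coefficient gamma.
    return (Za.conjugate() - Za * gamma) / (1 + gamma)

def reflection_coeff(Za, Zc):
    # Eq. (4): power wave reflection coefficient.
    return (Za.conjugate() - Zc) / (Za + Zc)

Za = 50 + 10j                       # antenna impedance (illustrative)
alpha = 0.5                         # harvesting/backscattering tradeoff
constellation = [1, -1, 1j, -1j]    # Q = 4 unit-modulus symbols beta_q

for beta in constellation:
    gamma = alpha * beta                     # eq. (5): Gamma_q = alpha * beta_q
    Zc = chip_impedance(Za, gamma)
    assert abs(reflection_coeff(Za, Zc) - gamma) < 1e-12   # (4) inverts (6)
    harvested_fraction = 1 - abs(gamma) ** 2               # P_q^c / P_max^c
    assert abs(harvested_fraction - 0.75) < 1e-12
```

Pushing α towards 1 strengthens the backscattered signal at the cost of the harvested fraction 1 − α², which is exactly the tradeoff discussed above.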
According to the antenna scatterer theorem [18], the EM field backscattered from the antenna of the BTx can be divided [18] into load-dependent (or antenna mode) scattering and load-independent (or structural mode) scattering: the former component can be associated with re-radiated power and depends on the chip impedances of the BTx, whereas the latter one can be interpreted as scattering from an open-circuited antenna. Therefore, with reference to antenna mode scattering and accounting for (5), the pth baseband-equivalent T_c-spaced sample backscattered by the BTx during the nth frame of the legacy system assumes the expression x_2^(p)(n) = Γ(n) r_2^(p)(n) (p ∈ P), where Γ(n) ≜ α b(n) is a discrete random variable assuming the values Γ_1, Γ_2, ..., Γ_Q, whereas b(n) ∈ B is the symbol transmitted by the BTx during the nth frame. The corresponding block model reads as

x_2(n) = Γ(n) r_2(n) = α b(n) r_2(n)    (7)

where x_2(n) ≜ [x_2^(0)(n), x_2^(1)(n), ..., x_2^(P−1)(n)]^T ∈ C^P and r_2(n) is given by (1).

B. Signal received by the LRx

With reference to the 1 → 3 and 2 → 3 links, we maintain the same assumptions previously made for the 1 → 2 link: basically, for i ∈ {1, 2}, within the coherence time T_coh, the resulting discrete-time channel c_i3(ℓ) is a causal system of order L_i3, i.e., c_i3(ℓ) ≡ 0 for ℓ ∉ {0, 1, ..., L_i3}, and θ_i3 ∈ N is the corresponding TO. Moreover, the 1 → 3 and 2 → 3 links are assumed to have the same CFO [19].
Provided that L_13 + θ_13 ≤ P − 1 and L_23 + θ_23 ≤ P − 1 (so that each received block is impaired only by the IBI of the previous frame), accounting for (1) and (7), after CFO compensation, the baseband-equivalent vector received by the LRx within the nth frame of the legacy system can be expressed as

r_3(n) = C_13^(0) u(n) + C_13^(1) u(n − 1) + C_23^(0) x_2(n) + C_23^(1) x_2(n − 1) + v_3(n)
       = [C_13^(0) + α b(n) C_23^(0) C_12^(0)] u(n)
       + [C_13^(1) + α b(n) C_23^(0) C_12^(1) + α b(n − 1) C_23^(1) C_12^(0)] u(n − 1) + v_3(n)    (8)

where {C_13^(0), C_13^(1)} and {C_23^(0), C_23^(1)} can be obtained from (2) and (3) by replacing {L_12, c_12(ℓ), θ_12} with {L_13, c_13(ℓ), θ_13} and {L_23, c_23(ℓ), θ_23}, respectively, and v_3(n) ∈ C^P accounts for the structural mode scattering, which is independent of the BTx chip impedances, as well as for thermal noise. We have also observed that C_23^(1) C_12^(1) = O_{P×P}, provided that

L_12 + L_23 + θ_12 + θ_23 ≤ P − 1.    (9)

The set of lower (upper) triangular Toeplitz matrices possesses an eminent algebraic structure: indeed, such a set is an algebra [11]. In particular, the product of any two lower (upper) triangular Toeplitz matrices is a lower (upper) triangular Toeplitz matrix, too. Indeed, it is directly verified that, if (9) holds and the CP length obeys

L_cp ≥ max(L_13 + θ_13, L_12 + L_23 + θ_12 + θ_23),    (10)

then the IBI contribution in (8) can be completely discarded by dropping the first L_cp components of r_3(n), since it is verified by direct inspection that: (i) only the first L_12 + L_23 + θ_12 + θ_23 rows of the matrices multiplying u(n − 1) in (8) may be nonzero; and (ii) the last P − L_13 − θ_13 rows of C_13^(1) are identically zero. Therefore, if (10) is fulfilled, after discarding the CP and performing the M-point discrete Fourier transform (DFT), the resulting frequency-domain data block r̃_3(n) ∈ C^M is given by

r̃_3(n) = Ψ_3 s(n) + ṽ_3(n)    (11)

where Ψ_3 ≜ diag[Ψ_3(0), Ψ_3(1), ..., Ψ_3(M − 1)], whose diagonal entries are given by

Ψ_3(m) ≜ Ψ_13(m) + α b(n) Ψ_12(m) Ψ_23(m)    (12)

for m ∈ M, with

Ψ_ik(m) ≜ e^{−j (2π/M) θ_ik m} Σ_{ℓ=0}^{L_ik} c_ik(ℓ) e^{−j (2π/M) ℓ m}    (13)

and ṽ_3(n) ∈ C^M is obtained from v_3(n) by discarding its first L_cp entries and performing the M-point DFT.
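Equations (11)-(13) can be verified end-to-end with a small pure-Python simulation. Below, toy channel taps, offsets, and symbols (all arbitrary, chosen only so that the CP is at least as long as both the direct-path and composite backscatter-path delay spreads) are propagated in the time domain through the 1 → 3 and 1 → 2 → 3 paths, the CP is discarded, and each DFT output is compared against Ψ_3(m) s^(m)(n):

```python
import cmath, random
random.seed(1)

def dft(x, inverse=False):
    # Unitary M-point (I)DFT.
    M = len(x)
    sgn = 1 if inverse else -1
    return [sum(x[k] * cmath.exp(sgn * 2j * cmath.pi * k * m / M) for k in range(M)) / M ** 0.5
            for m in range(M)]

def channel(u_cur, u_prev, c, theta):
    # Time-domain block channel C^(0) u(n) + C^(1) u(n-1), eqs. (1)-(3).
    P = len(u_cur)
    stream = u_prev + u_cur
    return [sum(tap * stream[P + p - ell - theta] for ell, tap in enumerate(c))
            for p in range(P)]

def freq_resp(c, theta, M):
    # Eq. (13): Psi_ik(m).
    return [cmath.exp(-2j * cmath.pi * theta * m / M)
            * sum(tap * cmath.exp(-2j * cmath.pi * ell * m / M) for ell, tap in enumerate(c))
            for m in range(M)]

M, Lcp, alpha = 4, 3, 0.3
c12, t12 = [0.9], 1               # LTx -> BTx
c13, t13 = [0.8, 0.3], 0          # LTx -> LRx
c23, t23 = [0.5, 0.2], 0          # BTx -> LRx
b = {0: 1j, 1: -1}                # BTx symbols b(n-1), b(n)

s, u = {}, {}
for k in (-1, 0, 1):
    s[k] = [random.choice([1.0, -1.0]) + 0j for _ in range(M)]
    body = dft(s[k], inverse=True)
    u[k] = body[-Lcp:] + body                                 # IDFT + CP
r2 = {k: channel(u[k], u[k - 1], c12, t12) for k in (0, 1)}   # eq. (1)
x2 = {k: [alpha * b[k] * v for v in r2[k]] for k in (0, 1)}   # eq. (7)

r3 = [a + c for a, c in zip(channel(u[1], u[0], c13, t13),
                            channel(x2[1], x2[0], c23, t23))]  # eq. (8), no noise
r3_freq = dft(r3[Lcp:])           # discard CP, M-point DFT

P12, P13, P23 = freq_resp(c12, t12, M), freq_resp(c13, t13, M), freq_resp(c23, t23, M)
for m in range(M):
    psi3 = P13[m] + alpha * b[1] * P12[m] * P23[m]             # eq. (12)
    assert abs(r3_freq[m] - psi3 * s[1][m]) < 1e-9             # eq. (11)
```

Note how the backscatter path simply adds the composite response α b(n) Ψ_12(m) Ψ_23(m) to each subcarrier, i.e., extra multipath rather than unstructured interference.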
Remark 1: It is noteworthy from (8)-(11) that the signal backscattered by the BTx may create additional paths from the LTx to the LRx, which increases multipath propagation on the legacy channel. In particular, if L_12 + L_23 + θ_12 + θ_23 > L_13 + θ_13, in accordance with (10), such additional multipath requires a corresponding increase of the CP length in order to avoid both IBI and intercarrier interference (ICI) after CP removal, which may worsen the performance of the legacy system. In summary, the price to pay for allowing ambient backscatter is an oversizing of the CP length, thus leading to an inherent reduction of the transmission data rate of the legacy system. However, such a loss turns out to be negligible if the number M of subcarriers is significantly greater than L_cp. Most important, we show in Section IV that, if the legacy system is designed to fulfil (10), it might even achieve a performance gain.

Remark 2: We note that assumption (10) requires only upper bounds (rather than the exact knowledge) on the channel orders and TOs. This is a reasonable assumption in the considered scenario. Indeed, in general, depending on the transmitted signal parameters (carrier frequency and bandwidth) and environment (indoor or outdoor), the maximum channel multipath spread is known. For legacy systems, particular synchronization policies are typically adopted to drastically reduce the asynchronisms [20], whereas, for ambient backscatter communications, the distances among the LTx, BTx, and BRx are very small. Therefore, the TOs are confined to a small uncertainty interval, whose support can typically be predicted.

C. Signal received by the BRx

Concerning the 1 → 4 and 2 → 4 links, we maintain the same assumptions previously made for the 1 → 2, 1 → 3, and 2 → 3 links: in summary, for i ∈ {1, 2}, within the coherence time T_coh, the resulting discrete-time channel c_i4(ℓ) is a causal system of order L_i4, i.e., c_i4(ℓ) ≡ 0 for ℓ ∉ {0, 1, . . .
, L_i4}, and θ_i4 ∈ N is the corresponding TO. Similarly to Subsection II-B, we assume that the 1 → 4 and 2 → 4 links have the same CFO, which will be denoted as ν ∈ (−1/2, 1/2) in the sequel (it is normalized to the subcarrier spacing 1/T_c). Under the assumption that L_14 + θ_14 ≤ P − 1 and L_24 + θ_24 ≤ P − 1, the baseband-equivalent block received by the BRx within the nth frame of the legacy system can be expressed as

r_4(n) = e^{j (2π/M) ν n P} Σ_ν [C_14^(0) u(n) + C_14^(1) u(n − 1) + C_24^(0) x_2(n) + C_24^(1) x_2(n − 1)] + v_4(n)
       = α e^{j (2π/M) ν n P} Σ_ν [C_24^(0) C_12^(0) u(n) + C_24^(0) C_12^(1) u(n − 1)] b(n)
       + α e^{j (2π/M) ν n P} Σ_ν C_24^(1) C_12^(0) u(n − 1) b(n − 1)
       + e^{j (2π/M) ν n P} Σ_ν [C_14^(0) u(n) + C_14^(1) u(n − 1)] + v_4(n)    (14)

where {C_14^(0), C_14^(1)} and {C_24^(0), C_24^(1)} can be obtained from (2) and (3) by replacing {L_12, c_12(ℓ), θ_12} with {L_14, c_14(ℓ), θ_14} and {L_24, c_24(ℓ), θ_24}, respectively, we have defined the diagonal matrix Σ_ν ≜ diag[1, e^{j (2π/M) ν}, ..., e^{j (2π/M) ν (P−1)}] ∈ C^{P×P}, and v_4(n) ∈ C^P accounts for both the structural mode scattering and thermal noise.

Remark 3: It is important to notice from (14) that the BRx experiences frequency-selective fast fading, since: (i) the received signal is corrupted by the intersymbol interference (ISI) of the previous symbol b(n − 1); and (ii) the channel tap seen by the BRx varies with time from sampling period to sampling period, due to its dependence on the data {u^(p)(n)}_{p=0}^{P−1} transmitted by the LTx, and such a variation is P times faster than the symbol rate 1/T_s of the backscatter system. Interestingly, by observing that only the first L_24 + θ_24 rows of C_24^(1) and the first L_14 + θ_14 rows of C_14^(1) are nonzero, the BRx can resort to a simple detection technique to completely remove its own ISI and partially mitigate the interference generated by the legacy transmission.
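This ISI-removal idea (dropping the first max(L_14 + θ_14, L_24 + θ_24) samples of r_4(n), formalized in (15)-(18)) is easy to verify numerically. In the pure-Python sketch below (arbitrary toy channels, ν = 0, structural scattering and noise omitted), the same frame is generated with two different values of b(n − 1): the discarded head differs, while the retained samples coincide, i.e., the ISI from b(n − 1) is confined to the head:

```python
import random
random.seed(2)

def channel(u_cur, u_prev, c, theta):
    # r = C^(0) u(n) + C^(1) u(n-1) with taps c and time offset theta.
    P = len(u_cur)
    stream = u_prev + u_cur
    return [sum(tap * stream[P + p - ell - theta] for ell, tap in enumerate(c))
            for p in range(P)]

P, alpha = 8, 0.4
c12, t12 = [0.9, 0.1], 1          # LTx -> BTx link
c14, t14 = [0.6, 0.2], 0          # LTx -> BRx link
c24, t24 = [0.5], 1               # BTx -> BRx link
Lb = max(len(c14) - 1 + t14, len(c24) - 1 + t24)   # L_b of eq. (15)

gauss = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))
u_m2 = [gauss() for _ in range(P)]   # u(n-2)
u_m1 = [gauss() for _ in range(P)]   # u(n-1)
u_0 = [gauss() for _ in range(P)]    # u(n)

def r4(b_cur, b_prev):
    # Eq. (14) with nu = 0 and v_4(n) = 0.
    x2_prev = [alpha * b_prev * v for v in channel(u_m1, u_m2, c12, t12)]
    x2_cur = [alpha * b_cur * v for v in channel(u_0, u_m1, c12, t12)]
    legacy = channel(u_0, u_m1, c14, t14)
    back = channel(x2_cur, x2_prev, c24, t24)
    return [a + c for a, c in zip(legacy, back)]

ra, rb = r4(1, 1), r4(1, -1)         # same b(n), different b(n-1)
assert any(abs(x - y) > 1e-9 for x, y in zip(ra[:Lb], rb[:Lb]))   # head carries ISI
assert all(abs(x - y) < 1e-12 for x, y in zip(ra[Lb:], rb[Lb:]))  # R_b r_4 is ISI-free
```

Multiplying by R_b is precisely this "keep the tail" operation; it costs N = P − L_b useful samples per frame but turns the backscatter link into an ISI-free channel in b(n).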
More specifically, this can be obtained by dropping the first

L_b ≥ max(L_14 + θ_14, L_24 + θ_24)    (15)

components of r_4(n). This operation is accomplished by defining the matrix R_b ≜ [O_{N×L_b}, I_N] ∈ R^{N×P}, with N ≜ P − L_b > 0, and forming at the receiver the product R_b r_4(n). So doing, one has

R_b r_4(n) = α c_4 b(n) + d_4(n)    (16)

with

c_4 ≜ e^{j (2π/M) ν n P} R_b Σ_ν [C_24^(0) C_12^(0) u(n) + C_24^(0) C_12^(1) u(n − 1)] ∈ C^N    (17)

d_4(n) ≜ e^{j (2π/M) ν n P} R_b Σ_ν C_14^(0) u(n) + R_b v_4(n) ∈ C^N    (18)

where it results that R_b Σ_ν C_24^(1) C_12^(0) = O_{N×P} and R_b Σ_ν C_14^(1) = O_{N×P}. To fulfil (15), some a priori knowledge is required at the BRx, which can be acquired in practice (see Remark 2): as explained in Section V, such knowledge at the BRx depends on whether the BRx and LTx are spatially-separated nodes [3] or co-located [4], [5], i.e., on the same machine.

III. GENERAL ASSUMPTIONS FOR THE ANALYTICAL PERFORMANCE ANALYSIS

The goal of the forthcoming Sections IV and V is twofold. First, we aim at showing in Section IV what the influence of the backscatter communication on the achievable rates of the legacy system is, by assuming that the CP is long enough, i.e., that inequality (10) is fulfilled. Second, under assumption (15), we highlight in Section V what the ultimate rates of the backscatter communication are, by considering either the case when the nodes BRx and LTx are co-located [4], [5] or the situation in which they are spatially-separated nodes [3]. General assumptions are reported in the sequel.

For i ∈ {1, 2} and k ∈ {2, 3, 4}, with i ≠ k, the channel samples c_ik(0), c_ik(1), ..., c_ik(L_ik) (encompassing the physical channel as well as the transmit/receive filters) are modeled as i.i.d.
zero-mean circularly symmetric complex Gaussian random coefficients (Rayleigh fading model), which are constant within the coherence time T_coh, but are allowed to vary independently in different coherence intervals; the variance of the i → k link is E[|c_{ik}(ℓ)|²] ≜ σ²_{ik}/(L_{ik} + 1), and c_{i1k1}(ℓ) is statistically independent of c_{i2k2}(ℓ) for i_1 ≠ i_2 or k_1 ≠ k_2. Since c_{ik}(ℓ) is a circularly symmetric complex Gaussian random variable by assumption, then c_{ik}(ℓ) and c_{ik}(ℓ) e^{−j(2π/M)(ℓ+θ_{ik})m} have the same probability distribution [21], i.e., c_{ik}(ℓ) e^{−j(2π/M)(ℓ+θ_{ik})m} ∼ CN[0, σ²_{ik}/(L_{ik} + 1)], for any ℓ, m, and n. Consequently, one has Ψ_{ik}(m) ∼ CN(0, σ²_{ik}). It is seen from (13) that, even if the time-domain channel taps {c_{ik}(ℓ)}_{ℓ=0}^{L_{ik}} are assumed to be uncorrelated, the corresponding DFT samples Ψ_{ik}(m_1) and Ψ_{ik}(m_2) turn out to be correlated, for m_1 ≠ m_2 ∈ M. For k ∈ {3, 4}, we assume that v_k(n) ∼ CN(0_P, σ²_{vk} I_P) with E[v_k(n_1) v^H_k(n_2)] = O_{P×P}, for n_1 ≠ n_2 ∈ Z. Finally, channel coefficients, information-bearing symbols, and noise samples are all modeled as statistically independent random variables.

IV. CAPACITY ANALYSIS OF THE LEGACY SYSTEM

Since the detection process at the LRx is carried out on a frame-by-frame basis, we omit the dependence on the frame index n hereinafter. Under the assumption that the realization Ξ_3 of Ψ_3 is known at the LRx (but not at the LTx), the channel output of (11) is the pair (r_3, Ψ_3). Therefore, the (coherent) ergodic (or Shannon) capacity of (11) is defined as (see, e.g., [22])

C_3 ≜ sup_{f(s) ∈ I_s} I(s; r_3, Ψ_3)/M  (in b/s/Hz)    (19)

where f(s) is the probability density function (pdf) of s, I_s is the set of admissible input distributions having the variance constraint E(‖s‖²) = M σ²_s, and I(s; r_3, Ψ_3) denotes the mutual information [23], [24] between s and (r_3, Ψ_3).
The ergodic capacity can be achieved if the length of the codebook is long enough to reflect the ergodic nature of fading [25] (i.e., the duration of each transmitted codeword is much greater than the channel coherence time). By using the chain rule for mutual information [23], [24] and observing that s and Ψ_3 are statistically independent, it results that I(s; r_3, Ψ_3) = I(s; Ψ_3) + I(s; r_3 | Ψ_3) = I(s; r_3 | Ψ_3) = E_{Ψ_3}[I(s; r_3 | Ψ_3 = Ξ_3)], where I(s; r_3 | Ψ_3) is the mutual information between s and r_3, given Ψ_3. It is shown in [22] that, given Ψ_3 = Ξ_3, the input distribution that maximizes I(s; r_3 | Ψ_3 = Ξ_3) is s ∼ CN(0_M, σ²_s I_M) and the corresponding maximal mutual information is given by

I_max(s; r_3 | Ψ_3 = Ξ_3) = log det[ I_M + (σ²_s/σ²_{v3}) Ξ_3 Ξ^H_3 ].    (20)

Consequently, one has

C_3 = (1/M) E{ log det[ I_M + (σ²_s/σ²_{v3}) Ψ_3 Ψ^H_3 ] } = (1/M) Σ_{m=0}^{M−1} E{ log[ 1 + (σ²_s/σ²_{v3}) |Ψ_3(m)|² ] }    (21)

where Ψ_3(m) has been defined in (12). A first step towards the analytical computation of C_3 consists of observing that, conditioned on the product b Ψ_{12}(m), |Ψ_3(m)|² turns out to be exponentially distributed with mean σ²_{13} + α² σ²_{23} |b|² |Ψ_{12}(m)|² (m ∈ M). Thus, by applying the conditional expectation rule [31], one obtains

C_3 = −(log e/M) Σ_{m=0}^{M−1} E{ e^{1/Υ_3(m)} Ei[−1/Υ_3(m)] }    (22)

where Ei(x) ≜ ∫_{−∞}^{x} (e^u/u) du denotes the exponential integral function, for x < 0, and

Υ_3(m) ≜ Γ_{13} [ 1 + (α² σ²_{23}/σ²_{13}) |b|² |Ψ_{12}(m)|² ]    (23)

with Γ_{13} ≜ (σ²_{13} σ²_s)/σ²_{v3} representing the average signal-to-noise ratio (SNR) over the 1 → 3 link. When the backscatter system is inactive, i.e., α = 0, in accordance with [26], the ergodic capacity of the legacy system is given by

C_3 |_{α=0} = −e^{1/Γ_{13}} Ei(−1/Γ_{13}) log e.    (24)

A first result can be obtained by comparing (22) and (24).
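The closed form (24) can be checked numerically. The sketch below (illustrative parameter values; `exp_integral_E1` implements the standard power series for E1(x) = −Ei(−x)) compares −e^{1/Γ13} Ei(−1/Γ13) log2 e with a direct Monte Carlo average of log2(1 + Γ13 |h|²) over Rayleigh fading:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def exp_integral_E1(x):
    """E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^(n+1) x^n / (n * n!), x > 0."""
    s = -EULER_GAMMA - math.log(x)
    for n in range(1, 80):
        s += (-1) ** (n + 1) * x ** n / (n * math.factorial(n))
    return s

def capacity_closed_form(gamma13):
    # eq. (24), using Ei(-x) = -E1(x); result in b/s/Hz
    return math.exp(1.0 / gamma13) * exp_integral_E1(1.0 / gamma13) * math.log2(math.e)

def capacity_monte_carlo(gamma13, runs=200000, seed=7):
    # direct average of log2(1 + Gamma13 * |h|^2), |h|^2 ~ Exp(1)
    rng = random.Random(seed)
    return sum(math.log2(1.0 + gamma13 * rng.expovariate(1.0))
               for _ in range(runs)) / runs
```

At Γ13 = 100 (i.e., 20 dB, the SNR used in the simulations of Section VI with d13 = 1) the two evaluations agree to within Monte Carlo error.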
Indeed, since Υ_3(m) ≥ Γ_{13} for any realization of |b|² and |Ψ_{12}(m)|² and, moreover, −e^{1/x} Ei(−1/x) is a monotonically increasing function of x ≥ 0, it follows that C_3 ≥ C_3 |_{α=0}.

Remark 4: If the constraint (10) on the CP length is satisfied, then backscatter communications can even increase the ergodic capacity of the legacy system. Strictly speaking, the interference generated by the backscatter communication is turned into a form of diversity for the legacy system.

To assess the performance gain ∆C_3 ≜ C_3 − C_3 |_{α=0}, we use asymptotic expressions for C_3 by considering both low- and high-SNR regimes. With this goal in mind, we assume that σ²_{ik} = d^{−η}_{ik}, where d_{ik} is the distance between nodes i and k, and η denotes the path-loss exponent. Specifically, since −e^{1/x} Ei(−1/x) → x as x → 0 [26], in the low-SNR regime, i.e., SNR_L ≜ σ²_s/σ²_{v3} → 0, one has

C_3 |_{α=0} → Γ_{13} log e    (25)

and

C_3 → (log e/M) Σ_{m=0}^{M−1} E[Υ_3(m)] = Γ_{13} [ 1 + α² σ²_b σ²_{12} σ²_{23}/σ²_{13} ] log e,  for SNR_L → 0    (26)

which leads to

∆C_3 → α² σ²_b (σ²_s/σ²_{v3}) [ 1/(d_{12} d_{23}) ]^η log e,  for SNR_L → 0    (27)

where, according to (5), it results that σ²_b ≜ E(|b|²) ≤ 1. On the other hand, by using the fact that −e^{1/x} Ei(−1/x) → log(1 + x) − γ as x → +∞ [26], where γ ≜ lim_{i→∞} [ Σ_{k=1}^{i} k^{−1} − log(i) ] ≈ 0.57721 is the Euler–Mascheroni constant, we have that, in the high-SNR regime, i.e., when SNR_L → +∞,

C_3 |_{α=0} → [log(1 + Γ_{13}) − γ] log e    (28)

and, moreover,

C_3 → (log e/M) Σ_{m=0}^{M−1} E{ log[1 + Υ_3(m)] − γ },  for SNR_L → +∞.    (29)

To analytically compute the ensemble average in (29), we assume that the backscatter system employs a constant-modulus constellation, e.g., Q-ary phase-shift keying (PSK), with average energy σ²_b = 1.
Henceforth, |b| = 1 and, by observing that |Ψ_{12}(m)|² is exponentially distributed with mean σ²_{12}, after some calculations, one has

∆C_3 → −e^{1/Ω_3} Ei(−1/Ω_3) log_2 e,  for SNR_L → +∞    (30)

with

Ω_3 ≜ α² σ²_{23} σ²_{12}/σ²_{13} = α² [ d_{13}/(d_{12} d_{23}) ]^η    (31)

where we observed that Γ_{13}/(1 + Γ_{13}) → 1 as SNR_L → +∞. Two remarks are now in order.

Remark 5: The capacity gain ∆C_3 increases with α², that is, the greater the backscatter signal strength, the greater the capacity gain of the legacy system. Such a result directly comes from the fact that the backscatter device can be regarded as a non-regenerative relay for the legacy system.

Remark 6: With reference to Fig. 1, let the angle φ between nodes 2 and 3 and the distance d_{13} between the LTx and the LRx be fixed. As a consequence of Carnot's cosine law, d_{23} = (d²_{12} + d²_{13} − 2 d_{12} d_{13} cos φ)^{1/2}, which can be substituted in (27) and (31). By using standard calculus concepts, it can be verified that, in both low- and high-SNR regimes, ∆C_3 is not necessarily a monotonic function of the distance d_{12} between the LTx and the BTx. Indeed, it results that ∆C_3 is a strictly decreasing function of d_{12}/d_{13} when 9 cos² φ − 8 < 0, i.e., the angle φ belongs to the set

A ≜ { φ ∈ [0, 2π) : arccos(2√2/3) < φ < π − arccos(2√2/3)  or  π + arccos(2√2/3) < φ < 2π − arccos(2√2/3) }    (32)

i.e., the capacity gain decreases as the BTx moves away from the LTx. On the other hand, when 9 cos² φ − 8 ≥ 0, i.e., φ ∉ A, the capacity gain ∆C_3 monotonically increases for d_min(φ) ≤ d_{12}/d_{13} ≤ d_max(φ), with

d_min(φ) ≜ max{ 0, [ 3 cos φ − (9 cos² φ − 8)^{1/2} ]/4 }    (33)

d_max(φ) ≜ max{ 0, [ 3 cos φ + (9 cos² φ − 8)^{1/2} ]/4 }    (34)

otherwise, it monotonically decreases. For instance, if LTx, BTx, and LRx lie on the same line, i.e., φ = 0, the function ∆C_3 monotonically decreases for 0 < d_{12}/d_{13} ≤ 1/2 and d_{12}/d_{13} > 1, while it increases when 1/2 < d_{12}/d_{13} < 1.
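The monotonicity pattern stated in Remark 6 for φ = 0 can be verified directly from (27) and Carnot's cosine law; the sketch below (illustrative values, constant factors dropped) also checks that (33)-(34) give the extrema d_min(0) = 1/2 and d_max(0) = 1:

```python
import math

def gain_shape(d12, d13=1.0, phi=0.0, eta=3):
    # eq. (27) up to constant factors: gain ~ (1/(d12*d23))^eta, with
    # d23 from Carnot's cosine law
    d23 = math.sqrt(d12 ** 2 + d13 ** 2 - 2.0 * d12 * d13 * math.cos(phi))
    return (1.0 / (d12 * d23)) ** eta

# phi = 0: decreasing on (0, 1/2], increasing on (1/2, 1), decreasing beyond 1
assert gain_shape(0.2) > gain_shape(0.5)
assert gain_shape(0.6) < gain_shape(0.9)
assert gain_shape(1.2) > gain_shape(1.5)

def d_extrema(phi):
    # eqs. (33)-(34), defined for 9 cos^2(phi) - 8 >= 0 (phi outside A)
    r = math.sqrt(9 * math.cos(phi) ** 2 - 8)
    return (max(0.0, (3 * math.cos(phi) - r) / 4),
            max(0.0, (3 * math.cos(phi) + r) / 4))

assert d_extrema(0.0) == (0.5, 1.0)   # matches the phi = 0 example
```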
In this case, the capacity gain of the legacy system increases as the BTx gets closer and closer to either the LTx or the LRx.

If no significant channel variability occurs during the whole legacy transmission (i.e., the transmission duration of the codeword is comparable to the channel coherence time), a capacity in the ergodic sense does not exist. In this case, the concept of capacity versus outage has to be used [25], [26]. Assume that codewords extend over a single legacy frame and let the LTx encode data at a rate of R_s b/s/Hz; the outage probability of the legacy system is defined as

P_out,3 ≜ P{ (1/M) Σ_{m=0}^{M−1} log[ 1 + (σ²_s/σ²_{v3}) |Ψ_3(m)|² ] < R_s }.    (35)

However, for the problem at hand, P_out,3 is hard to compute analytically and does not lead to easily interpretable results. Therefore, we resort to the numerical simulations presented in Section VI to show the influence of the main system parameters on the outage probability of the legacy communication.

V. CAPACITY ANALYSIS OF THE BACKSCATTER SYSTEM

In the subsequent analysis, we separately consider two different network configurations. In the former case, we focus on the scenario where the intended recipient of the backscatter communication BRx and the legacy transmitter LTx are co-located, which is the situation considered in [4], [5]. In the latter case, we study the scenario where the BRx and the LTx are spatially-separated nodes, which is the situation considered in [3]. For simplicity, we remove the IBI in (17) by replacing condition (15) with the more restrictive one

L_b ≥ max(L_{14} + θ_{14}, L_{24} + θ_{24}, L_{12} + L_{24} + θ_{12} + θ_{24}).    (36)

Under the assumption that

L_{12} + L_{24} + θ_{12} + θ_{24} ≤ P − 1    (37)

only the first L_{12} + L_{24} + θ_{12} + θ_{24} rows of C^{(0)}_{24} C^{(1)}_{12} can be nonzero, so that (36) also removes the residual dependence of c_4 on u(n−1). Obviously, removing the IBI in (17) in this way is not the best choice, since it does not allow one to exploit the entire channel energy. However, such a contribution becomes negligible for large values of N (i.e., P).
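For illustration, the outage probability (35) can be roughly estimated by Monte Carlo. The sketch below deliberately ignores the correlation among the DFT samples Ψ3(m) noted after (13), drawing |Ψ3(m)|² as independent exponentials with the conditional mean σ²13 + α² σ²23 |Ψ12(m)|² for |b| = 1; all parameter values are illustrative only:

```python
import math
import random

def outage_prob(snr, rate, M=64, alpha2=0.1, s13=1.0, s23=1.0, s12=1.0,
                runs=4000, seed=3):
    """Fraction of frames whose average per-subcarrier rate falls below
    `rate` b/s/Hz (simplified i.i.d. model of eq. (35))."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(runs):
        acc = 0.0
        for _ in range(M):
            # conditional mean of |Psi_3(m)|^2 given |b| = 1 (PSK symbols)
            mean = s13 + alpha2 * s23 * rng.expovariate(1.0 / s12)
            acc += math.log2(1.0 + snr * rng.expovariate(1.0 / mean))
        if acc / M < rate:
            outages += 1
    return outages / runs
```

As expected, the estimated outage probability decreases sharply as SNR_L grows for a fixed target rate R_s.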
Moreover, we assume herein that, in both the aforementioned cases, the number of samples L_b discarded from the received backscatter signal (14) is just equal to L_cp. We note that, when L_cp = L_b, then N = M in (16)-(18). In this case, if (9) holds, the product R_b C^{(0)}_{24} C^{(0)}_{12} T_cp W_IDFT boils down to W_IDFT Ψ_{12} Ψ_{24}, as exploited in (59). This implies that the CP of the legacy system has to be designed to satisfy both inequalities (10) and (36). We would like to point out that, even though such an assumption is made only to keep the analysis relatively simple from a mathematical viewpoint, it is quite reasonable for small area networks.

A. The BRx and LTx are co-located

When the intended recipient of the backscatter signal and the legacy transmitter are co-located, the reference signal model can be obtained from (16)-(18) by replacing the subscript 4 with 1 and setting ν = 0, which implies that Σ_ν = I_P. In this case, the matrix C^{(0)}_{11} models a self-interference channel and C^{(0)}_{11} u(n) represents direct leakage between the LTx transmit/receive chains and/or reflections by other objects in the environment [5]. It is worth observing that the symbol vector s(n) [and, thus, u(n)] is perfectly known at the LTx, whereas the parameters {c_{121}, θ_{12} + θ_{21}}, which uniquely identify the equivalent 1 → 2 → 1 cascade channel matrix, can be estimated by allowing the insertion of training data within each packet of B symbols transmitted by the BTx. More precisely, the self-interference parameters {c_{11}, θ_{11}} can be estimated when there is no backscatter transmission: this can be obtained at the protocol level by employing a silent period of a few symbols at the beginning of the packet [5], during which the BTx does not backscatter (i.e., α = 0). Once c_{11} and θ_{11} have been estimated by means of standard techniques [27], the self-interference contribution can be subtracted from (16). After the silent period, the BTx modulates training symbols on the backscatter signal [5], which can be used to estimate {c_{121}, θ_{12} + θ_{21}} through conventional methods [27].
After performing the DFT, one gets

r̃_1(n) ≜ W_DFT R_b [ r_1(n) − C^{(0)}_{11} u(n) ] = α ψ(n) b(n) + ṽ_1(n)    (38)

where W_DFT ≜ W^{−1}_IDFT = W^H_IDFT defines the unitary symmetric DFT matrix [12], the nonzero entries of the diagonal matrix

Ψ_{ik} ≜ diag[Ψ_{ik}(0), Ψ_{ik}(1), ..., Ψ_{ik}(M − 1)]    (39)

are given by (13), ψ(n) ≜ Ψ_{12} Ψ_{21} s(n) ∈ C^M, and ṽ_1(n) ≜ W_DFT R_b v_1(n) ∈ C^M. On the basis of the above discussion, the vector ψ(n) is assumed to be known at the receiver and, thus, coherent receiving rules can be adopted. Moreover, we will omit the dependence on the frame index n hereinafter. According to (38), given ψ, a sufficient statistic for detecting b from r̃_1 is given by the scalar

z_1 ≜ ψ^H r̃_1 = α ‖ψ‖² b + ψ^H ṽ_1.    (40)

Since sufficient statistics preserve mutual information [23], [24], one has I(b; r̃_1, ψ) = I(b; z_1, ψ). Therefore, the coherent ergodic capacity of (38) is given by

C_1 ≜ sup_{f(b) ∈ I_b} I(b; z_1, ψ)/M  (in b/s/Hz)    (41)

where I_b is the set of admissible input distributions f(b) fulfilling both the variance constraint E(|b|²) = σ²_b and, according to (5), the amplitude constraint |b| ≤ 1 almost surely (a.s.). We remember that, since the average of a random variable cannot exceed its maximal value, the amplitude constraint implies that σ²_b ≤ 1. We observe that (40) is a conditionally Gaussian channel, given b and ψ. It was shown in [28] that the capacity-achieving input distribution for conditional Gaussian channels under variance and amplitude constraints is discrete with a finite number of mass points. Therefore, there is no loss of generality in confining f(b) to the set of discrete distributions. To this goal, let b be a discrete random variable taking on the value β_q ∈ B with probability p_q, for each q ∈ Q, such that |β_q| ≤ 1, E(|b|²) = σ²_b, and Σ_{q∈Q} p_q = 1.
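Under the model (38)-(40), coherent detection reduces to a nearest-symbol rule on the scalar z1; a minimal sketch (hypothetical parameter values, BPSK backscatter symbols) is:

```python
import random

def detect(r, psi, alpha, constellation):
    """Matched filter z1 = psi^H r followed by minimum-distance decision,
    as implied by the sufficient statistic (40)."""
    z = sum(p.conjugate() * x for p, x in zip(psi, r))
    scale = alpha * sum(abs(p) ** 2 for p in psi)
    return min(constellation, key=lambda beta: abs(z - scale * beta))

rng = random.Random(1)
M, alpha, sigma_v = 8, 0.5, 0.05
bpsk = [1.0, -1.0]
psi = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(M)]

errors = 0
for _ in range(200):
    b = rng.choice(bpsk)
    r = [alpha * p * b + complex(rng.gauss(0, sigma_v), rng.gauss(0, sigma_v))
         for p in psi]
    errors += (detect(r, psi, alpha, bpsk) != b)
assert errors == 0   # essentially noiseless regime with these values
```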
Using the same arguments of Section IV, one gets

I(b; z_1, ψ) = I(b; z_1 | ψ) = E_ψ[ I(b; z_1 | ψ = ξ) ].    (42)

For the discrete input b, the mutual information I(b; z_1 | ψ = ξ) is given by

I(b; z_1 | ψ = ξ) = h(z_1 | ψ = ξ) − h(z_1 | b, ψ = ξ)    (43)

where

h(z_1 | ψ = ξ) = −∫_C f_{z_1 | ψ=ξ}(x) log f_{z_1 | ψ=ξ}(x) dx    (44)

is the differential entropy [23], [24] of z_1 | ψ = ξ, whereas

h(z_1 | b, ψ = ξ) = h(α ‖ψ‖² b + ψ^H ṽ_1 | b, ψ = ξ) = h(ψ^H ṽ_1 | b, ψ = ξ) = h(ξ^H ṽ_1)    (45)

turns out to be the differential entropy of ξ^H ṽ_1 ∼ CN(0, σ²_{v1} ‖ξ‖²), which is given (see, e.g., [29]) by h(ξ^H ṽ_1) = log(πe E[|ξ^H ṽ_1|²]). It is noteworthy that, given ψ = ξ, the output distribution

f_{z_1 | ψ=ξ}(x) = Σ_{q=1}^{Q} p_q f_{z_1 | b=β_q, ψ=ξ}(x)    (46)

is a Gaussian mixture, since z_1 | b = β_q, ψ = ξ ∼ CN(α ‖ξ‖² β_q, σ²_{v1} ‖ξ‖²). By virtue of (43), the optimization problem (41) is equivalent to the supremization of E_ψ[h(z_1 | ψ = ξ)] under the variance and amplitude constraints; since this entropy cannot be calculated in closed form, upper and lower bounds on C_1 are developed hereinafter.

1) Upper bound on the capacity C_1: An upper bound on the ergodic capacity C_1 can be obtained by resorting to the maximum-entropy theorem for complex random variables [29], which allows one to state that

h(z_1 | ψ = ξ) ≤ log( πe E[|z_1|² | ψ = ξ] ) = log( α² σ²_b ‖ξ‖⁴ + σ²_{v1} ‖ξ‖² ).    (47)

By substituting (47) in (43) and accounting for (41)-(42), one gets the upper bound

C_1 ≤ C_1,upper ≜ (1/M) E[ log(1 + SNR_B,1 Θ_{121}) ]    (48)

with SNR_B,1 ≜ α² σ²_b/σ²_{v1} and

Θ_{121} ≜ Σ_{m=0}^{M−1} |s^{(m)}|² |Ψ_{12}(m)|² |Ψ_{21}(m)|².    (49)

It can be shown that, as Q grows, C_1 approaches C_1,upper exponentially fast [30]. In the general case, the evaluation of the expectation in (48) is significantly complicated and will be numerically carried out in Section VI. Herein, we shall resort to a simpler asymptotic analysis by assuming that M is sufficiently large.
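The maximum-entropy bound (47) can be checked numerically for the Gaussian-mixture output (46). The sketch below (BPSK input, illustrative means and noise variance) estimates h(z1 | ψ = ξ) by Monte Carlo and verifies that it stays between the noise-only entropy and log(πe E[|z1|²]):

```python
import math
import random

def mixture_pdf(z, means, s2):
    # equiprobable complex Gaussian mixture, cf. eq. (46)
    return sum(math.exp(-abs(z - m) ** 2 / s2) / (math.pi * s2)
               for m in means) / len(means)

def mixture_entropy_mc(means, s2, runs=50000, seed=5):
    # h(z) = -E[ln f(z)] in nats, estimated by sampling from the mixture
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(runs):
        m = rng.choice(means)
        z = complex(m.real + rng.gauss(0, math.sqrt(s2 / 2)),
                    m.imag + rng.gauss(0, math.sqrt(s2 / 2)))
        acc += -math.log(mixture_pdf(z, means, s2))
    return acc / runs

means, s2 = [1 + 0j, -1 + 0j], 1.0        # stand-ins for a*||xi||^2*beta_q
h_est = mixture_entropy_mc(means, s2)
h_max = math.log(math.pi * math.e * (1.0 + s2))   # E|z|^2 = |mu|^2 + s2
assert h_est <= h_max + 0.01                       # bound (47)
assert h_est >= math.log(math.pi * math.e * s2) - 0.01  # noise alone
```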
It follows from the law of large numbers [31] that, as M gets large, the random variable Θ_{121}/M converges to σ²_s σ²_{12} σ²_{21}, which yields the asymptotic expression

C_1 ≤ C_1,upper |_{M≫1} ≜ (1/M) log( 1 + SNR_B,1 M σ²_s σ⁴_{12} ) = (1/M) log( 1 + SNR_B,1 M σ²_s/(d²_{12})^η ).    (50)

2) Lower bound on the capacity C_1: By resorting to random coding arguments (see, e.g., [32]), it can be shown that the cut-off rate, which is defined as

R_1 ≜ max_{p_1, p_2, ..., p_Q} −log ∫_C [ Σ_{q=1}^{Q} p_q √( f_{z_1 | b=β_q, ψ=ξ}(x) ) ]² dx    (51)

is a lower bound on I(b; z_1 | ψ = ξ) at any SNR. By using the properties of the logarithmic function, we observe that the objective function in (51) can be explicated as

−log ∫_C [ Σ_{q=1}^{Q} p_q √( f_{z_1 | b=β_q, ψ=ξ}(x) ) ]² dx
 = −log Σ_{q_1=1}^{Q} Σ_{q_2=1}^{Q} p_{q_1} p_{q_2} ∫_C (π σ²_{v1} ‖ξ‖²)^{−1} e^{ −[ |x − α ‖ξ‖² β_{q_1}|² + |x − α ‖ξ‖² β_{q_2}|² ]/(2 σ²_{v1} ‖ξ‖²) } dx
 = −log Σ_{q_1=1}^{Q} Σ_{q_2=1}^{Q} p_{q_1} p_{q_2} e^{ −α² ‖ξ‖² |β_{q_1} − β_{q_2}|²/(4 σ²_{v1}) } ∫_C (π σ²_{v1} ‖ξ‖²)^{−1} e^{ −| x − α ‖ξ‖² (β_{q_1} + β_{q_2})/2 |²/(σ²_{v1} ‖ξ‖²) } dx
 = −log Σ_{q_1=1}^{Q} Σ_{q_2=1}^{Q} p_{q_1} p_{q_2} e^{ −α² ‖ξ‖² |β_{q_1} − β_{q_2}|²/(4 σ²_{v1}) }.    (52)

Therefore, remembering (41), (42), and (51), one yields C_1 ≥ C_1,lower, with

C_1,lower ≜ ( log Q − E{ log[ 1 + Σ_{q=2}^{Q} e^{ −Θ_{121} SNR_B,1 |β_1 − β_q|²/(4 σ²_b) } ] } )/M    (53)

where SNR_B,1 and Θ_{121} have been defined in Subsection V-A1. The further lower bound

C_1,lower ≥ ( log Q − E{ log[ 1 + (Q − 1) e^{ −Θ_{121} SNR_B,1 δ²_min/(4 σ²_b) } ] } )/M    (54)

can be obtained by noting that |β_1 − β_q|² ≥ δ²_min for each q ∈ Q, where δ_min ≜ min_{q_1 ≠ q_2 ∈ Q} |β_{q_1} − β_{q_2}| is the minimum distance between any two data symbols in the signal constellation B. By invoking again the law of large numbers [31], in the large M limit, the following asymptotic expressions of C_1,lower and its lower bound (54) hold:

C_1,lower |_{M≫1} ≜ ( log Q − log[ 1 + Σ_{q=2}^{Q} e^{ −σ²_s M SNR_B,1 |β_1 − β_q|²/(4 σ²_b (d²_{12})^η) } ] )/M
 ≥ ( log Q − log[ 1 + (Q − 1) e^{ −σ²_s M SNR_B,1 δ²_min/(4 σ²_b (d²_{12})^η) } ] )/M.    (55)

In the low-SNR regime SNR_B,1 → 0, or when d_{12} → +∞, one gets

C_1,lower |_{M≫1} → (1 − 1/Q) σ²_s SNR_B,1 δ²_min/(4 σ²_b (d²_{12})^η)    (56)

that is, the capacity increases linearly with SNR_B,1 and monotonically decreases as the distance d_{12} raises.

B.
The BRx and LTx are spatially-separated nodes

We consider the scenario where the LTx and BRx are spatially-separated nodes, which is the situation considered in [3]. In this case, taking into account the aforementioned simplifying assumptions (36) and L_cp = L_b, the reference signal model (16)-(18) becomes

R_b r_4(n) = α e^{j(2π/M)νnP} R_b Σ_ν C^{(0)}_{24} C^{(0)}_{12} u(n) b(n) + d_4(n)    (57)

with

d_4(n) = e^{j(2π/M)νnP} R_b Σ_ν C^{(0)}_{14} u(n) + R_b v_4(n) ∈ C^M.    (58)

Compared to the case studied in Subsection V-A, there are two key differences: (i) the receiver has no knowledge of the data block u(n) = T_cp W_IDFT s(n) transmitted by the LTx; (ii) there is a nonzero CFO ν between the received carrier and the local sinusoids used for signal demodulation. Since the BRx does not have any a priori knowledge regarding the legacy transmission, recovery of b(n) can be accomplished at the BRx by resorting to noncoherent detection rules. The noncoherent ergodic capacity of (57)-(58) is given by the supremum of the mutual information I[b; R_b r_4(n)] over the set I_b of admissible input distributions satisfying both the variance and amplitude constraints. Evaluation of the noncoherent ergodic capacity with only a variance constraint has been studied in [33], [34], [35], under the assumption that the channel matrix [corresponding to e^{j(2π/M)νnP} R_b Σ_ν C^{(0)}_{24} C^{(0)}_{12} u(n) in our framework] and noise [corresponding to d_4(n) in our framework] follow a Gaussian distribution. The CFO ν can be estimated by resorting to standard estimators [20], [27]. However, it should be observed that the interference contribution e^{j(2π/M)νnP} R_b Σ_ν C^{(0)}_{14} u(n) cannot be subtracted from (57), since the information-bearing data in u(n) are unknown at the BRx (only the pilots and their locations are assumed to be known). Once ν has been estimated, the vector r_4(n) can be counter-rotated at the angular speed 2πν/M, thus yielding

r̃_4 ≜ R_b r_4 = α (W_IDFT Ψ_{12} Ψ_{24} s) b + W_IDFT Ψ_{14} s + ṽ_4    (59)

with ṽ_4 ≜ R_b v_4 ∈ C^M, where s ∼ CN(0_M, σ²_s I_M) is the capacity-achieving distribution for the legacy system (see Section IV) and we have again omitted the dependence on the frame index n.
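The CFO compensation step performed before (59) amounts to counter-rotating the received samples at the angular speed 2πν/M, undoing the action of the diagonal matrix Σν; a minimal roundtrip sketch (illustrative values) is:

```python
import cmath

def rotate(x, nu, M, sign=+1):
    # sign = +1 applies the CFO (the diagonal Sigma_nu), sign = -1 undoes it
    return [cmath.exp(sign * 2j * cmath.pi * nu * p / M) * xp
            for p, xp in enumerate(x)]

M, nu = 16, 0.13
x = [complex(p, -p) for p in range(M)]
received = rotate(x, nu, M, +1)          # channel introduces the CFO
recovered = rotate(received, nu, M, -1)  # receiver counter-rotates
assert max(abs(a - b) for a, b in zip(recovered, x)) < 1e-9
```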
Since Ω_{124} ≜ Ψ_{12} Ψ_{24} ∈ C^{M×M} and Ω_{14} ≜ Ψ_{14} ∈ C^{M×M} are known but s is unknown, we refer to (59) as the partially-coherent channel model. The partially-coherent ergodic capacity of (59) is given by

C_4 ≜ sup_{f(b) ∈ I_b} I(b; r_4, Ω_{124}, Ω_{14})/M    (60)

where I_b is the set of admissible input distributions fulfilling E(|b|²) = σ²_b and |b| ≤ 1 a.s., and I(b; r_4, Ω_{124}, Ω_{14}) denotes the mutual information between b and (r_4, Ω_{124}, Ω_{14}). Similarly to the case studied in Subsection V-A, closed-form expressions for C_4 and the corresponding capacity-achieving discrete distribution f(b) are unavailable. Therefore, we derive upper and lower bounds on C_4.

1) Upper bound on the capacity C_4: An upper bound on C_4 can be obtained by assuming that the BRx has the additional perfect knowledge of s. Indeed, by using the chain rule for mutual information [23], [24], it can be proven from (60) that

C_4 ≤ C_4,upper ≜ (1/M) E[ log(1 + SNR_B,4 Θ_{124}) ]    (66)

with

Θ_{124} ≜ Σ_{m=0}^{M−1} |s^{(m)}|² |Ψ_{12}(m)|² |Ψ_{24}(m)|².    (67)

Such an upper bound is achieved when the BRx is able to reliably estimate the legacy symbols and Q → +∞. It should be noted that (66) is similar to (48). Thus, the asymptotic analysis reported in Subsection V-A1 soon after (48) can be applied to (66) with minor modifications: an expression analogous to (50) holds in the large M limit. For later use, the cut-off rate R_4 introduced in (70) can be explicated as

R_4 = −log Σ_{q_1=1}^{Q} Σ_{q_2=1}^{Q} (1/Q²) ∫_{C^M} e^{ −x^H [ K^{−1}_4(β_{q_1}) + K^{−1}_4(β_{q_2}) ] x/2 } / { π^M √( det[K_4(β_{q_1})] det[K_4(β_{q_2})] ) } dx
 = −log Σ_{q_1=1}^{Q} Σ_{q_2=1}^{Q} det{ 2 [ K^{−1}_4(β_{q_1}) + K^{−1}_4(β_{q_2}) ]^{−1} } / { Q² √( det[K_4(β_{q_1})] det[K_4(β_{q_2})] ) } ∫_{C^M} e^{ −x^H [ K^{−1}_4(β_{q_1}) + K^{−1}_4(β_{q_2}) ] x/2 } / { π^M det[ 2 ( K^{−1}_4(β_{q_1}) + K^{−1}_4(β_{q_2}) )^{−1} ] } dx
 = log Q − log{ 1 + (2^M/Q) Σ_{q_1=1}^{Q} Σ_{q_2=1, q_2≠q_1}^{Q} 1 / [ √( det[R_4(β_{q_1})] det[R_4(β_{q_2})] ) det( R^{−1}_4(β_{q_1}) + R^{−1}_4(β_{q_2}) ) ] }    (71)

where, by virtue of Carnot's cosine law, the distances d_{12} and d_{24} are related by d_{24} = (d²_{12} + d²_{14} − 2 d_{12} d_{14} cos θ)^{1/2}, with θ being the angle opposite to the 2 → 4 link (see Fig. 1).
2) Lower bound on the capacity C_4: As in Subsection V-A2, we rely on the fact that

R_4 ≤ I(b; r_4 | Ω_{124} = Ξ_{124}, Ω_{14} = Ξ_{14})    (69)

where R_4 is the cut-off rate when the backscatter symbols are assumed to be equiprobable, that is,

R_4 ≜ −log ∫_{C^M} [ (1/Q) Σ_{q=1}^{Q} √( f_{r_4 | b=β_q, Ω_{124}=Ξ_{124}, Ω_{14}=Ξ_{14}}(x) ) ]² dx.    (70)

Eq. (59) shows that r_4 | b = β_q, Ω_{124} = Ξ_{124}, Ω_{14} = Ξ_{14} ∼ CN[0_M, K_4(β_q, Ξ_{124}, Ξ_{14})], with K_4(β_q, Ξ_{124}, Ξ_{14}) ≜ E(r_4 r^H_4 | b = β_q, Ω_{124} = Ξ_{124}, Ω_{14} = Ξ_{14}) = W_IDFT R_4(β_q, Ξ_{124}, Ξ_{14}) W_DFT, where R_4(β_q, Ξ_{124}, Ξ_{14}) ≜ α² σ²_s Ξ_{124} Ξ*_{124} |β_q|² + α σ²_s Ξ_{124} Ξ*_{14} β_q + α σ²_s Ξ*_{124} Ξ_{14} β*_q + σ²_s Ξ_{14} Ξ*_{14} + σ²_{v4} I_M is a diagonal matrix. By using the properties of the determinant [11], the cut-off rate R_4 can be explicated as reported in (71), where we have omitted to explicitly indicate the dependence of K_4(·) and R_4(·) on Ξ_{124} and Ξ_{14}, and the last integral is the hypervolume of a multivariate complex Gaussian pdf. By virtue of (61) and (69), the capacity (60) is lower bounded as C_4 ≥ C_4,lower, with

C_4,lower ≜ ( log Q − E{ log[ 1 + (2^M/Q) Σ_{q_1=1}^{Q} Σ_{q_2=1, q_2≠q_1}^{Q} Π_{m=0}^{M−1} √( Λ_{q_1}(m) Λ_{q_2}(m) )/( Λ_{q_1}(m) + Λ_{q_2}(m) ) ] } )/M    (72)

with

Λ_q(m) ≜ α² σ²_s |Ψ_{12}(m)|² |Ψ_{24}(m)|² |β_q|² + 2 α σ²_s ℜ{ Ψ_{12}(m) Ψ_{24}(m) Ψ*_{14}(m) β_q } + σ²_s |Ψ_{14}(m)|² + σ²_{v4}.    (73)

In addition to noise, another additive source of performance degradation is the interference generated by the legacy system over the 1 → 4 link, which may seriously limit the achievable rates of the backscatter system in the high-SNR region.

Remark 10: It is verified from (72) that C_4,lower → 0 if Λ_{q_1}(m) → Λ_{q_2}(m) for each q_1 ≠ q_2 ∈ Q. For instance, this happens when the third and fourth summands in the RHS of (73), i.e., the legacy interference and the noise, are dominant over the first and second, symbol-dependent, ones.
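Each subcarrier enters (72) through the factor 2√(Λq1 Λq2)/(Λq1 + Λq2), which never exceeds 1 and equals 1 only when Λq1 = Λq2; this is the mechanism behind Remark 10. A quick numerical check of this property (illustrative values):

```python
import math
import random

def pair_factor(l1, l2):
    # per-subcarrier term of (72): 2*sqrt(L1*L2)/(L1+L2), for L1, L2 > 0
    return 2.0 * math.sqrt(l1 * l2) / (l1 + l2)

rng = random.Random(2)
for _ in range(1000):
    l1, l2 = rng.uniform(0.1, 10.0), rng.uniform(0.1, 10.0)
    assert pair_factor(l1, l2) <= 1.0 + 1e-12   # AM-GM inequality
assert abs(pair_factor(3.7, 3.7) - 1.0) < 1e-12  # equality iff L1 = L2
```

When all factors approach 1, the inner sum in (72) approaches Q − 1 and the lower bound collapses to 0, as stated in Remark 10.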
The dependence of C_4,lower on the distance d_{12} between the LTx and the BTx is not easily deduced from (72), and such a behavior will be studied numerically in Section VI. To gain some useful insights, we consider the special case of a 2-PSK (i.e., BPSK), where β_1 = −β_2 = 1. In this case, eq. (72) becomes

C^bpsk_4,lower = 1/M − (1/M) E{ log[ 1 + √( Π_{m=0}^{M−1} ( 1 − Λ²_2(m)/Λ²_1(m) ) ) ] }
 ≥ 1/M − (1/M) log{ 1 + √( E[ Π_{m=0}^{M−1} ( 1 − Λ²_2(m)/Λ²_1(m) ) ] ) }
 ≈ 1/M − (1/M) log{ 1 + √( Π_{m=0}^{M−1} ( 1 − E[Λ²_2(m)]/E[Λ²_1(m)] ) ) }    (74)

with

Λ_1(m) ≜ α² σ²_s |Ψ_{12}(m)|² |Ψ_{24}(m)|² + σ²_s |Ψ_{14}(m)|² + σ²_{v4}    (75)

Λ_2(m) ≜ 2 α σ²_s ℜ{ Ψ_{12}(m) Ψ_{24}(m) Ψ*_{14}(m) }    (76)

where the inequality in (74) comes from the application of Jensen's inequality to the concave function log(1 + √x), whereas the approximation is obtained by neglecting the correlation between the random variables Λ²_2(m_1)/Λ²_1(m_1) and Λ²_2(m_2)/Λ²_1(m_2), for m_1 ≠ m_2 ∈ M, and by applying the first-order Taylor approximation of the mean of a ratio (see footnote 9). Denoting by J(d_{12}) ≜ E[Λ²_2(m)]/E[Λ²_1(m)] the resulting per-subcarrier ratio, one obtains

C^bpsk_4,lower ≈ 1/M − (1/M) log{ 1 + [ 1 − J(d_{12}) ]^{M/2} }.    (78)

Remark 11: For a fixed value of d_{14} and θ (see Fig. 1), by using standard concepts of mathematical analysis, it can be shown that, if θ ∈ A, then J(d_{12}) is a unimodal function, exhibiting a maximum when D(d_{12}) = √2. On the other hand, when θ ∉ A, the function J(d_{12}) is multimodal, having multiple local extrema points.

VI. NUMERICAL PERFORMANCE ANALYSIS

We present the Monte Carlo numerical analysis of the considered ambient backscatter network to validate and complete our theoretical analysis, with reference to both legacy and backscatter systems. All the ensemble averages (with respect to all the relevant fading channels and information-bearing symbols) and the outage probability of the legacy system are evaluated through 10^6 independent Monte Carlo runs. In all the experiments, we adopted the following simulation setting. With reference to the Cartesian plane in Fig. 1, all the distances are normalized with respect to d_{13} = 1.
Specifically, the nodes 1 (LTx) and 3 (LRx) are placed as in Fig. 1, whereas the corresponding time offsets are fixed to θ_{13} = θ_{12} = θ_{23} = 1, respectively. Moreover, the path-loss exponent is chosen equal to η = 3. For the evaluation of the outage probability of the legacy system, we chose R_s = 6 b/s/Hz in (35).

Footnote 9: Let f(X, Y) ≜ X/Y be a transformation of the two random variables X and Y. Let µ_X ≜ E(X) and µ_Y ≜ E(Y); the first-order Taylor approximation for E[f(X, Y)] is given by E[f(X, Y)] = f(µ_X, µ_Y) + E[f′_x(µ_X, µ_Y)(X − µ_X)] + E[f′_y(µ_X, µ_Y)(Y − µ_Y)] = f(µ_X, µ_Y) = µ_X/µ_Y, where f′_x(·) and f′_y(·) are the partial derivatives of the function f(x, y) with respect to the real-valued variables x and y, respectively.

Footnote 10: Using similar bounding/approximation techniques, a lower bound on C_4,lower can be obtained for an arbitrary M-ary backscatter signal constellation, which however does not lend itself to easily interpretable results.

Footnote 11: Details are omitted in the interest of saving space.

A. Performance of the legacy system

Figs. 3 and 4 depict the ergodic capacity C_3 of the legacy system given by (22), in comparison with the ergodic capacity (24) when the backscatter system is in sleep mode (referred to as "w/o backscatter"), with SNR_L = σ²_s/σ²_{v3} = 20 dB and φ ∈ {π/18, π/3}. It is important to observe that even small values of ∆C_3 lead to significant increments in terms of data rate for the legacy transmission, as can be seen from the results of Fig. 3.

B. Performance of the backscatter system when the LTx and BRx are co-located

Figs. 7 and 8 depict the upper bound C_1,upper given by (48) and the lower bound C_1,lower given by (53) for PSK modulations, respectively, as a function of d_{12}/d_{13} for different values of SNR_B,1. We also report in Fig. 8 the worst-case ergodic capacity of the backscatter system for the 4-ASK case, which is obtained by averaging (52) with respect to ψ.
As predicted by the performance analysis developed in Subsection V-A, both the upper and lower bounds are monotonically decreasing functions of the distance between the LTx and the BTx, for each value of SNR_B,1. Moreover, when d_{12} is sufficiently smaller than d_{13}, it results that C_1,lower ≈ (log Q)/M = 0.0625, for each considered value of SNR_B,1. The slight performance advantage offered by the QPSK signal constellation over the 4-ASK one is due to the fact that the PSK modulation maximizes the cut-off rate in the case of equiprobable symbols (see Subsection V-A2). Results not reported here show that the gap between C_1,upper and C_1,lower is reduced for increasing values of Q. Let us focus on the case when the LTx is a Wi-Fi AP transmitting over a bandwidth of 20 MHz, which might be used to connect the BTx to the Internet [4], [5]. In this scenario, by considering an indoor Wi-Fi network with d_{13} = 100 m, we obtain from Fig. 8 that the backscatter communication can achieve at least 1.25 Mbps up to a range of 50-70 m, even for very small values of SNR_B,1. As a comparison, we underline that the prototype presented in [5] is able to achieve communication rates up to 1-5 Mbps at a range of 1-5 m. Therefore, compared to [5], it is possible in theory to largely extend the communication range, without significantly reducing the data rate.

C. Performance of the backscatter system when the LTx and BRx are spatially-separated nodes

The last scenario under investigation is when the nodes LTx and BRx are distinct from one another, with θ ∈ {π/18, π/3} and d_{14} = 1. Fig. 9 depicts the upper bound C_4,upper given by (66) as a function of d_{12}/d_{14}, for different values of SNR_B,4. Results corroborate the discussion reported in Remark 9, for each value of SNR_B,4.
In particular, if θ = π/3 ∈ A, then C_4,upper monotonically decreases as the distance between the BTx and the LTx increases; when θ = π/18 ∉ A, the capacity C_4,upper assumes a global maximum when the BTx tends to be close to the LTx and a local maximum when the BTx is near the BRx, i.e., d_{12}/d_{14} = d_max(π/18) = 0.9520, taking on a local minimum at d_{12}/d_{14} = d_min(π/18) = 0.5252. In Fig. 10, the lower bound C_4,lower given by (72) is reported as a function of SNR_B,4 for different backscatter signal constellations, with d_{12}/d_{14} = 0.2, whereas C_4,lower is reported in Fig. 11 as a function of d_{12}/d_{14}, with SNR_B,4 = −20 dB. It is seen that, also in this case, PSK constellations ensure better performance in terms of cut-off rate when the symbols are equiprobable. Another interesting conclusion that can be drawn from Fig. 10 is that all curves exhibit a capacity saturation effect for vanishingly small noise, which is due to the interference generated by the legacy system over the 1 → 4 link. Moreover, independently of the considered backscatter signal constellation, the capacity C_4,lower is a unimodal function of d_{12}/d_{14} having a maximum at d_{12}/d_{14} ≈ 0.3 when θ = π/3 ∈ A, whereas, for θ = π/18 ∉ A, it is multimodal, exhibiting slight fluctuations over a large interval of distances ranging from d_{12}/d_{14} ≈ 0.2 to d_{12}/d_{14} ≈ 1.2. Let us consider the practical scenario when the LTx is a TV tower broadcasting over a bandwidth of 6 MHz [3], with d_{14} = 4 km. In this case, according to the results of Fig. 10, by employing a QPSK backscatter signal constellation, the worst-case achievable data rate is equal to 360 kbps over a distance of 800 m at SNR_B,4 = −10 dB. As a comparison, we underline that the prototype presented in [3] is able to achieve information rates of 1 kbps over a distance of 5-8 m. Therefore, compared to [3], it is possible in theory to significantly extend both the communication range and the data rate. VII.
CONCLUSIONS

We developed a general framework for evaluating the ultimate achievable rates of a point-to-point backscatter communication network, by considering the influence of the backscatter transmission on the performance of the legacy system, from which energy is opportunistically harvested. Our theoretical results show that, in principle, ambient backscatter allows a passive device to achieve significant communication rates over short distances. As a by-product, the backscatter transmission can even ensure a performance improvement of the legacy system, provided that the latter is designed to exploit the additional diversity arising from the backscatter process. In view of the prototypes and experiments presented in [3], [4], [5], we highlight that there is plenty of scope for performance improvement, which mandates the use of advanced signal processing techniques, especially at the intended recipient of the backscatter information. Moreover, the results of our performance analysis pave the way towards various system-level optimizations. Among others, an interesting issue is to analytically determine the optimal choice of E[|Γ(n)|²] that ensures the best tradeoff between the performance of the legacy/backscatter systems and energy harvesting at the passive backscatter transmitter.

Figure 1. The considered wireless network model: in red, the legacy transmitting (node 1) and receiving (node 3) devices; in green, the backscatter transmitter (node 2) and its intended recipient (node 4).

The entries of s(n) are independent and identically distributed (i.i.d.) zero-mean circularly symmetric complex symbols, with variance σ²_s ≜ E[|s^{(m)}(n)|²], for any m ∈ M ≜ {0, 1, ..., M − 1} and n ∈ Z. The vector s(n) is subject to conventional multicarrier precoding, encompassing M-point inverse discrete Fourier transform (IDFT), followed by cyclic prefix (CP) insertion of length L_cp < M.

Figure 2. Equivalent Thévenin circuit of the multilevel backscatter transmitter.
… O_{P×P}, under the assumption that … is a lower-triangular Toeplitz matrix having as first column [0ᵀ_{θ_12+θ_23}, c_123ᵀ(n), 0ᵀ_{P−L_12−L_23−θ_12−θ_23−1}]ᵀ, where the vector c_123 ∈ C^{L_12+L_23+1} collects the samples of the (linear) convolution between {c_12(ℓ)}_{ℓ=0}^{L_12} and {c_23(ℓ)}_{ℓ=0}^{L_23}, under the assumption that L_cp ≥ max(L_13+θ_13, L_12+L_23+θ_12+θ_23); (ii) the last P−L_23−θ_23 rows of the matrix C … located within its first L_23+θ_23 rows; (iii) the last P−L_13−θ_13 rows of C … located within its first L_24+θ_24 rows and the last P−L_14−θ_14 rows of Σ_ν C …; should O not be zero, one thus has R_b Σ_ν C … ∈ C^{N×P}. Obviously, removing the IBI in … a lower-triangular Toeplitz matrix having as first column [0ᵀ_{θ_12+θ_24}, c_124ᵀ, 0ᵀ_{P−L_12−L_24−θ_12−θ_24−1}]ᵀ, where the vector c_124 ∈ C^{L_12+L_24+1} collects the samples of the (linear) convolution between {c_12(ℓ)}_{ℓ=0}^{L_12} and {c_24(ℓ)}_{ℓ=0}^{L_24}; the pair {c_11, θ_11}, which uniquely identifies the matrix C^{(0)}_11, with c_11 := [c_11(0), c_11(1), ..., c_11(L_11)]ᵀ ∈ C^{L_11+1}, can be estimated by allowing the insertion of training data within each packet of B symbols transmitted by …

May 17, 2016 DRAFT

… is equivalent to the supremization of E_ψ[h(z_1 | ψ = ξ)] under the variance and amplitude constraints. However, the entropy h(z_1 | ψ = ξ) cannot be calculated in closed form, due to the logarithm of a sum of exponential functions. As a consequence, an analytical expression for the optimizing probability mass function (pmf) of b is not available in the general case, nor does there exist a closed-form formula for the corresponding capacity. Henceforth, upper and lower bounds on C_1 given by (41) are developed in the subsequent subsections.

Remark 7: When M is sufficiently large, the upper bound (48) is a monotonically increasing function of SNR_B,1 and 1/d_12.
In other words, significantly high values of C_1 are obtained when the BTx reflects a large part of the incident EM wave and/or the BTx is very close to the LTx. … can be explicated as reported at the top of this page in (52), where the last-but-one equality is obtained by completing the square in the exponent, whereas the last integral equals 1 for any choice of the symbol set B, since it is recognized as the integral of a univariate complex Gaussian pdf. Eq. (52) is valid for any finite-size symbol constellation, such as quadrature-amplitude modulation (QAM), PSK, orthogonal, lattice-type, or other. It is verified [32] that, for symbol constellations where the set of distances to the other neighbors is invariant to the choice of the reference point, e.g., PSK and orthogonal modulations, the equiprobable assignment on the backscatter symbols (i.e., p_q = 1/Q for all q ∈ Q) maximizes (52).

For any value of M, eq. (50) is an upper bound on (48): by Jensen's inequality, E[log(1 + SNR_B,1 Θ_121)] ≤ log[1 + SNR_B,1 E(Θ_121)], with E(Θ_121) = M σ_s² σ_12² σ_21².

Remark 8: The lower bound C_1,lower approaches (log Q)/M as SNR_B,1 increases or the distance d_12 between the LTx and the BTx decreases. On the other hand, when x → 0, the function log(1 + A e^{−Bx}) can be approximated by the first two terms of its Maclaurin series expansion, i.e., log(1 + A e^{−Bx}) ≈ log(1 + A) − A B (1 + A)^{−1} x, in the low-SNR regime SNR_B,1 → 0 or when d_12 → +∞, hence getting …

If the BRx does not have any a priori knowledge regarding the legacy transmission, recovery of b(n) must be carried out noncoherently, where both the interference [corresponding to … in our framework] and the noise [corresponding to d_4(n) in our framework] follow a Gaussian distribution.
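The low-SNR expansion invoked above is easy to check numerically; the constants A and B below are arbitrary illustrative choices, not values from the paper.

```python
import math

# Check: log(1 + A*exp(-B*x)) ~ log(1 + A) - A*B/(1 + A) * x as x -> 0.
def exact(A, B, x):
    return math.log(1.0 + A * math.exp(-B * x))

def first_order(A, B, x):
    return math.log(1.0 + A) - A * B / (1.0 + A) * x

A, B = 2.0, 3.0
for x in (1e-2, 1e-3, 1e-4):
    print(x, exact(A, B, x) - first_order(A, B, x))  # error shrinks like x**2
```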
In the case under study, evaluation of the noncoherent ergodic capacity is further complicated by the non-Gaussian nature of both e^{j 2π ν n P/M} R_b Σ_ν C^{(n)} and d_4(n), as well as by the amplitude constraint |b| ≤ 1. To avoid incurring the data-rate penalty of the noncoherent communication scheme, we study the case where, besides having knowledge of the training symbols transmitted by the BTx, the BRx additionally knows the pilot symbols sent by the LTx in each frame. Under this assumption, following the same protocol outlined in Subsubsection V-A, during the silent period of the BTx (i.e., when α = 0), the BRx receives the signal d_4(n), from which it can estimate the CFO ν and the parameters of the channel matrix C …. One has

I(b; r_4, Ω_124, Ω_14) = I(b; r_4 | Ω_124, Ω_14) = E_{Ω_124,Ω_14}[I(b; r_4 | Ω_124 = Ξ_124, Ω_14 = Ξ_14)].

By the chain rule,

I(b; r_4, s | Ω_124, Ω_14) = I(b; r_4 | Ω_124, Ω_14) + I(b; s | r_4, Ω_124, Ω_14) ≥ I(b; r_4 | Ω_124, Ω_14), (62)

since I(b; s | r_4, Ω_124, Ω_14) ≥ 0 by definition. Moreover, because subtracting a constant does not change mutual information [23], [24], one has

I(b; r_4 | Ω_124, Ω_14, s) = I(b; r_4 − W_IDFT Ω_14 s | Ω_124, Ω_14, s) = E_{Ω_124,Ω_14,s}[I(b; r_4 − W_IDFT Ω_14 s | Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a)], (63)

that is, since the BRx knows Ω_14 and s, it can estimate b by subtracting W_IDFT Ω_14 s from (59), hence yielding

r̃_4 := r_4 − W_IDFT Ω_14 s = α W_IDFT Ω_124 s b + v_4.

It follows that

I(b; r̃_4 | Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a) = h(r̃_4 | Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a) − h(r̃_4 | b, Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a), (64)

where h(r̃_4 | b, Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a) = h(v_4) = M log(πe σ²_{v_4}).
As a consequence of the maximum-entropy theorem for complex random variables [29], one can obtain a further upper bound on (64) by observing that

h(r̃_4 | Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a) ≤ log[(πe)^M det(E{r̃_4 r̃_4^H | Ω_124 = Ξ_124, Ω_14 = Ξ_14, s = a})] = log[(πe σ²_{v_4})^M (1 + SNR_B,4 ‖Ξ_124 a‖²)], (65)

with SNR_B,4 := α² σ_b²/σ²_{v_4}, where we have used the facts [11] that: (i) det(A B) = det(A) det(B) for arbitrary nonsingular matrices A ∈ C^{n×n} and B ∈ C^{n×n}; (ii) det(W_IDFT) det(W_DFT) = 1; (iii) for arbitrary vectors x ∈ C^n and y ∈ C^n, det(I_n + x y^H) = 1 + y^H x. Henceforth, accounting for (62)-(65), C_4 ≤ C_4,upper, where, for M ≫ 1,

C_4,upper|_{M≫1} := (1/M) log(1 + SNR_B,4 M σ_s² σ_12² σ_24²) = (1/M) log(1 + SNR_B,4 M σ_s² (d_12 d_24)^{−η}). (68)

Remark 9: For a fixed value of d_14 and θ, the capacity C_4,upper|_{M≫1} as a function of d_12 exhibits the same behavior as ∆C_3 (see Remark 6). In a nutshell, when θ ∈ A, the upper bound C_4,upper|_{M≫1} is a strictly decreasing function of d_12/d_14, whereas, when θ ∉ A, it monotonically increases for d_min(θ) ≤ d_12/d_14 ≤ d_max(θ) and otherwise monotonically decreases. In other words, if the BRx reliably estimates the legacy symbols, in the former case the capacity C_4 of the backscatter system decreases as the BTx departs from the LTx, whereas in the latter one it increases as the BTx approaches either the LTx or an intermediate point between the LTx and the BRx.

The nodes 1 (LTx) and 3 (LRx) have coordinates equal to (−0.5, 0) and (0.5, 0), respectively. In all the plots where the distance d_12 varies, the node 2 (BTx) moves along the line joining the nodes 1 and 2 (see Fig. 1). The multicarrier legacy system employs M = 32 subcarriers and a CP of length L_cp = 8. The legacy symbols are generated according to the corresponding capacity-achieving distribution s ∼ CN(0_M, σ_s² I_M), with σ_s² = 1.
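The large-M bound (68) can be evaluated directly. The sketch below assumes a base-2 logarithm, the path-loss model σ_ij² = d_ij^{−η} with η = 2, and a BTx placed on the segment joining the LTx and the BRx; all three are illustrative assumptions, not choices fixed by the excerpt.

```python
import math

# Sketch of the large-M upper bound in (68), under assumed geometry/path loss.
def c4_upper_large_M(snr_b4, M, sigma_s2, d12, d24, eta=2.0):
    return math.log2(1.0 + snr_b4 * M * sigma_s2 * (d12 * d24) ** (-eta)) / M

M, sigma_s2, d14 = 32, 1.0, 1.0
snr = 10 ** (-10 / 10)            # SNR_B,4 = -10 dB
for r in (0.1, 0.5, 0.9):         # r = d12 / d14
    print(r, c4_upper_large_M(snr, M, sigma_s2, r * d14, (1 - r) * d14))
```

With this geometry the product d12·d24 peaks at the midpoint, so the bound is smallest at r = 0.5 and grows as the BTx nears either endpoint, one of the behaviors noted in Remark 9.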
On the other hand, the symbols transmitted by the backscatter device are equiprobably drawn from BPSK, 4-PSK (i.e., QPSK), and quaternary amplitude-shift keying (ASK) signal constellations, with average energy σ_b² = 1. The order of the discrete-time channels between the nodes is set equal to L_13 = L_12 = L_23 = 3, …

Figure 3. Ergodic capacity of the legacy system versus E[|Γ(n)|²] = α² for two backscatter signal constellations and two values of the angle φ.

Figure 4. Ergodic capacity of the legacy system versus d_12/d_13 for two backscatter signal constellations and two values of the angle φ.

Figure 5. Outage probability of the legacy system versus E[|Γ(n)|²] = α² for two backscatter signal constellations and two values of the angle φ.

Figure 6. Outage probability of the legacy system versus d_12/d_13 for two backscatter signal constellations and two values of the angle φ.

Figure 7. Best-case ergodic capacity of the backscatter system versus d_12/d_13 for different values of SNR_B,1 (LTx and BRx are co-located).

Figure 8. Worst-case ergodic capacity of the backscatter system versus d_12/d_13 for different values of SNR_B,1 (LTx and BRx are co-located).

In Fig. 3, the capacity values are reported as a function of the mean square power wave reflection coefficient E[|Γ(n)|²] = α² σ_b², with d_12/d_13 = 0.2, whereas they are plotted against d_12/d_13 in Fig. 4, with E[|Γ(n)|²] = −20 dB. As anticipated in Remark 4, the capacity of the legacy system cannot degrade in the presence of the backscatter transmission, in each operative condition. In particular, the performance gain ∆C_3 = C_3 − C_3|_{α=0} becomes relevant either when E[|Γ(n)|²] is sufficiently large (see Remark 5) or when the BTx is very close to the LTx. It is also seen that, for a fixed Q, the choice of the backscatter signal constellation (ASK or PSK) does not lead to significantly different values of C_3. Moreover, the results of Fig.
4 confirm the trends analytically predicted in Remark 6, showing that C_3 monotonically decreases as the BTx moves away from the LTx when φ = π/3 ∈ A; on the other hand, when φ = π/18 ∉ A, the capacity C_3 exhibits a local minimum at d_12/d_13 = d_min(π/18) = 0.5252 and a local maximum at d_12/d_13 = d_max(π/18) = 0.9520 (i.e., near the LRx). Similar conclusions can be drawn from the outage probability P_out,3 given by (35), as reported in Figs. 5 and 6.

Figure 9. Best-case ergodic capacity of the backscatter system versus d_12/d_14 for different values of SNR_B,4 (LTx and BRx are spatially separated nodes).

Figure 10. Worst-case ergodic capacity of the backscatter system versus SNR_B,4 for three backscatter signal constellations and two values of the angle θ (LTx and BRx are spatially separated nodes).

Figure 11. Worst-case ergodic capacity of the backscatter system versus d_12/d_14 for two backscatter signal constellations and two values of the angle θ (LTx and BRx are spatially separated nodes).

Both transmissions occur at the same RF frequency. For such a reason, we assume that the corresponding CFOs are equal and can be accurately estimated and compensated at the LRx through conventional techniques, as well as the corresponding TO. Since the BTx reflects the RF signal transmitted by the LTx, both the 1 → 3 and 2 → 3 links depend on the corresponding average path loss. Fading coefficients of different links are statistically independent among themselves, i.e., c_{i1 k1}(ℓ) is …

It is worth noting that, when φ = π/18 and the BTx employs a QPSK modulation, one gets ∆C_3 = 0.1315 b/s/Hz at E[|Γ(n)|²] = −40 dB. In this case, if the LTx is a TV tower broadcasting over a bandwidth of 6 MHz [3], then the data-rate gain is equal to 789 kbps; on the other hand, if the LTx is a Wi-Fi access point (AP) operating over a bandwidth of 20 MHz [4], [5], the gain is 2.63 Mbps.
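The "no degradation" message behind the ∆C_3 gains quoted above can be illustrated with a toy Monte Carlo experiment: if the legacy receiver exploits the extra backscatter path as added diversity, the ergodic rate can only improve. The exponential channel gains and the 0.1 scale factor below are illustrative assumptions, not the paper's channel model.

```python
import math
import random

# Toy ergodic-capacity comparison: direct path only vs. direct + backscatter path.
random.seed(7)
snr = 10.0
trials = 20000
c_without = c_with = 0.0
for _ in range(trials):
    g_direct = random.expovariate(1.0)
    g_extra = 0.1 * random.expovariate(1.0)   # weaker backscatter path
    c_without += math.log2(1 + snr * g_direct)
    c_with += math.log2(1 + snr * (g_direct + g_extra))
print(c_without / trials, c_with / trials)    # second value is never smaller
```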
B. Performance of the backscatter system when the LTx and BRx are co-located

Herein, we focus on the ergodic capacity C_1 of the backscatter system when the intended recipient BRx of the backscatter transmission is just the energy source LTx. More precisely, we report in Figs. 7 and 8 the best-case and worst-case ergodic capacities of the backscatter system versus d_12/d_13, for different values of SNR_B,1.

Hereinafter, similarly to [3], the term "legacy" refers to existing wireless communications technologies, such as, e.g., DTV, cellular, and Wi-Fi systems. The power wave reflection coefficient Γ_q depends on the chip impedance that, in its turn, depends on the chip input power. A linearized model is herein assumed for the power wave reflection coefficient [17], according to which Γ_q does not depend on the incident power. Although the transmit/receive filters might introduce statistical correlation among channel taps, it is a common practice [20] to neglect such correlation when evaluating the performance of multicarrier systems.

REFERENCES

[1] C. Boyer and S. Roy, "Backscatter communication and RFID: Coding, energy, and MIMO analysis," IEEE Trans. Commun., pp. 770-785, Mar. 2014.
[2] D. Darsena, G. Gelli, and F. Verde, "Exploiting noncircularity in backscattering communications," in Proc. of the Twelfth International Symposium on Wireless Communication Systems (ISWCS), Brussels, Belgium, Aug. 2015, pp. 1-5.
[3] V. Liu, A. Parks, V. Talla, S. Gollakota, D. Wetherall, and J. R. Smith, "Ambient backscatter: Wireless communication out of thin air," in Proc. of ACM SIGCOMM'13, Hong Kong, China, Aug. 2013, pp. 39-50.
[4] B. Kellogg, A. Parks, S. Gollakota, J. R. Smith, and D. Wetherall, "Wi-Fi backscatter: Internet connectivity for RF-powered devices," in Proc. of ACM SIGCOMM'14, Chicago, Illinois, USA, Aug. 2014, pp. 607-618.
[5] D. Bharadia, K. Joshi, M. Kotaru, and S. Katti, "BackFi: High throughput WiFi backscatter," in Proc. of ACM SIGCOMM'15, London, United Kingdom, Aug. 2015, pp. 283-296.
[6] Z. Ma, T. Zeng, G. Wang, and F. Gao, "Signal detection for ambient backscatter system with multiple receiving antennas," in Proc. of IEEE 14th Canadian Workshop on Information Theory (CWIT), St. John's, NL, Canada, July 2015, pp. 50-53.
[7] K. Lu, G. Wang, F. Qu, and Z. Zhong, "Signal detection and BER analysis for RF-powered devices utilizing ambient backscatter," in Proc. of International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, Oct. 2015, pp. 1-5.
[8] G. Wang, F. Gao, Z. Dou, and C. Tellambura, "Uplink detection and BER analysis for ambient backscatter communication systems," in Proc. of IEEE Global Communications Conference (Globecom), San Diego, CA, USA, Dec. 2015, pp. 1-6.
[9] H. Shariatmadari et al., "Machine-type communications: Current status and future perspectives toward 5G systems," IEEE Commun. Magazine, pp. 10-17, Sep. 2015.
[10] A. Sabharwal et al., "In-band full-duplex wireless: Challenges and opportunities," IEEE J. Select. Areas Commun., vol. 32, pp. 1637-1652, Sep. 2014.
[11] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge: Cambridge University Press, 1990.
[12] Z. Wang and G. B. Giannakis, "Wireless multicarrier communications - where Fourier meets Shannon," IEEE Signal Processing Magazine, pp. 29-48, May 2000.
[13] H. Stockman, "Communication by means of reflected power," Proc. IRE, pp. 1196-1204, Oct. 1948.
[14] S. J. Thomas, E. Wheeler, J. Teizer, and M. S. Reynolds, "Quadrature amplitude modulated backscatter in passive and semipassive UHF RFID systems," IEEE Trans. Microw. Theory Tech., pp. 1175-1182, Apr. 2012.
[15] D. D. King, "The measurement and interpretation of antenna scattering," Proc. IRE, pp. 770-777, July 1949.
[16] K. Kurokawa, "Power waves and the scattering matrix," IEEE Trans. Microw. Theory Tech., pp. 194-202, Mar. 1965.
[17] D. Arnitz, U. Muehlmann, and K. Witrisal, "Tag-based sensing and positioning in passive UHF RFID: Tag reflection," in Proc. of 3rd Int. EURASIP Workshop RFID Technol., Cartagena, Spain, Sep. 2010, pp. 51-56.
[18] R. C. Hansen, "Relationships between antennas as scatterers and radiators," Proc. IEEE, pp. 659-662, May 1969.
[19] U. Mengali and A. N. D'Andrea, Synchronization Techniques for Digital Receivers. New York: Plenum, 1997.
[20] M. Morelli, C.-C. J. Kuo, and M.-O. Pun, "Synchronization techniques for orthogonal frequency division multiple access (OFDMA): A tutorial review," Proc. IEEE, vol. 95, pp. 1394-1427, July 2007.
[21] B. Picinbono, "On circularity," IEEE Trans. Signal Process., vol. 42, pp. 3473-3482, Dec. 1994.
[22] I. E. Telatar, "Capacity of multi-antenna Gaussian channels," Eur. Trans. Telecommun., vol. 10, pp. 585-595, Nov./Dec. 1999.
[23] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[24] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[25] E. Biglieri, J. G. Proakis, and S. Shamai, "Fading channels: Information-theoretic and communications aspects," IEEE Trans. Inf. Theory, vol. 44, pp. 2619-2692, Oct. 1998.
[26] L. Ozarow, S. Shamai, and A. Wyner, "Information theoretic considerations for cellular mobile radio," IEEE Trans. Veh. Technol., vol. 43, pp. 359-378, May 1994.
[27] M. Morelli and U. Mengali, "A comparison of pilot-aided channel estimation methods for OFDM systems," IEEE Trans. Signal Process., pp. 3065-3073, Dec. 2001.
[28] J. G. Smith, "The information capacity of amplitude- and variance-constrained scalar Gaussian channels," Inf. Contr., pp. 203-219, 1971.
[29] F. D. Neeser and J. L. Massey, "Proper complex random processes with applications to information theory," IEEE Trans. Inf. Theory, pp. 1293-1302, July 1993.
[30] Y. Wu and S. Verdú, "The impact of constellation cardinality on Gaussian channel capacity," in Proc. of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Allerton, IL, USA, Sep.-Oct. 2010, pp. 620-628.
[31] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed. Singapore: McGraw-Hill, 1991.
[32] S. G. Wilson, Digital Modulation and Coding. Englewood Cliffs, NJ: Prentice Hall, 1996.
[33] T. L. Marzetta and B. M. Hochwald, "Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading," IEEE Trans. Inf. Theory, vol. 45, pp. 139-157, Jan. 1999.
[34] B. M. Hochwald and T. L. Marzetta, "Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading," IEEE Trans. Inf. Theory, vol. 46, pp. 543-564, Mar. 2000.
[35] L. Zheng and D. N. C. Tse, "Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple-antenna channel," IEEE Trans. Inf. Theory, vol. 48, pp. 359-383, Feb. 2002.
EXPLICIT VALUES FOR RAMANUJAN'S THETA FUNCTION ϕ(q)

Bruce C. Berndt and Örs Rebák

22 Dec 2022

Abstract. This paper provides a survey of particular values of Ramanujan's theta function ϕ(q) = Σ_{n=−∞}^{∞} q^{n²}, when q = e^{−π√n}, where n is a positive rational number. First, descriptions of the tools used to evaluate theta functions are given. Second, classical values are briefly discussed. Third, certain values due to Ramanujan and later authors are given. Fourth, the methods that are used to determine these values are described. Lastly, an incomplete evaluation found in Ramanujan's lost notebook, but now completed and proved, is discussed with a sketch of its proof.

DOI: 10.46298/hrj.2022.8923. arXiv: 2112.11882.
Dedicated to the memory of Srinivasa Ramanujan

1. INTRODUCTION

Ramanujan loved to find closed-form evaluations of many items, e.g., definite integrals, infinite series, infinite products, class invariants, singular moduli, and theta functions. Theta functions were at the epicenter of a significant portion of his research. In his final work on mock theta functions, the behavior of theta functions near their boundary of convergence on the unit circle was perhaps his chief motivating factor. Not only did Ramanujan enjoy calculating special values of individual theta functions, but he also had a marvellous insight for finding certain quotients of theta functions that yield elegant evaluations. The purpose of this paper is to provide a survey of explicit values of perhaps the most important theta function, ϕ(q), in Ramanujan's notation, which we define below. In the final section, special attention is given to a mysterious, incomplete identity found in Ramanujan's lost notebook [13, p. 206], which George Andrews and the first author left unfinished in their second volume on the lost notebook [2, p. 181]. Ramanujan's enigmatic entry has now been completed and proved by the second author [14].
2. RAMANUJAN'S THETA FUNCTIONS

Ramanujan's most general theta function f(a, b) is defined by [12, Volume 2, p. 197], [3, p. 34]

f(a, b) := Σ_{n=−∞}^{∞} a^{n(n+1)/2} b^{n(n−1)/2}, |ab| < 1. (2.1)

In classical notation, a = qe^{2iz} and b = qe^{−2iz}, where |q| < 1, z ∈ C, and Im(z) > 0. This definition of a theta function apparently originates with Ramanujan, i.e., to the best of our knowledge, no previous researcher had defined a general theta function by (2.1). For his purposes, the notation (2.1) was far more advantageous and easier to use than the classical notation. In particular, note the symmetry in a and b in (2.1), i.e., f(a, b) = f(b, a). Ramanujan loved symmetry. Often, in expressing a function, identity, or theorem, if it were possible to state it symmetrically, he would do so. The symmetry reflected in the definition of f(a, b) is inherited by its representation by the Jacobi triple product identity, perhaps the most useful property of theta functions, given by [12, Volume 2, p. 197], [3, p. 35, Entry 19]

f(a, b) = (−a; ab)_∞ (−b; ab)_∞ (ab; ab)_∞, |ab| < 1, (2.2)

where

(a; q)_∞ := lim_{n→∞} (a; q)_n, |q| < 1, and (a; q)_n := ∏_{k=0}^{n−1} (1 − aq^k), n ≥ 1, (a; q)_0 := 1.

In Ramanujan's notation, the three most important special cases of f(a, b), in their series and product representations from (2.1) and (2.2), respectively, are defined by

ϕ(q) := f(q, q) = Σ_{n=−∞}^{∞} q^{n²} = (−q; q²)²_∞ (q²; q²)_∞, (2.3)

ψ(q) := f(q, q³) = Σ_{n=0}^{∞} q^{n(n+1)/2} = (q²; q²)_∞/(q; q²)_∞, (2.4)

and

f(−q) := f(−q, −q²) = Σ_{n=−∞}^{∞} (−1)^n q^{n(3n−1)/2} = (q; q)_∞. (2.5)

In this paper, we focus on explicit values of ϕ(q) of the form

ϕ(e^{−π√n}), (2.6)

where n is a positive rational number. In particular, concentration is given to those cases when n is the square of a positive integer.
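The two representations of ϕ(q) in (2.3) can be checked numerically with plain floats; the truncation limits below are comfortable overkill at q = 0.3.

```python
# Numerical check of (2.3): the series and product sides of phi(q) agree.
def phi_series(q, N=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N))

def phi_product(q, N=200):
    # (-q; q^2)_inf^2 * (q^2; q^2)_inf, truncated
    p = 1.0
    for k in range(N):
        p *= (1 + q ** (2 * k + 1)) ** 2 * (1 - q ** (2 * k + 2))
    return p

q = 0.3
print(phi_series(q), phi_product(q))  # both ~ 1.61624
```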
The representation (2.6) appears naturally in the theory of class invariants and singular moduli, which, along with modular equations, provide the most useful known tools for determining exact values of theta functions. Ramanujan apparently used these connections to calculate several original values of ϕ(e^{−nπ}).

3. BACKGROUND NEEDED FOR THE DETERMINATION OF VALUES FOR ϕ(q)

Recall that the ordinary hypergeometric function ₂F₁ is defined for |z| < 1 by

₂F₁(a, b; c; z) := Σ_{n=0}^{∞} ((a)_n (b)_n)/((c)_n n!) z^n,

where (a)_0 := 1 and (a)_n := a(a+1)(a+2)···(a+n−1), n ≥ 1. We use Ramanujan's notation to state one of the fundamental results in the classical theory of elliptic and theta functions, namely [3]:

ϕ²(q) = ₂F₁(½, ½; 1; x) =: z, (3.1)

where

q := exp(−π ₂F₁(½, ½; 1; 1−x)/₂F₁(½, ½; 1; x)) =: e^{−y}. (3.2)

Suppose that a relation

Ω(x, e^{−y}, z) = 0 (3.3)

holds. This then implies an equation of the form

Ω(((1 − √(1−x))/(1 + √(1−x)))², e^{−2y}, ½ z (1 + √(1−x))) = 0, (3.4)

which we call obtaining a formula by duplication. The equation (3.3) also implies that

Ω(4√x/(1 + √x)², e^{−y/2}, z(1 + √x)) = 0, (3.5)

which we call obtaining a formula by dimidiation. Lastly, (3.3) implies that

Ω(x/(x−1), −e^{−y}, z√(1−x)) = 0, (3.6)

which we call obtaining a formula by change of sign. Proofs for all three processes can be found in [3, pp. 125, 126].

Modular equations, singular moduli, and class invariants are the keys to determining specific values of ϕ(e^{−π√n}). To define a modular equation, we first need to define the complete elliptic integral of the first kind K(k), namely,

K(k) := ∫_0^{π/2} dt/√(1 − k² sin² t) = (π/2) ₂F₁(½, ½; 1; k²), |k| < 1. (3.7)

The number k is called the modulus, and k′ := √(1 − k²) is the complementary modulus. Let K, K′, L, and L′ be complete elliptic integrals of the first kind associated with the moduli k, k′, ℓ, and ℓ′, respectively. In his notebooks [12], Ramanujan always used the notations α = k² and β = ℓ², and so we shall also do this in the remainder of this article.
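Equation (3.7) can be cross-checked with the standard library alone: compute K(k) by direct quadrature and by the hypergeometric series (π/2) ₂F₁(½, ½; 1; k²).

```python
import math

def K_integral(k, steps=20000):
    # Midpoint rule on [0, pi/2] for the integral definition in (3.7).
    h = (math.pi / 2) / steps
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(steps))

def K_series(k, terms=200):
    # (pi/2) * 2F1(1/2, 1/2; 1; k^2); coefficient recursion ((1/2)_n / n!)^2.
    total, coeff, x = 0.0, 1.0, k * k
    for n in range(terms):
        total += coeff * x ** n
        ratio = (n + 0.5) / (n + 1.0)
        coeff *= ratio * ratio
    return math.pi / 2 * total

print(K_integral(0.5), K_series(0.5))  # K(1/2) = 1.68575...
```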
Suppose that for some positive integer n, the equality

n ₂F₁(½, ½; 1; 1−α)/₂F₁(½, ½; 1; α) = ₂F₁(½, ½; 1; 1−β)/₂F₁(½, ½; 1; β) (3.8)

holds. Then a modular equation of degree n is a relation between α and β that is induced by (3.8). The multiplier m for a modular equation of degree n is defined by [3, p. 230]

m := ϕ²(q)/ϕ²(q^n) = ₂F₁(½, ½; 1; α)/₂F₁(½, ½; 1; β). (3.9)

Let n be a positive rational number. Referring to (3.1) and (3.7), we define α_n by

ϕ²(e^{−π√n}) = ₂F₁(½, ½; 1; α_n) = (2/π) K(√α_n). (3.10)

Then √α_n is called a singular modulus. The equation (3.10) shows that the values of theta functions, hypergeometric functions, and complete elliptic integrals are intimately related, i.e., the evaluation of any one of these three quantities in (3.10) yields a value for each of the other two objects. In the literature, perhaps more attention has been given to the evaluation of K(√α_n). The techniques that are used typically express the values of K(√α_n) in terms of gamma functions. Moreover, Selberg and Chowla [15] showed that for any singular modulus √α_n, K(√α_n) can be expressed in terms of gamma functions. For specific evaluations, see the papers by J. M. Borwein and I. J. Zucker [9] and Zucker [19]. A complete list of the values of ϕ(e^{−π√n}), 1 ≤ n ≤ 16, can be found in the well-known treatise [ ]. Define

χ(q) := (−q; q²)_∞ = 2^{1/6} {α(1−α)/q}^{−1/24}, |q| < 1, (3.11)

where, in the latter representation, q is given by (3.2) (with x replaced by α), and where a proof of the latter representation of χ(q) can be found in [3, p. 124]. If q = e^{−π√n}, where n is a positive rational number, then the class invariant G_n is defined by

G_n := 2^{−1/4} q^{−1/24} χ(q). (3.12)

Using the latter representation for χ in (3.11) and (3.12), we deduce that

G_n = {4α(1−α)}^{−1/24}. (3.13)

If q = e^{−π} and β has degree n over α, it follows from (3.13) that

G_{n²} = {4β(1−β)}^{−1/24}.
(3.14)

To explicitly determine a value of ϕ(e^{−nπ}) for a certain positive integer n, we choose an appropriate modular equation (or equations) of degree n that frequently contains the multiplier m, given by (3.9). We now realize that we are free to choose any convenient value of α, 0 < α < 1. We thus obtain an equation (or equations) involving m and β. Our goal is to express our equation(s) in terms of m and a class invariant G_{n²}, whose value is known. Amazingly, Ramanujan calculated a total of 116 different class invariants. See a complete table of Ramanujan's class invariants in [5, pp. 189-204].

4. CLASSICAL VALUES

The following values for ϕ(e^{−π}), ϕ(e^{−π√2}), and ϕ(e^{−2π}) are classical and were also discovered by Ramanujan:

ϕ(e^{−π}) = π^{1/4}/Γ(3/4), ϕ(e^{−π√2}) = (Γ(9/8)/Γ(5/4)) √(Γ(1/4)/(2^{1/4} π)), and ϕ(e^{−2π}) = (√(2+√2)/2) π^{1/4}/Γ(3/4). (4.1)

For example, applying duplication twice to (3.1) yields

ϕ(e^{−4y}) = ½ √z (1 + (1−α)^{1/4}). (4.2)

Thus, beginning with ϕ(e^{−π}) and repeating the aforementioned three processes, we can obtain an infinite family of evaluations that can be added to those in (4.1).

5. VALUES

Ramanujan posed a problem [11], wherein the second part is equivalent to establishing the value

ϕ(e^{−5π}) = ϕ(e^{−π})/√(5√5 − 10). (5.1)

The identity in question is

½ + Σ_{n=1}^{∞} e^{−πn²x} cos(πn²√(1−x²)) = ((√2 + √(1+x))/√(1−x)) Σ_{n=1}^{∞} e^{−πn²x} sin(πn²√(1−x²)).

For further discussion, see [7, pp. 32, 33]. In addition to the value of ϕ(e^{−5π}), values of ϕ(e^{−3π}), ϕ(e^{−7π}), ϕ(e^{−9π}), and ϕ(e^{−45π}) were also recorded by Ramanujan in his first notebook [12, Volume 1, pp. 284, 297, 287, 312], [5, pp. 327, 328]. They were first proved in print by Heng Huat Chan and the first author [6]. We record these four values:

ϕ(e^{−3π})/ϕ(e^{−π}) = 1/(6√3 − 9)^{1/4}, (5.2)

ϕ²(e^{−7π})/ϕ²(e^{−π}) = ((√(13+√7) + √(7+3√7))/14) (28)^{1/8}, (5.3)

ϕ(e^{−9π})/ϕ(e^{−π}) = (1 + (2(√3+1))^{1/3})/3, (5.4)

and

ϕ(e^{−45π})/ϕ(e^{−π}) = 3 + √ 5 + √ 3 + √ 5 + (60) 1/4 3 2 + √ 3 3 10 + 10 √ 5 .
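The classical value of ϕ(e^{−π}) from (4.1) and the ratios (5.1), (5.2), and (5.4) quoted above can all be spot-checked against the rapidly convergent series definition of ϕ, using only `math.gamma` from the standard library.

```python
import math

# Numerical spot-checks of (4.1), (5.1), (5.2), and (5.4).
def phi(q, N=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N))

p1 = phi(math.exp(-math.pi))
print(p1, math.pi ** 0.25 / math.gamma(0.75))                 # (4.1)
print(phi(math.exp(-5 * math.pi)),
      p1 / math.sqrt(5 * math.sqrt(5) - 10))                  # (5.1)
print(phi(math.exp(-3 * math.pi)) / p1,
      1 / (6 * math.sqrt(3) - 9) ** 0.25)                     # (5.2)
print(phi(math.exp(-9 * math.pi)) / p1,
      (1 + (2 * (math.sqrt(3) + 1)) ** (1 / 3)) / 3)          # (5.4)
```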
We provide a proof of only the evaluation (5.2); it is taken from [6] and [5, pp. 329, 330].

Proof. If β has degree 3 over α, then one of Ramanujan's 15 modular equations of degree 3 and its reciprocal modular equation are given by [3]

    m^2 = (β/α)^{1/2} + ((1 − β)/(1 − α))^{1/2} − ( β(1 − β) / (α(1 − α)) )^{1/2}    (5.5)

and

    9/m^2 = (α/β)^{1/2} + ((1 − α)/(1 − β))^{1/2} − ( α(1 − α) / (β(1 − β)) )^{1/2}.    (5.6)

Set α = 1/2 in (5.5) and (5.6). Next, multiply both sides of (5.6) by 2{β(1 − β)}^{1/2} and then subtract the result from (5.5). This gives

    m^2 − 2{β(1 − β)}^{1/2} · 9/m^2 = 1 − 2{β(1 − β)}^{1/2},

which, with the use of (3.14) with n = 3, yields

    m^2 − 9 / (m^2 G_9^{12}) = 1 − 1 / G_9^{12}.    (5.7)

Multiply both sides of (5.7) by G_9^6 and use the value [5, p. 189]

    G_9 = ( (1 + √3) / √2 )^{1/3}    (5.8)

to arrive at

    (G_9^3 m)^2 − 9 / (G_9^3 m)^2 = G_9^6 − G_9^{−6} = 2√3.

Hence, (G_9^3 m)^2 = 3√3, and, by (5.8), m^2 = 6√3 − 9. If we now appeal to (3.9), we complete the proof of (5.2).

In [6], the first author and Chan also established explicit values for ϕ(e^{−13π}), ϕ(e^{−27π}), and ϕ(e^{−63π}). Values of the associated hypergeometric series were also derived. As Ramanujan undoubtedly did to determine his values, in their proofs these authors also used Ramanujan's modular equations and class invariants. Next, we provide the three above-mentioned values for ϕ(q). First, define

    G := G_169 = (1/3) { √13 + 2 + ( (13 + 3√13)/2 )^{1/3} [ ( (11 + √13)/2 + 3√3 )^{1/3} + ( (11 + √13)/2 − 3√3 )^{1/3} ] }

and

    a := (G − G^{−1})^3 + 7(G − G^{−1}).

Then

    ϕ(e^{−13π}) / ϕ(e^{−π}) = ( G^{−3} (a + √(a^2 + 52)) / 2 )^{−1/2}.

Next,

    ϕ(e^{−27π}) / ϕ(e^{−3π}) = (1/3) [ 1 + (√3 − 1) ( ( (2(√3 + 1))^{1/3} + 1 ) / ( (2(√3 − 1))^{1/3} − 1 ) )^{1/3} ].    (5.9)

By combining the identity (5.9) with the value (5.2), we obtain the value of ϕ(e^{−27π}). Lastly,

    ϕ(e^{−63π}) / ϕ(e^{−7π}) = 1/3 [ 1 + 4 + √7 − 7^{1/4} 2 3√3 + √7(2 + √3) 1/6 × 2 + √7 + 7 + 4√7 2 3 + √7 + (6√7)^{1/4} 3 + √7 − (6√7)^{1/4} ].    (5.10)

Combining the evaluations (5.10) and (5.3), we determine the value of ϕ(e^{−63π}).
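The evaluations (5.2)–(5.4) and the G_169-based value of ϕ(e^{−13π}) admit a quick numerical confirmation. The following stdlib script is our own sanity check, not part of the paper; all variable names are ours.

```python
import math

def phi(q, terms=50):
    # phi(q) = 1 + 2*sum_{n>=1} q^(n^2)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

p1 = phi(math.exp(-math.pi))
s3, s7, s13 = math.sqrt(3), math.sqrt(7), math.sqrt(13)

# (5.2)
lhs52, rhs52 = phi(math.exp(-3 * math.pi)) / p1, (6 * s3 - 9) ** -0.25
# (5.3)
lhs53 = (phi(math.exp(-7 * math.pi)) / p1) ** 2
rhs53 = (math.sqrt(13 + s7) + math.sqrt(7 + 3 * s7)) / 14 * 28 ** 0.125
# (5.4)
lhs54 = phi(math.exp(-9 * math.pi)) / p1
rhs54 = (1 + (2 * (s3 + 1)) ** (1 / 3)) / 3
# the G_169-based value of phi(e^{-13 pi}) / phi(e^{-pi})
G = (s13 + 2 + ((13 + 3 * s13) / 2) ** (1 / 3)
     * (((11 + s13) / 2 + 3 * s3) ** (1 / 3)
        + ((11 + s13) / 2 - 3 * s3) ** (1 / 3))) / 3
a = (G - 1 / G) ** 3 + 7 * (G - 1 / G)
lhs13 = phi(math.exp(-13 * math.pi)) / p1
rhs13 = ((G ** -3) * (a + math.sqrt(a * a + 52)) / 2) ** -0.5

for l, r in [(lhs52, rhs52), (lhs53, rhs53), (lhs54, rhs54), (lhs13, rhs13)]:
    print(l, r)  # each pair agrees to double precision
```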
6. VALUES OF ϕ(e^{−nπ}), MODULAR EQUATIONS; CONTRIBUTIONS OF JINHEE YI

In her paper [17], Jinhee Yi established several new values for ϕ(e^{−nπ}). In subsequent papers [10] and [18] with her colleagues, further new values were derived. We next briefly describe her work and offer a few of her new values of ϕ(e^{−nπ}). For any positive real numbers n and k, define

    h_{k,n} := ϕ(e^{−π√(n/k)}) / ( k^{1/4} ϕ(e^{−π√(nk)}) )  and  h′_{k,n} := ϕ(−e^{−2π√(n/k)}) / ( k^{1/4} ϕ(−e^{−2π√(nk)}) ).    (6.1)

Two similar quotients involving the Dedekind eta-function (or f(−q), defined in (2.5)) may be defined. In an elementary way, Yi derived several relations among these four quotients. The following theorem is an example [17, p. 387]. Let

    P = ϕ(q) / ϕ(q^5)  and  Q = ϕ(q^3) / ϕ(q^15).

Then

    PQ + 5/(PQ) = (Q/P)^2 + 3(Q/P) + 3(P/Q) − (P/Q)^2.

Employing quotients of theta functions, in particular those in (6.1), in the aforementioned four modular equations, Yi found several new values of ϕ(e^{−π√n}) [17, pp. 391, 394, 396, 398–400]. We offer only a small sampling:

    ϕ(e^{−√3 π}) / ( 3^{1/4} ϕ(e^{−3√3 π}) ) = ( 1 − 2^{1/3} + 4^{1/3} ) / √3,

    ϕ(e^{−√(5/3) π}) / ( 3^{1/4} ϕ(e^{−√15 π}) ) = √(√5 − 1) / √2,

and

    ϕ(−e^{−6π}) / ϕ(e^{−π}) = ( 1 + √3 + √2 · 3^{3/4} )^{1/3} / ( 2^{11/24} 3^{3/8} (√3 − 1)^{1/6} ).

This study continues in [10] and [18], where further modular equations of 'small' degree are used in conjunction with quotients of theta functions, including (6.1). An example from [10, p. 1325] follows:

    ϕ(e^{−2π/√5}) / ( 5^{1/4} ϕ(e^{−2√5 π}) ) = 2√(2a) / ( (3 + √2 + √5 + √10)(a − √5) ),

where

    a := (1 + √5)/2 + √( (1 + √5)/2 ).

Lastly, an example from [18, p. 772], with corrected sign errors, is given:

    ϕ(e^{−π}) / ( √3 ϕ(e^{−9π}) ) = 2 − √3 − 3√4 (5 − 3√3) (11√3 − 19)^{1/3} − 2(11√3 − 19)^{1/3}.    (6.2)

Comparing (6.2) with (5.4), we see that the evaluations take rather different forms.

7. AN INCOMPLETE THETA FUNCTION EVALUATION; WORKS OF SEUNG HWAN SON AND THE SECOND AUTHOR

On page 206 in his lost notebook [13], [2, p. 180], Ramanujan recorded the following identities.

Entry 7.1.
Let

    ϕ(q^{1/7}) / ϕ(q^7) = 1 + u + v + w.    (7.1)

Then

    p := uvw = 8q^2 (−q; q^2)_∞ / (−q^7; q^14)_∞^7    (7.2)

and

    ϕ^8(q) / ϕ^8(q^7) − (2 + 5p) ϕ^4(q) / ϕ^4(q^7) + (1 − p)^3 = 0.    (7.3)

(We have corrected a misprint; Ramanujan wrote 7^{3/4} instead of 7^{−3/4} on the right-hand side of (7.6).) The terms (−)^{2/7} in (7.6) were not divulged by Ramanujan. To find the missing terms, one has to first solve the equation in (7.3) for ϕ^4(q)/ϕ^4(q^7) and choose the correct root. Second, one needs to solve the equation in (7.5) for ξ and use the roots in the correct order in (7.4). Ramanujan gave us no hints on how to do this. Since Ramanujan used the exponent 2/7 on the right-hand side of (7.6), we are almost certain that he had started to write down the same representation that we describe below. Our guess is that he stopped after finding α, β, and γ, but before he figured out their correct order.

The incomplete assertion (7.6) is one of only a few occasions in his notebooks where Ramanujan did not complete his formula. We provide one example. On page 210 in his lost notebook, Ramanujan indicates that he has found the values of 14 specific Rogers–Ramanujan continued fractions, but he gives the values of only three of them. He evidently knew that he could perform all of these evaluations, but he had other mathematical ideas that were more pressing to investigate. See [1, pp. 62–75] for these evaluations.

In their book [2], George Andrews and the first author were led by Ramanujan to correctly determine that in (7.1) [2, p. 181]

    u := 2q^{1/7} f(q^5, q^9) / ϕ(q^7),  v := 2q^{4/7} f(q^3, q^11) / ϕ(q^7),  w := 2q^{9/7} f(q, q^13) / ϕ(q^7).

In a wonderful paper [16], Seung Hwan Son established proofs of (7.2)–(7.5), which were reproduced in [2, pp. 181–184]. When Andrews and the first author wrote their second volume [2] on Ramanujan's lost notebook, they were unable to complete Ramanujan's evaluation of ϕ(e^{−7π√7})/ϕ(e^{−π√7}).
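Son's identity (7.2), with u, v, and w as just defined, can be checked numerically for any fixed |q| < 1 by summing the theta series directly. The script below is our own illustrative verification, not the authors' computation; `f` and `qprod` are our helper names, and `f(a, b)` sums Ramanujan's general theta function Σ a^{n(n+1)/2} b^{n(n−1)/2}.

```python
def f(a, b, N=40):
    # Ramanujan's theta function f(a, b) = sum_n a^{n(n+1)/2} b^{n(n-1)/2}
    return sum(a ** (n * (n + 1) // 2) * b ** (n * (n - 1) // 2)
               for n in range(-N, N + 1))

def phi(q):
    # phi(q) = f(q, q) = sum_n q^{n^2}
    return f(q, q)

def qprod(a, step, terms=200):
    # (-a; step)_infinity = prod_{k>=0} (1 + a * step^k)
    out = 1.0
    for k in range(terms):
        out *= 1 + a * step ** k
    return out

q = 0.2  # an arbitrary point in (0, 1)
u = 2 * q ** (1 / 7) * f(q ** 5, q ** 9) / phi(q ** 7)
v = 2 * q ** (4 / 7) * f(q ** 3, q ** 11) / phi(q ** 7)
w = 2 * q ** (9 / 7) * f(q, q ** 13) / phi(q ** 7)
p = u * v * w
rhs = 8 * q ** 2 * qprod(q, q ** 2) / qprod(q ** 7, q ** 14) ** 7
print(p, rhs)  # the two sides of (7.2) agree to double precision
```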
The completion of (7.6) has recently been accomplished by the second author [14]; a brief sketch of his proof will now be given. As prescribed by Ramanujan in (7.6), we set q = e^{−π/√7} in (7.1). Next, from (7.2), p = 1. Then we have to determine the correct root of the quadratic equation (7.3). After doing so, we find that

    ϕ^4(q) / ϕ^4(q^7) = ϕ^4(e^{−π/√7}) / ϕ^4(e^{−π√7}) = 7.

We now have all the coefficients of the polynomial r(ξ) defined in (7.5). We therefore need to determine the zeros α, β, γ of

    r(ξ) = ξ^3 − 6ξ^2 + 5ξ − 1,

which are

    1 / (2 cos(kπ/7))^2,  k = 1, 2, 3.    (7.7)

It would seem that with (7.7) we can determine u, v, and w in (7.4), and put their values in (7.1) to accomplish our goal of establishing the missing terms in (7.6). However, it remains to determine the correct order of the roots α, β, γ in (7.4). In [14], the values of ϕ(e^{−21π}), ϕ(e^{−35π}), and ϕ(e^{−49π}) are evaluated as well.

8. CONCLUDING REMARKS

Continuing our discussion above on determining further values of ϕ(e^{−nπ}) from previously determined values, we can, in principle, apply the processes of duplication (3.4), dimidiation (3.5), and change of sign (3.6) to ϕ(e^{−nπ}) to obtain values of ϕ(±e^{−n·2^a π}), where a ∈ Z. For example, we might attempt to use the value (5.2) and the value of ϕ(e^{−6π}) given in [17] to find values of ϕ(e^{−3·2^a π}). However, because of the large number of manipulations, we would most likely need to invoke a computer denesting program. Consequently, if the value of the requisite class invariant is known, it may be easier to directly apply the general procedures described above in an attempt to calculate a certain value of ϕ(±e^{−n·2^a π}) in closed form, which also may or may not be more elegant than what might be obtained by other means. Because of different approaches, different representations for the same ϕ(e^{−nπ}) may arise, as we demonstrated above for ϕ(e^{−9π})/ϕ(e^{−π}).
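Several of the ingredients above lend themselves to direct numerical verification: Yi's degree-15 modular equation of Theorem 6.2, two of her explicit values, the roots (7.7) of r(ξ), and the completed septic evaluation quoted in full later in the text. The script below is our own check, not part of the paper; the choice q = 0.37 for the modular equation is arbitrary.

```python
import math

def phi(q, terms=60):
    # phi(q) = 1 + 2*sum_{n>=1} q^(n^2); also valid for negative q
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

# Yi's degree-15 modular equation (Theorem 6.2) at an arbitrary q in (0, 1):
q = 0.37
P, Q = phi(q) / phi(q ** 5), phi(q ** 3) / phi(q ** 15)
mod_lhs = P * Q + 5 / (P * Q)
mod_rhs = (Q / P) ** 2 + 3 * Q / P + 3 * P / Q - (P / Q) ** 2

# two of Yi's explicit values:
yi1 = phi(math.exp(-math.pi * math.sqrt(3))) / (3 ** 0.25 * phi(math.exp(-math.pi * math.sqrt(27))))
yi1_closed = (1 - 2 ** (1 / 3) + 4 ** (1 / 3)) / math.sqrt(3)
yi2 = phi(math.exp(-math.pi * math.sqrt(5 / 3))) / (3 ** 0.25 * phi(math.exp(-math.pi * math.sqrt(15))))
yi2_closed = math.sqrt(math.sqrt(5) - 1) / math.sqrt(2)

# (7.7): zeros of r(xi) = xi^3 - 6 xi^2 + 5 xi - 1
c = math.cos
residues = [xi ** 3 - 6 * xi ** 2 + 5 * xi - 1
            for xi in (1 / (2 * c(k * math.pi / 7)) ** 2 for k in (1, 2, 3))]

# the completed septic evaluation (the sum 1 + u + v + w is symmetric in the roots):
u = (c(2 * math.pi / 7) / (2 * c(3 * math.pi / 7) ** 2)) ** (2 / 7)
v = (c(math.pi / 7) / (2 * c(2 * math.pi / 7) ** 2)) ** (2 / 7)
w = (c(3 * math.pi / 7) / (2 * c(math.pi / 7) ** 2)) ** (2 / 7)
septic_lhs = phi(math.exp(-7 * math.pi * math.sqrt(7)))
septic_rhs = 7 ** -0.75 * phi(math.exp(-math.pi * math.sqrt(7))) * (1 + u + v + w)

print(mod_lhs, mod_rhs)
print(yi1, yi1_closed, yi2, yi2_closed)
print(residues, septic_lhs, septic_rhs)
```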
Readers will have observed that the methods of Yi in [17], [10], and [18] are apparently useful only when the degrees of the modular equations that are employed are 'small.' Likewise, the methods of the first author and Chan, and likely those of Ramanujan as well, become considerably more complicated to use for 'larger' degrees. In particular, we see from (3.14) that class invariants for the square of the index n in ϕ(e^{−nπ}) are necessary. But the methods of all cited authors are similar. The theta functions ψ(q) and f(−q) can be expressed in terms of ϕ(q) [12, Volume 2, p. 198], [3, pp. 39, 40, Entries 24, 25]. Consequently, Ramanujan also established formulae for ψ(q) and f(−q) analogous to those for ϕ(q), illustrated by (4.2) above [5, pp. 337–351]. The ideas developed in [17] were extended by Yi and several others, and used to find similar values for ψ(q), certain other products of theta functions, and the Rogers–Ramanujan continued fraction.

With x, y, and z related as in (3.1) and (3.2), consider an equation of the form Ω(x, e^{−y}, z) = 0.    (3.3)

Entry 3.1. If we replace α by 1 − β, β by 1 − α, and m by n/m, where n is the degree of the modular equation, we obtain a modular equation of the same degree.

In fact, much more is true. By the processes of duplication, dimidiation, and change of sign, Ramanujan expressed ϕ(−e^{−y}), ϕ(e^{−2y}), ϕ(−e^{−2y}), ϕ(e^{−4y}), ϕ(e^{−y/2}), ϕ(−e^{−y/2}), ϕ(e^{−y/4}), and ϕ(−e^{−y/4}) in terms of the modulus √α and √z, which is a factor in each of these expressions [12, Volume 2, p. 210], [3, p. 122, Entry 10]. For example, [3, Entry 23].

5. VALUES OF ϕ(e^{−nπ}) FOUND BY RAMANUJAN; WORK OF HENG HUAT CHAN AND THE FIRST AUTHOR

While in England, Ramanujan submitted a problem to the Journal of the Indian Mathematical Society [12, Volume 2, p. 207], [3, pp. 103, 104, Entry 6]. There were three claimants for a solution to Ramanujan's problem, one of which was incorrect.
For the first part of his question, Ramanujan asked readers to prove that, for |x| < 1, the cosine–sine identity displayed in Section 5 holds; the value (5.1) can be found in both Ramanujan's first [12, Volume 1, p. 285], [5, p. 327] and second notebooks [12, Volume 2, p. 227], [3, pp. 209, 210].

Theorem 6.1. For all positive real numbers k, a, b, c, and d, with ab = cd,

    h_{a,b} h_{kc,kd} = h_{ka,kb} h_{c,d}.

She next stated or derived four modular equations (of degrees 4, 9, 15, 15). We give one of the modular equations of degree 15 [17, p. 391], [4, p. 235].

Theorem 6.2. Let

The choice

    (α, β, γ) = ( 1/(2 cos(3π/7))^2, 1/(2 cos(2π/7))^2, 1/(2 cos(π/7))^2 )

is correct. Hence,

    u = ( cos(2π/7) / (2 cos^2(3π/7)) )^{2/7},  v = ( cos(π/7) / (2 cos^2(2π/7)) )^{2/7},  and  w = ( cos(3π/7) / (2 cos^2(π/7)) )^{2/7}.

In conclusion, for q = e^{−π/√7} the values of u, v, and w in (7.1) have been determined. Ramanujan's incomplete formula (7.6) can now be made precise by

    ϕ(e^{−7π√7}) = 7^{−3/4} ϕ(e^{−π√7}) [ 1 + ( cos(π/7) / (2 cos^2(2π/7)) )^{2/7} + ( cos(2π/7) / (2 cos^2(3π/7)) )^{2/7} + ( cos(3π/7) / (2 cos^2(π/7)) )^{2/7} ].

[12, Volume 2, pp. 210, 211], [3, pp. 123, 124, Entries 11, 12]. See, for example, [5, p. 326] for several explicit values of f(−q) recorded on page 250 in Ramanujan's first notebook [12, Volume 1]. Ramanujan also evaluated a certain quotient of ψ-functions for several values of the parameters [12, Volume 1, pp. 338, 339].

Acknowledgments. The authors are grateful to the referee for a careful reading of their paper.

REFERENCES

[1] G. E. Andrews and B. C. Berndt, Ramanujan's Lost Notebook, Part I, Springer, New York, 2005.
[2] G. E. Andrews and B. C. Berndt, Ramanujan's Lost Notebook, Part II, Springer, New York, 2009.
[3] B. C. Berndt, Ramanujan's Notebooks, Part III, Springer-Verlag, New York, 1991.
[4] B. C. Berndt, Ramanujan's Notebooks, Part IV, Springer-Verlag, New York, 1994.
[5] B. C. Berndt, Ramanujan's Notebooks, Part V, Springer-Verlag, New York, 1998.
[6] B. C. Berndt and H. H. Chan, Ramanujan's explicit values for the classical theta function, Mathematika 42 (1995), 278–294.
[7] B. C. Berndt, Y.-S. Choi, and S.-Y. Kang, The problems submitted by Ramanujan to the Journal of the Indian Mathematical Society, in Continued Fractions: From Analytic Number Theory to Constructive Approximation, B. C. Berndt and F. Gesztesy, eds., Contemp. Math. 236, American Mathematical Society, Providence, RI, 1999, pp. 15–56.
[8] J. M. Borwein and P. B. Borwein, Pi and the AGM, Wiley, New York, 1987.
[9] J. M. Borwein and I. J. Zucker, Fast evaluation of the gamma function for small rational fractions using complete elliptic integrals of the first kind, IMA J. Numerical Anal. 12(4) (1992), 519–526.
[10] D. H. Park and J. Yi, Some modular equations of degree 5 and their applications, Bull. Korean Math. Soc. 50 (2013), No. 4, 1315–1328.
[11] S. Ramanujan, Question 629, J. Indian Math. Soc. 7 (1918), 40.
[12] S. Ramanujan, Notebooks (2 volumes),
Tata Institute of Fundamental Research, Bombay, 1957; second ed., 2012.
[13] S. Ramanujan, The Lost Notebook and Other Unpublished Papers, Narosa, New Delhi, 1988.
[14] Ö. Rebák, The three missing terms in Ramanujan's septic theta function identity, submitted.
[15] A. Selberg and S. Chowla, On Epstein's zeta-function, J. Reine Angew. Math. 227 (1967), 86–110.
[16] S. H. Son, Septic theta function identities in Ramanujan's lost notebook, Acta Arith. 2 (2001), 361–374.
[17] J. Yi, Theta-function identities and the explicit formulas for theta-functions and their applications, J. Math. Anal. Applics. 292 (2004), 381–400.
[18] J. Yi, M. G. Cho, J. H. Kim, S. H. Lee, J. M. Yu, and D. H. Paek, On some modular equations and their applications I, Bull. Korean Math. Soc. 50 (2013), No. 3, 761–776.
[19] I. J. Zucker, The evaluation in terms of Γ-functions of the periods of elliptic curves admitting complex multiplication, Math. Proc. Cambridge Philos. Soc. 82 (1977), 111–118.
[]
[ "Improving BERT Pretraining with Syntactic Supervision", "Improving BERT Pretraining with Syntactic Supervision" ]
[ "Giorgos Tziafas [email protected] \nUniversity of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n\n", "Konstantinos Kogkalidis [email protected] \nUniversity of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n\n", "Gijs Wijnholds [email protected] \nUniversity of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n\n", "Michael Moortgat [email protected] \nUniversity of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n\n" ]
[ "University of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n", "University of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n", "University of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n", "University of Groningen Utrecht Institute of Linguistics OTS\nUtrecht University\n" ]
[]
Bidirectional masked Transformers have become the core theme in the current NLP landscape. Despite their impressive benchmarks, a recurring theme in recent research has been to question such models' capacity for syntactic generalization. In this work, we seek to address this question by adding a supervised, token-level supertagging objective to standard unsupervised pretraining, enabling the explicit incorporation of syntactic biases into the network's training dynamics. Our approach is straightforward to implement, induces a marginal computational overhead and is general enough to adapt to a variety of settings. We apply our methodology on Lassy Large, an automatically annotated corpus of written Dutch. Our experiments suggest that our syntax-aware model performs on par with established baselines, despite Lassy Large being one order of magnitude smaller than commonly used corpora.
null
[ "https://arxiv.org/pdf/2104.10516v1.pdf" ]
233,324,275
2104.10516
5d3562312a3b6b5ba0f4f1fb3500041df2ffcf20
Improving BERT Pretraining with Syntactic Supervision

Giorgos Tziafas, Konstantinos Kogkalidis, Gijs Wijnholds, Michael Moortgat
University of Groningen; Utrecht Institute of Linguistics OTS, Utrecht University

Abstract: Bidirectional masked Transformers have become the core theme in the current NLP landscape. Despite their impressive benchmarks, a recurring theme in recent research has been to question such models' capacity for syntactic generalization. In this work, we seek to address this question by adding a supervised, token-level supertagging objective to standard unsupervised pretraining, enabling the explicit incorporation of syntactic biases into the network's training dynamics. Our approach is straightforward to implement, induces a marginal computational overhead and is general enough to adapt to a variety of settings. We apply our methodology on Lassy Large, an automatically annotated corpus of written Dutch. Our experiments suggest that our syntax-aware model performs on par with established baselines, despite Lassy Large being one order of magnitude smaller than commonly used corpora.

Introduction

In recent years, the advent of Transformers (Vaswani et al., 2017) has paved the way for high-performing neural language models, with BERT (Devlin et al., 2019) and its many variants being the main exemplar (Liu et al., 2019; Sanh et al., 2019; Lan et al., 2020). BERT-like models achieve state-of-the-art scores in most major NLP benchmarks via a two-step process. First, they are trained on massive-scale, minimally processed raw text corpora by employing the so-called masked language modeling (MLM) objective.
Task-specific refinements are then obtained by fine-tuning the pretrained model on labeled corpora, usually orders of magnitude smaller in size. This pipeline, despite its attested performance, suffers from two key limitations. On the one hand, training a BERT-like model from scratch requires an often prohibitive amount of data and computational resources, barring entry to research projects that lack access to either. On the other hand, a naturally emerging question is whether such models develop an internal notion of syntax. Discovery of structural biases is hindered by their distributed, opaque representations, requiring manually designed probing tasks to extract (Hewitt and Manning, 2019; Tenney et al., 2019; Kim et al., 2020; Clark et al., 2019a; Goldberg, 2019; Hu et al., 2020). Alternatively, when syntactic evaluation becomes the focal point, it is usually deferred to downstream tasks (Kitaev et al., 2019; Zhang et al., 2020a), owing both to the lack of sufficiently large labeled corpora as well as the computational bottleneck imposed by hard-to-parallelize operations. In this work, we seek to alleviate both points by considering them in tandem. Contrary to prior work, we consider the case of introducing explicit syntactic supervision during the pretraining process and investigate whether it can allow for a reduction in the data needs of a BERT-like language model. To facilitate this, we couple the standard unsupervised MLM task with a supervised task, mapping each distinct word to a supertag, an abstract syntactic descriptor of its functional role within the context of its surrounding phrase. In essence, this amounts to simple token-level classification, akin to traditional supertagging (Bangalore and Joshi, 1999), except for parts of the input now being masked. In employing both objectives, we ensure that our model is syntax-aware by construction, while incurring only a negligible computational overhead.
We evaluate the trained model's performance in a variety of downstream tasks and find that it performs on par with established models, despite being trained on a significantly smaller corpus. Our preliminary experiments suggest an improvement to pretraining robustness and offer a promising direction for cheaper and faster training of structure-enhanced language models. Reflecting on the added objective, we call our model tagBERT.

Background

Embedding structural biases in neural language models has been a key theme in recent research. Most syntax-oriented models rely on computationally intensive, hard-to-parallelize operations that constrain their integrability with the state of the art in unsupervised language modeling (Tai et al., 2015; Dyer et al., 2016; Kim et al., 2019). This can be ameliorated by either asynchronous pretraining, relying on accurate but slow oracles (Kuncoro et al., 2019), or multi-task training, where the system is exposed to a syntactic task for only part of its training routine (Clark et al., 2018, 2019b). In the BERT setting, there have been attempts at modifying the architecture by either overlaying syntactic structure directly on the attention layers of the network (Wang et al., 2019b) or imposing shallow syntactic cues and/or semantic information in a multi-task setting (Zhang et al., 2020b; Zhou et al., 2020). While such a setup allows for efficient parallel pretraining, the rudimentary nature of the utilized annotations typically forfeits fine aspects of sentential structure, such as function-argument relations. In this paper, we adopt lexicalism in the categorial grammar tradition (Ajdukiewicz, 1935; Lambek, 1958; Buszkowski et al., 1988; Steedman, 1993; Moortgat, 1997), according to which (most of) the grammatical structure of a language is encoded in its lexicon via an algebra of types that governs the process of phrasal composition.
Under such a regime, the parse tree underlying a sentence can be partially (or even fully, in the case of an adequately "strict" grammar) recovered from its constituent words and their respective types alone. In applied terms, the lexical nature of categorial grammars provides us with the opportunity of capturing syntax in a fully parallel fashion that is straightforward to incorporate with the masked language modeling objective of BERT-like architectures, a fact so far generally overlooked by machine learning practitioners. This perspective is in line with recent insights arguing for the necessity of explicit supervision for syntactic acquisition (Bailly and Gábor, 2020). The only prerequisite for our methodology is an adequately sized, categorially annotated corpus. Even though gold-standard corpora exist for a variety of languages and grammars (Chen and Shanker, 2004; Hockenmaier, 2006; Hockenmaier and Steedman, 2007; Tse and Curran, 2010; Ambati et al., 2018; Kogkalidis et al., 2020b), their size is generally insufficient for training a parameter-rich neural language model. This limiting factor can be counteracted by either lexicalizing existing silver-standard corpora of a larger size, or by using an off-the-shelf, high-performance supertagger to annotate the source data prior to pretraining. In both cases the trained system is likely to inherit common errors of the data-generating teacher; the question is whether the added structural biases facilitate faster training of more general language models, despite potential tagging inaccuracies.

Methodology

Data

To facilitate both the data needs of the neural language model and the added supertagging objective, we employ Lassy Large (van Noord et al., 2013), a corpus of written Dutch, automatically parsed using the Alpino parser (Bouma et al., 2001).
The dataset comprises a selection of smaller corpora from varying sources, ranging from excerpts of conventional and modern media to spoken transcripts, enumerating a total of almost 800 million words. Lassy's syntactic analyses take the form of directed acyclic graphs, with nodes corresponding to words or phrases marked with their part-of-speech as well as syntactic category labels, and edges denoting dependency relations. To make the analyses applicable to our setup, we lexicalize them using the type extraction algorithm of Kogkalidis et al. (2020b). The algorithm traverses a parse graph and encodes its structure in a linear logic proof, under the general paradigm of categorial type logics (Moortgat, 1997), simultaneously capturing function-argument and dependency structure. Words, i.e. fringe nodes in the graph, are assigned types: abstract syntactic signs that encode a considerable portion of the full structure. Applying the extraction algorithm, we obtain a collection of around 66 million sentences, represented as sequences of word-type pairs. We drop about 20 million of these in a sanitation step, due to either being duplicates or overlapping with any of the evaluation tasks. We tokenize words using a preconstructed WordPiece (Schuster and Nakajima, 2012) vocabulary of 30,000 tokens based on a larger collection of written Dutch corpora (Vries et al., 2019). Further, we keep the 2,883 most frequent types, which suffice to cover 95% of the type occurrences in the dataset, and replace the filtered-out types with an UNK token. We finally discard sentences lying in the 5% tail of the length distribution, and train with 45 million sentences spanning fewer than 100 sub-word tokens.

Model

Our model is a faithful replica of BERT_BASE, except for having a hidden size of 1,536 instead of 3,072 for the intermediate fully-connected layers, reducing our total parameters from 110 to 79 million.
We further employ a linear projection from the model's dimensionality to the number of types in our vocabulary, which we attach to the output of a prespecified encoder block. The projection can be separably applied on the encoder's intermediate representations, allowing us to optionally query the model for a class weighting over types for each input token. This addition accounts for a mere 2.5% of the model's total parameter count and only incurs a negligible computational overhead if explicitly enabled, as it does not interfere with the forward pass when the system is run solely as a contextualization model. If the type classification layer is enabled during pretraining, it introduces a clear error signal that updates all network weights up to the connected encoder block, bolstering the correct acquisition of syntax in the bottom part of the encoding pipeline.

Pretraining

To train our model, we feed it partially masked sentences following the methodology of Liu et al. (2019); we dynamically mask continuous spans of tokens belonging to the same word and drop the next sentence prediction task, training on single sentences instead. Attaching the type classification layer at the fourth encoder block, we end up with two output streams.[1] One is a prediction over the subword vocabulary for each masked token, as in vanilla BERT, whereas the other comes from the type classifier, yielding a prediction over the type vocabulary for every token, masked or otherwise.[2]

[1] The choice of depth for the type classifier is due to preliminary experiments where we let a trainable layer weighter freely select from the range of encoder blocks. In the vast majority of runs, most of the importance was interestingly assigned to the fourth layer.
[2] Masking entire words for the supertagging task can be seen as a severe form of regularization, à la channel dropout.
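Since types live at the word level while the model consumes WordPiece units, the type stream needs a label layout over sub-word tokens. The toy sketch below (ours, not the authors' released code) carries each word's supertag on its first piece and gives continuation pieces, as well as filtered-out (UNK) types, an ignore index that a cross-entropy loss can skip; the names `align_types`, `IGNORE`, and the example tokens are our own illustrative choices.

```python
IGNORE = -100  # positions with this label are excluded from the supertagging loss

def align_types(word_pieces, type_ids, unk_type_id):
    """word_pieces: one inner list of sub-tokens per word.
    type_ids: one supertag id per word (unk_type_id marks filtered-out types).
    Returns the flat sub-token list and a per-token label list in which only
    the first piece of each word carries the word's type."""
    tokens, labels = [], []
    for pieces, t in zip(word_pieces, type_ids):
        for i, piece in enumerate(pieces):
            tokens.append(piece)
            labels.append(t if i == 0 and t != unk_type_id else IGNORE)
    return tokens, labels

tokens, labels = align_types(
    [["de"], ["fiets", "##en", "##maker"], ["slaapt"]],
    [17, 42, 99],          # hypothetical type ids, one per word
    unk_type_id=99,        # pretend the last word's type was filtered out
)
print(tokens)  # ['de', 'fiets', '##en', '##maker', 'slaapt']
print(labels)  # [17, 42, -100, -100, -100]
```

With labels laid out this way, the ignore index realizes the loss masking described in the next paragraph without any change to the classifier itself.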
To deal with the misalignment between subword units and types, we associate every type with the first token of its corresponding word, and mask out predictions spanning subsequent tokens when performing the loss computation. Similarly, we do not penalize predictions over types discarded by the occurrence count filtering (UNK types). For regularization purposes, we randomly replace output types 1% of the time (Wu et al., 2019). Following standard practices, we optimize using AdamW (Loshchilov and Hutter, 2019) with a batch size of 256, shuffling and iterating the dataset 8 times. The learning rate is gradually increased to 10 −4 over 10 000 steps and then decayed to zero using a linear warm-up and decay schedule. Evaluation To evaluate the trained model, we measure its performance on the below selection of downstream tasks, after fine-tuning. We keep our fine-tuning setup as barebones as possible, using Adam (Kingma and Ba, 2014) with a batch size of 32 and a learning rate of 3 × 10 −5 . We apply model selection based on the validation-set performance and report testset results (averaged over three runs) against the available baselines of each task in Table 1. In order to provide fair comparisons, we replicate the evaluation of other models using the same experimental setup. Lassy Small is a gold-standard syntactically annotated corpus for written Dutch (van Noord et al., 2013). We fine-tune a POS tagger on the subset of the corpus that has been converted to Universal Dependency format (Bouma and van Noord, 2017). SoNaR-1 is a curated subset of Lassy Small that includes several layers of manually added annotations (Delaere et al., 2009). We employ the named entity recognition and part-of-speech labels that come packed with the corpus and treat their classification as downstream tasks. 
The former contains approximately 60,000 samples and 6 class labels encoded in the IOB scheme, whereas the latter contains about 16,000 samples and comes in two varieties: coarse (12 classes) and fine-grained (241 classes, out of which only 223 appear in the training data, many just once).

Table 1: Comparative performance for a selection of downstream tasks. We report test set accuracy (%) on all tasks except NER, where we report F1 scores (%) as produced by the CoNLL evaluation script (Tjong Kim Sang, 2002). For a fair comparison, we replicate the fine-tuning process on all pretrained baselines, including truncation of the maximum token length to 100.

CoNLL-2002 is a named entity recognition dataset from the corresponding shared task (Tjong Kim Sang, 2002). The dataset contains 4 class labels, also encoded in the IOB scheme, with a total size of approximately 24,000 samples.

AEthel is a typelogical derivation dataset, generated by applying the type extraction algorithm to Lassy Small (Kogkalidis et al., 2020b). We replicate the experiments of Kogkalidis et al. (2020a) to train a typelogical grammar parser, but instantiate the encoder part with the baselines of Table 1, and report token-level supertagging accuracy as well as full sentential parsing accuracy in the greedy setting. We note that even though our model is exposed to types during pretraining, their representation format is vastly different during the fine-tuning process; rather than being classification outputs for each word, they are broken down to their primitive symbols and transduced from the input sequence with auto-regressive seq2seq decoding. In that sense, this task helps us assess the generality of the learned representations.

Discussion

Our model performs on par across all tasks considered, indicating pretraining robustness comparable to the heavyweight baselines of BERT- (Devlin et al., 2019) and RoBERTa-based (Liu et al., 2019) models.[3]
Considering the non-ideal nature of the silver-standard tags, as well as the significantly smaller size of our corpus compared to competing models, our results can be seen as strong evidence in favor of explicitly encoding structural biases in the pretraining process of neural language models. Opting for a lexicalized representation of structure allows for a truly seamless and cost-efficient integration with BERT's core architecture, essentially removing the computational bottleneck of alternating between tensor optimization and structure manipulation.

3: Implementation code and pretrained model weights will be made available at https://git.io/JOKs4.

Conclusion

We introduced tagBERT, a variation of BERT that is biased towards syntax through coupling the standard MLM loss with a supertagging objective. We trained tagBERT on a modestly sized, silver-standard corpus of written Dutch, after first lexicalizing its annotations, and evaluated the trained model on a number of downstream NLP tasks after fine-tuning. Despite the corpus' modest size, our method achieves performance comparable to established state-of-the-art models. This result runs contrary to the ongoing trend of utilizing increasingly more data and augmenting model capacity, instead suggesting potential benefits from incorporating richer annotations in convenient representation formats. Our work aims towards a syntactically transparent, cost-efficient language model that combines both the rigor of formal linguistic theories and the representational power of large-scale unsupervised learning.
We leave several directions open for future work, including more extensive experimentation with different languages and grammar formalisms, integration with existing pre-trained models in an intermediate-training fashion (Wang et al., 2019a), and exploring architectural adjustments that would allow a two-way dependence or a stronger interfacing between the lexical and syntactic modalities, moving towards structurally-conditioned language generation and structure-aware sentence embeddings, akin to Zanzotto et al. (2020).

References

Kazimierz Ajdukiewicz. 1935. Die syntaktische Konnexität. Studia philosophica, 1:1-27. English translation "Syntactic Connexion" by H. Weber in McCall, S. (Ed.) Polish Logic, pp. 207-231, Oxford University Press, Oxford, 1967.

Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steedman. 2018. Hindi CCGbank: A CCG treebank from the Hindi dependency treebank. Language Resources and Evaluation, 52(1):67-100.

Raphaël Bailly and Kata Gábor. 2020. Emergence of syntax needs minimal supervision. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 477-487, Online. Association for Computational Linguistics.

Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237-265.

Gosse Bouma and Gertjan van Noord. 2017. Increasing return on annotation investment: The automatic construction of a universal dependency treebank for Dutch. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 19-26, Gothenburg, Sweden. Association for Computational Linguistics.

Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide-coverage computational analysis of Dutch. In Computational Linguistics in the Netherlands 2000, pages 45-59. Brill Rodopi.

Wojciech Buszkowski, Witold Marciszewski, and Johan van Benthem. 1988. Categorial Grammar. John Benjamins Publishing.

John Chen and Vijay K. Shanker. 2004. Automated extraction of TAGs from the Penn treebank. In New Developments in Parsing Technology, pages 73-89. Springer.

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019a. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.

Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019b. BAM! Born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931-5937, Florence, Italy. Association for Computational Linguistics.

Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914-1925, Brussels, Belgium. Association for Computational Linguistics.

Isabelle Delaere, Veronique Hoste, and Paola Monachesi. 2009. Cultivating trees: Adding several semantic layers to the Lassy treebank in SoNaR. In 7th International Workshop on Treebanks and Linguistic Theories (TLT-7), pages 135-146. LOT (Landelijke Onderzoekschool Taalwetenschap).

Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. RobBERT: a Dutch RoBERTa-based language model. arXiv preprint arXiv:2001.06286.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.

Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287.

John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.

Julia Hockenmaier. 2006. Creating a CCGbank and a wide-coverage CCG lexicon for German. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 505-512, Sydney, Australia. Association for Computational Linguistics.

Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355-396.

Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. Association for Computational Linguistics.

Taeuk Kim, Jihun Choi, Daniel Edmiston, and Sang-goo Lee. 2020. Are pre-trained language models aware of phrases? Simple but strong baselines for grammar induction. In International Conference on Learning Representations.

Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1105-1117, Minneapolis, Minnesota. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499-3505, Florence, Italy. Association for Computational Linguistics.

Konstantinos Kogkalidis, Michael Moortgat, and Richard Moot. 2020b. AEthel: Automatically extracted typelogical derivations for Dutch. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5257-5266, Marseille, France. European Language Resources Association.

Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntax-aware language models using knowledge distillation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3472-3484, Florence, Italy. Association for Computational Linguistics.

Joachim Lambek. 1958. The mathematics of sentence structure. The American Mathematical Monthly, 65(3):154-170.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Michael Moortgat. 1997. Categorial type logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, chapter 2, pages 93-177. Elsevier/MIT Press.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.

Mark Steedman. 1993. Categorial grammar. Lingua, 90(3):221-258.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China. Association for Computational Linguistics.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.

Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).

Daniel Tse and James R. Curran. 2010. Chinese CCGbank: extracting CCG derivations from the Penn Chinese treebank. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1083-1091, Beijing, China. Coling 2010 Organizing Committee.

Gertjan van Noord, Gosse Bouma, Frank Van Eynde, Daniël de Kok, Jelmer van der Linde, Ineke Schuurman, Erik Tjong Kim Sang, and Vincent Vandeghinste. 2013. Large Scale Syntactic Annotation of Written Dutch: Lassy, pages 147-164. Springer Berlin Heidelberg.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. Curran Associates, Inc.

Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT model.

Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019a. Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Computational Linguistics.

Yaushian Wang, Hung-Yi Lee, and Yun-Nung Chen. 2019b. Tree transformer: Integrating tree structures into self-attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1061-1070, Hong Kong, China. Association for Computational Linguistics.

Liwei Wu, Shuqing Li, Cho-Jui Hsieh, and James L. Sharpnack. 2019. Stochastic shared embeddings: Data-driven regularization of embedding layers. In Advances in Neural Information Processing Systems, volume 32, pages 24-34. Curran Associates, Inc.

Fabio Massimo Zanzotto, Andrea Santilli, Leonardo Ranaldi, Dario Onorati, Pierfrancesco Tommasino, and Francesca Fallucchi. 2020. KERMIT: Complementing transformer architectures with encoders of explicit syntactic interpretations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 256-267, Online. Association for Computational Linguistics.

Yu Zhang, Houquan Zhou, and Zhenghua Li. 2020a. Fast and accurate neural CRF constituency parsing. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4046-4053. International Joint Conferences on Artificial Intelligence Organization. Main track.

Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020b. Semantics-aware BERT for language understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9628-9635.

Junru Zhou, Zhuosheng Zhang, Hai Zhao, and Shuailiang Zhang. 2020. LIMIT-BERT: Linguistics informed multi-task BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4450-4461, Online. Association for Computational Linguistics.
Deep learning density functionals for gradient descent optimization

E. Costa (School of Science and Technology, Physics Division, Università di Camerino, 62032 Camerino, Italy; INFN-Sezione di Perugia, 06123 Perugia, Italy)
G. Scriva (School of Science and Technology, Physics Division, Università di Camerino; INFN-Sezione di Perugia)
R. Fazio (Abdus Salam ICTP, Strada Costiera 11, I-34151 Trieste, Italy; Dipartimento di Fisica, Università di Napoli "Federico II", Monte S. Angelo, I-80126 Napoli, Italy)
S. Pilati (School of Science and Technology, Physics Division, Università di Camerino; INFN-Sezione di Perugia)

Abstract: Machine-learned regression models represent a promising tool to implement accurate and computationally affordable energy-density functionals to solve quantum many-body problems via density functional theory. However, while they can easily be trained to accurately map ground-state density profiles to the corresponding energies, their functional derivatives often turn out to be too noisy, leading to instabilities in self-consistent iterations and in gradient-based searches of the ground-state density profile. We investigate how these instabilities occur when standard deep neural networks are adopted as regression models, and we show how to avoid them by using an ad-hoc convolutional architecture featuring an inter-channel averaging layer. The main testbed we consider is a realistic model for noninteracting atoms in optical speckle disorder. With the inter-channel average, accurate and systematically improvable ground-state energies and density profiles are obtained via gradient-descent optimization, without instabilities nor violations of the variational principle.

DOI: 10.1103/physreve.106.045309 | arXiv: 2205.08367v2 [physics.comp-ph], 7 Nov 2022
I. INTRODUCTION

Density functional theory (DFT) is the workhorse of computational material science and quantum chemistry [1].
It is based on rigorous theorems [2-4] certifying that the ground-state energy can be computed by minimizing a (generally unknown) functional of the density profile, allowing one to bypass computationally prohibitive wave-function-based methods. However, the available approximations for the density functional are reliable only for weakly correlated materials, while in the regime of strong electron correlations dramatic failures may occur [5]. In recent years, machine-learning (ML) algorithms have achieved remarkable breakthroughs in various branches of physics research [6-9], and they have also been adopted in the framework of DFT, both for continuous-space [10-14] and for one-dimensional tight-binding models [15-17]. These algorithms pave the way to data-based approaches to the development of density functionals. Furthermore, they facilitate the implementation of computationally convenient strategies based on orbital-free DFT [18, 19]. Most previous studies adopted relatively simple regression models, such as kernel ridge regression, showing that moderately large training sets of ground-state density profiles and corresponding energies allow reconstructing remarkably accurate density functionals. Artificial neural networks have also been adopted (see, e.g., [13, 19]), using standard architectures such as the convolutional neural networks (CNNs) popular in the field of computer vision. Unfortunately, in the case of continuous-space models, severe drawbacks have emerged when such ML functionals have been employed in self-consistent calculations and in gradient-based optimizations. Specifically, the functional derivatives turned out to be too noisy, leading to unphysical density profiles and to strong violations of the variational principle [10, 19, 20]. Some remedial strategies have already been explored.
Essentially, they resort to gradient denoising via dimensionality reduction [20] or basis truncation [21], to constrained optimization, or they aim at exploiting additional information (e.g., energy derivatives) in the training process [14, 19]. These strategies have provided significant benefits, but they have some limitations, as they might lead to variational biases or require additional data that is far less accessible. Due to the pivotal role played by DFT, further complementary strategies are highly desirable. In this Article, we investigate the use of deep neural networks as regression models to reconstruct continuous-space density functionals from training data. Our main finding is that a tailored convolutional network featuring an inter-channel averaging operation allows avoiding the drawbacks mentioned above. Following analogous previous studies [10, 11, 18-20], the testbed we consider is a single-particle model, but we mostly focus on a more realistic Hamiltonian which describes ultracold atoms moving in one-dimensional optical speckle patterns. Our analysis is complemented by addressing deep-well models from the literature (see Appendix). Our aim is to develop a sufficiently effective deep-learned functional to allow searching for the ground-state energy and the density profile of previously unseen instances of speckle disorder via gradient-descent optimization. We show that the most popular network architectures, namely the standard CNNs, are inadequate for this task. Indeed, while they provide remarkably accurate energy predictions when fed with exact ground-state density profiles, their functional derivatives are too noisy. This leads to instabilities in the gradient-descent search for the density profile that minimizes the energy functional, unless accuracy is sacrificed via an early halting of the optimization procedure. We demonstrate that these instabilities can be avoided with the tailored neural network.
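To make the ensemble-averaging intuition behind the tailored network's inter-channel average concrete, the toy NumPy sketch below (an illustration we constructed, not the paper's actual architecture) averages C noisy copies of a signal over a channel axis. Uncorrelated per-channel noise is damped roughly as 1/sqrt(C), which is why increasing the number of channels systematically smooths the output, and, in the full model, its functional derivative.

```python
import numpy as np

def channel_average(feature_maps):
    """Inter-channel average: (channels, length) -> (length,)."""
    return feature_maps.mean(axis=0)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
signal = np.sin(x)

errs = {}
for C in (1, 16, 256):
    # C "channel" outputs: same underlying signal, independent noise
    channels = signal + 0.5 * rng.normal(size=(C, x.size))
    errs[C] = np.abs(channel_average(channels) - signal).mean()
# the averaged output gets systematically closer to the clean signal as C grows
```

In the real network the channels are the outputs of parallel convolutional filters rather than noisy copies of a known signal, but the noise-damping mechanism is the same.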
This is inspired by an ensemble-averaging mechanism, and it features, beyond the standard multi-channel convolutional layers, additional layers that perform an inter-channel averaging operation. We show that this feature allows us to iterate gradient-descent steps at will, providing accurate results that can be systematically improved by increasing the number of channels. The rest of the article is organized as follows: in Section II we describe the formalism of DFT based on deep learning, the structure of the standard and of the average-channel neural networks, as well as the gradient-descent technique used to find ground-state energies and densities. The main testbed model we address is described in Section III. Therein we also report details on the dataset, on the protocol used for network training, and on the accuracy reached in the regression task. The results obtained in gradient-descent optimization with the standard and with the average-channel neural networks are compared in Section IV. The instabilities occurring with standard networks are highlighted, and their suppression with the inclusion of the average layer is discussed in detail. Section V provides a summary of the main findings and some comments on future perspectives. To favor comparison with previous studies [10, 18], in Appendix A we report the test of the average-channel neural network in deep potential wells defined by the sum of three Gaussian functions.

II. DENSITY FUNCTIONAL THEORY WITH ARTIFICIAL NEURAL NETWORKS

ML provides novel promising approaches to learn energy-density functionals for DFT from data. These functionals have the potential to accurately describe strongly correlated systems. However, their variational minimization to search for the ground-state density profile turned out to be problematic due to noisy functional derivatives [10, 11, 20]. This problem is already evident in noninteracting systems.
Hence, in the following we focus on single-particle problems, but the technique we develop can be applied to interacting systems via the creation of suitable training sets. In this article we consider one-dimensional single-particle Hamiltonians written in the form:

H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x),    (1)

where \hbar is the reduced Planck constant and m is the particle mass. The external potential V(x) is compatible with the adopted periodic boundary conditions. In the framework of DFT, one aims at computing the ground-state energy e_gs of the Hamiltonian (1) from a functional of the density profile: e_gs = E[n_gs]. Here, n_gs indicates the density profile

n_{\rm gs}(x) = |\psi_{\rm gs}(x)|^2,    (2)

where \psi_{\rm gs}(x) is the ground-state wave function. The first Hohenberg-Kohn theorem guarantees that, in principle, this functional exists [2]. In practice, it is convenient to separate out the known potential-energy contribution, seeking a functional for the kinetic energy only [22]:

t_{\rm gs} = e_{\rm gs} - \int_0^L dx\, n_{\rm gs}(x) V(x) \equiv T[n_{\rm gs}].    (3)

It is worth pointing out that we do not adopt the Kohn-Sham formalism. As in previous studies on ML-based DFT [10, 11, 18-20], the orbital-free formalism is used, attempting to approximate the kinetic energy (together with the interaction-energy terms, in the case of interacting systems) as a functional of the density. If successful, this attempt would therefore also lead to a significant reduction of computational cost compared to the more demanding Kohn-Sham approach. Deep-learning techniques can be adopted in the DFT framework. The first task is to train a deep neural network to map ground-state density profiles to the corresponding kinetic energies, therefore learning the unknown functional T[n_gs]. This can be achieved via supervised learning from a dataset including many instances of density profiles associated with the corresponding kinetic energies, {n_{gs,k}, t_{gs,k}}. The integer k labels the instances in the dataset.
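As a concrete illustration of Eqs. (2)-(3), a minimal numpy sketch for building one training target from a known ground state might read as follows (the function name and grid conventions are ours, not from the text):

```python
import numpy as np

def kinetic_target(psi_gs, V, e_gs, dx):
    """Build the training target t_gs = e_gs - \\int n_gs V dx of Eq. (3).

    psi_gs : ground-state wave function on a uniform periodic grid
    V      : external potential on the same grid
    e_gs   : total ground-state energy
    dx     : grid spacing (integrals become plain Riemann sums)
    """
    n_gs = np.abs(psi_gs) ** 2           # density profile, Eq. (2)
    t_gs = e_gs - dx * np.sum(n_gs * V)  # kinetic energy, Eq. (3)
    return n_gs, t_gs
```

For a free particle on a ring (V = 0), the kinetic target simply coincides with the total energy.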
The parameters of the neural network, collectively denoted with \omega, are optimized by minimizing the loss function, namely, the mean squared error

L(\omega) = \frac{1}{N_{\rm train}} \sum_{k=1}^{N_{\rm train}} \left( t_{{\rm gs},k} - \tilde{T}_\omega[n_{{\rm gs},k}] \right)^2,    (4)

where \tilde{T}_\omega[n_{{\rm gs},k}] denotes the kinetic energy predicted by the neural network and N_train is the number of instances in the training set. This optimization can be performed using the stochastic gradient-descent algorithm or one of its many successful variants.

A. Neural networks

The first regression model we consider is a standard CNN. Its structure is familiar from many fields where deep learning has proven successful, such as, e.g., image recognition. It is composed of N_b = 3 convolutional blocks. Each block includes a convolutional layer with N_c channels (this hyperparameter is specified in Table I), whose filter size is k_f = 13 and whose padding type is periodic, and an average pooling layer with a kernel size k_s = 2. Two variants of this network are considered, using two popular activation functions, namely the ReLU function, defined as

ReLU(x) = x if x > 0, and 0 otherwise,    (5)

and the Softplus function

Softplus(x) = \ln(1 + \exp(x)).    (6)

The last convolutional block is processed through a flattening layer and then connected to a dense layer with only one neuron (identity activation function) to generate a scalar output. As discussed in detail in Section IV, these standard CNNs turn out to be inadequate for the DFT framework. In particular, their functional derivatives are too noisy to perform a gradient-based search of the ground state. Therefore, we introduce a novel tailored architecture which features an inter-channel average operation. In the following, this network will be referred to as the average-channel CNN.
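For reference, the activations of Eqs. (5)-(6) and the loss of Eq. (4) are elementary; a numpy transcription (with a numerically stable form of the Softplus) could be:

```python
import numpy as np

def relu(x):
    # Eq. (5)
    return np.maximum(x, 0.0)

def softplus(x):
    # Eq. (6); written as max(x, 0) + log1p(exp(-|x|)) to avoid overflow
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def mse_loss(t_gs, t_pred):
    # Mean squared error of Eq. (4) over a (mini)batch of instances
    t_gs, t_pred = np.asarray(t_gs, float), np.asarray(t_pred, float)
    return np.mean((t_gs - t_pred) ** 2)
```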
Specifically, this model is composed of two convolutional blocks, each including N_c convolutional channels (k_f = 13 and periodic padding), an average pooling layer (k_s = 4 in the first block and k_s = 2 in the second block), and, notably, an additional layer, where each neuron computes the average of the activations of the corresponding neurons in all channels of the previous layer. The activation function is ReLU. The last convolutional block is processed through a flattening layer and then connected to one dense layer with only one neuron, as in the standard CNN case. It is worth pointing out that this average operation reduces the scaling of the number of parameters from quadratic to linear in N_c. This allows considering architectures with many channels without facing prohibitive computational costs or overfitting problems. In each architecture adopted in this article, all convolutional blocks feature the same number of channels. Explorations performed with different numbers lead to similar findings, so we do not discuss them to avoid burdening the presentation. Hereafter, we describe the operations performed by the convolutional blocks more formally. In a standard CNN, the action of the n-th convolutional block corresponds to the following convolution operation:

h^\alpha_n(x) = \frac{1}{k_s} \int_{x-k_s/2}^{x+k_s/2} dy\, {\rm act}\left( \sum_\beta \int_{y-k_f/2}^{y+k_f/2} dx'\, W^{\alpha\beta}_n(y, x') h^\beta_{n-1}(x') + v^\alpha_n \right),    (7)

where act is the chosen activation function, the matrices W^{\alpha\beta}_n represent, for each filter \alpha and input channel \beta, a kernel of size k_f, and the v^\alpha_n are the biases for each filter. Instead, in the average-channel CNN, the n-th block has an additional inter-channel average operation, which is expressed as:

\bar{h}_n(x) = \frac{1}{N_c} \sum_\alpha h^\alpha_n(x),    (8)

where

h^\alpha_n(x) = \frac{1}{k_s} \int_{x-k_s/2}^{x+k_s/2} dy\, {\rm act}\left( \int_{y-k_f/2}^{y+k_f/2} dx'\, W^\alpha_n(y, x') \bar{h}_{n-1}(x') + v^\alpha_n \right)    (9)

represents the N_c parallel convolution operations acting on the output of the previous block. Notice that in Eqs. (7)-(9) the integrals actually indicate discrete operations.
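The discrete counterparts of Eqs. (8)-(9) are compact; the following numpy sketch (function names ours) illustrates a single convolution channel with periodic padding, ReLU, and average pooling, followed by the inter-channel average:

```python
import numpy as np

def conv_block_channel(h_prev, w, b, k_s):
    """One channel of Eq. (9): periodic convolution + ReLU + average pooling."""
    N_g, k_f = h_prev.size, w.size
    # indices implementing periodic ("circular") padding
    idx = (np.arange(N_g)[:, None] + np.arange(k_f) - k_f // 2) % N_g
    act = np.maximum(h_prev[idx] @ w + b, 0.0)        # ReLU activation
    return act.reshape(-1, k_s).mean(axis=1)          # average pooling

def channel_average(h):
    """Inter-channel average of Eq. (8): h has shape (N_c, N_g)."""
    return h.mean(axis=0)
```

The averaged profile is then fed identically to all N_c channels of the next block, which is why the parameter count grows only linearly with N_c.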
All the neural networks considered in this work are implemented and trained using the Pytorch library, exploiting automatic differentiation to compute discrete functional derivatives [23].

B. Formalism of gradient-descent optimization

Once the kinetic-energy functional has been learned by the neural network, so that we can assume T[n] \simeq \tilde{T}_\omega[n], both the ground-state energy and the density corresponding to a new instance of the Hamiltonian can be obtained from the variational principle. Indeed, the second Hohenberg-Kohn theorem ensures that the (exact) functional is minimized by the ground-state density profile [2]. This can be expressed using the Euler equation:

\frac{\delta T[n]}{\delta n(x)} + V(x) - \mu = 0,    (10)

where \mu is a normalization constraint. Its solution can be efficiently obtained using the gradient-descent algorithm, as usually done in orbital-free DFT [24]. Specifically, one iterates the following update rule:

n_{t+1}(x) = n_t(x) - \eta \left( \frac{\delta T[n_t]}{\delta n(x)} + V(x) - \mu_t \right),    (11)

starting from a reasonably chosen initial profile n_0(x). In this equation, \eta > 0 is the chosen learning rate, the integer t = 0, 1, 2, ..., t_max labels the steps, and the adaptive coefficient \mu_t is introduced to ensure the normalization condition:

\int dx\, n_t(x) = 1.    (12)

To ensure that the density never becomes negative, it is convenient to perform the change of variables \chi(x) = \sqrt{n(x)} in Eq. (11). This leads to the update rule:

\chi_{t+1}(x) = \chi_t(x) - \eta \left( \frac{\delta T[\chi_t^2]}{\delta \chi(x)} + 2\chi_t(x) V(x) - 2\chi_t(x) \mu_t \right),    (13)

where the normalization coefficient is computed as:

\mu_t = \frac{ \int dx \left[ \frac{1}{2} \frac{\delta T[\chi_t^2]}{\delta \chi_t(x)} \chi_t(x) + \chi_t^2(x) V(x) \right] }{ \int dx\, \chi_t^2(x) }.    (14)

The computation of the square of the function \chi is performed by an additional layer in Pytorch. Hence, the functional derivative with respect to \chi can be computed directly by exploiting automatic differentiation. Whether the gradient-descent algorithm reaches the ground state or not depends on two major issues. First, the optimization might get stuck in a local minimum.
Indeed, the optimization landscape is not proven to be convex, even for the (unknown) exact functional. Convexity can instead be proven for an extended functional, defined on a domain including density profiles not corresponding to ground states [4]. This problem can be mitigated by repeating the minimization process starting from different initial profiles, or by introducing random steps based on, e.g., Metropolis-type algorithms. The second issue is the accuracy of the functional derivative. Noisy and inaccurate derivatives might create unphysical density profiles, clearly not corresponding to ground states. Since the regression model was not trained on such profiles, it might provide very inaccurate energy predictions, even lower than the exact ground-state energy e_gs. This leads to dramatic failures of the gradient-descent optimization, even to large violations of the variational principle. This problem has already been emphasized in the literature, and it was indicated as a major challenge to be overcome for the further development of ML-based DFTs. In Refs. [10, 11, 20], the adopted regression model was kernel ridge regression. This model typically requires smaller datasets for training, but it is less efficient than deep neural networks in systematically extracting further information from larger and larger datasets. Even kernel ridge regression led to noisy derivatives. To circumvent this problem, the authors introduced two main techniques, referred to as local principal component analysis and nonlinear gradient denoising. They aim at projecting the functional derivative onto the manifold tangent to the one spanned by ground-state density profiles. Other works included derivative data in the training process, also adopting standard CNNs, using the Sobolev loss [18, 19].
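To make the \chi-space iteration of Eqs. (13)-(14) concrete, the following numpy sketch replaces the learned functional with the exact single-particle (von Weizsäcker) kinetic functional T[\chi^2] = (1/2) \int (\chi')^2 dx (units \hbar = m = 1, periodic grid), whose functional derivative is -\chi''; all names and parameter values here are illustrative, not from the text:

```python
import numpy as np

def gradient_descent_density(V, dx, eta=1e-3, t_max=4000):
    """Iterate Eqs. (13)-(14) for chi = sqrt(n) on a periodic grid.

    The exact single-particle kinetic functional is used in place of the
    neural network: dT/dchi(x) = -chi''(x), discretized with a
    second-order finite difference.
    """
    N = V.size
    L = N * dx
    chi = np.full(N, 1.0 / np.sqrt(L))                # normalized uniform start
    for _ in range(t_max):
        lap = (np.roll(chi, -1) - 2 * chi + np.roll(chi, 1)) / dx ** 2
        dT = -lap                                     # functional derivative
        # normalization coefficient mu_t of Eq. (14); dx factors cancel
        mu = np.sum(0.5 * dT * chi + chi ** 2 * V) / np.sum(chi ** 2)
        chi = chi - eta * (dT + 2 * chi * V - 2 * chi * mu)   # Eq. (13)
    n = chi ** 2
    return n / (dx * n.sum())                         # enforce Eq. (12) exactly
```

With, e.g., V(x) = 1 - cos(2\pi x/L), the density correctly accumulates around the potential minimum.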
We emphasize that our goal is to train the regression model using only ground-state density profiles and the corresponding energies, avoiding resorting to less accessible data such as excited-state properties or energy gradients.

III. TESTBED MODEL AND TRAINING DATASET

The main testbed model we consider is the single-particle model (1), where the (random) external potential V(x) is designed to represent the effect of optical speckle patterns on ultracold atoms. Notice that another testbed, borrowed from the literature, is considered in Appendix A, allowing us to further characterize the domain of applicability of the average-channel CNN. Speckle potentials can be created by applying a specific filter in Fourier space to a random complex Gaussian field [25]. The filter corresponds to the aperture of the optical apparatus used to experimentally create the field, and it fixes the characteristic size of the speckle grains. In fact, with this choice the Hamiltonian (1) describes early cold-atom experiments on Anderson localization in one dimension [26, 27]. The statistical and the spectral properties of optical speckle patterns are known [25, 28, 29]. The intensity of the potential V at a point x follows the probability distribution

P(V) = \frac{1}{V_0} \exp\left( -\frac{V}{V_0} \right)    (15)

for V \geq 0, and P(V) = 0 otherwise; V_0 \geq 0 is the average intensity, and it also coincides with the standard deviation. It is the unique parameter determining the disorder strength. The two-point autocorrelation function satisfies the following equation:

\frac{\langle V(x'+x) V(x') \rangle}{V_0^2} - 1 = \frac{\sin^2(\pi x/\gamma)}{(\pi x/\gamma)^2},    (16)

where \gamma determines the correlation length, namely, the size of the typical speckle grains. In the above equation, the brackets \langle \cdot \rangle indicate the average over many random realizations of the speckle pattern. This ensemble average coincides with the spatial average for sufficiently large systems.
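A minimal numpy sketch of this construction, paired with an exact-diagonalization solver for Eq. (1), might read as follows; here a plain second-order finite-difference Laplacian is used for brevity (the text employs a high-order formula), \hbar = m = 1, and the function names are ours:

```python
import numpy as np

def speckle_potential(N_g, L, gamma, V0, rng):
    """One realization of a 1D speckle pattern on N_g grid points.

    A complex Gaussian random field is low-pass filtered in Fourier space;
    the square aperture |k| <= pi/gamma reproduces the sinc^2
    autocorrelation of Eq. (16).  Sketch of the algorithm of Refs. [33, 34].
    """
    field = rng.normal(size=N_g) + 1j * rng.normal(size=N_g)
    k = 2 * np.pi * np.fft.fftfreq(N_g, d=L / N_g)
    filtered = np.fft.ifft(np.fft.fft(field) * (np.abs(k) <= np.pi / gamma))
    V = np.abs(filtered) ** 2
    return V * (V0 / V.mean())          # rescale to mean intensity V0

def ground_state(V, dx):
    """Ground-state energy and density of Eq. (1), via exact
    diagonalization of a periodic finite-difference Hamiltonian."""
    N = V.size
    lap = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
           + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / dx ** 2
    evals, evecs = np.linalg.eigh(-0.5 * lap + np.diag(V))
    psi = evecs[:, 0] / np.sqrt(dx)     # normalize \int |psi|^2 dx = 1
    return evals[0], np.abs(psi) ** 2
```

One dataset entry then follows from, e.g., `V = speckle_potential(256, 14.0, 1.0, 0.5, rng)` and `e_gs, n_gs = ground_state(V, 14.0 / 256)`.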
The correlation energy E_c = \hbar^2/(2m\gamma^2) separates the strong disorder regime V_0 \gg E_c, where the low-energy orbitals are localized on a length scale of order \gamma [30] due to the Anderson localization phenomenon [31], from the weak disorder regime V_0 \ll E_c, where their localization length is much larger. Notice that in one dimension any disorder strength induces Anderson localization in a sufficiently large system [32]. In the following, we consider the relatively large system size L = 14\gamma and the intermediate disorder strength V_0 = 0.5 E_c. This choice allows generating rather varied ground-state density profiles with different shapes and varying degrees of localization, depending on the details of the specific realization of the speckle pattern. Therefore, the Hamiltonian (1) represents a stringent testbed for the DFT framework. Two representative instances of the speckle pattern are shown in Fig. 1, together with the corresponding ground-state density profiles. Different random realizations of the speckle pattern can be efficiently generated on a discrete grid with the algorithm described in Refs. [33, 34]. We choose a fine grid with N_g = 256 points, such that the grid step \delta_x = L/N_g \ll \gamma. The ground-state energy e_gs and the corresponding orbital \psi_gs(x) are determined via exact diagonalization using a high-order finite-difference formula. Computations performed with finer grids show that the discretization error is negligible. The choice of such a fine grid allows us to compute all spatial integrals (see, e.g., those in Eqs. (3) and (14)) with the discrete approximation \int_0^L dx \to \delta_x \sum_{i=1}^{N_g}. Higher-order approximations lead to essentially indistinguishable results for our purposes. Furthermore, the functional derivative in Eq. (13) is computed as

\frac{\delta T[\chi]}{\delta \chi(x)}\bigg|_{x=x_i} = \frac{\partial T[\chi]}{\partial \chi(x_i)} \frac{1}{\delta_x}.    (17)

For this, we exploit Pytorch automatic differentiation. The training of the neural networks, namely, the minimization of the loss function of Eq.
(4), is performed using the Adam algorithm [35]. The chosen learning rate is l_r = 10^{-4}, the minibatch size is 100, and the number of training epochs is N_e = 1200. The other parameters of the Adam algorithm are set at their suggested default values. Our global dataset is composed of 150000 instances. As customary in deep-learning studies, we split it into a training set (81%), a validation set (9%), and a test set (10%). To measure the accuracy in the regression task, we consider the coefficient of determination, defined as:

R^2 = 1 - \frac{ \sum_{k=1}^{N_{\rm test}} \left( t_{{\rm gs},k} - \tilde{T}_\omega[n_{{\rm gs},k}] \right)^2 }{ N_{\rm test}\, \sigma^2 },    (18)

where N_test is the number of instances in the test set and \sigma^2 = \frac{1}{N_{\rm test}} \sum_{k=1}^{N_{\rm test}} ( t_{{\rm gs},k} - \bar{t}_{\rm gs} )^2 is the variance of their kinetic energies; with \bar{t}_{\rm gs} = \frac{1}{N_{\rm test}} \sum_{k=1}^{N_{\rm test}} t_{{\rm gs},k} we denote the average kinetic energy. After training, the two variants of the standard CNN reach remarkable accuracies on the test set, meaning that, when they are provided with an exact ground-state density profile corresponding to a previously unseen speckle pattern, they accurately predict the associated ground-state kinetic energy and, via Eq. (3), also the total energy. The R^2 scores obtained with these two CNNs are reported in Table I. The R^2 scores reached by the average-channel CNN are comparable, but slightly inferior, to the ones obtained by the standard CNNs (see Table I). Remarkably, despite this (slightly) lower performance in kinetic-energy predictions, the averaging operation drastically suppresses the noise in the functional derivative, allowing the use of the average-channel CNN in a gradient-based search of the ground-state energy and density profile. This is discussed in Section IV.

IV. RESULTS FOR GRADIENT-DESCENT OPTIMIZATION

To be suitable for the DFT framework, the deep-learned functional \tilde{T}_\omega[n] should allow iterating the gradient-descent process as long as required to reach the minimum of E[n] \equiv \tilde{T}_\omega[n] + \int dx\, V(x) n(x).
Hereafter, we denote with n_min(x) the density profile reached after gradient-descent optimization, and with e_min = E[n_min] the corresponding energy. The latter represents our estimate for the ground-state energy e_gs. Importantly, energies significantly lower than e_gs should never occur during the optimization process, as they would constitute a violation of the variational principle. As explained in Section II, the possible freezing in a local minimum significantly higher than e_gs can be circumvented by repeating the optimization from a different initial profile. Figure 1 displays the density profiles reached after t_max = 10000 steps of gradient descent for two representative instances of the speckle pattern. For these and all other results reported below, the learning rate used in gradient descent is \eta = 10^{-3}. Clearly, the standard CNNs lead to unphysical profiles, while the average-channel CNN provides an accurate approximation of the exact profile n_gs(x). To shed light on this phenomenon, we analyze the behavior of the energy discrepancy

\Delta e = e_{\rm gs} - e_{\rm min}    (19)

and of the density discrepancy

|\Delta n| = \sqrt{ \int dx\, \left( n_{\rm gs}(x) - n_{\rm min}(x) \right)^2 },    (20)

based on the L^2 metric

|n| = \sqrt{ \int dx\, n^2(x) },    (21)

along the gradient-descent process. Specifically, we consider the average of the relative energy error \Delta e / e_gs, of the relative absolute energy error |\Delta e| / e_gs, and of the relative density error |\Delta n| / |n_gs|, computed over a test set of 500 instances.
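These discrepancies, Eqs. (19)-(21), translate directly into code (a sketch; function names ours):

```python
import numpy as np

def l2_norm(n, dx):
    # L2 metric of Eq. (21)
    return np.sqrt(dx * np.sum(n ** 2))

def energy_discrepancy(e_gs, e_min):
    # Eq. (19): negative values signal a violation of the variational principle
    return e_gs - e_min

def density_discrepancy(n_gs, n_min, dx):
    # Eq. (20); usually reported relative to |n_gs|
    return l2_norm(n_gs - n_min, dx)
```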
Their dependence on the number of gradient-descent steps is shown in Fig. 2. The vertical bars indicate the standard deviation over this test set, meaning that they represent the fluctuations among different realizations of the speckle pattern. For both standard CNNs, after an initial decrease, the average absolute error increases. This means that the optimization process is not reliable, as it should be halted at an unknown intermediate number of steps. The average relative error becomes negative, indicating a violation of the variational principle. The density error also increases after many steps, corresponding to the formation of unphysical density profiles with large spurious spatial fluctuations, as exemplified in Fig. 1. Instead, the absolute relative energy error corresponding to the average-channel CNN (with N_c = 260 channels) systematically decreases until it saturates at a small value of ~ 0.5%. The average error saturates close to \Delta e / e_gs ~ 0, meaning that significant violations of the variational principle do not occur. The histograms shown in Fig. 3 compare the energy and the density errors obtained with the average-channel CNN and with the standard CNN with ReLU activation function (this outperforms the corresponding model with Softplus activation) after t_max = 10000 steps of the gradient-descent optimization. The average-channel CNN energies are concentrated around zero error, while the standard CNN results are broadly distributed in the region of negative energy errors, corresponding to strong violations of the variational principle. Notably, the energy predictions obtained by performing gradient-descent optimization with the average-channel CNN systematically improve when the number of channels N_c increases. This effect is shown in Fig. 4. Notice that, for small N_c, small violations of the variational principle still occur in rare cases. However, they vanish for larger N_c.
In fact, the average absolute energy discrepancy obtained after gradient descent with the N_c = 260 average-channel CNN is \langle |\Delta e| \rangle / e \simeq 0.2%, which is approximately twice the corresponding discrepancy obtained on a test set of exact ground-state profiles n_gs(x), namely, \langle |\Delta e_{ML}| \rangle / e \simeq 0.1%. Notice that this approximate doubling effect is expected, since the former error is also affected by the approximation in the density profile, while the latter corresponds to the prediction of the average-channel CNN on the exact profile. This means that gradient-descent optimization successfully identifies the ground state, within the residual uncertainty of the ML model. Increasing N_c also leads to more accurate density profiles (see panel (b) of Fig. 4) and to the reduction of the spatial noise observed in the results provided by the standard CNNs (see Fig. 1 and also Refs. [10, 18]). To quantify this spatial noise, we consider the following metric:

\Delta_A = \frac{1}{N_g} \sum_{i=1}^{N_g} \left| \frac{\nabla n_{\rm min}(x_i)}{\nabla n_{\rm gs}(x_i)} - 1 \right|.    (22)

It measures the error in the derivative of the density profile. Inaccurate profiles are characterized by large positive values \Delta_A \gg 1, due to spurious spatial fluctuations, while exact predictions lead to \Delta_A = 0. We find that large N_c values lead to accurate and smooth density profiles (see panel (c) of Fig. 4), indicating the effectiveness of the average-channel layer. Residual local spurious fluctuations in the density profiles might be further suppressed via filtering procedures in post-processing. Still, accurate DFT predictions are essential to avoid introducing biases via strong filtering procedures. It is worth further emphasizing that using the standard metrics of deep learning on training and test sets of exact density profiles is not necessarily helpful to predict the performance of the ML functional in the variational minimization of DFT. This is exemplified by the scatter plots of Fig. 5.
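The noise metric of Eq. (22) can be evaluated with finite-difference gradients, e.g. (a sketch on a periodic grid; the central-difference choice is ours):

```python
import numpy as np

def noise_metric(n_min, n_gs, dx):
    """Spatial-noise metric Delta_A of Eq. (22).

    Gradients are approximated by central differences on the periodic
    grid; identical profiles give Delta_A = 0, while spurious spatial
    fluctuations in n_min produce large positive values.
    """
    grad_min = (np.roll(n_min, -1) - np.roll(n_min, 1)) / (2 * dx)
    grad_gs = (np.roll(n_gs, -1) - np.roll(n_gs, 1)) / (2 * dx)
    return np.mean(np.abs(grad_min / grad_gs - 1.0))
```

Grid points where the exact gradient vanishes would need special handling; for disordered profiles this is not an issue in practice.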
V. CONCLUSIONS

The progress in data-based DFT is currently being hindered by the instabilities encountered when using ML functionals in gradient-based optimization. We presented a promising approach to circumvent this problem. This relies on the implementation of a deep neural network tailored to DFT. Specifically, we have shown that when an inter-channel averaging layer is included, beyond the standard convolutional, pooling, and dense layers, gradient-descent optimization can be iterated at will, obtaining accurate ground-state energies and density profiles and avoiding violations of the variational principle beyond the residual uncertainties from the imperfect training of the regression model. Our analysis has focused on a realistic one-dimensional model for non-interacting atoms in optical speckle disorder, which leads to rather varied density profiles compared to the models addressed in previous studies on ML-based DFT. For completeness, in Appendix A the performance of the average-channel CNN in Gaussian-well models borrowed from the literature is demonstrated. On the one hand, this further analysis indicates the rather general range of applicability of our tailored neural network. On the other hand, it points out the need for training sets including sufficiently varied density profiles. To favor future comparative studies, our training datasets are made freely available at Ref. [36]. Additional challenges are going to be faced in the further development of ML techniques for DFT [37]. A further assessment of the network's effectiveness should focus on higher-dimensional and interacting models. In the DFT formalism, moving from single-particle to many-body problems is less challenging than in wave-function based methods, since observables are still obtained from the single-particle density. Therefore, we expect the average-channel CNN to be useful also in the many-body context. Clearly, generating training datasets is, in that case, more demanding [19, 38].
The learning process has to be accelerated, and the following strategies could be adopted. Incorporating physics knowledge into the deep-learning framework is a possible strategy [39, 40]. Another promising approach is transfer learning. This technique has already proven suitable to accelerate the supervised learning of the ground-state properties of both non-interacting and interacting quantum systems [41-43]. Interestingly, even extrapolations were proven feasible, meaning that (scalable) networks trained on relatively small systems provided accurate predictions for larger sizes or larger particle numbers. These techniques might be adopted also in the framework of DFT. We leave these endeavors to future investigations. The (adimensional) single-particle Hamiltonian of the Appendix reads:

H = -\frac{1}{2} \frac{d^2}{dx^2} + V(x),    (A1)

with the external potential

V(x) = -\sum_{i=1}^{3} a_i \exp\left( -\frac{(x - b_i)^2}{2 c_i^2} \right).    (A2)

The model parameters a_i, b_i, and c_i are sampled from uniform probability distributions in the following ranges: a_i \in [1, 40], b_i \in [1.8, 4.2], and c_i \in [0.12, 0.4]. The box size is L = 6, and hard-wall boundary conditions are adopted. The box is chosen as L = 6, rather than L = 1 as in Ref. [18], so that the ground-state density profiles essentially vanish before reaching the boundaries. This strongly suppresses the role of the choice of the boundary conditions. Indeed, we find negligible variations in the ground-state energies with hard-wall compared to periodic boundary conditions. With the size L = 1, the boundary effects are sizable, and the density profiles corresponding to different potential instances display only small variations when hard-wall boundaries are adopted. This does not allow an effective training of the deep neural network, reintroducing instabilities in the gradient-descent optimization.
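Sampling an instance of the potential of Eq. (A2) is straightforward (a sketch; the function name is ours):

```python
import numpy as np

def gaussian_well_potential(x, rng):
    """One instance of the three-well potential of Eq. (A2), with the
    parameters a_i, b_i, c_i drawn uniformly from the ranges of the text."""
    a = rng.uniform(1.0, 40.0, size=3)
    b = rng.uniform(1.8, 4.2, size=3)
    c = rng.uniform(0.12, 0.4, size=3)
    # sum of three attractive Gaussian wells, evaluated on the grid x
    return -np.sum(a * np.exp(-(x[:, None] - b) ** 2 / (2 * c ** 2)), axis=1)
```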
We find that this problem is solved either by enlarging the system size, e.g., to L = 6, as shown hereafter, or by adopting periodic boundary conditions in a small box, as mentioned later on in this paragraph. An average-channel CNN is trained on a global dataset of 150000 instances (90% training, 10% validation), using the same structure (2 blocks, pooling sizes [4, 2], kernel size 13, and 260 hidden channels) and the same hyperparameters as in the case of the speckle potential, with a number of epochs N_e = 3000. However, here the convolutional layers use zero padding, as opposed to the periodic padding adopted for the speckle potentials, which are defined within periodic boundary conditions. The gradient-descent performance on a test set of 100 instances is visualized in Fig. 6. Again, we find that gradient-descent optimization (t_max = 10000) leads to accurate ground-state energies (\langle |\Delta e| \rangle / e \simeq 0.14%) and density profiles (\langle |\Delta n| \rangle / |n| \simeq 3%), without sizable violations of the variational condition. For illustrative purposes, an example of a Gaussian-well potential, with the corresponding density profile obtained with L = 6, is shown in Fig. 7. To facilitate comparison, we report the model details and the performance metrics using the units and the conventions adopted in some previous studies. These considered one-electron models in atomic units, so that \ell corresponds to the Bohr radius and the energy unit \hbar^2/(m \ell^2) corresponds to one Hartree (Ha). The mean kinetic energy of the L = 6 model is \bar{t}_{\rm gs} \simeq 3.994 Ha and the standard deviation is quite large, namely \sigma \simeq 1.457 Ha, with maximum value 12.957 Ha and minimum value 0.2605 Ha, corresponding to considerably varied ground states. The average kinetic-energy prediction error on a test set of N_test = 15000 instances is \langle |\Delta t_{ML}| \rangle \equiv N_{\rm test}^{-1} \sum_{k=1}^{N_{\rm test}} \left| t_{{\rm gs},k} - \tilde{T}_\omega[n_{{\rm gs},k}] \right| \simeq 8 \cdot 10^{-4} Ha; the standard deviation of |\Delta t_{ML}| is \sigma \simeq 11 \cdot 10^{-4} Ha.
After gradient descent, the average absolute kinetic-energy error is \langle | \tilde{T}_\omega[n_{\rm min}] - t_{\rm gs} | \rangle \simeq 0.0171 Ha and the corresponding standard deviation is \sigma \simeq 0.0191 Ha. The average absolute density discrepancy is \langle |\Delta n| \rangle \simeq 0.031 and the corresponding standard deviation is \sigma \simeq 0.008. It is also worth mentioning that, considering L = 1 and periodic boundary conditions, the accuracy metrics are comparable to the above, namely: \langle |\Delta e| \rangle / e \simeq 0.3% and \langle |\Delta n| \rangle / |n| \simeq 3%. In this case, periodic padding is used, and the first left and right periodic images of the Gaussian wells are included to make the potential V(x) essentially periodic. The potential parameters are sampled in the following ranges: b_i \in [0.1, 0.9], a_i \in [1, 40], and c_i \in [0.03, 0.1]. When combined with periodic boundary conditions, these allow creating sufficiently varied density profiles despite the small system size. These findings indicate that the average-channel CNN represents a flexible regression model to apply ML-based DFT to rather arbitrary external potentials, whereby the density profiles display significant variations among different random realizations of the sample.

FIG. 1. Panels (a) and (b) show the external potentials V(x) representing two instances of the optical speckle pattern (right vertical axis, the unit is the correlation energy E_c) and the corresponding ground-state density profiles (left vertical axis, units of 1/\gamma). The spatial variable x is in units of the correlation length \gamma. In panel (a) the exact ground state is compared to the DFT results obtained using the standard CNN with Softplus activation function and using the average-channel CNN. In panel (b) the standard CNN with ReLU activation is considered instead.

FIG. 2. Relative discrepancies of DFT predictions from exact ground-state results as a function of the number of steps t of the gradient-descent optimization. The results are averaged over a test set of 500 speckle-pattern instances.
Three error metrics are shown: the absolute energy discrepancy |\Delta e|/e [panel (a)], the energy discrepancy \Delta e/e [panel (b)], and the density discrepancy |\Delta n|/|n| [panel (c)]. The results of the standard CNN with ReLU activation function are compared to the ones of the average-channel CNN (N_c = 260). The vertical bars represent the standard deviation over the test set.

FIG. 3. Histograms of the relative energy discrepancies \Delta e/e [panel (a)] and the density discrepancies |\Delta n|/|n| [panel (b)] for a test set of 500 speckle-pattern instances, after t_max = 10000 steps of the gradient-descent optimization. The results of the standard CNN with ReLU activation function are compared to the ones of the average-channel CNN.

FIG. 4. Histograms of the relative energy discrepancies \Delta e/e [panel (a)], of the density discrepancies |\Delta n|/|n| [panel (b)], and of the noise metric \Delta_A defined in Eq. (22) [panel (c)], for a test set of 500 speckle-pattern instances, after t_max = 10000 steps of the gradient-descent optimization. The results of average-channel CNNs with different numbers of convolutional channels N_c are shown.

FIG. 5. Errors obtained after gradient-descent optimization versus the coefficient of determination of Eq. (18) computed on a test set of exact ground-state density profiles. The two standard CNNs and the average-channel CNN with three values of N_c are considered.

ACKNOWLEDGMENTS

This work was supported by the Italian Ministry of University and Research under the PRIN2017 project CEnTraL 20172H2SC4. S.P. acknowledges PRACE for awarding access to the Fenix Infrastructure resources at Cineca, which are partially funded by the European Union's Horizon 2020 research and innovation program through the ICEI project under the Grant Agreement No. 800858. S.P.
also acknowledges the Cineca award under the ISCRA initiative, for the availability of high-performance computing resources and support.

Appendix A: Gaussian-well potentials

To favor direct comparison, and to further characterize the effectiveness of the CNN, here we consider a different testbed model, borrowed from previous studies. It describes narrow potential wells which confine the particle in the central region of a finite box. The units are set such that ℏ²/(mℓ²) = 1, where ℓ is the length unit.

FIG. 6. Histograms of the relative energy discrepancies ∆e/e [panel (a)] and the density discrepancies |∆n|/|n| [panel (b)] for a test set of 100 Gaussian-well potentials, after tmax = 10000 steps of the gradient-descent optimization.

The potential V(x) corresponding to an instance of the Gaussian-well potential (right vertical axis, units of ℏ²/(mℓ²)) and the corresponding ground-state density profile (left vertical axis, units of 1/ℓ).

TABLE I. Coefficient of determination R² for two standard CNNs with ReLU and with Softplus activation functions, and for three CNNs with different numbers of channels Nc. The test set includes 15000 (previously unseen) instances of the optical speckle pattern. The first column reports the number of convolutional blocks.

Blocks  Activation  Nc   ks      Neural network  R²
2       ReLU        60   [4, 2]  CNN             0.99961
2       ReLU        140  [4, 2]  CNN             0.99969
2       ReLU        260  [4, 2]  CNN             0.99967
3       ReLU        30   2       CNN             0.99996
3       Softplus    30   2       CNN             0.99991
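As a concrete (and deliberately simplified) illustration of the workflow in this appendix, the sketch below generates a random Gaussian-well potential with the quoted parameter ranges, including the first left and right periodic images, and then minimizes a density functional by projected gradient descent. The sum-of-negative-Gaussians form follows the common ML-DFT testbed; the convex surrogate T[n] = c_T Σ n_i² stands in for the trained CNN functional, and all numerical settings (grid size, seed, step size) are illustrative assumptions, not the paper's actual ones.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 256
x = np.arange(G) / G                     # box of length L = 1, grid of G points

def gaussian_well_potential(grid, n_wells=3):
    # Parameter ranges quoted in the text: centres b_i in [0.1, 0.9],
    # depths a_i in [1, 40], widths c_i in [0.03, 0.1].  The functional
    # form (a sum of negative Gaussians) is an assumption here.
    b = rng.uniform(0.1, 0.9, n_wells)
    a = rng.uniform(1.0, 40.0, n_wells)
    c = rng.uniform(0.03, 0.1, n_wells)
    V = np.zeros_like(grid)
    for shift in (-1.0, 0.0, 1.0):       # first periodic images (L = 1)
        for ai, bi, ci in zip(a, b, c):
            V -= ai * np.exp(-((grid - bi - shift) ** 2) / (2 * ci ** 2))
    return V

V = gaussian_well_potential(x)

# Projected gradient descent on E[n] = c_T * sum(n^2) + sum(V * n),
# a toy convex stand-in for the CNN functional, with n >= 0 and sum(n) = 1.
c_T, eta, steps = 1.0, 0.01, 2000

def energy(n):
    return c_T * np.sum(n ** 2) + np.sum(V * n)

def project(n):
    # restore positivity and the normalization constraint after each step
    n = np.clip(n, 0.0, None)
    return n / n.sum()

n = project(np.full(G, 1.0 / G))         # uniform initial guess
e0 = energy(n)
for _ in range(steps):
    n = project(n - eta * (2.0 * c_T * n + V))   # functional derivative of E
e1 = energy(n)
```

At convergence the density accumulates inside the wells and the variational energy drops below that of the uniform guess, mimicking (very crudely) the behaviour of the trained functional under gradient-descent optimization.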
A Robust Version of Hegedűs's Lemma, with Applications

Srikanth Srinivasan

TheoretiCS (2023). DOI: 10.46298/theoretics.23.5. arXiv: 2202.04982.

ABSTRACT. Hegedűs's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field F of characteristic p > 0 and for q a power of p, the lemma says that any multilinear polynomial P ∈ F[x_1, . . . , x_n] of degree less than q that vanishes at all points in {0, 1}^n of some fixed Hamming weight k ∈ [q, n − q] must also vanish at all points in {0, 1}^n of weight k + q. This lemma was used by Hegedűs (2009) to give a solution to Galvin's problem, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-2 threshold circuits for computing some symmetric functions.

In this paper, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree o(q) vanishes at most points of weight k, then it vanishes at many points of weight k + q. We prove this lemma and give the following three different applications.

Degree lower bounds for the coin problem: The δ-Coin Problem is the problem of distinguishing between a coin that is heads with probability (1/2) + δ and a coin that is heads with probability 1/2. We show that over a field of positive (fixed) characteristic, any polynomial that solves the δ-coin problem with error ε must have degree Ω((1/δ) log(1/ε)), which is tight up to constant factors.

Probabilistic degree lower bounds: The probabilistic degree of a Boolean function is the minimum d such that there is a random polynomial of degree d that agrees with the function at each point with high probability. We give tight lower bounds on the probabilistic degree of every symmetric Boolean function over positive (fixed) characteristic. As far as we know, this was not known even for some very simple functions such as unweighted Exact Threshold functions, and constant error.

A robust version of the combinatorial result of Hegedűs (2009) mentioned above.
Keywords and phrases: Polynomial approximation, Boolean functions, Probabilistic degree, Coin Problem.

Received Feb 28, 2022. Revised Oct 21, 2022. Accepted Dec 20, 2022. Published Feb 24, 2023.

Introduction

The Polynomial Method is a technique of great utility in both Theoretical Computer Science and Combinatorics. The idea of associating polynomials with various combinatorial objects and then using algebraic or geometric techniques to analyze them has proven useful in many settings including, but not limited to, Computational Complexity (Circuit lower bounds [38, 41, 8, 52], Pseudorandom generators [11]), Algorithm design (Learning Algorithms [32, 28, 27], Satisfiability algorithms [52, 51], Combinatorial algorithms [49, 1, 4]), and Extremal Combinatorics [21, 16, 18]. The engine that drives the proofs of many of these results is our understanding of combinatorial and algebraic properties of polynomials. In this paper, we investigate another such naturally stated property of polynomials defined over the Boolean cube {0, 1}^n and strengthen known results in this direction. We then apply this result to sharpen known results in theoretical computer science and combinatorics.

The question we address is related to how well low-degree polynomials can 'distinguish' between two layers of the Boolean cube, i.e. between the sets {0, 1}^n_ℓ and {0, 1}^n_k of points of two fixed Hamming weights ℓ < k (here {0, 1}^n_w denotes the set of points of {0, 1}^n of Hamming weight w). Over a field of characteristic zero, the degree-1 polynomial ∑_i x_i − ℓ vanishes at all points of weight ℓ and at no point of weight k, so degree 1 always suffices. However, if the field F has positive characteristic p and, more specifically, if k − ℓ is divisible by p, then this simple polynomial no longer works and the answer is not so clear. In this setting, a classical theorem of Lucas tells us that if q is the largest power of p dividing k − ℓ, then there is a polynomial of degree q that distinguishes between {0, 1}^n_ℓ and {0, 1}^n_k.
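The Lucas-based distinguisher can be made concrete. At any 0/1 point of Hamming weight w, the elementary symmetric polynomial e_q takes the value C(w, q) (each q-subset of the w ones contributes 1), and Lucas's theorem reduces C(w, q) mod p to a single base-p digit of w. The following sketch (an illustration with small parameters chosen by us, not taken from the paper) checks this in a tiny case:

```python
from math import comb
from itertools import combinations, product

def e_q_eval(x, q):
    # elementary symmetric polynomial e_q at a 0/1 point x:
    # the number of q-subsets of the coordinates that are set to 1
    ones = [i for i, b in enumerate(x) if b]
    return sum(1 for _ in combinations(ones, q))

# e_q is constant on each Hamming-weight layer: its value at weight w is C(w, q)
n, q = 6, 3
layer4 = [x for x in product((0, 1), repeat=n) if sum(x) == 4]
assert all(e_q_eval(x, q) == comb(4, q) for x in layer4)

# p = 3: weights k = 4 and l = 1 differ by 3, and q = 3 is the largest power
# of 3 dividing k - l, so e_3 mod 3 takes different values on the two layers
p, k, l = 3, 4, 1
vk, vl = comb(k, q) % p, comb(l, q) % p   # C(4,3) = 4 ≡ 1 (mod 3) vs C(1,3) = 0
```

Since e_q has degree q, this exhibits the degree-q distinguisher whose optimality Hegedűs's lemma establishes.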
A very interesting lemma of Hegedűs [23] shows that this is tight even if we only require the polynomial (which vanishes on {0, 1}^n_ℓ) to be non-zero at some point of {0, 1}^n_k. More precisely, Hegedűs's lemma shows the following.

LEMMA 1.1 (Hegedűs's lemma¹ [23]). Assume that F is a field of characteristic p > 0. Let n, k, q be such that k ∈ [q, n − q], and q a power of p. If P ∈ F[x_1, . . . , x_n] is any polynomial that vanishes at all x ∈ {0, 1}^n_k but does not vanish at some y ∈ {0, 1}^n_{k+q}, then deg(P) ≥ q.

¹ The lemma is usually stated [23, 5, 25] for a more restricted choice of parameters. However, the known proofs extend to yield the stronger statement given here. A proof of a more general statement can be found in [44, Theorem 1.5].

This lemma was first proved in [23] using Gröbner basis techniques. An elementary proof of this was recently given by the author and independently by Alon (see [25]) using the Combinatorial Nullstellensatz. Hegedűs's lemma has been used to resolve various questions in both combinatorics and theoretical computer science. Hegedűs used this lemma to give an alternate solution to a problem of Galvin, which is stated as follows. Given a positive integer n divisible by 4, what is the smallest size s = s(n) of a family F of (n/2)-sized subsets of [n] such that for any T ⊆ [n] of size n/2, there is an S ∈ F with |S ∩ T| = n/4? It is easy to see that s(n) ≤ n/2 for any n. A matching lower bound was given by Enomoto, Frankl, Ito and Nomura [19] in the case that m := n/4 is odd. Hegedűs used the above lemma to give an alternate proof of a linear lower bound in the case that m is an odd prime. His proof was subsequently strengthened to a linear lower bound for all n by Alon et al. [5] and more recently to a near-tight lower bound of (n/2) − o(n) for all n by Hrubeš et al. [25]. Both these results used the lemma above. Alon et al. [5] also used Hegedűs's lemma to prove bounds for generalizations of Galvin's problem. Using this, they were able to prove improved lower bounds against syntactically multilinear algebraic circuits.
These are algebraic circuits that compute multilinear polynomials in a "transparently multilinear" way (see e.g. [40] for more). Alon et al. used Hegedűs's lemma to prove near-quadratic lower bounds against syntactically multilinear algebraic circuits computing certain explicitly defined multilinear polynomials, improving on an earlier Ω(n^{4/3}) lower bound of Raz, Shpilka and Yehudayoff [37]. Hrubeš et al. [25] also used Hegedűs's lemma to answer a question of Kulikov and Podolskii about computing the Majority function² using depth-2 threshold circuits.

Main Result. Our main result in this paper is a 'robust' strengthening of Hegedűs's lemma. Proving 'robust' or 'stability' versions of known results is a standard research direction in combinatorics. Such questions are usually drawn from the following template. Given the fact that objects that satisfy a certain property have some fixed structure, we ask if a similar structure is shared by objects that 'almost' or 'somewhat' satisfy the property. In our setting, we ask if we can recover the degree lower bound in Hegedűs's lemma even if we have a polynomial that 'approximately' distinguishes between {0, 1}^n_k and {0, 1}^n_{k+q}: this means that the polynomial vanishes at 'most' points of weight k but is non-zero at 'many' points of weight k + q.

² The Majority function is the Boolean function which accepts exactly those inputs that have more 1s than 0s.
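Before moving to the robust version, Lemma 1.1 itself can be checked exhaustively in a tiny case (a sanity check we add here, with parameters chosen purely for illustration). In contrapositive form, the lemma says that every polynomial of degree < q that vanishes on the whole weight-k layer must also vanish on the weight-(k + q) layer:

```python
from itertools import product

# p = 2, q = 2 (a power of p), n = 5, k = 2, so k lies in [q, n - q].
n, p, q, k = 5, 2, 2, 2

def layer(n, w):
    return [x for x in product((0, 1), repeat=n) if sum(x) == w]

# enumerate every multilinear polynomial of degree < q = 2 over F_2,
# i.e. P(x) = c0 + sum_i c_i x_i with coefficients in {0, 1}
vanishing = []
for coeffs in product((0, 1), repeat=n + 1):
    c0, cs = coeffs[0], coeffs[1:]
    evalP = lambda x, c0=c0, cs=cs: (c0 + sum(c * xi for c, xi in zip(cs, x))) % p
    if all(evalP(x) == 0 for x in layer(n, k)):
        # record whether P also vanishes on the whole weight-(k + q) layer
        vanishing.append((c0, cs, all(evalP(x) == 0 for x in layer(n, k + q))))
```

Exactly two degree-<2 polynomials vanish on the weight-2 layer of {0, 1}^5 (the zero polynomial and x_1 + · · · + x_5), and both indeed vanish on the weight-4 layer, as the lemma forces.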
The reader might wonder why the lemma above is a strengthening of Hegedűs's lemma, given that we require the polynomial to be non-zero at many points of weight + , which is a seemingly stronger condition than required in Lemma 1.1. However, this is in fact a weaker condition. This is because of the following simple algebraic fact: if there is a polynomial of degree at most satisfying the hypothesis of Lemma 1.1 (i.e. vanishing at all points of weight but not at some point of weight + ), then there is also a polynomial of degree at most that vanishes at all points of weight but does not vanish at a signi cant fraction (at least a (1 − 1/ ) fraction) of points of weight + . We give a short proof of this in Appendix A. Hence, the above lemma is indeed a generalization of Lemma 1.1 (up to the constant-factor losses in the degree lower bound). Applications. Our investigations into robust versions of Hegedűs's lemma were motivated by questions in computational complexity theory. Using our main result, we are able to sharpen and strengthen known results in complexity as well as combinatorics. 1. Degree bounds for the Coin Problem: For a parameter ∈ [0, 1/2], we de ne the -coin problem as follows. We are given independent tosses of a coin, which is promised to either be of bias 1/2 (i.e. unbiased) or (1/2) − , and we are required to guess which of these is the case with a high degree of accuracy, say with error probability at most . (See De nition 4.1 for the formal de nition.) The coin problem has been studied in a variety of settings in complexity theory (see, e.g. [3,46,47,39,12,15]) and for various reasons such as understanding the power of randomness in bounded-depth circuits, the limitations of blackbox hardness ampli cation, and devising pseudorandom generators for bounded-width branching programs. More recently, Limaye et al. 
[31] proved optimal lower bounds on the size of AC⁰[⊕] circuits solving the δ-coin problem with constant error, strengthening an earlier lower bound of Shaltiel and Viola [39]. This led to the first class of explicit functions for which we have tight (up to polynomial factors) AC⁰[⊕] lower bounds. These bounds were in turn used by Golovnev, Ilango, Impagliazzo, Kabanets, Kolokolova and Tal [20] to resolve a long-standing open problem regarding the complexity of MCSP in the AC⁰[⊕] model, and by Potukuchi [36] to prove lower bounds for Andreev's problem.

A key result in the lower bound of Limaye et al. [31] was a tight lower bound on the degree of any polynomial P ∈ F[x_1, . . . , x_n] that solves the δ-coin problem with constant error: they showed that any such polynomial must have degree at least Ω(1/δ). As noted by Agrawal [2], this is essentially equivalent to a recent result of Chattopadhyay, Hatami, Lovett and Tal [13] on the level-1 Fourier coefficients of low-degree polynomials over finite fields, which in turn is connected to an intriguing new approach [13] toward constructing pseudorandom generators secure against AC⁰[⊕]. Using the robust Hegedűs lemma, we are able to strengthen the degree lower bound of [31] to a tight degree lower bound for all errors. Specifically, we show that over any field F of fixed positive characteristic p, any polynomial that solves the δ-coin problem with error ε must have degree Ω((1/δ) log(1/ε)), which is tight for all δ and ε.

2. Probabilistic degrees of symmetric functions: In a landmark paper [38], Razborov showed how to use polynomial approximations to prove lower bounds against AC⁰[⊕]. The notion of polynomial approximation introduced (implicitly) in his result goes by the name of probabilistic polynomials, and is defined as follows. An ε-error probabilistic polynomial of degree d for a Boolean function f : {0, 1}^n → {0, 1} is a random polynomial P of degree at most d that agrees with f at each point with probability at least 1 − ε.
The ε-error probabilistic degree of f is the least d for which this holds. (Roughly speaking, a low-degree probabilistic polynomial for f is an efficient randomized algorithm for f, where we think of polynomials as algorithms and degree as a measure of efficiency.) Many applications of polynomial approximation in complexity theory [8] and algorithm design [50] use probabilistic polynomials and specifically bounds on the probabilistic degrees of various symmetric Boolean functions.⁴ Motivated by this, in a recent result with Tripathi and Venkitesh [43], we gave a near-tight characterization of the probabilistic degree of every symmetric Boolean function. Unfortunately, however, our upper and lower bounds were separated by logarithmic factors. This can be crucial: in certain algorithmic applications (see, e.g., [4, Footnote, Page 138]), the appearance or non-appearance of an additional logarithmic factor in the degree can be the difference between (say) a truly subquadratic running time of n^{2−ε} and a running time of n^{2−o(1)}, which might be less interesting.

⁴ Recall that a Boolean function f : {0, 1}^n → {0, 1} is said to be symmetric if its output depends only on the Hamming weight of its input.
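The definition of a probabilistic polynomial can be illustrated by Razborov's classical construction for OR_n over F_2 (a standard textbook construction, not specific to this paper): pick t uniformly random subsets S_1, …, S_t of [n] and set P(x) = 1 − ∏_{j=1}^t (1 + ∑_{i∈S_j} x_i), a polynomial of degree t. At x = 0 every factor equals 1, so P(x) = 0 with probability 1; at x ≠ 0 each parity ∑_{i∈S_j} x_i equals 1 with probability exactly 1/2, so P fails to equal 1 with probability 2^{−t}. The per-round probabilities can be verified exactly by enumeration:

```python
from itertools import product

def prob_parity_zero(x):
    # Pr over a uniformly random subset S of [n] that sum_{i in S} x_i is even.
    # For x = 0 this is 1 (which is the correct behaviour for OR, so no error);
    # for x != 0 it is exactly 1/2 (the per-round failure probability).
    n = len(x)
    subsets = list(product((0, 1), repeat=n))
    even = sum(1 for s in subsets
               if sum(si * xi for si, xi in zip(s, x)) % 2 == 0)
    return even / len(subsets)
```

With t independent rounds, the error at any x ≠ 0 is (1/2)^t, so degree O(log(1/ε)) suffices for ε-error — one concrete data point behind the general bounds discussed here.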
Robust version of Galvin's problem: Given that Hegedűs's lemma was used to solve Galvin's problem, it is only natural that we consider the question of using the robust version to solve a robust version of Galvin's problem. More precisely, we consider the minimum size = ( , ) to be the minimum size of a family F of ( /2)-sized subsets of [ ] such that for all but an -fraction of sets of size /2, there is a set ∈ F such that | ∩ | = /4. Proof Outline. We observe that the main lemma (Lemma 1.2) is quite similar to classical polynomial approximation results of Razborov [38] and Smolensky [41,42] (see also [45]). The main di erence is that while these results hold for polynomials approximating some function on the whole cube {0, 1} , the lemma deals with polynomial approximations that are more 'local' in that they are restricted on just two layers of the cube. Nevertheless, we can show that the basic proof strategy of Smolensky (or more speci cally a variant as in [6,29]) can be used to prove our lemma as well. The main point of di erence from these standard proofs is the employment of a result from discrete geometry due to Nie and Wang [35], that allows us to bound the size of the closure 5 of a small set of points in the cube. This is a well-studied object in coding theory [48] and combinatorics [14,26,35], and turns out to be a crucial ingredient in our proof. For the application to the coin problem, we show that if a polynomial solves the coin problem (see De nition 4.1 for the formal de nition of this), then it can be used to distinguish 5 The degree-closure cl ( ) of a set is the set of points where any degree-polynomial vanishing throughout is forced to vanish. between Hamming weights and + for and as in Lemma 1.2. This reduction is done by a simple sampling argument. The degree lower bound in Lemma 1.2 then implies the desired degree lower bound on the degree of . 
In the other applications, to probabilistic degree and to the robust version of Galvin's problem, the idea is to follow the proofs of the previous best results in this direction and apply the main lemma at suitable points. We defer further details to the actual proofs.

Preliminaries

We use the notation [a, b] to denote an interval in R as well as an interval in Z; the distinction will be clear from context.

Symmetric Boolean functions

Let n be a growing integer parameter, which will always be the number of input variables. We use B_n to denote the set of all symmetric Boolean functions on n variables. Note that each symmetric function has a standard decomposition [33] into a periodic part and a bounded part. MOD-type functions accept inputs x such that |x| ≡ b (mod ℓ); in the special case that b = 0, we also write MOD_ℓ.

Probabilistic polynomials

We define the ε-error probabilistic degree of f, denoted pdeg^F_ε(f), to be the least d such that f has an ε-error probabilistic polynomial of degree at most d. When the field F is clear from context, we use pdeg_ε(f) instead of pdeg^F_ε(f).

FACT 2.4. We have the following simple facts about probabilistic degrees of Boolean functions. Let F be any field.

1. (Error reduction [22]) Composing an ε-error probabilistic polynomial P for f with the exact multilinear polynomial for Maj_t, applied to independent copies P_1, …, P_t of P, reduces the error. In particular, we have pdeg^F_δ(f) ≤ pdeg^F_ε(f) · O(log(1/δ)/log(1/ε)).

2. (Composition) For any Boolean function g on k variables and any Boolean functions h_1, …, h_k on a common set of variables, let h denote the natural composed function g(h_1, …, h_k) on these variables. Then, for any ε, δ > 0, we have pdeg^F_{ε+δ}(h) ≤ pdeg^F_ε(g) · max_{i∈[k]} pdeg^F_δ(h_i).

3. (Sum) For mutually exclusive Boolean functions f_1, …, f_t with f = Σ_{i∈[t]} f_i and any ε > 0, we have pdeg^F_{tε}(f) ≤ max_{i∈[t]} pdeg^F_ε(f_i).

The first item above is not entirely obvious, as the polynomial P is not necessarily Boolean-valued at points x where P(x) ≠ f(x); hence it is not immediately clear that composing with a polynomial that computes the Boolean Majority function achieves error reduction. The second and third items above are trivial.
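To make the notion of a probabilistic polynomial concrete, here is a small executable sketch of the classical construction, due to Razborov, of a degree-t probabilistic polynomial for OR_n over F_2 with error 2^(−t). This is standard background rather than a result of this paper, and the function names are ours. The error bound is verified exactly, by enumerating all choices of the random subsets for small n:

```python
from itertools import product

def eval_poly(subsets, x):
    """P(x) = 1 - prod_j (1 - <S_j, x>) over F_2, where <S, x> = sum_{i in S} x_i mod 2.
    deg(P) <= t = len(subsets); P agrees with OR(x) unless every sampled parity is 0."""
    prod_val = 1
    for S in subsets:
        ip = sum(s & xi for s, xi in zip(S, x)) % 2  # <S, x> over F_2
        prod_val = (prod_val * (1 - ip)) % 2
    return (1 - prod_val) % 2

def exact_error(n, t, x):
    """Exact probability, over all t-tuples of subsets of [n], that P(x) != OR(x)."""
    or_x = int(any(x))
    all_subsets = list(product((0, 1), repeat=n))  # subsets as indicator vectors
    bad = sum(eval_poly(subs, x) != or_x
              for subs in product(all_subsets, repeat=t))
    return bad / len(all_subsets) ** t
```

On the all-zero input the polynomial is always correct; on any non-zero input each sampled parity is non-zero with probability exactly 1/2, so the error is exactly 2^(−t), while the degree is only t, far below the exact degree n of OR_n over F_2.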
Building on work of Alman and Williams [4] and Lu [33], Tripathi, Venkitesh and the author [43] gave upper bounds on the probabilistic degree of any symmetric function. We recall the statement in the case of fixed positive characteristic.

THEOREM 2.5 (Known upper bounds on probabilistic degree of symmetric functions [43]). Let F be a field of constant characteristic p > 0 and let n ∈ N be a growing parameter. Let f ∈ B_n be arbitrary and let (g, h) be a standard decomposition of f. Then we have the following for any ε > 0.

1. If per(g) = 1, then g is a constant and hence pdeg_ε(g) = 0. If per(g) is a power of p, then g can be exactly represented as a polynomial of degree at most per(g) (while this is not part of the formal theorem statement from [43], it follows readily from the proof), and hence pdeg^F_ε(g) ≤ per(g).

2. pdeg_ε(h) = O(√(B(h) log(1/ε)) + log(1/ε)) if B(h) ≥ 1, and 0 otherwise.

3. pdeg_ε(f) = O(√(n log(1/ε))) if per(g) > 1 and not a power of p; O(min{√(n log(1/ε)), per(g)}) if per(g) is a power of p and B(h) = 0; and O(min{√(n log(1/ε)), per(g) + √(B(h) log(1/ε)) + log(1/ε)}) otherwise.

A string lemma

Given a function g : I → {0, 1} where I ⊆ N is an interval, we think of g as a string from the set {0, 1}^{|I|} in the natural way. For an interval J ⊆ I, we denote by g|_J the substring of g obtained by restriction to J. The following simple lemma can be found, e.g., as a special case of [9, Theorem 3.1]. For completeness, we give a short proof in Appendix B.

LEMMA 2.6. Let w ∈ {0, 1}^+ be any non-empty string and u, v ∈ {0, 1}^+ be such that w = uv = vu. Then there exists a string z ∈ {0, 1}^+ such that w is a power of z (i.e. w = z^s for some s ≥ 2).

COROLLARY 2.7. Let g ∈ B_n be arbitrary with per(g) = ℓ > 1. Then for all a, b ∈ [0, n − ℓ + 1] such that a ≢ b (mod ℓ), we have Spec g|_{[a, a+ℓ−1]} ≠ Spec g|_{[b, b+ℓ−1]}.

PROOF. Suppose Spec g|_{[a, a+ℓ−1]} = Spec g|_{[b, b+ℓ−1]} for some a ≢ b (mod ℓ). Assume without loss of generality that a < b < a + ℓ. Let u = Spec g|_{[a, b−1]}, v = Spec g|_{[b, a+ℓ−1]}, w = Spec g|_{[a+ℓ, b+ℓ−1]}.
Then u = w, and the assumption Spec g|_{[a, a+ℓ−1]} = Spec g|_{[b, b+ℓ−1]} implies uv = vu. By Lemma 2.6, there exists a string z such that uv = z^s for some s ≥ 2, and therefore per(g) < ℓ. This contradicts our assumption on g.

Lucas's theorem

THEOREM 2.8 (Lucas's theorem). Let m, n be any non-negative integers and p any prime. Then C(m, n) ≡ Π_{i≥0} C(m_i, n_i) (mod p), where m_i (resp. n_i) is the (i+1)th least significant digit of m (resp. n) in base p.

The following is a standard application of Lucas's theorem, essentially observed by Lu [33] and Hegedűs [23], showing that Hegedűs's lemma is tight. Recall that, for any alphabet Σ, the notation Σ^+ denotes the set of non-empty strings over this alphabet.

The Main Lemma

In this section, we prove the main lemma, which is a robust version of Lemma 1.1.

LEMMA 3.1 (A Robust Version of Hegedűs's Lemma). Assume that F is a field of characteristic p. Let n be a growing parameter and assume we have positive integer parameters ℓ, k such that 100k < ℓ < n − 100k and k is a power of p. Define γ = min{ℓ/n, 1 − (ℓ/n)} and δ = k/√(γn). Assume P ∈ F[x_1, …, x_n] is a polynomial such that for some m ∈ {ℓ + k, ℓ − k},

Pr_{x∼{0,1}^n_ℓ}[P(x) ≠ 0] ≤ min{e^{−100k²/(γn)}, 1/1000}   (1a)
Pr_{x∼{0,1}^n_m}[P(x) ≠ 0] ≥ e^{−k²/(100γn)}.   (1b)

Then deg(P) = Ω(k), where the Ω(·) hides an absolute constant.

One can ask whether the above lemma can be proved under weaker assumptions: specifically, whether the upper bound in (1a) can be relaxed. It turns out that it cannot (up to changing the constant in the exponent), because for larger error parameters there is a sampling-based construction of a lower-degree polynomial that is zero on most of {0,1}^n_ℓ and non-zero on most of {0,1}^n_m. We discuss this construction in Section 3.3.

We first prove a special case of the lemma, corresponding to the case ℓ = m + k = n/2 with k sufficiently larger than √n. This case suffices for most of our applications; the general case is a straightforward reduction to this special case.

A special case

Pr_{x∼{0,1}^n_{n/2−k}}[P(x) ≠ 0] ≤ p^{−k}   (2a)
Pr_{x∼{0,1}^n_{n/2}}[P(x) ≠ 0] ≥ p^{−k/2}.   (2b)

Then deg(P) ≥ k/25.

REMARK 3.3. By negating inputs (i.e.
replacing with 1 − for each ), the above lemma also implies the analogous statements where /2 − and /2 are replaced by /2 + and /2 respectively. Before we prove this lemma, we need to collect some technical facts and lemmas. The following is standard. See, e.g., [ −8 ( − )/ ≤ /2 − /2 − ≤ −2 ( − )/ . P R O O F . Note that /2 − /2 − = ( /2 − + 1) · · · ( /2 − ) ( /2 + ) · · · ( /2 + + 1) ≤ /2 − /2 + − ≤ 1 − 2 − ≤ −2 ( − )/ , which implies the right inequality in the statement of the claim. We have used the inequality 1 − ≤ to deduce the nal inequality above. For the left inequality, we similarly have /2 − /2 − ≥ /2 − /2 + − ≥ 1 − 2 2 − ≥ −8 ( − )/ . where the nal inequality follows from the fact that (1 − ) ≥ −2 for ∈ [0, 1/2]. Given a set ⊆ {0, 1} , and a parameter ≤ , we de ne I ( ) to be the set of all multilinear polynomials of degree at most that vanish at all points of . Further, we de ne the degree-closure of , denoted cl ( ) as follows. cl ( ) := { ∈ {0, 1} | ( ) = 0 ∀ ∈ I ( )}. Note that cl ( ) ⊇ but could be much bigger than . The following result of Nie and Wang [35] gives a bound on |cl ( )| in terms of | |. (This particular form is noted and essentially proved in [35], and is explicitly stated and proved in [29, | | < =⇒ cl ( ) < 2 . The inequality stated in the lemma is tight for certain sets of size (a good example of such a set is any Hamming ball of radius ). However, when | | is much smaller than , the parameters can be tightened. A tight form of this lemma, that gives the best possible parameters depending on | |, was proved in earlier work of Keevash and Sudakov [26] (see also the works of Clements and Lindström [14], Wei [48], Heijnen and Pellikaan [24], and Beelen and Dutta [7] that prove similar results). However, we don't need this general form of the lemma here. We now begin the proof of the Lemma 3.2. P R O O F O F L E M M A 3 . 2 . Assume that is as given. Let = /2 . Let 0 , 1 be de ned as follows. 
(Here, the notation " " stands for "error sets".) 0 = { ∈ {0, 1} − | ( ) ≠ 0} 1 = { ∈ {0, 1} | ( ) = 0} We show that there are polynomials 1 , 2 ∈ F[ 1 , . . . , ] such that the following conditions hold. (Q1.1) 1 ( ) ≠ 0 if and only if | | ≡ (mod ). (Q2.1) 2 ( ) = 0 for all ∈ 0 . (Q2.2) 2 ( ) = 0 for all such that | | < − and | | ≡ (mod ). (Q2.3) 2 ( ) ≠ 0 for some ∈ {0, 1} \ 1 . Given polynomials 1 , 2 as above, we construct the polynomial to be the multilinear polynomial obtained by computing the formal product · 1 · 2 and replacing by for each > 1. Note that ( ) = ( ) 1 ( ) 2 ( ) for any ∈ {0, 1} . We observe that ( ) = 0 for all | | < . This is based on a case analysis of whether | | ≡ (mod ) or not. In the latter case, we see that 1 ( ) = 0 and hence ( ) = 0. In the former case, we have either ∈ {0, 1} − \ 0 , in which case ( ) = 0, or not, in which case 2 ( ) = 0. Hence, ( ) = 0 for all | | < . On the other hand, we note that is a non-zero polynomial. This is because by (Q2.3), we know that there is some ∈ {0, 1} \ 1 where 2 ( ) ≠ 0. Further, 1 ( ) ≠ 0 and ( ) ≠ 0 by (Q1.1) and the de nition of 1 respectively. Hence, ( ) ≠ 0, implying that is a non-zero multilinear polynomial. By Fact 3.4, we thus know that has degree at least . In particular, we obtain deg( ) ≥ deg( ) − deg( 1 ) − deg( 2 ) ≥ − deg( 1 ) − deg( 2 ). Hence, to nish the proof of the lemma, it su ces to prove the following claims. |cl ( )| < − /2 ·(3) since by hypothesis we have | 1 | ≥ − /2 · . To do this, we use Theorem 3.6. Note that we have | | ≤ | 0 | + ∑︁ < − : ≡ (mod ) ≤ · − + ∑︁ ≥1 − − · ≤ · − + − · −2 + −4 + . . . ≤ − · ( + 2 · −2 ) ≤ − · (3 −2 )(4) where the third inequality is a consequence of Lemma 3.5 (with = and = ( + 1) for various ) and the nal inequality uses ≤ −2 . On the other hand, the parameter from the statement of Theorem 3.6 can be lower bounded as follows. 
= ∑︁ =0 − ≥ 1 − − 2 1 ≥ 1 − · − > − · √ 3 · − where the second inequality follows from Lemma 3.5 (with = and = + 2 1 ) and the nal inequality uses the fact that 1 > /30 = √ /30 ≥ √ /3. Putting the above together with (4) immediately yields | | < 9 − · − √ · − = 9 − · −1/2 . Using Theorem 3.6, we thus obtain cl ( ) < 9 − · 2 √ ≤ − /2 · 2 2 √ ≤ − /2 · where the last inequality follows from Stirling's approximation. Having shown (3), the claim now follows. The General Case We start with some preliminaries. We rst show a simple 'error-reduction' procedure for polynomials. In particular, the above holds for a uniformly random chosen from {0, 1} . Hence, we have E ( ) [ ( ( ) )] = Pr ( ) , ∼{0,1} ( ) ( ) ≠ 0 = ( ) . We are now ready to prove the main lemma in its full generality. We consider now two cases. Let be a large constant that will be xed below. By Lemma 3.10, we know that there is a probabilistic polynomial ( ) of degree at most · deg( ) such that for each ∈ { , }, we have E ( ) [ ( ( ) )] = ( ) . The proof will proceed by another restriction to variables, where is de ned to be the largest even integer such that 100 ≤ 2 . We assume that is greater than a large enough absolute constant, since otherwise is upper bounded by a xed constant, in which case the degree bound to be proved is trivial. Note that := 2 / ≥ 100 by de nition. We also have = ( 2 /100) − 2, which implies that ≤ 100 + (1)/ 2 ≤ 101, as long as is greater than a large enough absolute constant. Relabel the variables so that is a polynomial in 1 , . . . , . Let be a uniformly random where the rst inequality uses (5). By Markov's inequality as above, there is a xed choice of ( ) , , and such that the corresponding polynomial is a polynomial on variables satisfying ( ) ≤ Tightness of the Main Lemma (Lemma 3.1) In this section, we discuss the near-optimality of Lemma 3.1 w.r.t. to the various parameters. Fix , , , , and F as in the statement of Lemma 3.1. 
Assume that m = ℓ + k (the case when m = ℓ − k is similar) and that ℓ ≤ n/2. Let ε ∈ (0, 1) be arbitrary. First of all, we note that the degree lower bound obtained cannot be larger than k, because by Corollary 2.9 there is a degree-k polynomial that vanishes at all points of weight ℓ but at no points of weight m. So the statement of Lemma 3.1 proves a lower bound on the degree that nearly (up to constant factors) matches this trivial upper bound, under the weaker assumption that the polynomial is forced to be zero only on most (say a 1 − ε fraction) of {0,1}^n_ℓ and non-zero on most (say a 1 − ε fraction) of {0,1}^n_m. (Lemma 3.1 is a stronger statement, but we will show that even this weaker statement is tight.) In this section, we show that the value of ε cannot be increased beyond ε = exp(−O(k²/(γn))) if we want to prove a lower bound of Ω(k) on the degree. More precisely, we show the following.

THEOREM 3.11. Assume that ε = exp(−o(k²/(γn))). Then there is a polynomial Q of degree o(k) such that

Pr_{x∼{0,1}^n_ℓ}[Q(x) ≠ 0] ≤ ε
Pr_{x∼{0,1}^n_m}[Q(x) ≠ 0] ≥ 1 − ε.

PROOF. To prove this theorem, we analyze a different polynomial construction, based on sampling. We will need the following interpolation lemma, which can be found in a paper of Alman and Williams [4]. Fix any positive integer t. By Lemma 3.12, it follows that there is a multilinear polynomial R ∈ Z[y_1, …, y_t] of degree O(ηt) such that R(y) = 0 for each y ∈ {0,1}^t with |y| ∈ ((α − η/2)t, (α + η/2)t), and R(y) = 1 for each y ∈ {0,1}^t with |y| ∈ ((α + η/2)t, (α + 3η/2)t), where α = ℓ/n and η = k/n. Reducing the coefficients modulo p, we obtain a polynomial R̃ ∈ F[y_1, …, y_t] with the same property. Fix this R̃. Consider the probabilistic polynomial Q(x_1, …, x_n) defined as follows. Choose i_1, …, i_t i.u.a.r. from [n], where t = C · (γn/k²) · log(1/ε) for a large enough constant C that we will fix below. We define Q(x_1, …, x_n) to be the polynomial R̃(x_{i_1}, …, x_{i_t}).
Note that 8 This lemma has a trivial proof via univariate polynomial interpolation if we only want the polynomial to have rational coefficients. However, here it important that has integer coefficients. in [44]. An extension to the case when is not a power of R E M A R K 3 .1 5. As in the case of the main lemma, the degree lower bound obtained above is tight, using the same reasoning as in Section 3.3. P R O O F . W.l.o.g. assume = + . Let = / and = / − 1. Our aim will be to show using the polynomial that there is a polynomial on variables that distinguishes between Hamming weights and := + . We will then appeal to Lemma 3.1 to get the degree lower bound. It is easy to check that 100 < < − 100 as 100 = 100 < − < ( + 1) − ≤ ≤ < − 102 < ( + 2) − 102 ≤ − 100 = ( − 100 ) where we used the hypotheses that 200 < < − 200 . We construct the polynomial as follows. Assume that = + 1 and = + + 2 for 1 , 2 ∈ {0, . . . , − 1}. On an input ∈ {0, 1} , we consider the random input ∈ {0, 1} de ned as follows. Each co-ordinate of is repeated times to get an ∈ {0, 1} . We concatenate with the string 1 1 0 + 2 − 1 to get a string ∈ {0, 1} . A uniformly random permutation is applied to the coordinates of to get . Finally, we de ne the probabilistic polynomial ( ) := ( ). For a xed permutation , each coordinate of is a polynomial of degree at most 1 in the variables 1 , . . . , , and hence, deg( ) ≤ deg( ). We will show that there is some polynomial in the support of that has the desired properties. Let 0 and 1 denote the right hand sides of inequalities (6a) and (6b) respectively. Observe that when ∈ {0, 1} , then the random ∈ {0, 1} is uniformly distributed over {0, 1} + 1 . In particular, setting = and + , we get the following. Pr ∼{0,1} , [ ( ) ≠ 0] = Pr ∼{0,1} [ ( ) ≠ 0] ≤ 0 Pr ∼{0,1} + , [ ( ) ≠ 0] = Pr ∼{0,1} + [ ( ) ≠ 0] ≥ 1 . To nd a suitable xing of , we consider two cases. 
[ ( ) ≠ 0] < 2 0 = min{2 −1000 2 / , 2/2000} ≤ min{ −200 2 / , 1/1000},(7) where we used the simple fact that for any non-negative real number , we have the inequality min{2 , 1/1000} ≤ min{ 0.2 , 1/1000}. We also have Pr ∼{0,1} + [ ( ) ≠ 0] > 1 − 2.5 1 ≥ (1 − 1 ) 5 = 5 1 ≥ − 2 /200 ,(8) where the second inequality uses the fact that 1 ≤ 0.2 for any ∈ [0, 1], we have 9 (1 − ) 5 ≤ 1 − 5 + 10 2 . Case 2: − 2 /250 < 1/2: In this case, we proceed analogously, but de ne the 'bad' events as follows. where the second inequality used our assumption that − 2 /250 < 1/2. We also have Pr ∼{0,1} [ ( ) ≠ 0] < 0 1 /2 ≤ 2 /200 · 0 ≤ −999 2 / ≤ 200 2 / ≤ min{ −200 2 / , 1/1000},(10) where the second inequality used (9) above and the third and last inequalities use the fact that − 2 /250 < 1/2 to deduce that −1000 2 / ≤ 1/2000 and −200 2 / < 1/1000 respectively. Putting (7), (8), (10) and (9) together gives us that in both cases we have Pr ∼{0,1} [ ( ) ≠ 0] ≤ min{ −200 2 / , 1/1000} (11a) Pr ∼{0,1} + [ ( ) ≠ 0] ≥ − 2 /200 . (11b) 9 This is a special case of the Boole-Bonferroni inequalities, which are closely related to the Principle of Inclusion-Exclusion. To apply Lemma 3.1 to , we need to relate the above bounds to quantities de ned in terms of := / and := / . We claim that 2 ≤ ( ) 2 ≤ 2 2 .(12) Assuming these inequalities, we observe that satis es the hypotheses of Lemma 3.1. Applying this lemma gives us deg( ) ≥ deg( ) ≥ deg( ) = Ω( ), nishing the proof of Lemma 3.13. It remains to prove (12), which is a simple calculation. ( ) 2 = ( ) 2 = ( ) 2 = ( ) 2 2 ≥ 2 = 2 , ( ) 2 = ( ) 2 = ( ) 2 2 ≤ 2 ( − ) = 2 · 1 − −1 ≤ 2 2 where the nal inequality uses the fact that ≤ ≤ 0.01. Applications Tight Degree Lower Bounds for the Coin Problem We start with a de nition. In earlier work [31], we showed that this was tight for constant . That is, we showed that any polynomial that solves the -coin problem with error at most 1/10 (say) must have degree Ω(1/ ). 
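For intuition about the parameters in the coin problem, here is a numerical aside of ours (not an argument from the paper): the simplest solver, the Majority function applied to N independent samples, errs with probability roughly exp(−2ε²N), so achieving error δ this way requires N = Θ(log(1/δ)/ε²) samples. The exact error is a binomial tail:

```python
from math import comb

def majority_error(N, eps):
    """Exact error of N-sample majority on a coin with heads-probability 1/2 + eps:
    Pr[Bin(N, 1/2 + eps) <= N/2], computed by summing the lower binomial tail."""
    p = 0.5 + eps
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(N // 2 + 1))
```

The degree lower bound discussed here concerns the polynomial computed on such a sample, and it is much smaller than the sample complexity: it grows like (1/ε) · log(1/δ) rather than (1/ε²) · log(1/δ).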
This was also implied by an independent result of Chattopadhyay, Hosseini, Lovett and Tal [13] (see [2]). Both proofs relied on slight strengthenings of Smolensky's [41] lower bound on polynomials approximating the Majority function. It is not clear from these proofs, however, whether this continues to be true for subconstant error. The main lemma (Lemma 3.1), or even its simpler version Lemma 3.2, shows that this is indeed true.

THEOREM 4.2 (Tight degree lower bound for the ε-coin problem for all errors). Assume F has characteristic p, and let ε, δ be parameters going to 0. Let N ≥ 1 be any positive integer. Any polynomial P ∈ F[x_1, …, x_N] that solves the ε-coin problem with error δ must have degree Ω((1/ε) · log(1/δ)).

PROOF. We assume that ε is smaller than some small enough constant (for larger ε, we can just appeal to the lower bound of [31]). Assume for now that ε = 1/t for some integer t ≥ 1. Fix n to be the least even integer such that n ≥ Ct² log(1/δ) for a large constant C and such that k := n/t is a power of the characteristic p. Note that n ≤ O(p) · Ct² log(1/δ) = O(t² log(1/δ)). Repeating the above argument with ε' instead of ε yields that deg(P) = Ω((1/ε') · log(1/δ)) = Ω((1/ε) · log(1/δ)). We thus get the same lower bound for deg(P).

Tight Probabilistic Degree Lower Bounds for Positive Characteristic

We start with some basic notation and definitions and then state our result. Throughout this section, let F be a field of fixed (i.e. independent of n) characteristic p > 0. The main theorem of this section characterizes (up to constant factors) the ε-error probabilistic degree of every symmetric function, for almost all interesting values of ε.

THEOREM 4.3 (Probabilistic degree lower bounds over positive characteristic). Let n ∈ N be a growing parameter. Let f ∈ B_n be arbitrary and let (g, h) be a standard decomposition of f (see Section 2 for the definition).
Then for any ε ∈ [2^{−n}, 1/3], we have pdeg^F_ε(f) = Ω(√(n log(1/ε))) if per(g) > 1 and not a power of p; Ω(min{√(n log(1/ε)), per(g)}) if per(g) is a power of p and B(h) = 0; and Ω(min{√(n log(1/ε)), per(g) + √(B(h) log(1/ε)) + log(1/ε)}) otherwise. Here the Ω(·) notation hides constants depending on the characteristic of the field F. Note that this matches the upper bound construction from Theorem 2.5.

Some Preliminaries

DEFINITION 4.4 (Restrictions). Given functions f ∈ B_n and f' ∈ B_m where m ≤ n, we say that f' is a restriction of f if there is some j ∈ [0, n − m] such that the identity f'(x) = f(x 1^j 0^{n−m−j}) holds for every x ∈ {0,1}^m; equivalently, f' can be obtained from f by setting some inputs to 0 and 1 respectively. (Note that exactly which inputs are set to 0 or 1 is not important, since we are dealing with symmetric Boolean functions.) We will use the following obvious fact freely.

OBSERVATION 4.5. If f' is a restriction of f, then for any ε > 0, pdeg_ε(f') ≤ pdeg_ε(f).

In earlier work with Tripathi and Venkitesh [43], we showed the following near-optimal lower bound on the probabilistic degrees of Threshold functions. (The corresponding lemma in [43] is only stated for t ≤ n/2. However, as Thr_{n+1−t}(x) = 1 − Thr_t(1 − x_1, …, 1 − x_n), the above lower bound holds for t > n/2 also.)

The following classical results of Smolensky prove optimal lower bounds on the probabilistic degrees of some interesting classes of symmetric functions.

LEMMA 4.7 (Smolensky's lower bound for the Majority function [45, 42]). For any field F and any ε ∈ (1/2^n, 1/5), we have pdeg^F_ε(Maj_n) = Ω(√(n log(1/ε))).

LEMMA 4.8 (Smolensky's lower bound for MOD functions [41]). For 2 ≤ ℓ ≤ n/2, any F such that char(F) is either zero or coprime to ℓ, and any ε ∈ (1/2^n, 1/(3ℓ)), there exists a b ∈ [0, ℓ − 1] such that pdeg^F_ε(MOD_{ℓ,b}) = Ω(√(n log(1/ε))).

We now show how to use our robust version of Hegedűs's lemma to prove Theorem 4.3. In fact, Lemma 3.2 will suffice for this application.

Strategy and two simple examples

The probabilistic degree lower bounds below will use the following corollary of Lemma 3.2.
C O R O L L A R Y 4 . 9. Let be a growing parameter and assume ∈ [2 − /100 , −200 ]. Assume is an integer such that is a power of and furthermore, = √ for some ∈ R such that 100 ≤ ≤ 1 2 · ln(1/ ). Let ℎ ∈ B be any function such that Spec ℎ( /2 ) ≠ Spec ℎ( /2 − ). Then, pdeg (ℎ) = Ω( ). To illustrate the usefulness of Corollary 4.9, we prove optimal lower bounds on the probabilistic degrees for two interesting classes of functions (both of which will be subsumed by Known lower bounds (Lemmas 4.7 and 4.8) can be used to prove similar lower bounds to the one given above, but with additional log-factor losses (see Lemma 4.8, which requires the error to be subconstant, and [43]). However, we do not know how to prove the above tight (up to constants) lower bound without appealing to Lemma 3.2. In particular, we do not know how to prove the above in characteristic 0. P R O O F . We use Corollary 4.9. We will use EThr /2 and MOD to construct functions that distinguish between weights /2 and /2 − for suitable = Ω( √ ). Corollary 4.9 then implies the required lower bound. For ℎ = EThr /2 , note that Spec ℎ( /2 ) ≠ Spec ℎ( /2 − ) for any < /2 . In particular, setting to be the smallest power of such that ≥ √ 100 and 0 = −2 2 / , we get by Proof of Theorem 4.3 The proof of this theorem closely follows our probabilistic degree lower bounds in [43] with careful modi cations to avoid the log-factor losses therein. Let ∈ B be arbitrary and let ( , ℎ) be a standard decomposition of . We start with a lemma that proves lower bounds on pdeg ( ) as long as per( ) is large. Note that by the bounds on assumed above ≥ 1 4 √︁ log(1/ ) ≥ 20 √ .(14) Using Corollary 4.9, we hence get On the other hand, if > −10000 2 , we proceed as follows. We construct as above, but we may no longer have ≥ 20 √ as implied by (14). However, for By error reduction (Fact 2.4 item 1), the same lower bound holds for pdeg ( ) as well. 
pdeg ( ) ≥ pdeg /2 ( ) = Ω( ) = Ω( √︁ log(1/ )) The next lemma allows us to prove a weak lower bound on pdeg ( ) depending only on its periodic part . 11 Note that we assume that the characteristic is a fixed positive constant and hence the Ω(·) can hide constants depending on . Assume that we choose the smallest ≥ /2 so that this condition holds. Then we have ≤ /2 + ≤ 51 · /100. Fix this . As Spec ( ) ≠ Spec ( + 1 ), we also have Spec ( ) ≠ Spec ( + 1 + · ) for any integer such that 0 ≤ + 1 + ≤ . In particular, as 1 ≡ 1 (mod ), we note that Spec ( ) ≠ Spec ( + 1 ). As 1 ≤ /100, we have /2 ≤ ≤ + 1 ≤ /2 + /50. As Spec ( ) = Spec ( ) for all ∈ [ /3 + 1, 2 /3 ], we have Spec ( ) ≠ Spec ( + 1 ). Without loss of generality, we assume that Spec ( ) = 0 and Spec ( + 1 ) = 1. Let = /2 . We de ne ∈ B as follows. ( ) = ( 1 0 − − ) where is chosen so that Spec ( /2 ) = Spec ( + 1 ) = 1. This also has the consequence that Spec ( /2 − 1 ) = Spec ( ) = 0. By Corollary 4.9, we get pdeg ( ) = Ω( 1 ) = Ω( √︁ log(1/ )), proving the lemma in this case. is a power of . In this case, we rst choose parameters , with the following properties. (P1) ∈ [ ] with ≥ 20 and ≡ (mod 2). (P2) 1/3 ≥ ≥ max{ , 1/2 }. We will show below how to nd , satisfying these properties. Assuming this for now, we rst prove the lower bound on pdeg ( ). )) and is a restriction of , the same lower bound holds for pdeg ( ) as well. This proves the lemma modulo the existence of , as above. We justify this now. 1. If ≤ 10 √ , we take to be the largest integer such that ≡ (mod 2) and ≤ 2 /100. The parameter is set to 1/3. If 10 √ < ≤ /100, then we take to be the largest integer such that ≡ (mod 2) and ≤ /2. The parameter = max{ , 2 − 2 /2 }. Note that as observed above, we have ≤ /100, and hence, the above analysis subsumes all cases. In each case, the veri cation of properties (P1)-(P4) is a routine computation. 
(We assume here that is greater than a suitably large constant, since otherwise the statement of the lemma is trivial.) This concludes the proof. We now prove a lower bound on pdeg (ℎ). Let (ℎ) = . Recall (Observation 2.2) that (ℎ) ≤ /3 . Further, by de nition of (ℎ), we have either Spec ℎ( − 1) = 1 or Spec ℎ( − + 1) = 1. We assume that Spec ℎ( − + 1) = 1 (the other case is similar). The lemma is equivalent to showing that pdeg (ℎ) = Ω(max{ √︁ (ℎ) log(1/ ), log(1/ )}). We do this based on a case analysis based on the relative magnitudes of log(1/ ) and . Assume for now that ≤ 2 − /1000 . In this case, we show a lower bound of Ω(log(1/ )). To see this, set = /4 and consider the restriction ∈ B obtained as follows. So from now, we assume that per( ) is a power of upper-bounded by √︁ log(1/ ) and that (ℎ) ≥ 1. In this case, Lemma 4.12 shows that pdeg( ) = Ω(per( )). On the other hand, since (ℎ) ≤ and ≥ 2 − , the lower bound we need to show is Ω(per( ) + √︁ (ℎ) log(1/ ) +log(1/ )). ( ) = ℎ( 1 − +1− 0 −1 ). Note that as By Lemma 4.13, it su ces to show a lower bound of Ω(per( ) + pdeg (ℎ)). The analysis splits into two simple cases. Assume rst that pdeg (ℎ) ≤ 4 · per( ). In this case, we are trivially done, because we already have pdeg( ) = Ω(per( )), which is Ω(pdeg( )+pdeg (ℎ)) as a result of our assumption. Now assume that pdeg (ℎ) > 4 · per( ). We know that = ⊕ ℎ and hence ℎ = ⊕ . Hence, we have pdeg (ℎ) ≤ 2(pdeg /2 ( ) + pdeg /2 ( )) ≤ (pdeg ( )) + 2per( ), where the rst inequality is a consequence of Fact 2.4 item 2 and the second follows from error-reduction and Theorem 2.5. The above yields pdeg ( ) = Ω( (pdeg (ℎ) − 2 · per( ))) = Ω(pdeg (ℎ)) = Ω(per( ) + pdeg (ℎ)). This nishes the proof. A Robust Version of Galvin's Problem We recall here a combinatorial theorem of Hegedűs [23] regarding set systems. The theorem (and also our robust generalization given below) is easier to prove in the language of indicator vectors, so we state it in this language. 
Given any vectors u, v ∈ F^n for any field F, we define ⟨u, v⟩ := Σ_{i∈[n]} u_i v_i.

THEOREM 4.14. Assume n = 4p^k, for a large enough prime p. Let v^(1), …, v^(t) ∈ {0,1}^n_{n/2} ⊆ Z^n be such that for each u ∈ {0,1}^n_{n/2}, there is an i ∈ [t] such that ⟨v^(i), u⟩ = n/4. Then t ≥ p^k.

The above theorem is nearly tight, as can be seen by taking the indicator vectors of the sets A_i = {i, (i + 1), …, i + (n/2) − 1} for i ∈ [n/2]. Improvements on the above theorem (some of them asymptotically tight) were proved recently by Alon et al. [5] and Hrubeš et al. [25]. Using the robust version of Hegedűs's lemma, we can prove tight robust versions of the above statement.

REMARK 4.15. We can prove a robust generalization (stated below) in a slightly more general setting, where the ith inner product ⟨v^(i), u⟩ is supposed to take a value b_i (which is not necessarily n/4). Similar to Theorem 4.14 above, it is easy to note that our robust version is tight up to constant factors. However, if we consider the robust version of the original statement of Theorem 4.14 (where all the inner products take the value n/4), then while our lower bound continues to hold, it is not clear whether it is tight (except in the settings where ε is either a constant or 2^{−Ω(n)}). We conjecture that it is.

We now prove a robust version of Theorem 4.14.

THEOREM 4.16. Assume n is a growing even integer parameter and ε ∈ [2^{−n}, 1/2]. Let v^(1), …, v^(t) ∈ {0,1}^n_{n/2} ⊆ Z^n and b_1, …, b_t ≤ n be such that

Pr_{u∼{0,1}^n_{n/2}}[∃i ∈ [t] s.t. ⟨v^(i), u⟩ = b_i] ≥ 1 − ε.

Then t = Ω(√(n log(1/ε))).

The theorem can easily be seen to be tight up to constant factors. For t = c·√(n log(1/ε)), set t = 2s + 1 and take v^(1) = v^(2) = ⋯ = v^(t) = 1^{n/2} 0^{n/2} and b_1 = (n/4) − s, b_2 = (n/4) − s + 1, …, b_t = (n/4) + s. By standard Chernoff bounds for the hypergeometric distribution, we immediately get that this set of hyperplanes satisfies the above condition for a large enough choice of the constant c.

We need the following standard bound on binomial coefficients. For completeness, we include the proof in Appendix C.
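The kind of binomial-coefficient bound needed here says that C(n, n/2 − j) decays like exp(−Θ(j²/n)) relative to the central coefficient. A quick numerical check of ours (the constants in the envelope below are slack, chosen only to exhibit the exp(−Θ(j²/n)) shape, and are not a statement from the paper):

```python
from math import comb, exp

def central_ratio(n, j):
    # C(n, n/2 - j) / C(n, n/2): decay of binomial coefficients away from the middle
    return comb(n, n // 2 - j) / comb(n, n // 2)
```

For example, with n = 1000 and j = 30, the ratio is close to exp(−1.8), squarely inside an exp(−4j²/n) to exp(−1.2j²/n) envelope.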
( 2 − 2 )/ )). Given the above, we can prove Theorem 4.16 as follows. P R O O F O F T H E O R E M 4 .1 6 . Recall that for any xed ∈ {0, 1} /2 and any ∈ Z, the probability that a uniformly random ∈ {0, 1} satis es , = is at most (1/ √ ). In particular, we must have = Ω( √ ) for any ≤ 1/2. This proves the result for = Ω(1). Hence, we may assume that is smaller than any xed constant. We can also assume that ≥ 2 − for a small enough constant . Assume that ≤ √︁ log(1/ ). for a large enough constant . Informally speaking, the reason for this inequality is as follows: the expected value of ( ) , is ( /4) − /2 and any number ≡ (mod ) is far from this expectation. To prove this, let = /2 − . Note that = Ω( ) as long as is small enough in relation to , which happens if is assumed to be a small enough constant. Using the fact that | | = ⇒ ( ) = 0 ( ) = 1. As the above linear system is over F ⊆ F, we note that we may assume that ∈ F [ 1 , . . . , ]. From now on, we assume that F = F . Consider the degree-closure = cl ({0, 1} ). By the existence of , we see that ∉ . However, by symmetry, this implies that no point ∈ {0, 1} + lies in . Let , denote the vector space of all multilinear polynomials of degree at most that vanish at all points in {0, 1} . Let be a uniformly random element of , . For any ∈ {0, 1} \ , standard linear algebra implies that ( ) is a uniformly random element of F = F . In particular, for any ∈ {0, 1} + , we see that Pr [ ( ) ≠ 0] = 1 − 1/ . In particular, there is a ∈ , that is non-zero at at least a (1 − 1/ ) fraction of points in {0, 1} + . This yields the statement of the claim. B. Proof of Lemma 2.6 (the string lemma) We begin by recalling the statement of the lemma. L E M M A 2 . 6. Let ∈ {0, 1} + be any non-empty string 12 and , ∈ {0, 1} + such that = = . Then there exists a string ∈ {0, 1} + such that is a power of (i.e. = for some ≥ 2). P R O O F . Assume that | | = , | | = and | | = + = . 
We will show, in fact, that both u and v are powers of the same non-empty string z. This will clearly imply the lemma. The proof is by induction on the length of w. The base case of the induction corresponds to |w| = 2, which is obvious. We now proceed with the inductive case. Assume w.l.o.g. that a ≤ b. As uv = vu, we see that the first a symbols of v match those of u, and hence we have v = uv' for some v' ∈ {0,1}^{b−a}. If v' is empty, this implies that u = v and we are immediately done. Otherwise, uv = vu gives uuv' = uv'u, and hence uv' = v'u for the non-empty string v'. By the induction hypothesis, applied to the shorter string uv', we know that both u and v' are powers of some non-empty z. Hence, so is v = uv'. This concludes the proof.

C. Proof of Claim 4.17

We first restate the claim. The claim then follows by a simple induction on b − a. To prove (16), we proceed as follows: by an expansion of binomial coefficients in terms of factorials, we see that …

… between different layers of the Boolean cube {0,1}^n. For ℓ ∈ {0, …, n}, let {0,1}^n_ℓ be the elements of {0,1}^n of Hamming weight exactly ℓ. As a first approximation, let us say that a polynomial P ∈ F[x_1, …, x_n] (here F is some field) distinguishes between level sets {0,1}^n_ℓ and {0,1}^n_m if it vanishes at all points in the former set and at no point of the latter. Note that the ability of low-degree polynomials to do this depends on the properties of the underlying field F: when F = Q (or any field of characteristic 0), the simple polynomial Σ_{i=1}^n x_i − ℓ does the job.
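Lemma 2.6 is the classical fact that a word with two commuting factorizations is periodic (closely related to the Lyndon-Schützenberger and Fine-Wilf theorems). The following quick executable check of the statement just proved is ours, as is the name common_root; it uses the standard consequence that commuting strings are both powers of their common prefix of length gcd(|u|, |v|):

```python
from math import gcd

def common_root(u, v):
    """If uv == vu, return the string z of length gcd(|u|, |v|) such that
    both u and v are powers of z; return None when u and v do not commute."""
    if u + v != v + u:
        return None
    z = u[:gcd(len(u), len(v))]
    # both strings must be exact repetitions of the common root
    assert u == z * (len(u) // len(z)) and v == z * (len(v) // len(z))
    return z
```

In the notation of the lemma, w = uv = vu is then z repeated (|u| + |v|)/|z| ≥ 2 times.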
, ] that vanishes at a (1 − )-fraction of points of {0, 1} but does not vanish at an 0.0001 fraction of points of {0, 1} + , then deg( ) = Ω( ). Remark 1.3. the proof of Galvin's theorem from Hegedűs's lemma, we can prove a lower bound of Ω( √︁ log(1/ )) for the above version of Galvin's problem for any ∈ [2 − , 1/2]. Note that this interpolates smoothly between a bound of Ω( √ ) for constant and Ω( ) for = 2 −Ω( ) , both of which are tight. For general in between these two extremes, we do not know if our bounds are tight (we suspect they are). However, our bounds are tight for every for a natural generalization of the above problem, where we allow intersections of any size (and not just /4). We refer the reader to Section 4.3 for details.

Multilinear polynomials and multilinearization. Fix any field F. Throughout, we work with functions : {0, 1} → F which are represented by multilinear polynomials. Recall that each such function has a unique multilinear polynomial representation. Further, given a (possibly non-multilinear) polynomial ( 1 , . . . , ) representing (i.e. ( ) = ( ) for all ∈ {0, 1} ), we can obtain a multilinear representation by simply replacing each power for > 1 by in the polynomial . This preserves the underlying function, as = for ∈ {0, 1}. Any polynomial can be multilinearized this way without increasing the degree.

Bernstein's inequality. The following standard deviation bound can be found in, e.g., the book of Dubhashi and Panconesi [17, Theorem 1.2]. Lemma 2.1 (Bernstein's inequality). Let 1 , . . . , be independent and identically distributed Bernoulli random variables with mean . Let = =1 . Then for any > ( ) = )Spec (| |). Given a ∈ B , we define the period of , denoted per( ), to be the smallest positive integer such that Spec ( ) = Spec ( + ) for all ∈ [0, − ]. We say is -bounded if Spec is constant on the interval [ , − ]; let ( ) denote the smallest such that is -bounded.

Definition 2.3 (Probabilistic polynomial and probabilistic degree). A probabilistic polynomial is a random polynomial (with some distribution having finite support) over F[ 1 , . . . , ]. We say that the degree of , denoted deg( ), is at most if the probability distribution defining is supported on polynomials of degree at most . Given a Boolean function : {0, 1} → {0, 1} and an > 0, an -error probabilistic polynomial for is a probabilistic polynomial such that for each ∈ {0, 1} , Pr [ ( ) ≠ ( )] ≤ . < ≤ 1/3 and any Boolean function , if is an -error probabilistic polynomial for , then = ( 1 , . . . , ) is a -error probabilistic polynomial for where = (log(1/ )/log(1/ )), is the exact multilinear polynomial 3. (Sum) Assume that , 1 , . . . , are all Boolean functions on a common set of variables such that the functions 1 , . . . , are mutually exclusive and = ∈[ ] . Then, for any

Corollary 2.9. Fix any prime and positive integer . Assume is a non-negative integer and a positive integer such that + ≤ . Let be the largest power of dividing . Then, there is a symmetric multilinear polynomial ∈ F [ 1 , . . . , ] of degree such that vanishes at all points of {0, 1} but at no point of {0, 1} + . Proof. Assume = where is not divisible by . Let , ∈ {0, . . . , − 1} be the ( + 1)th least significant digit of and + respectively in base . Note that = + 0 (mod ) where 0 is the least significant digit of in base ( 0 is non-zero as is not divisible by ). Define the polynomial ( 1 , . . . , ) = ∑︁ ⊆[ ]:| |= ∈ − , which we consider an element of F [ 1 , . . . , ]. Note that at any input ∈ {0, 1} of Hamming weight , we have ( ) = − where the right hand side is interpreted modulo . Lucas's theorem then easily implies that ( ) = 0 if = and ≠ 0 if = + . Claim 3.8. There is a 1 of degree at most satisfying property (Q1.1). Claim 3.9. There is a 2 of degree at most − − 1 satisfying properties (Q2.1)-(Q2.3), where For any polynomial ∈ F[ 1 , . . .
, ] and any ∈ [0, ], let NZ ( ) denote the set of points of {0, 1} where does not vanish. Let ( ) denote |NZ ( )|/ . Lemma 3.10. For any ∈ F[ 1 , . . . , ] and any ≥ 1, there is a probabilistic polynomial ( ) of degree at most · deg( ) such that for all ∈ [0, ], E ( ) [ ( ( ) )] = ( ) . Proof. For a permutation ∈ , and ∈ {0, 1} , define = ( (1) , . . . , ( ) ). Also, define ( 1 , . . . , ) = ( ) = ( (1) , . . . , ( ) ). For a uniformly random ∈ , and any ∈ {0, 1} , the probabilistic polynomial satisfies Pr [ ( ) ≠ 0] = Pr [ ( ) ≠ 0] = Pr [ ∈ NZ ( )] = ( ) as is uniformly distributed over {0, 1} . Choose 1 , . . . , i.u.a.r. from , and define ( ) = =1 . For any ∈ {0, 1} Pr ( ) ( ) ( ) ≠ 0 = ( ( )) .

Proof of Lemma 3.1. W.l.o.g. we assume that ≤ /2. (To prove the lemma for > /2, consider the polynomial ( ) = (1 − 1 , . . . , 1 − ) instead.) We first reduce to the case where = /2. More precisely, note that there exist non-negative integers ≤ 2 and so that 2( − ) = − − . This can be seen by a simple case analysis. If = − , we can choose = 0, = − 2 + 2 ; if = + and − 2 ≥ 2 , we can choose = 0 and = − 2 − 2 ; and if = + and − 2 < 2 , we can choose = 2 − ( − 2 ) and = 0. Having chosen , as above, we set = − , = − and = − − . Let be a uniformly random subset of [ ] of size + and a uniformly random point in {0, 1} + . We set , ( : ∉ ) to be the probabilistic polynomial obtained by setting all the variables indexed by according to . Note that we have E , [ ( , )] = ( ) Hence, with positive probability over the choice of and , we have both ( , ) ≤ 2 0 / 1 and ( , ) > 1 /2. We fix such a choice , for , and let denote , . Clearly, deg( ) ≥ deg( ) and hence it suffices to lower bound deg( ). We will now use Lemma 3.2 to obtain the desired lower bound on deg( ). First of all, bounds on in the statement of the lemma and the fact that ≤ 2 = . Case 1: Assume first that ≥ 100.
Using the bounds on 0 and 1 that follow from the lemma statement and the bounds above, is a polynomial in variables satisfying 25) · 2 / ), where we have used the inequalities ≥ 2( − ) ≥ and = 2 ≤ 2( + ) ≤ 4 . Define = exp(−2 ). Note that we have ≥ exp(− /5000) by the bound on above. Further, ( /2) − ( ) = ( ) ≤ 2 exp(−99 2 / ) = 2 exp(−99 ) ≤ exp(−2 ) = , also Remark 3.3), we immediately obtain deg( ) ≥ /25 and hence we are done in this case.

Case 2: Now consider the case when < 100. In this case, the hypothesis of the lemma assures us that 0 ≤ 1/1000 and 1 ≥ exp(− 2 /100 ) ≥ exp(− /25) ≥ −4 where the second inequality uses ≤ 4 as argued above. Then, we have ( ) ≤ 1/400, both of which follow from (5a). Let be a uniformly random subset of [ ] of size − and let be a uniformly random point in {0, 1} − ( − )/2 . Define the probabilistic polynomial ( ) , obtained by setting the variables indexed by according to in the probabilistic polynomial ( ) . Let := /2 and := − ( − )/2. As above, we have E [ ( ) , )] = ( ) =: 1 . Let be the smallest positive integer so that 0 = ( ) ≤ −300 . Note that is upper bounded by an absolute constant, as ( ) ≤ 1/400 by (5a). Further, we have ( ) ( ) ≥ /25. As deg( ) ≤ · deg( ), we also get deg( ) = Ω( ), finishing the proof in this case as well. (Note that the Ω(·) hides an absolute constant.)

Lemma 3.12. Let be arbitrary and ⊆ [0, ] be any interval of integers. Given any : → {0, 1}, there is a multilinear polynomial ∈ Z[ 1 , . . . , ] of degree at most | | − 1 such that ( ) = (| |) for each ∈ {0, 1} with | | ∈ . deg( ) ≤ deg( ) = ( ) = (( / ) log(1/ )) = ( ) = ( ) where the second-last equality uses our assumption that = exp(− ( 2 / )). Let ∈ {0, 1} be arbitrary. We analyze the random variable ( ). Note that as long as the Hamming weight of = ( 1 , . . . , ) is in the interval (( − /2) , ( + /2) ), we have ( ) = 0. As each co-ordinate of is 1 with probability / = ∈ [0, as is a large enough constant.
In a similar way, we also see that for any ∈ {0, 1} , we have Pr [ ( ) ≠ 1] < /2 and hence, in particular, Pr [ ( ) ≠ 0] > 1 − ( /2), as long as is a large enough constant. In particular, by Markov's inequality and the union bound, we see that there is a of degree at most deg( ) such that ( ) ≤ and ( ) ≥ 1 − . Thus, we have a polynomial as claimed in Theorem 3.11.

An anonymous reviewer suggested the following extension of the main lemma (Lemma 3.1). We prove this by a simple reduction to the main lemma. (This leads to a worsening in the constants involved.) Lemma 3.13 (An extension to the case when is not a power of ). Assume that F is a field of characteristic . Let be a growing parameter and assume we have positive integer parameters , such that 200 < < − 200 . Let be the largest power of that divides and assume = . Define = min{ / , 1 − ( / )} and = / . Assume that ∈ F[ 1 , . . . , ] is a polynomial such that for some ∈ { + , − }, Pr ∼{0,1} [ ( ) ≠ 0] ≤ min{ −1000 2 / , (6b) Then, deg( ) = Ω( ), where the Ω(·) hides an absolute constant. Remark 3.14. The 'non-robust' version of this lemma (when vanishes everywhere on {0, 1} but not on some point in {0, 1} ) yields a degree lower bound of , and can be proved using similar techniques to those used in proving Hegedűs's lemma. A proof can be found . By Markov's inequality, there is again a fixing of such that neither of the above two events occurs. For such a polynomial , 2 /1000 · − 2 /250 = − 2 /200 ,

Definition 4.1 (The -coin problem). For any ∈ [0, 1] and integer ≥ 1, let be the product distribution over {0, 1} obtained by setting each bit to 1 independently with probability . Let ∈ (0, 1) be a parameter. Given a function : {0, 1} → {0, 1}, we say that solves the -coin problem with error (This definition is sometimes [31] stated in terms of the distributions (1/2)− and (1/2)+ . This is essentially equivalent to the definition above.)
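The coin problem of Definition 4.1 can be illustrated with a small simulation (my own illustrative sketch, not from the paper): the Majority function distinguishes the two biased product distributions with small error once the number of coins is a sufficiently large multiple of 1/δ², matching the intuition behind the degree upper bound.

```python
# Empirical sketch of the delta-coin problem: Majority on n coins
# outputs 1 with high probability when each coin is 1 w.p. 1/2 + delta,
# and 0 when each coin is 1 w.p. 1/2 - delta, once n >> 1/delta^2.
import random

def majority(bits):
    # n is kept odd in the experiment below, so ties cannot occur
    return 1 if sum(bits) * 2 > len(bits) else 0

def error_estimate(delta, n, trials=2000, seed=0):
    """Fraction of trials on which Majority misclassifies the bias."""
    rng = random.Random(seed)
    errs = 0
    for _ in range(trials):
        hi = [rng.random() < 0.5 + delta for _ in range(n)]
        lo = [rng.random() < 0.5 - delta for _ in range(n)]
        errs += (majority(hi) != 1) + (majority(lo) != 0)
    return errs / (2 * trials)

delta = 0.1
print(error_estimate(delta, n=401))  # small once n is a large multiple of 1/delta^2
```

The choice n = 401 (roughly 4/δ² with δ = 0.1) is only for illustration; the text above and below quantifies the degree needed by polynomials, rather than circuits, to achieve a given error.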
Let F be a prime field of characteristic , where is a fixed constant. We consider here the minimum degree of a polynomial ∈ F[ 1 , . . . , ] that solves the -coin problem with error . By Lemma 3.12, for any ≥ 1, there is a polynomial ∈ F[ 1 , . . . , ] of degree ( ) that outputs 0 on all inputs of weight ∈ ( Using Lemma 2.1 (Bernstein's inequality), it can be easily checked that solves the -coin problem with error as long as ≥ 1 2 log(1/ ) for some large enough constant > 0. This yields a polynomial of degree ( 1 log(1/ )). (1/ )) as is a constant. Define the probabilistic polynomial ∈ F[ 1 , . . . , ] obtained from by randomly replacing each variable of by a uniformly random variable among 1 , . . . , . For any ∈ {0, 1} /2 , we have Pr [ ( ) = 0] = Pr ∼ 1/2 [ ( ) = 0] ≤ , and similarly for ∈ {0, 1} ( /2)− , we have Pr [ ( ) ≠ 0] ≤ . In particular, by Markov's inequality, there is a fixed polynomial of degree at most deg( ) ) ≠ 0] ≤ 2 . Hence, by Lemma 3.2, we have deg( ) = Ω( ) = Ω( 1 log(1/ )). Now, if is not of the assumed form, we consider be the largest integer such that ≤ 1/ and set := 1/ . Define ∈ (0, 1) by = / . Note that if , ∈ {0, Now, if we define the probabilistic polynomial ( 1 , . . . , ) by ( 1 , . . . , ) = ( 1 ⊕ 1 , . . . , ⊕ ) where = ( 1 , . . . , ) is sampled from 1/2−( /2) , then solves the -coin problem with error at most . Note also that deg( ) ≤ deg( ) as for each fixed , each ⊕ is a linear function of . and not a power of , Ω(min{ √︁ log(1/ ), per( )}) if per( ) a power of and (ℎ)

Lemma 4.6 (Lemma 27 in [43]). Assume ≥ 1. For any ∈ [2 − , 1/3], pdeg (Thr ) = Ω( √︁ min{ , + 1 − } log(1/ ) + log(1/ )). Proof. By error reduction for probabilistic polynomials (Fact 2.4 item 1), it suffices to prove an Ω( ) lower bound on pdeg /2 (ℎ). Assume without loss of generality that Spec ℎ( /2 ) = 1 and Spec ℎ( /2 − ) = 0. Let be an ( /2)-error probabilistic polynomial for ℎ. Applying Lemma 3.2 to yields deg( ) ≥ deg( ) = Ω( ). 4.10. Let ∈ (0, 1/3] be a constant. Let be any integer relatively prime to such that ≤ 0.99 . Then the -error probabilistic degrees of EThr /2 and MOD are Ω( √ ). Corollary 4.9 that pdeg 0 (ℎ) = Ω( ) = Ω( √ ). By error-reduction for probabilistic polynomials (Fact 2.4 item 1), we also have the same lower bound (up to constant factors) for any ≤ 1/3. This proves the claim in the case that ℎ = EThr /2 . For ℎ = MOD , we make some minor modifications to the above idea. Let ∈ [0, − 1] be such that + ( − )/2 ≡ 0 (mod ). Define ℎ ∈ B − by ℎ ( ) = ℎ( 1 0 − ). Set to be the smallest power of such that ≥ √︁ 100( − ) and 0 = −2 2 /( − ) . Note that Spec ℎ ( ( − )/2 ) = Spec ℎ( + ( − )/2 ) = 1 as + ( − )/2 ≡ 0 (mod ). On the other hand, + ( − )/2 − 0 (mod ) as is a power of and hence not divisible by , which implies that Spec ℎ ( ( − )/2 − ) = 0. Thus, by Corollary 4.9, we get pdeg 0 (ℎ ) = Ω( ) = Ω( √ ). ( ) = ( 1 0 ) where = + − /2 and = − − (it can be checked that , are non-negative for parameters , , as above). Note that Spec ( /2 ) = Spec ( /2 + ) = Spec ( + ) and similarly that Spec ( /2 − ) = Spec ( ). We thus obtain Spec ( /2 ) ≠ Spec ( /2 − ), which proves the lemma under the assumption on above. (We use the bounds on to ensure that 2 − /200 ≤ ≤ −2 2 / , which is part of the hypothesis of Corollary 4.9.) If ∈ [2 − , 2 − /10000 2 ], then for 0 = 2 − /10000 2 , we have pdeg ( ) ≥ pdeg 0 ( ) = Ω( √︁ log(1/ 0 )) = Ω( √︁ log(1/ )) which implies the desired lower bound. Note that Spec ( /2 ) = Spec ( /2 ) and Spec ( /2 − ) = Spec ( /2 − ). Hence, for 1 = −10000 , Corollary 4.9 implies pdeg 1 ( ) ≥ pdeg 1 ( ) = Ω( ) = Ω( √︁ log(1/ 1 )). √︁ log(1/ ) = Ω(min{ , √︁ log(1/ )}) = Ω( ). (Recall that ≤ √︁ log(1/ ).) Lemma 4.13. Assume (ℎ) ≥ 1. Then, for ∈ [2 − , 1/3], pdeg (ℎ) = Ω( √︁ (ℎ) log(1/ ) + log(1/ )). Proof. Similar to the proof of Lemma 4.12, we may assume without loss of generality that ∈ [2 − /10000 , −10000 2 ].
Spec ℎ is the constant 0 function on the interval [ , − ], the function is computing the AND function on inputs. By Lemma 4.6, we immediately have pdeg (ℎ) ≥ pdeg ( ) = Ω(log(1/ )), proving the lemma in this case. Now assume that > 2 − /1000 . In this case, we need to show that pdeg (ℎ) is lower bounded by Ω( √︁ log(1/ )). To prove this, consider the restriction ∈ B 2 −2 defined by ( ) = ℎ( 1 −2 +2 ). Since Spec ℎ is the constant 0 function on the interval [ , − ] and Spec ℎ( − + 1) = 1, it follows that the periodic part of has period Ω( ). It then follows from Lemma 4.11 that pdeg (ℎ) = Ω( √︁ log(1/ )). This concludes the proof of the lemma. Now, we are ready to prove Theorem 4.3. Proof of Theorem 4.3. By Lemma 4.12, we already have the desired lower bound on pdeg ( ) in any of the following scenarios: per( ) is not a power of , or per( ) is a power of and per( ) ≥ √︁ log(1/ ), or (ℎ) = 0. We call ∈ [ ] balanced if | − 4 | ≤ where := √︁ log(1/ ) for a large enough constant . If is not balanced, then we have for a uniformly random ∼ {0, The second inequality above follows from Claim 4.17, and the third follows from the Stirling approximation and the fact that is a large enough constant. In particular, if is the set of balanced , we have Pr ∃ ∉ , ( ) √︁ log(1/ ). We can thus consider only { ( ) | ∈ }, which satisfy the hypothesis with error probability 1 := 2 . Now consider the polynomial ( 1 , . . . , ) = ∈ ( ( ) , − ). We know that vanishes at a random point of {0, 1} /2 with probability at least 1 − 1 . Now, fix any prime ∈ [10 , 20 ] (such a prime exists by standard number-theoretic results). We claim that for any ∈ and a uniformly random point ∈ {0, 1} /2− . From now on, we consider the polynomial as an element of F [ 1 , . . . , ]. At this point, we would like to apply Lemma 3.1 to the polynomial and finish the proof. Unfortunately, the error parameter 1 above is not small enough to apply Lemma 3.1 directly (we need 1 ≤ exp(−200 2 / )).
However, we can do a simple error reduction as in Lemma 3.10 to ensure that Lemma 3.1 is applicable. More precisely, choose to be a large enough absolute constant the last inequality we used the fact that is smaller than some absolute constant. By a simple union bound, there is a fixed polynomial ∈ F [ 1 , . . . , ] of degree · deg( ) = ( ) applying Lemma 3.1 to the polynomial , we get deg( ) = Ω( ) = Ω( √︁ log(1/ )). This yields the desired lower bound on . F[ 1 , . . . , ] of degree at most deg( ) such that vanishes at all ∈ {0, 1} but is non-zero at at least a (1 − 1/ ) fraction of the points in {0, 1} + . Proof. Let = deg( ). Assume without loss of generality that ( ) = 1. Note that is the solution to the system of linear equations defined by the following constraints on polynomials of degree at most . Claim 4.17. Let be an even integer and a non-negative integer with ≤ /2. Then, for any , ∈ {0, . . . , /2 } with ≤ , we have (−Ω( ( 2 − 2 )/ )). Proof. It suffices to show that for each ∈ {0, . . . , Fix any ∈ B . Among all symmetric Boolean functions ∈ B such that Spec ( ) = Spec ( ) for all ∈ [ /3 + 1, 2 /3 ], we choose a function such that per( ) is as small as possible. We call the periodic part of . Define ℎ ∈ B by ℎ = ⊕ . We call ℎ the bounded part of . We will refer to the pair ( , ℎ) as a standard decomposition of the function . Note that we have = ⊕ ℎ. Observation 2.2. Let ∈ B and let ( , ℎ) be a standard decomposition of . Then, per( ) ≤ /3 and (ℎ) ≤ /3 .

Some symmetric Boolean functions. Fix some positive ∈ N. The Majority function Maj on Boolean variables accepts exactly the inputs of Hamming weight greater than /2. For ∈ [0, ], the Threshold function Thr accepts exactly the inputs of Hamming weight at least ; and similarly, the Exact Threshold function EThr accepts exactly the inputs of Hamming weight exactly . Finally, for ∈ [2, ] and ∈ [0, − 1], the function MOD , accepts exactly those

Lemma 3.2 (A special case of Lemma 3.1). Let be a growing parameter and assume ∈ [2 − /100 , −200 ]. Assume is an integer such that is a power of and furthermore, = √ for some ∈ R such that 100 ≤ ≤ 1 2 · ln(1/ ). Let ∈ F[ 1 , . . . , ] be any polynomial such that [29, Lemma 3.3] for a proof. Fact 3.4. Let ∈ F[ 1 , . . . , ] be a non-zero multilinear polynomial of degree at most ≤ . Then cannot vanish at all points in any Hamming ball of radius in {0, 1} . Lemma 3.5. Let , , be any non-negative integers with ≤ ≤ /4. Then we have Theorem A.1] for all fields.) Theorem 3.6. For any ⊆ {0, 1} and any ≤ , we have where = =0 , the number of multilinear monomials of degree at most . Remark 3.7. It should be noted that the above lemma generalizes the standard linear-algebraic fact that for any such that | | < , there is a non-zero multilinear polynomial of degree that vanishes on . Or equivalently, cl ( ) 2 ≤ | |

Proof of Claim 3.8. This follows immediately from the upper bound for periodic functions in Theorem 2.5. Consider the -periodic function that takes the value 1 at point ∈ {0, 1} if and only if | | ≡ (mod ). Since this function is -periodic, it can be represented exactly as a polynomial of degree at most . This yields the claim. Proof of Claim 3.9. Let denote − − 1 . Let = 0 ∪ < − : ≡ (mod ) {0, 1} . We want to show the existence of a polynomial 2 of degree at most such that 2 vanishes at all points of but 2 does not vanish at some point in 1 := {0, 1} \ 1 . Note that this is equivalent to saying that cl ( ) 1 . To show this, it suffices to show that Claim 4.17. Let be an even integer and a non-negative integer with ≤ /2. Then, for any , ∈ {0, . . . , /2 } with ≤ , we have /2 /2 − /2 /2 + /2 /2 − /2 /2 + ≤ exp(−Ω( Recall that these are bounded-depth circuits made up of AND, OR and ⊕ gates. = /25. We now prove the above claims. Recall that, for any alphabet Σ, the notation Σ + denotes the set of non-empty strings over this alphabet.
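The period and boundedness parameters per(f) and b(f) of a symmetric function, defined above via its spectrum, can be computed directly from their definitions. A small illustrative sketch (the list encoding of Spec as a 0/1 list of length n+1 is my own assumption, not the paper's notation):

```python
# Spec of a symmetric Boolean function on n variables, encoded as a
# 0/1 list of length n+1 (entry l is the value on inputs of weight l).

def period(spec):
    """per(f): smallest k >= 1 with spec[l] == spec[l+k] for all l in [0, n-k].
    The condition is vacuously true at k = n+1, so a value always exists."""
    n = len(spec) - 1
    for k in range(1, n + 2):
        if all(spec[l] == spec[l + k] for l in range(0, n - k + 1)):
            return k

def boundedness(spec):
    """b(f): smallest k >= 0 such that spec is constant on [k, n-k].
    The interval is empty for k > n/2, so a value always exists."""
    n = len(spec) - 1
    for k in range(0, n + 2):
        if len(set(spec[k:n - k + 1])) <= 1:
            return k

# Example: a parity-type spectrum on n = 4 variables has period 2,
# while a threshold-type spectrum has no short period.
print(period([1, 0, 1, 0, 1]))  # prints: 2
print(period([0, 0, 1, 1, 1]))  # prints: 5 (only the vacuous period)
```

This matches Observation 2.2 in spirit: a function that agrees with a short-period function on the middle layers has a standard decomposition into a periodic part and a bounded part.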
This work is licensed under the Creative Commons Attribution 4.0 International License. http://creativecommons.org/licenses/by/4.0/ © Srikanth Srinivasan.

Acknowledgements. The author is grateful to Mrinal Kumar, Nutan Limaye, Utkarsh Tripathi and S. Venkitesh for useful discussions, feedback, and encouragement. The author thanks Nutan Limaye for suggesting the robust version of Galvin's problem as an application. The author is also grateful to the anonymous referees of STOC 2020 and TheoretiCS for their corrections and suggestions. In particular, a referee for the TheoretiCS submission pointed out an extension to the main lemma (Lemma 3.13).

References

[1] Amir Abboud, Richard Ryan Williams, and Huacheng Yu. More applications of the polynomial method to algorithm design. Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, pages 218-230, 2015.
[2] Rohit Agrawal. Coin theorems and the Fourier expansion. Chicago Journal of Theoretical Computer Science, 2020.
[3] Miklós Ajtai and Michael Ben-Or. A theorem on probabilistic constant depth computations. Proceedings of the 16th Annual ACM Symposium on Theory of Computing, STOC 1984, Washington, DC, USA, pages 471-474. ACM, 1984.
[4] Josh Alman and Ryan Williams. Probabilistic polynomials and hamming nearest neighbors. IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, pages 136-150. IEEE Computer Society, 2015.
[5] Noga Alon, Mrinal Kumar, and Ben Lee Volk. Unbalancing sets and an almost quadratic lower bound for syntactically multilinear arithmetic circuits. 33rd Computational Complexity Conference, CCC 2018, San Diego, CA, USA, 11:1-11:16, 2018.
[6] James Aspnes, Richard Beigel, Merrick L. Furst, and Steven Rudich. The expressive power of voting polynomials. Combinatorica, 14(2):135-148, 1994.
[7] Peter Beelen and Mrinmoy Datta. Generalized Hamming weights of affine Cartesian codes. Finite Fields and Their Applications, 51:130-145, 2018.
[8] Richard Beigel. The polynomial method in circuit complexity. Proceedings of the Eighth Annual Structure in Complexity Theory Conference, San Diego, CA, USA, pages 82-95. IEEE Computer Society, 1993.
[9] Jean Berstel and Juhani Karhumäki. Combinatorics on words: a tutorial. Bulletin of the EATCS, 79:178, 2003.
[10] Siddharth Bhandari, Prahladh Harsha, Tulasimohan Molli, and Srikanth Srinivasan. On the probabilistic degree of OR over the reals. FSTTCS 2018, volume 122 of LIPIcs, 5:1-5:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.
[11] Mark Braverman. Polylogarithmic independence fools AC0 circuits. Journal of the ACM, 57(5), 2010.
[12] Joshua Brody and Elad Verbin. The coin problem and pseudorandomness for branching programs. 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, Las Vegas, Nevada, USA, pages 30-39. IEEE Computer Society, 2010.
[13] Eshan Chattopadhyay, Pooya Hatami, Shachar Lovett, and Avishay Tal. Pseudorandom generators from the second Fourier level and applications to AC0 with parity gates. 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, San Diego, California, USA, 22:1-22:15, 2019.
[14] G. F. Clements and B. Lindström. A generalization of a combinatorial theorem of Macaulay. Journal of Combinatorial Theory, 7(3):230-238, 1969.
[15] Gil Cohen, Anat Ganor, and Ran Raz. Two sides of the coin problem. APPROX/RANDOM 2014, Barcelona, Spain, volume 28 of LIPIcs, pages 618-629. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2014.
[16] Ernie Croot, Vsevolod F. Lev, and Péter Pál Pach. Progression-free sets in are exponentially small. Annals of Mathematics, pages 331-337, 2017.
[17] Devdatt P. Dubhashi and Alessandro Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[18] Jordan S. Ellenberg and Dion Gijswijt. On large subsets of with no three-term arithmetic progression. Annals of Mathematics, pages 339-343, 2017.
[19] Hikoe Enomoto, Peter Frankl, Noboru Ito, and Kazumasa Nomura. Codes with given distances. Graphs and Combinatorics, 3(1):25-38, 1987.
[20] Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, and Avishay Tal. AC0[p] lower bounds against MCSP via the coin problem. 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, Patras, Greece, volume 132 of LIPIcs, 66:1-66:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[21] Larry Guth. Polynomial Methods in Combinatorics, volume 64 of University Lecture Series. American Mathematical Society, Providence, RI, 2016.
[22] Prahladh Harsha and Srikanth Srinivasan. On polynomial approximations to AC0. Random Structures & Algorithms, 54(2):289-303, 2019.
[23] Gábor Hegedűs. Balancing sets of vectors. Studia Scientiarum Mathematicarum Hungarica, 47(3):333-349, 2009.
[24] Petra Heijnen and Ruud Pellikaan. Generalized Hamming weights of -ary Reed-Muller codes. IEEE Transactions on Information Theory, 44(1):181-196, 1998.
[25] Pavel Hrubeš, Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao, and Amir Yehudayoff. Lower bounds on balancing sets and depth-2 threshold circuits. 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, Patras, Greece, volume 132 of LIPIcs, 72:1-72:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[26] Peter Keevash and Benny Sudakov. Set systems with restricted cross-intersections and the minimum rank of inclusion matrices. SIAM Journal on Discrete Mathematics, 18(4):713-727, 2005.
[27] Adam R. Klivans, Ryan O'Donnell, and Rocco A. Servedio. Learning intersections and thresholds of halfspaces. Journal of Computer and System Sciences, 68(4):808-840, 2004.
[28] Adam R. Klivans and Rocco A. Servedio. Learning DNF in time 2˜( 1/3 ). Journal of Computer and System Sciences, 68(2):303-318, 2004.
[29] Swastik Kopparty and Srikanth Srinivasan. Certifying polynomials for AC0[⊕] circuits, with applications to lower bounds and circuit compression. Theory of Computing, 14(1):1-24, 2018.
[30] Alexander S. Kulikov and Vladimir V. Podolskii. Computing majority by constant depth majority circuits with low fan-in gates. 34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, Hannover, Germany, 49:1-49:14, 2017.
[31] Nutan Limaye, Karteek Sreenivasaiah, Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh. A fixed-depth size-hierarchy theorem for AC0[⊕] via the coin problem. SIAM Journal on Computing, 50(4):1461-1499, 2021.
[32] Nathan Linial, Yishay Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. Journal of the ACM, 40(3):607-620, 1993.
[33] Chi-Jen Lu. An exact characterization of symmetric functions in qAC0[2]. Theoretical Computer Science, 261(2):297-303, 2001.
[34] Raghu Meka, Oanh Nguyen, and Van Vu. Anti-concentration for polynomials of independent random variables. Theory of Computing, 12(1):1-17, 2016.
[35] Zipei Nie and Anthony Y. Wang. Hilbert functions and the finite degree Zariski closure in finite field combinatorial geometry. Journal of Combinatorial Theory, Series A, 134:196-220, 2015.
[36] Aditya Potukuchi. On the AC0[⊕] complexity of Andreev's problem. 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2019, Bombay, India, volume 150 of LIPIcs, 25:1-25:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[37] Ran Raz, Amir Shpilka, and Amir Yehudayoff. A lower bound for the size of syntactically multilinear arithmetic circuits. SIAM Journal on Computing, 38(4):1624-1647, 2008.
[38] Alexander A. Razborov. Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Matematicheskie Zametki, 41(4):598-607, 1987. English translation in Mathematical Notes of the Academy of Sciences of the USSR, 41(4):333-338, 1987.
[39] Ronen Shaltiel and Emanuele Viola. Hardness amplification proofs require majority. SIAM Journal on Computing, 39(7):3122-3154, 2010.
[40] Amir Shpilka and Amir Yehudayoff. Arithmetic circuits: a survey of recent results and open questions. Foundations and Trends in Theoretical Computer Science, 5(3-4):207-388, 2010.
[41] Roman Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 77-82. ACM, 1987.
[42] Roman Smolensky. On representations by low-degree polynomials. Proceedings of 1993 IEEE 34th Annual Foundations of Computer Science, pages 130-138. IEEE, 1993.
[43] Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh. On the probabilistic degrees of symmetric Boolean functions. SIAM Journal on Discrete Mathematics, 35(3):2070-2092, 2021.
On vanishing properties of polynomials on symmetric sets of the boolean cube. Srikanth Srinivasan, S Venkitesh, 10.48550/ARXIV.2111.0544519in positive characteristic, 2021 DOI . Available atSrikanth Srinivasan and S. Venkitesh. On vanishing properties of polynomials on symmetric sets of the boolean cube, in positive characteristic, 2021 DOI . Available at https://arxiv.org/abs/2111.05445 (2, 19). Algebraic methods in lower bounds for computational models with limited communication. M Szegedy, 25The University of ChicagoPhD thesisM. Szegedy. Algebraic methods in lower bounds for computational models with limited communication. PhD thesis, The University of Chicago, 1989 (6, 25). Short monotone formulae for the majority function. Leslie G Valiant, 10.1016/0196-6774(84)90016-6J. Algorithms. 53DOILeslie G. Valiant. Short monotone formulae for the majority function. J. Algorithms, 5(3):363-366, 1984 DOI (4). On approximate majority and probabilistic time. Emanuele Viola, Computational Complexity. 183Emanuele Viola. On approximate majority and probabilistic time. Computational Complexity, 18(3):337-375, 2009 (4). Generalized hamming weights for linear codes. V K Wei, 10.1109/18.133259IEEE Transactions on Information Theory. 375V. K. Wei. Generalized hamming weights for linear codes. IEEE Transactions on Information Theory, 37(5):1412-1418, September 1991 DOI (6, 13). Faster all-pairs shortest paths via circuit complexity. R , Ryan Williams, 10.1137/15M1024524SIAM J. Comput. 475R. Ryan Williams. Faster all-pairs shortest paths via circuit complexity. SIAM J. Comput. 47(5):1965-1985, 2018 DOI (2). The polynomial method in circuit complexity applied to algorithm design. Williams Richard Ryan, 10.4230/LIPIcs.FSTTCS.2014.4734th International Conference on Foundation of Software Technology and Theoretical Computer Science, FSTTCS 2014. New Delhi, IndiaDOI29invited talkRichard Ryan Williams. The polynomial method in circuit complexity applied to algorithm design (invited talk). 
… polynomial of degree at most deg( ) satisfying a stronger property, namely, that of not vanishing at too many points of {0, 1} + .

Claim A.1. Let F be a field of characteristic > 0. Fix any positive integers , , such that ∈ [ , − ], and a power of . If ∈ F[ 1 , . . . , ] is any polynomial that vanishes at all ∈ {0, 1} but does not vanish at some ∈ {0, 1} + , then there is a ∈
[ "Paul Bryan [email protected] ", "Mohammad N Ivaki [email protected] ", "Julian Scheuer [email protected] " ]
[ "Department of Mathematics\nDepartment of Mathematics\nMacquarie University\n2109NSWAustralia", "Albert-Ludwigs-Universität, Mathematisches Institut\nUniversity of Toronto\nErnst-Zermelo-Str. 1M5S 2E4, 79104FreiburgOntarioCanada, Germany" ]
We study long-time existence and asymptotic behavior for a class of anisotropic, expanding curvature flows. For this we adapt new curvature estimates, which were developed by Guan, Ren and Wang to treat some stationary prescribed curvature problems. As an application we give a unified flow approach to the existence of smooth, even Lp-Minkowski problems in R n+1 for p > −n − 1.
10.2140/apde.2019.12.259
[ "https://arxiv.org/pdf/1608.02770v2.pdf" ]
119,584,976
1608.02770
c017ce15a89c1df83caaa68f5dcbc5fea28ae1fc
6 May 2018

A UNIFIED FLOW APPROACH TO SMOOTH, EVEN L_p-MINKOWSKI PROBLEMS

Introduction

Consider a smooth, closed, strictly convex hypersurface M_0 in Euclidean space R^{n+1}, n ≥ 2, given by a smooth embedding F_0 : M → R^{n+1}. Suppose the origin is in the interior of the region enclosed by M_0. We study the long-time behavior of a family of hypersurfaces {M_t} given by smooth maps F : M × [0, T) → R^{n+1} satisfying the initial value problem

(1.1)  ∂_t F(x,t) = (ϕ(ν(x,t)) (F(x,t) · ν(x,t))^{2−p} / K(x,t)) ν(x,t),  F(·,0) = F_0(·).

Here K(·,t) and ν(·,t) are the Gauss curvature and the outer unit normal vector of M_t = F(M,t), and ϕ is a positive, smooth function on S^n. Furthermore, T is the maximal time for which the solution exists.

For p = 2, ϕ ≡ 1, flow (1.1) was studied by Schnürer [62] in R^3 and by Gerhardt [31] in higher dimensions. Both works rely on the reflection principle of Chow and Gulliver [23] and McCoy [50]. Their result is as follows: the volume-normalized flow evolves any M_0 in the C^∞-topology to an origin-centered sphere. For p > 2 and ϕ ≡ 1 it follows from Chow-Gulliver [23, Theorem 3.1] (see also Tsai [63, Example 1]) that (1.1) evolves M_0, after rescaling to fixed volume, in the C^1-topology to an origin-centered sphere.
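Before turning to prior results, the flow (1.1) can be sanity-checked on round spheres (an added sketch, not part of the paper): with ϕ ≡ 1, a sphere of radius r has h = r and K = r^{−n}, so (1.1) reduces to the ODE dr/dt = r^{n+2−p}, whose solution satisfies r(t)^{p−n−1} = r_0^{p−n−1} + (p−n−1)t.

```python
# Added sanity check (not from the paper): for phi ≡ 1 an origin-centered
# sphere of radius r has support function h = r and Gauss curvature
# K = r^{-n}, so flow (1.1) reduces to the ODE  dr/dt = r^{n+2-p}.
# Closed form: r(t)^{p-n-1} = r0^{p-n-1} + (p-n-1) t.

def sphere_radius_exact(r0, n, p, t):
    q = p - n - 1
    return (r0**q + q * t) ** (1.0 / q)

def sphere_radius_rk4(r0, n, p, t, steps=4000):
    """Integrate dr/dt = r^(n+2-p) with classical RK4."""
    f = lambda r: r ** (n + 2 - p)
    r, dt = r0, t / steps
    for _ in range(steps):
        k1 = f(r)
        k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2)
        k4 = f(r + dt * k3)
        r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r

if __name__ == "__main__":
    # p < n+1: the sphere expands to infinity at the finite time t = 1/3
    n, p, r0, t = 2, 0.0, 1.0, 0.2
    print(sphere_radius_rk4(r0, n, p, t), sphere_radius_exact(r0, n, p, t))
```

For p > n + 1 the same closed form gives a sphere existing for all time, matching the two regimes discussed below.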
We refer the reader to the paper [37] for a rather comprehensive list of previous works on this curvature flow. In particular, in either case ϕ ≢ 1, or ϕ ≡ 1 with −n − 1 < p < 2, we are not aware of any result in the literature on the asymptotic behavior of the flow. The following theorem was proved in [37] regarding the case p = −n − 1, ϕ ≡ 1 (in this case the flow belongs to a family of centro-affine normal flows introduced by Stancu in [60]). Let us write B for the unit ball of R^{n+1} and put K̃_t := (V(B)/V(K_t))^{1/(n+1)} K_t, where K_t denotes the convex body enclosed by M_t and V(·) is the (n+1)-dimensional Lebesgue measure.

Theorem ([37]). Let n ≥ 2, p = −n − 1, ϕ ≡ 1 and suppose K_0 has its Santaló point at the origin, i.e.,

∫_{S^n} u / h_{K_0}(u)^{n+2} dσ(u) = 0.

Then there exists a unique solution {M_t} of flow (1.1), such that M̃_t converges in C^∞ to an origin-centered ellipsoid.

Here h_{K_0} is the support function of K_0. A closed, convex hypersurface M_0 can be described in terms of its support function h_{K_0} : S^n → R defined by h_{K_0}(u) = sup{u · x : x ∈ M_0}. If M_0 is smooth and strictly convex, then h_{K_0}(u) = u · F_0(ν^{−1}(u)). From the evolution equation of F(·,t) it follows that h(·,t) := h_{K_t}(·) : S^n × [0,T) → R evolves by

(1.2)  ∂_t h(u,t) = ϕ(u) (h^{2−p} S_n)(u,t),

where S_n(u,t) = 1/K(ν^{−1}(u,t), t). A homothetic self-similar solution of this flow satisfies

(1.3)  h^{1−p} det(∇̄²h + h Id) = c/ϕ

for some positive constant c. Here ∇̄ is the covariant derivative on S^n. Note that S_n = det(∇̄²h + h Id).

We list the main results of the paper, extending the previously mentioned results.

Theorem 1. Let −n − 1 < p < ∞ and let ϕ be a positive, smooth, even function on S^n, i.e., ϕ(u) = ϕ(−u). Suppose K_0 is origin-symmetric. There exists a unique origin-symmetric solution {M_t} of (1.1) such that {M̃_t} converges for a subsequence of times in C^1 to a smooth, origin-symmetric, strictly convex solution of (1.3).
Also, when p ≤ n + 1 the convergence is in C^∞, and if p ≥ 1 the convergence holds for the full sequence. If −n − 1 < p ≤ −n, we can extend the result of the previous theorem by dropping the assumption that ϕ is even.

Theorem 2. Let −n − 1 < p ≤ −n and let K_0 satisfy

∫_{S^n} u ϕ(u) h_{K_0}(u)^{1−p} dσ(u) = 0.

There exists a unique solution {M_t} of flow (1.1) such that {M̃_t} converges for a subsequence of times in C^∞ to a positive, smooth, strictly convex solution of (1.3).

Given any convex body K_0, there exists a vector v such that K_0 + v has the origin in its interior and satisfies the assumption of the second theorem. For ϕ ≡ 1 we prove the following theorem.

Theorem 3. Let 1 ≠ p > −n − 1, ϕ ≡ 1 and let K_0 satisfy

∫_{S^n} u h_{K_0}(u)^{1−p} dσ(u) = 0.

Then there exists a unique solution {M_t} of (1.1) such that {M̃_t} converges in C^1 to the unit sphere. In addition, for 1 ≠ p ≤ n + 1 the convergence holds in C^∞.

For p ≠ n + 1, self-similar solutions to (1.1) are solutions of the L_p-Minkowski problem (1.4), and for p = n + 1, a self-similar solution to (1.1) is a solution to the normalized L_{n+1}-Minkowski problem (1.5), both of which we shall introduce now.

The Minkowski problem deals with existence, uniqueness, regularity, and stability of closed convex hypersurfaces whose Gauss curvature (as a function of the outer normals) is preassigned. Major contributions to this problem were made by Minkowski [51,52], Aleksandrov [2-4], Fenchel and Jessen [27], Lewy [43,44], Nirenberg [53], Calabi [16], Pogorelov [54,55], Cheng and Yau [19], Caffarelli, Nirenberg, and Spruck [17], and others. A generalization of the Minkowski problem known as the L_p-Minkowski problem was introduced by Lutwak in [45], where for any 1 < p ≠ n + 1 and a preassigned even Borel measure on S^n whose support does not lie in a great sphere of S^n, the existence and uniqueness of the solution were proved.
This generalization for 1 < p ≠ n + 1 was further studied by Lutwak and Oliker in [47], where they obtained the C^{k,α} regularity of the solution. Solutions to many cases of these generalized problems followed later in [1, 6, 11, 12, 14, 18, 21, 26, 28, 29, 33, 35, 40, 41, 48, 49, 57-59, 64, 67-69].

For p ≠ n + 1, in the smooth category, the L_p-Minkowski problem asks: given a smooth, positive function ϕ : S^n → R, does there exist a smooth, closed, strictly convex hypersurface M_0 ⊂ R^{n+1} such that

(1.4)  h^{1−p}(ν(x)) / K(x) = 1/ϕ(ν(x)),  x ∈ M_0,

where h denotes the support function, K the Gauss curvature and ν the Gauss map M_0 → S^n? The even L_p-Minkowski problem requires, in addition, that ϕ is an even function. The case p = 1 is the original Minkowski problem. The special case p = n + 1 is troublesome, since (1.4) might not have a solution. To remedy this, Lutwak, Yang and Zhang introduced a normalized formulation of the L_{n+1}-Minkowski problem in [48], and they proved the existence and uniqueness of the solution for any prescribed even Borel measure on S^n whose support is not contained in a great sphere of S^n. In the smooth category, the normalized L_{n+1}-Minkowski problem asks for the existence of a smooth, closed, strictly convex hypersurface M_0 ⊂ R^{n+1} that solves

(1.5)  1/(h^n(ν(x)) K(x)) = V(K_0)/ϕ(ν(x)),

where K_0 is the convex body with boundary M_0. In the rest of the paper, the L_p-Minkowski problem refers to either (1.4) or (1.5), and we avoid the word "normalized". The existence and regularity of solutions to the L_p-Minkowski problem are rather comprehensively discussed in [21] for p > −n − 1. Our study of (1.1) provides an alternative variational treatment (based on a curvature flow) of the even L_p-Minkowski problem. For p = 1, Chou-Wang [20] treated the classical L_1-Minkowski problem in the smooth category by a logarithmic Gauss curvature flow.
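As a quick illustration (added here, not part of the original text), take ϕ ≡ 1 and test origin-centered balls B_R, for which h ≡ R and K ≡ R^{−n}:

```latex
% Added worked example with \varphi \equiv 1 and h \equiv R, K \equiv R^{-n}.
% For p \neq n+1, equation (1.4) becomes
\[
  \frac{h^{1-p}(\nu)}{K} = R^{1-p}\,R^{n} = R^{\,n+1-p} \stackrel{!}{=} 1
  \quad\Longrightarrow\quad R = 1,
\]
% so the unit sphere is the round solution. For p = n+1 the left-hand
% side of (1.5) is scale-invariant, 1/(R^{n} R^{-n}) = 1, while the
% right-hand side is V(K_0)/\varphi = R^{n+1} V(B); hence
\[
  R^{n+1} V(B) = 1
  \quad\Longrightarrow\quad R = V(B)^{-\frac{1}{n+1}} .
\]
```

The p = n + 1 case shows why the normalization is needed: equation (1.4) would impose a condition independent of scale, while (1.5) selects the radius through the volume factor.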
For n = 1 and 1 ≠ p > −3, the existence of solutions to the L_p-Minkowski problems follows from Andrews' results [9] on the asymptotic behavior of a family of contracting and expanding flows of curves. Also, in higher dimensions, the existence of solutions to the L_p-Minkowski problems follows from [11] when −n − 1 < p ≤ −n + 1 (a short proof of this is also given in [38]), or when ϕ is even (i.e., ϕ(u) = ϕ(−u)) and −n + 1 < p < 1. See also [5,10,32,65,66]. Using our results for the flows above, it is now a simple matter to give a new, unified proof of the smooth, even L_p-Minkowski problem for all ranges of p > −n − 1.

Corollary 4. Let −n − 1 < p < ∞ and let ϕ be a positive, smooth, even function on S^n, i.e., ϕ(u) = ϕ(−u). Then for p ≠ n + 1 there exists an origin-symmetric, smooth, strictly convex body such that (1.4) is satisfied. For p = n + 1, there exists an origin-symmetric, smooth, strictly convex body such that (1.5) is satisfied.

Proof. By the first part of Theorem 1 (only the convergence for a subsequence of times is needed), there exists a smooth, strictly convex body K with the volume of the unit ball and a constant c > 0 such that h/K = c h^p/ϕ. Hence

c ∫_{S^n} h^p/ϕ dσ = (n + 1)V(B).

Thus there is a solution to

h^{1−p}(ν(x)) / K(x) = ((n + 1)V(B) / ∫_{S^n} h^p/ϕ dσ) · (1/ϕ(ν(x))).

Now let us define

λ := (∫_{S^n} h^p/ϕ dσ / ((n + 1)V(B)))^{1/(n+1−p)},  p ≠ n + 1;
λ := ((n + 1)V(B) / (V(K) ∫_{S^n} h^{n+1}/ϕ dσ))^{1/(n+1)},  p = n + 1.

Therefore, λK solves the smooth, even L_p-Minkowski problem.

Let us close this section with a brief outline of the paper. The main difficulty in proving convergence of the normalized solutions lies in obtaining long-time existence. The issue arises from the time-dependent anisotropic factor (the support function). We believe that, in such generality, (1.1) serves as the first example where a time-dependent anisotropic factor is allowed. To prove long-time existence, we first obtain bounds on the Gauss curvature in Section 3.1.
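The normalization constant λ in the proof of Corollary 4 can be double-checked directly (an added verification, using only the scaling of the curvature operator under dilations):

```latex
% Added check of the scaling step in the proof of Corollary 4. If
% h^{1-p}\det(\bar\nabla^2 h + h\,\mathrm{Id}) = c/\varphi with p \neq n+1,
% then replacing h by \lambda h gives
\[
  (\lambda h)^{1-p}\det\!\big(\bar\nabla^{2}(\lambda h)+\lambda h\,\mathrm{Id}\big)
  \;=\;\lambda^{\,n+1-p}\,h^{1-p}\det\!\big(\bar\nabla^{2}h+h\,\mathrm{Id}\big)
  \;=\;\lambda^{\,n+1-p}\,\frac{c}{\varphi},
\]
% so \lambda K solves (1.4) precisely when \lambda^{n+1-p} c = 1, i.e.
\[
  \lambda \;=\; c^{-\frac{1}{n+1-p}}
  \;=\;\left(\frac{\int_{S^n} h^{p}/\varphi \, d\sigma}{(n+1)V(B)}\right)^{\!\frac{1}{n+1-p}},
  \qquad
  c=\frac{(n+1)V(B)}{\int_{S^n} h^{p}/\varphi\, d\sigma}.
\]
```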
Using the well-known standard technique of Tso [61], we obtain upper bounds. We obtain lower bounds by applying the same technique to the evolution of the polar body, as in [38]. Controlling the principal curvatures requires estimates on higher derivatives of the speed, which is generally quite difficult due to the non-linearity of the flow. In Section 3.2 we obtain these crucial estimates by adapting the remarkable C² estimates of Guan-Ren-Wang for the prescribed curvature problem; see [34, (4.2)]. Long-time existence then follows readily by standard arguments. Once it is proved that solutions to the flow exist until they expand to infinity uniformly in all directions, the method of [37, Section 8] applies and yields convergence of the volume-normalized solutions in C^1 to self-similar solutions, provided p ≠ 1. Further work is required to establish convergence of normalized solutions if p = 1, and to prove convergence in C^∞ for p ≤ n + 1; this is accomplished in Section 4; see also Remark 10.

Acknowledgment

The work of the first author was supported by the EPSRC on a Programme Grant entitled "Singularities of Geometric Partial Differential Equations", reference number EP/K00865X/1. The work of the second author was supported by Austrian Science Fund (FWF) Project M1716-N25 and the European Research Council (ERC) Project 306445.

Basic evolution equations

Let g = {g_ij} and W = {w_ij} denote, in order, the induced metric and the second fundamental form of M. At every point of the hypersurface M choose a local orthonormal frame {e_1, ..., e_n}. We use the following standard notation:

w_i^j = g^{mj} w_im,  (w²)_i^j = g^{mj} g^{rs} w_ir w_sm,  |W|² = g^{ij} g^{kl} w_ik w_lj = w^{ij} w_ij.

Here, {g^{ij}} is the inverse matrix of {g_ij}. We use semicolons to denote covariant derivatives. The following geometric formulas are well known:

ν_{;i} = w_i^k e_k,
ν_{;ij} = g^{kl} w_{ij;l} e_k − w_i^l w_lj ν,
h_{;i} = w_i^k (F · e_k),
h_{;ij} = w_ij − h w_i^l w_lj + F · ∇w_ij.
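These formulas can be sanity-checked in the simplest setting of a plane convex curve (n = 1). The sketch below (an addition, not part of the paper) verifies two classical facts for an ellipse: the Weingarten relation ν′ = κ F′, the curve analogue of ν_{;i} = w_i^k e_k, and the support-function identity h″ + h = ρ on S¹, which is the one-dimensional case of the radii-of-curvature formula used in the next section; the closed forms κ = ab/(a²sin²t + b²cos²t)^{3/2} and ρ = a²b²/h³ are standard.

```python
import math

# Added numerical sanity checks (not from the paper), for the ellipse
# F(t) = (a cos t, b sin t) with outer unit normal nu(t):
#   (i)  Weingarten relation for curves:  d(nu)/dt = kappa * dF/dt,
#   (ii) support-function identity on S^1:  h''(theta) + h(theta) = rho,
#        where h(theta) = sqrt(a^2 cos^2 + b^2 sin^2) and rho = a^2 b^2 / h^3.

def unit_normal(t, a, b):
    n = (b * math.cos(t), a * math.sin(t))
    m = math.hypot(*n)
    return (n[0] / m, n[1] / m)

def kappa(t, a, b):
    return a * b / (a**2 * math.sin(t)**2 + b**2 * math.cos(t)**2) ** 1.5

def weingarten_residual(t, a, b, eps=1e-6):
    """| d(nu)/dt - kappa * dF/dt | via a central difference in t."""
    dnu = tuple((p - m) / (2 * eps) for p, m in
                zip(unit_normal(t + eps, a, b), unit_normal(t - eps, a, b)))
    dF = (-a * math.sin(t), b * math.cos(t))
    k = kappa(t, a, b)
    return math.hypot(dnu[0] - k * dF[0], dnu[1] - k * dF[1])

def support(theta, a, b):
    return math.sqrt(a**2 * math.cos(theta)**2 + b**2 * math.sin(theta)**2)

def radius_identity_residual(theta, a, b, eps=1e-4):
    """| (h'' + h) - a^2 b^2 / h^3 | via a central second difference."""
    h0 = support(theta, a, b)
    h2 = (support(theta + eps, a, b) - 2 * h0 + support(theta - eps, a, b)) / eps**2
    return abs(h2 + h0 - a**2 * b**2 / h0**3)

if __name__ == "__main__":
    for t in (0.0, 0.7, 1.3, 2.5):
        print(weingarten_residual(t, 2.0, 1.0), radius_identity_residual(t, 2.0, 1.0))
```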
Note that above we considered the support function as a function on the hypersurface; that is, at the point x ∈ M we have h(x) = F(x) · ν(x). For convenience, let ψ(x) = h^{2−p}(x) ϕ(ν(x)). The following evolution equations can be deduced in a standard manner; see for example [30].

Lemma 5. The following evolution equations hold:

∂_t ν = −∇(ψ/K),

∂_t w_i^j = −(ψ/K)_{;ik} g^{kj} − (ψ/K) w_i^k w_k^j
= (ψ K^{kl}/K²) w^j_{i;kl} + (ψ K^{kl}/K²) w_{kr} w^r_l w_i^j − (n + 1)(ψ/K) w_i^k w_k^j
+ (ψ/K²) K^{kl,rs} g^{jm} w_{kl;i} w_{rs;m} − (2ψ/K³) g^{jm} K_{;i} K_{;m}
+ (1/K²) g^{jk} K_{;k} ψ_{;i} + (1/K²) g^{jk} ψ_{;k} K_{;i} − (1/K) g^{jk} ψ_{;ik},

∂_t h = (ψ K^{ij}/K²) h_{;ij} + ψ h (K^{ij}/K²) w_i^l w_lj − (n − 1)(ψ/K) − (1/K) F · ∇ψ.

3.1. Bounds on the Gauss curvature. For completeness, we give the proofs here. In this section we use ∇̄ to denote covariant derivatives on the sphere with respect to the standard metric. The matrix of the radii of curvature of a smooth, closed, strictly convex hypersurface is denoted by r = [r_ij], and the entries of r are considered as functions on the unit sphere. They can be expressed in terms of the support function as

r_ij := ∇̄²_ij h + ḡ_ij h,

where [ḡ_ij] is the standard metric on S^n. Additionally, we recall that S_n = det[r_ij]/det[ḡ_ij].

Lemma 6. Let {M_t} be a solution of (1.1) on [0, t_1]. If c_2 ≤ h_{K_t} ≤ c_1 on [0, t_1], then K ≤ c_4 on [0, t_1]. Here c_4 depends on K_0, c_1, c_2, p, ϕ and t_1.

Proof. We apply the maximum principle to the following auxiliary function defined on the unit sphere:

Θ = ψ S_n/(2c_1 − h) = ∂_t h/(2c_1 − h).

At any minimum of Θ we have 0 = ∇̄_i Θ = ∇̄_i (ψS_n/(2c_1 − h)) and ∇̄²_ij Θ ≥ 0. Therefore, we get

∇̄_i(ψS_n)/(2c_1 − h) = −ψS_n ∇̄_i h/(2c_1 − h)²

and

(3.1)  ∇̄²_ij(ψS_n) + ḡ_ij ψS_n ≥ (−ψS_n r_ij + 2c_1 ψS_n ḡ_ij)/(2c_1 − h).

Differentiating Θ with respect to time yields

∂_t Θ = (ψ S_n^{ij}/(2c_1 − h)) (∇̄²_ij(ψS_n) + ḡ_ij ψS_n) + (ψ² S_n²/(2c_1 − h)²)(1 + (2 − p) h^{−1}(2c_1 − h)),

where S_n^{ij} is the derivative of S_n with respect to the entry r_ij.
By applying inequality (3.1) to the preceding identity, we deduce

(3.2)  ∂_t Θ ≥ Θ²(1 − n + 2c_1 H) − cΘ²,

where H = S_n^{−1} S_n^{ij} ḡ_ij. Therefore, we arrive at

((ϕ h^{2−p}/K)/(2c_1 − h))(t, u) ≥ 1/(ct + 1/min_{u∈S^n} ((ϕ h^{2−p}/K)/(2c_1 − h))(0, u)) ≥ 1/(ct_1 + 1/min_{u∈S^n} ((ϕ h^{2−p}/K)/(2c_1 − h))(0, u)).

Since ψ = ϕ h^{2−p} is bounded above and below by the hypotheses, this bounds S_n = 1/K below on [0, t_1], proving the claim.

Lemma 7. Let {M_t} be a solution of (1.1) on [0, t_1]. If c_1 ≤ h_{K_t} ≤ c_2 on [0, t_1], then K ≥ 1/(a + b t^{−n/(n+1)}) on (0, t_1], where a and b depend only on c_1, c_2, p, ϕ. In particular, K ≥ c_3 on [0, t_1] for a positive number c_3 that depends on K_0, c_1, c_2, p, ϕ and is independent of t_1.

Proof. Suppose K*_t is the polar body¹ of K_t with respect to the origin. We furnish quantities associated with polar bodies with *. The polar bodies evolve by

∂_t h* = −ψ* S*_n^{−1},  h*(·, t) = h_{K*_t}(·),

where

ψ* = ((h*² + |∇̄h*|²)^{(n+1+p)/2} / h*^{n+1}) ϕ((h* u + ∇̄h*)/(h*² + |∇̄h*|²)^{1/2});

see Lemma 11 for the proof. In addition, we have c′_1 = 1/c_2 ≤ h* ≤ 1/c_1 = c′_2. We will show that the function

Θ = ψ* S*_n^{−1}/(h* − c′_1/2)

remains bounded. At any maximum point of Θ:

0 = ∇̄_i Θ = ∇̄_i (ψ* S*_n^{−1}/(h* − c′_1/2)) and ∇̄²_ij Θ ≤ 0.

Hence, we obtain

(3.3)  ∇̄_i(ψ* S*_n^{−1})/(h* − c′_1/2) = ψ* S*_n^{−1} ∇̄_i h*/(h* − c′_1/2)²,

and consequently,

(3.4)  ∇̄²_ij(ψ* S*_n^{−1}) + ḡ_ij ψ* S*_n^{−1} ≤ (ψ* S*_n^{−1} r*_ij − (c′_1/2) ψ* S*_n^{−1} ḡ_ij)/(h* − c′_1/2).

Differentiating Θ with respect to time yields

∂_t Θ = (ψ* S*_n^{−2}/(h* − c′_1/2)) S*_n^{ij} (∇̄²_ij(ψ* S*_n^{−1}) + ḡ_ij ψ* S*_n^{−1}) + (S*_n^{−1}/(h* − c′_1/2)) ∂_t ψ* + Θ².

On the other hand, in view of

|∂_t h*| = ψ* S*_n^{−1},  ∇̄∂_t h* = ∇̄(ψ* S*_n^{−1}) = ψ* S*_n^{−1} ∇̄h*/(h* − c′_1/2),  |∇̄h*| ≤ c′_2,

where for the second equation we used (3.3), we have

(S*_n^{−1}/(h* − c′_1/2)) ∂_t ψ* ≤ c(n, p, c_1, c_2, ϕ) Θ².

Employing this last inequality and inequality (3.4), we infer that, at any point where the maximum of Θ is attained,

(3.5)  ∂_t Θ ≤ Θ²(c′ − (c′_1/2) H*).
¹ The polar body of a convex body K with the origin of R^{n+1} in its interior is the convex body defined by K* = {x ∈ R^{n+1} : x · y ≤ 1 for all y ∈ K}.

Moreover, we have

H* ≥ n ((h* − c′_1/2)/(ψ* S*_n^{−1}))^{−1/n} (ψ*/(h* − c′_1/2))^{−1/n} ≥ n Θ^{1/n} (c″/(c′_1 − c′_1/2))^{−1/n}.

Therefore, we can rewrite inequality (3.5) as

∂_t Θ ≤ Θ²(c − c′ Θ^{1/n})

for positive constants c and c′ depending only on p, c_1, c_2, ϕ. Hence,

(3.6)  Θ ≤ c + c′ t^{−n/(n+1)}

for some positive constants depending only on p, c_1, c_2, ϕ.² Inequality (3.6) implies that

(3.8)  S*_n^{−1} ≤ a′ + b′ t^{−n/(n+1)}

for some a′ and b′ depending only on p, c_1, c_2, ϕ. Now we can use the argument given in [39, Lemma 2.3] to obtain the desired lower bound: For every u ∈ S^n, there exists a unique u* ∈ S^n such that

S_n h^{n+2}(u) · S*_n h*^{n+2}(u*) = 1;

see [36]. In view of this identity and (3.8), we conclude that on (0, t_1] we have K ≥ 1/(a + b t^{−n/(n+1)}).

² Claim. Suppose f is a positive smooth function of t on [0, t_1] that satisfies

(3.7)  (d/dt) f ≤ c_0 + c_1 f + c_2 f² − c_3 f^{2+p},

where c_3, p are positive. There exist constants c, c′ > 0, independent of the solution and depending only on c_0, c_1, c_2, c_3, p, such that f ≤ c + c′ t^{−1/(p+1)} on (0, t_1].

Proof. Note that there exists x_0 > 0 such that c_0 + c_1 x + c_2 x² − c_3 x^{2+p} < −(c_3/2) x^{2+p} for x > x_0. If f(0) ≤ x_0, then f may increase forward in time, but when f reaches x_0, then f must start decreasing (since the right-hand side of (3.7) becomes negative). Thus we may assume, without loss of generality, that f(0) > x_0. Therefore, f > x_0 on a maximal time interval [0, t_0). On [0, t_0) we can solve (d/dt) f ≤ −(c_3/2) f^{2+p} to obtain f ≤ ((p+1) c_3 t/2)^{−1/(p+1)}. At t_0 we have c_0 + c_1 f + c_2 f² − c_3 f^{2+p} = −(c_3/2) f^{2+p} and f = x_0; therefore the right-hand side of (3.7) is still negative. So f ≤ f(t_0) on [t_0, t_1].
In conclusion, f ≤ max{((p+1) c_3 t/2)^{−1/(p+1)}, x_0 = f(t_0)} ≤ c + c′ t^{−1/(1+p)}, where c, c′ do not depend on the solution.

3.2. Upper and lower bounds on the principal curvatures. To obtain upper and lower bounds on the principal curvatures, denoted by {κ_i}_{i=1}^n, we will consider the auxiliary function used by Guan-Ren-Wang for a prescribed curvature problem; see [34, (4.2)].

Lemma 8. Let {M_t} be a solution of (1.1) on [0, t_1]. If c_1 ≤ h_{K_t} ≤ c_2 on [0, t_1], then c_5 ≤ κ_i ≤ c_6 on [0, t_1], where c_5 and c_6 depend on K_0, c_1, c_2, p, ϕ and t_1.

Proof. In view of Lemmas 6 and 7, it suffices to show that |W| remains bounded on [0, t_1]. Consider the auxiliary function

Θ = (1/2) log(|W|²) − α log h.

Assume without loss of generality that c_1 > 1, for otherwise we replace h by 2h/c_1, which does not affect the evolution equation of Θ. Using the parabolic maximum principle, we show that for some α large enough, Θ(·, t) is always negative on [0, t_1]. If the conclusion of the theorem is false, we may choose (x_0, t_0) with t_0 > 0 such that Θ(x_0, t_0) = 0, Θ(x, t_0) ≤ 0, and Θ(x, t) < 0 for t < t_0. Then,

0 ≤ Θ̇ − (ψ K^{kl}/K²) Θ_{;kl}
= −(ψ/|W|²)(K^{kl}/K²) w^j_{i;k} w^i_{j;l} + (2ψ/|W|⁴)(K^{kl}/K²) w^j_i w^s_r w^i_{j;k} w^r_{s;l}
+ ψ (K^{kl}/K²) w_{kr} w^r_l − (n + 1) ψ (w²)^j_i w^i_j/(K |W|²)
+ (ψ w^i_j/|W|²)((K^{kl,rs}/K²) w_{kl;i} g^{jp} w_{rs;p} − (2/K³) g^{jp} K_{;i} K_{;p})
+ ((2/K²) g^{jp} ψ_{;i} K_{;p} − (1/K) g^{jp} ψ_{;ip}) w^i_j/|W|²
+ (n − 1) αψ/(hK) + (α/(hK))(F · ∇ψ) − (αψ/h²)(K^{kl}/K²) h_{;k} h_{;l} − αψ (K^{kl}/K²) w_{kr} w^r_l.

Pick normal coordinates around x_0 such that at (x_0, t_0) there holds g_ij = δ_ij, w_ij = w_ii δ_ij.
At (x_0, t_0) we may write

K^{kl,rs} w_{kl;i} w_{rs;i} = K^{kk,ll} w_{kk;i} w_{ll;i} − K^{kk,ll} w²_{kl;i},

due to the relation

(3.9)  K^{kl,rs} w_{kl;i} w_{rs;j} w^{ij} = ∑_i w_ii (∑_{p,q} (∂²K/∂κ_p ∂κ_q) w_{pp;i} w_{qq;i} + ∑_{p≠q} ((∂K/∂κ_p − ∂K/∂κ_q)/(κ_p − κ_q)) w²_{pq;i}).

Multiplying the above inequality by K², we obtain at (x_0, t_0):

0 ≤ −(ψ/|W|²) K^{ii} ∑_l w²_{ll;i} − (ψ/|W|²) K^{ii} ∑_{p≠q} w²_{pq;i} + (2ψ/|W|⁴) K^{ii} (∑_j w_jj w_{jj;i})²
+ ψ K^{ii} w²_ii − (n + 1) ψ K ∑_i w³_ii/|W|²
+ (ψ/|W|²) ∑_i w_ii (K^{pp,qq} w_{pp;i} w_{qq;i} − K^{pp,qq} w²_{pq;i} − 2 (K_{;i})²/K)
+ ∑_i (2 ψ_{;i} K_{;i} − K ψ_{;ii}) w_ii/|W|²
+ (n − 1) αψK/h + (αK/h)(F · ∇ψ) − (αψ/h²) K^{kl} h_{;k} h_{;l} − αψ K^{ii} w²_ii.

At (x_0, t_0) we have

(3.10)  0 = Θ_{;k} = ∑_i w_ii w_{ii;k}/|W|² − α h_{;k}/h.

We may assume at x_0 that w_11 = max{w_ii : 1 ≤ i ≤ n}. Therefore,

(3.11)  Θ(x_0, t_0) = 0 ⇒ c_1^α/√n ≤ w_11 ≤ c_2^α.

On the other hand, since ψ is bounded above and below in view of the hypotheses of the lemma, we obtain

(3.12)  ψ_{;i} ≤ C_0 w_ii ⇒ 2 ψ_{;i} K_{;i} ≤ (εψ/c_4)(K_{;i})² + (c_4 C_0²/(ψε)) w²_ii ≤ εψ (K_{;i})²/K + C(ε, K_0, ϕ, t_1) ψ w²_ii,

where c_4 (depending on t_1) is from Lemma 6, and

(3.13)  ψ_{;ii} ≥ −C − C w_ii − C w²_ii + ∑_k w_{ii;k} d_ν ψ(∂_k).

Using (3.10) in (3.13) we obtain

(3.14)  −(K/|W|²) ∑_i w_ii ψ_{;ii} ≤ (K/|W|²) ∑_i w_ii (C + C w_ii + C w²_ii − ∑_k w_{ii;k} d_ν ψ(∂_k))
≤ (K/|W|²) ∑_i w_ii (C + C w_ii + C w²_ii) − (αK/h) ∑_k h_{;k} d_ν ψ(∂_k)
= (K/|W|²) ∑_i w_ii (C + C w_ii + C w²_ii) − (αK/h) ∑_i w_ii (∂_i · F) d_ν ψ(∂_i)
≤ (ψ/|W|²) ∑_i w_ii (C + C w²_ii) − (αK/h) ∑_i w_ii (∂_i · F) d_ν ψ(∂_i).

For the last inequality, we used that K is bounded above and ψ is bounded below (so the constant C depends on K_0, ϕ, t_1).
Combining (3.10), (3.12) and (3.14) implies that

0 ≤ −(ψ/|W|²) K^{ii} ∑_l w²_{ll;i} − (ψ/|W|²) K^{ii} ∑_{p≠q} w²_{pq;i} + (2ψ/|W|⁴) K^{ii} (∑_j w_jj w_{jj;i})²
+ ψ K^{ii} w²_ii − (n + 1) ψ K ∑_i w³_ii/|W|²
+ (ψ/|W|²) ∑_l w_ll (K^{pp,qq} w_{pp;l} w_{qq;l} − K^{pp,qq} w²_{pq;l} − (2 − ε)(K_{;l})²/K)
+ (ψ/|W|²) ∑_i w_ii (C + C w²_ii) − (αK/h) ∑_i w_ii (∂_i · F) d_ν ψ(∂_i)
+ (n − 1) αψK/h + (αK/h) ∑_s (∂_s · F) d_F ψ(∂_s) + (αK/h) ∑_i w_ii (∂_i · F) d_ν ψ(∂_i)
− (αψ/h²) K^{ii} w²_ii (∂_i · F)² − αψ K^{ii} w²_ii

(3.15)  ≤ (ψ/|W|²)(∑_l w_ll (C + C w²_ll) − nK ∑_l w³_ll + K^{ii} w²_ii)
+ αψ (nK/h − K^{ii} w²_ii − K^{ii} w²_ii (∂_i · F)²/h² + (K/(hψ)) ∑_s (∂_s · F) d_F ψ(∂_s))
− ψ ∑_i (A_i + B_i + C_i + D_i − E_i) − αψK/h − ψK ∑_i w³_ii/|W|²,

where C depends on ε, K_0, ϕ, t_1, and

A_i = ((2 − ε)/(|W|² K)) w_ii (K_{;i})² − (w_ii/|W|²) ∑_{p,q} K^{pp,qq} w_{pp;i} w_{qq;i},
B_i = (2/|W|²) ∑_j w_jj K^{jj,ii} w²_{jj;i},
C_i = (2/|W|²) ∑_{j≠i} K^{jj} w²_{jj;i},
D_i = (1/|W|²) K^{ii} ∑_j w²_{jj;i},
E_i = (2/|W|⁴) K^{ii} (∑_j w_jj w_{jj;i})².

The terms B_i and C_i deserve some explanation. C_i comes from the second term in (3.15), which reads

−(ψ/|W|²) ∑_i K^{ii} ∑_{p≠q} w²_{pq;i} ≤ −(ψ/|W|²) ∑_{p≠q} K^{pp} w²_{pq;p} − (ψ/|W|²) ∑_{p≠q} K^{qq} w²_{pq;q},

which is exactly C_i due to the Codazzi equation. The third line of (3.15) arises from (3.9). Since the second term in the bracket of (3.9) is negative and the hypersurface is convex, we can proceed in the same way as we derived C_i and just throw away all indices i which are neither p nor q. This gives the term B_i. The first term in the big bracket goes into A_i.

In Corollary 14 of the appendix we will present an adaptation of the method developed in [34] to deal with the curvature derivative terms A_i, B_i, C_i, D_i, E_i. There we prove the following alternative: There exist positive numbers δ_2, ..., δ_n, which depend only on the dimension and bounds on the Gauss curvature, such that either

w_ii > δ_i w_11 for all 2 ≤ i ≤ n,  or  A_i + B_i + C_i + D_i − E_i ≥ 0 for all 1 ≤ i ≤ n.
By taking α large in (3.11), in the first case we get a contradiction to the bound on the Gauss curvature. In the second case, using also K^{ii} w²_ii = K ∑_i w_ii, (3.15) yields

0 ≤ (ψ/|W|²)(∑_l w_ll (C + C w²_ll) − nK ∑_l w³_ll) − (α − 1) Kψ ∑_i w_ii
+ αψ ((n − 1) K/h − (K/h²) ∑_i w_ii (∂_i · F)² + (K/(hψ)) ∑_l (∂_l · F) d_F ψ(∂_l)).

Consequently, we obtain

0 ≤ C(ε, K_0, ϕ, t_1) w³_11/|W|² − (α − 1) Kψ w_11 + C(K_0, ϕ, t_1) α,

where we discarded −(α − 1) Kψ ∑_{i≠1} w_ii ≤ 0 and used the bounds on h, ψ and K to bound w_11 in terms of w³_11. Now take α such that (α − 1) Kψ ≥ C(ε, K_0, ϕ, t_1) + 1. Therefore, in view of (3.11),

0 ≤ C(ε, K_0, ϕ, t_1) w³_11/|W|² − (α − 1) Kψ w_11 + C(K_0, ϕ, t_1) α
≤ C(ε, K_0, ϕ, t_1)(w²_11/|W|² − 1) w_11 − w_11 + C(K_0, ϕ, t_1) α
≤ −c_1^α/√n + C(K_0, ϕ, t_1) α.

Since c_1 > 1, taking α sufficiently large makes the right-hand side negative, a contradiction.

Proof (long-time existence). First, let p ≥ n + 1. In this case, by comparing with suitable outer balls, the flow exists on [0, ∞). For p > n + 1, consider an origin-centered ball B_r such that K_0 ⊇ B_r. Then K_t ⊇ B_{r(t)}, where

r(t) = ((min h_{K_0})^{p−n−1} + t (p − n − 1) min ϕ)^{1/(p−n−1)},

and B_{r(t)} expands to infinity as t approaches ∞. For p = n + 1, K_t ⊇ B_{r(t)} with r(t) = e^{t min ϕ} min h_{K_0}, and B_{r(t)} expands to infinity as t approaches ∞. Second, if p < n + 1, then the flow exists only on a finite time interval. If max h_{K_t} < ∞, then by Lemmas 6, 7 and 8, the evolution equation (1.1) is uniformly parabolic on [0, T). Thus, the result of Krylov and Safonov [42] and standard parabolic theory allow us to extend the solution smoothly past time T, contradicting its maximality.

Convergence of normalized solutions

4.1. Convergence in C^1, 1 ≠ p > −n − 1. By the proof of [37, Corollary 7.5], there exist r, R such that

(4.1)  0 < r ≤ h_{K̃_t} ≤ R < ∞.

Therefore, a subsequence of {K̃_{t_k}} converges in the Hausdorff distance to a limiting shape K̃_∞ with the origin in its interior.
The argument of [37, Section 8.1] implies ϕ h̃^{1−p}_{K̃_∞} f_{K̃_∞} = c, where f_{K̃_∞} is the positive continuous curvature function of K̃_∞ and c is some positive constant. By [21, Proposition 1.2], K̃_∞ is smooth and strictly convex. The C¹-convergence, which is purely geometric and does not depend on the evolution equation, follows from [8, Lemma 13].

Remark 10. Section 4.1 completes the discussion on the existence of solutions to the smooth, even L_p-Minkowski problems in R^{n+1} for 1 ≠ p > −n−1. The next section discusses the C^∞ convergence when 1 ≠ p ≤ n+1, and also when p = 1 and solutions are origin-symmetric. We mention that in the latter case, by the proof of [37, Corollary 7.5], the estimate (4.1) still holds.

4.2. Convergence in C^∞. By [37, Lemma 9.2], there is a uniform upper bound on the Gauss curvature of the normalized solution when p ≤ n+1. In the following, we first obtain a uniform lower bound on the Gauss curvature of the normalized solution K̃_t. Let h : S^n × [0, T) → R be a solution of equation (1.2). Then for each λ > 0, the function

h̃ : S^n × [0, T/λ^{(1+n−p)/(n+1)}) → R,   h̃(u, t) = λ^{1/(n+1)} h(u, λ^{(1+n−p)/(n+1)} t)

is also a solution of the evolution equation (1.2), but with the initial data λ^{1/(n+1)} h(·, 0). For each fixed time t ∈ [0, T), define h̃, a solution of (1.2), as follows:

h̃(u, τ) = (V(B)/V(K_t))^{1/(n+1)} h( u, t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} τ ).

Note that h̃(·, 0) is the support function of (V(B)/V(K_t))^{1/(n+1)} K_t; therefore, r ≤ h̃(u, 0) ≤ R. Write K̃_τ for the convex body associated with h̃(·, τ) and let B_c denote the ball of radius c centered at the origin. Since B_R encloses K̃_0, the comparison principle implies that B_{2R} will enclose K̃_τ for τ ∈ [0, δ], where δ depends only on p, R, ψ. By the first statement of Lemma 7 applied to h̃, there is a uniform lower bound (depending only on r, R, p, ϕ) on the Gauss curvature of K̃_{δ/2}.
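The rescaling property invoked at the start of Section 4.2 can be verified directly. A sketch, assuming the unnormalized flow is written as ∂_t h = ϕ h^{2−p} S_n, with S_n the product of the principal radii (the form underlying the normalized equation for h̃_τ):

```latex
% Rescaling check: set \tilde h(u,t) := \lambda^{\frac{1}{n+1}}
% h\bigl(u,\lambda^{\frac{1+n-p}{n+1}}t\bigr).  By the chain rule,
\partial_t \tilde h
  = \lambda^{\frac{1}{n+1}}\,\lambda^{\frac{1+n-p}{n+1}}\,
    \varphi\, h^{2-p} S_n
  = \lambda^{\frac{2+n-p}{n+1}}\,\varphi\, h^{2-p} S_n .
% Dilating a convex body by \lambda^{1/(n+1)} scales all principal radii by
% the same factor, so \tilde S_n = \lambda^{n/(n+1)} S_n, and therefore
\varphi\,\tilde h^{\,2-p}\,\tilde S_n
  = \lambda^{\frac{2-p}{n+1}}\,\lambda^{\frac{n}{n+1}}\,
    \varphi\, h^{2-p} S_n
  = \lambda^{\frac{2+n-p}{n+1}}\,\varphi\, h^{2-p} S_n
  = \partial_t \tilde h ,
% i.e. \tilde h again solves the same flow, with dilated initial data.
```

The two scaling factors match exactly, which is why the fixed-time renormalization h̃(u, τ) above again produces a solution of (1.2).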
On the other hand, the volume of K̃_{δ/2} is bounded above by V(B_{2R}); therefore,

V(B)/V(B_{2R}) ≤ c_t := V(K_t)/V( K_{t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} δ/2} ) ≤ 1 for all t ∈ [0, T).

Consequently,

( V(B)/V( K_{t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} δ/2} ) )^{1/(n+1)} h( u, t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} δ/2 ) = c_t^{1/(n+1)} h̃(·, δ/2)

has Gauss curvature bounded below for all t ∈ [0, T). Now we show that for every t̄ ∈ ( (V(B)/V(K_0))^{(1+n−p)/(n+1)} δ/2, T ), we can find t ∈ [0, T) such that t̄ = t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} δ/2. Define f(t) = t + (V(B)/V(K_t))^{(1+n−p)/(n+1)} δ/2 − t̄ on [0, T). f is continuous, and

f(T) = T − t̄ > 0 for p < n+1;  f(∞) = ∞ for p = n+1;  f(0) ≤ 0 for p ≤ n+1.

The claim follows. Next we obtain uniform lower and upper bounds on the principal curvatures of the normalized solution. Consider the convex bodies

K̃_τ := (V(B)/V(K_t))^{1/(n+1)} K_t,  where τ(t) := ∫_0^t (V(K_s)/V(B))^{(1+n−p)/(n+1)} ds  (see footnote 3).

Let us furnish all geometric quantities associated with K̃_τ by an over-tilde. The evolution equation of h̃_τ is given by

∂_τ h̃_τ = ϕ h̃^{2−p} S̃_n − ( ∫_{S^n} ϕ h̃^{2−p} S̃²_n dσ / ((n+1)V(B)) ) h̃.

Since ∫_{S^n} ϕ h̃^{2−p} S̃²_n dσ/((n+1)V(B)) is uniformly bounded above, applying the maximum principle to Θ = (1/2) log(W̃²) − α log h̃, and arguing as in the proof of Lemma 8, we see that W̃ has a uniform upper bound. This in turn, in view of our lower and upper bounds on the Gauss curvature of K̃_τ, implies that we have uniform lower and upper bounds on the principal curvatures of K̃_τ. Higher order regularity estimates and convergence in C^∞ for a subsequence of {K̃_τ} follow from Krylov-Safonov [42], standard parabolic theory and the Arzelà-Ascoli theorem. The convergence for the

Footnote 3: Suppose p < n+1. For each t ∈ [0, T), by the comparison principle we have (max h_{K_t})^{p−n−1}/((n+1−p) max ϕ) ≤ T − t ≤ (min h_{K_t})^{p−n−1}/((n+1−p) min ϕ). Therefore, since max h_{K_t}/min h_{K_t} ≤ R/r (see (4.1)), we get c_1 (T−t)^{1/(p−n−1)} ≤ min h_{K_t} ≤ (V(K_t)/V(B))^{1/(n+1)} ≤ max h_{K_t} ≤ c_2 (T−t)^{1/(p−n−1)}. Thus lim_{t→T} τ(t) = ∞.
full sequence when p ≥ 1 follows from the uniqueness of the self-similar solutions to (1.3); see [21, 45]. Moreover, note that when ϕ ≡ 1 and −n−1 < p < 1, by the result of [15], the limit is the unit sphere.

Appendix

Evolution of polar bodies. Let K be a smooth, strictly convex body with the origin in its interior. Suppose ∂K, the boundary of K, is parameterized by the radial function r = r(u) : S^n → R. The metric [g_ij], unit normal ν, support function h, and the second fundamental form [w_ij] of ∂K can be written in terms of r and its partial derivatives as follows:

(a) g_ij = r² ḡ_ij + ∇_i r ∇_j r,
(b) ν = (r u − ∇r)/√(r² + |∇r|²),
(c) h = r²/√(r² + |∇r|²),
(d) w_ij = (−r ∇²_ij r + 2 ∇_i r ∇_j r + r² ḡ_ij)/√(r² + |∇r|²).

Since 1/r is the support function of K* (see, e.g., [56, page 57]), we can calculate the entries of [r*_ij]:

r*_ij = ∇²_ij (1/r) + (1/r) ḡ_ij = (−r ∇²_ij r + 2 ∇_i r ∇_j r + r² ḡ_ij)/r³.

Thus, using (d) we get r*_ij = (√(r² + |∇r|²)/r³) w_ij.

Lemma 11. As the K_t evolve by (1.2), their polars K*_t evolve as follows:

∂_t h* = −ϕ( (h* u + ∇h*)/√(h*² + |∇h*|²) ) (h*² + |∇h*|²)^{(n+1+p)/2} / (h*^{n+1} S*_n),   h*(·, t) := h_{K*_t}(·).

Proof. To obtain the evolution equation of h_{K*_t}, we first need to parameterize M_t over the unit sphere: F = r(u(·, t), t) u(·, t) : S^n → R^{n+1}, where r(u(·, t), t) is the radial function of M_t in the direction u(·, t). Note that

∂_t r = ϕ (h^{2−p}/K) √(r² + |∇r|²)/r,

and

K = det w_ij/det g_ij,  S*_n = det r*_ij/det ḡ_ij,  det ḡ_ij/det g_ij = 1/(r^{2n−2}(r² + |∇r|²)),  h = 1/√(h*² + |∇h*|²).

Now we calculate

∂_t h* = ∂_t (1/r) = −(h^{2−p}/K) (√(r² + |∇r|²)/r³) ϕ(ν)
= −h^{2−p} (√(r² + |∇r|²)/r³) (det g_ij/det w_ij) ϕ(ν)
= −h^{2−p} (√(r² + |∇r|²)/r³) (det ḡ_ij/det r*_ij) (det g_ij/det ḡ_ij) (det r*_ij/det w_ij) ϕ(ν)
= −(√(r² + |∇r|²)/r³)^{n+1} r^{2n−2} (r² + |∇r|²) (h*² + |∇h*|²)^{(p−2)/2} ϕ(ν)/S*_n.

Replacing r by 1/h* and taking into account (b) finishes the proof.

Estimates for curvature derivatives.
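As a numerical sanity check on the polar-body computation above, consider the curve case n = 1, where covariant derivatives on S¹ reduce to ordinary θ-derivatives and ḡ = 1. There, r*_11 = (1/r)″ + 1/r should agree with (√(r² + r′²)/r³) w_11, the square-root factor being forced by formula (d). A sketch using central finite differences (the radial function below is an arbitrary smooth positive choice):

```python
# Check (curve case n = 1) of the relation between the polar body's
# second fundamental form entry and that of the original curve:
#   r*_11 = (1/r)'' + 1/r   vs.   sqrt(r^2 + r'^2)/r^3 * w_11,
# with w_11 = (-r r'' + 2 r'^2 + r^2)/sqrt(r^2 + r'^2)  (formula (d), n = 1).
import math

def r(t):
    return 2.0 + 0.5 * math.sin(t)   # arbitrary smooth positive radial function

def d1(f, t, h=1e-5):
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

def residual(t):
    rv, rp, rpp = r(t), d1(r, t), d2(r, t)
    w11 = (-rv * rpp + 2 * rp**2 + rv**2) / math.sqrt(rv**2 + rp**2)
    r_star_11 = d2(lambda s: 1.0 / r(s), t) + 1.0 / r(t)
    return r_star_11 - math.sqrt(rv**2 + rp**2) / rv**3 * w11

# sample the residual around the circle; it should vanish up to
# finite-difference error
max_err = max(abs(residual(0.3 * k)) for k in range(21))
```

The residual vanishes to finite-difference precision, consistent with the displayed relation between [r*_ij] and [w_ij].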
For convenience we present some of the main ideas of how one can prove the alternative in Lemma 8 about balancing the curvature derivatives. This method was used in [34] for a similar stationary prescribed curvature equation. Recall that

A_i = ((2−ε)/(W² K)) w_{ii} (K_{;i})² − (w_{ii}/W²) Σ_{p,q} K^{pp,qq} w_{pp;i} w_{qq;i},
B_i = (2/W²) Σ_j w_{jj} K^{jj,ii} w²_{jj;i},
C_i = (2/W²) Σ_{j≠i} K^{jj} w²_{jj;i},
D_i = (1/W²) K^{ii} Σ_j w²_{jj;i},
E_i = (2/W⁴) K^{ii} ( Σ_j w_{jj} w_{jj;i} )².

Note that the term A_i looks slightly different from the term A_i in [34, p. 1309], where the K is not present in the denominator. We have to define A_i in the way we did because, due to the inverse nature of the curvature flow equation, we obtain an extra good derivative term. This allows us to choose the constant in A_i as 2 − ε, whereas a large constant was required in [34] (denoted by K there). Fortunately the proofs of [34, Lemma 4.2, Lemma 4.3] also work for sufficiently small ε. The remaining terms B_i, C_i, D_i, E_i are all identical to those in [34]. In the following, σ_k denotes the k-th elementary symmetric function of the principal curvatures. We begin by recalling the following special case (k = n) of inequality (2.4) from [34, Lemma 2.2], which can be deduced easily by differentiating G = (σ_n/σ_l)^{1/(n−l)} twice, using the concavity of G and applying the Schwarz inequality. For any δ > 0, 1 ≤ i ≤ n and 1 ≤ l < n we have

−K^{pp,qq} w_{pp;i} w_{qq;i} + ( 1 − 1/((n−l+1)(n−l)δ) ) (K_{;i})²/K ≥ ( 1 + (1−δ)/(n−l) ) K((σ_l)_{;i})²/σ²_l − (K/σ_l) σ^{pp,qq}_l w_{pp;i} w_{qq;i}.

In particular, by taking δ = 1/(2−ε), we have

(2−ε)(K_{;i})²/K − K^{pp,qq} w_{pp;i} w_{qq;i} ≥ ( 1 + (1−ε)/((n−1)(2−ε)) ) K((σ_l)_{;i})²/σ²_l − (K/σ_l) σ^{pp,qq}_l w_{pp;i} w_{qq;i},   (5.1)

provided (2−ε) > 1, i.e. 0 < ε < 1.

Lemma 12. For each i ≠ 1, if √3 κ_i ≤ κ_1, we have A_i + B_i + C_i + D_i − E_i ≥ 0.

Proof. Note that from (5.1) with l = 1, it follows that A_i ≥ 0 since σ^{pp,qq} In the following proof we will write σ_n = K for a better comparability with [34, Lemma 4.3].
Also denote by σ_k(κ|i) the k-th elementary symmetric polynomial in the variables κ_1, …, κ_{i−1}, κ_{i+1}, …, κ_n, and σ_k(κ|ij) accordingly. σ^{pp,qq}_1 = 0. The proof that B_i + C_i + D_i − E_i ≥ 0

Lemma 13. For λ = 1, …, n−1, suppose there exists some δ ≤ 1 such that κ_λ/κ_1 ≥ δ. There exists a sufficiently small positive constant δ′, depending on δ, ǫ and the bounds for K, such that if κ_{λ+1}/κ_1 ≤ δ′, we have A_i + B_i + C_i + D_i − E_i ≥ 0 for i = 1, …, λ.

Proof. This corresponds to [34, Lemma 4.3]. We highlight the main estimates in this proof. First of all, from [34, Equ. (4.16), (4.17)] one can extract the following estimate:

W⁴ (B_i + C_i + D_i − E_i) ≥ W² Σ_{j≠i} ( σ_{n−1}(κ|j) − 2σ_{n−1}(κ|ij) ) w²_{jj;i} − w²_{ii} σ^{ii}_n w²_{ii;i} = W² Σ_{j≠i} σ_{n−1}(κ|j) w²_{jj;i} − w²_{ii} σ^{ii}_n w²_{ii;i},   (5.2)

since σ_{n−1}(κ|ij) = 0. Now we show the right hand side of (5.2) is dominated by W⁴ A_i. From (5.1) we get, for all 1 ≤ λ < n and for all 1 ≤ i ≤ n:

A_i = ((2−ε) w_{ii}/(W² σ_n)) ((σ_n)_{;i})² − (w_{ii}/W²) Σ_{p,q} σ^{pp,qq}_n w_{pp;i} w_{qq;i}
≥ (w_{ii}/W²) ( 1 + (1−ε)/((n−1)(2−ε)) ) σ_n ((σ_λ)_{;i})²/σ²_λ − (w_{ii}/W²) (σ_n/σ_λ) Σ_{p,q} σ^{pp,qq}_λ w_{pp;i} w_{qq;i}
= (w_{ii} σ_n/(W² σ²_λ)) [ ( 1 + (1−ε)/((n−1)(2−ε)) ) Σ_a (σ^{aa}_λ w_{aa;i})² + ((1−ε)/((n−1)(2−ε))) Σ_{a≠b} σ^{aa}_λ σ^{bb}_λ w_{aa;i} w_{bb;i} + Σ_{a≠b} ( σ^{aa}_λ σ^{bb}_λ − σ_λ σ^{aa,bb}_λ ) w_{aa;i} w_{bb;i} ].

W⁴ A_i ≥ w²_{ii} σ^{ii}_n w²_{11;i} − C_ǫ w_{ii} Σ_{a≠1} w²_{aa;i}.

Combining this with (5.2) for i = 1 yields

W² (A_1 + B_1 + C_1 + D_1 − E_1) ≥ Σ_{j≠1} σ_{n−1}(κ|j) w²_{jj;1} − C_ǫ w_{11} Σ_{j≠1} w²_{jj;1} = Σ_{j≠1} ( σ_n/w_{jj} − C_ǫ w_{11} ) w²_{jj;1} ≥ Σ_{j≠1} ( σ_n/(δ′ w_{11}) − C_ǫ w_{11} ) w²_{jj;1},   (5.5)

W² (A_i + B_i + C_i + D_i − E_i) ≥ Σ_{j≠i} σ_{n−1}(κ|j) w²_{jj;i} − (C_ǫ w_{ii}/δ²) Σ_{j>λ} w²_{jj;i} ≥ Σ_{j>λ} ( σ_{n−1}(κ|j) − C_ǫ w_{ii}/δ² ) w²_{jj;i} ≥ Σ_{j>λ} ( σ_n/(w_{11} δ′) − C_ǫ w_{ii}/δ² ) w²_{jj;i},   (5.6)

which is non-negative for small δ′ for the same reason as in (5.5). This completes the proof.

Corollary 14. There exist positive numbers δ_2, …
, δ_n, depending only on the dimension, on ǫ and on the bounds for the Gauss curvature, such that either

(5.7) κ_i > δ_i κ_1 for all 2 ≤ i ≤ n, or
(5.8) A_i + B_i + C_i + D_i − E_i ≥ 0 for all 1 ≤ i ≤ n.

Proof. Choosing λ = 1 and δ = 1 in Lemma 13 yields the existence of δ′ with the following property: if κ_2/κ_1 ≤ δ′, then A_1 + B_1 + C_1 + D_1 − E_1 ≥ 0. Note that κ_i ≤ κ_2 for i ≥ 2. Choose δ_2 = min{δ′, 1/√3}. Therefore, in view of Lemma 12, κ_2/κ_1 ≤ δ_2 implies that A_i + B_i + C_i + D_i − E_i ≥ 0 for all i ≥ 2. We now apply induction, assuming we have constructed δ_2, …, δ_j. We may assume κ_i > δ_i κ_1 for 2 ≤ i ≤ j; otherwise A_i + B_i + C_i + D_i − E_i ≥ 0 is already true for 2 ≤ i ≤ n. Choose δ = δ_j and λ = j in Lemma 13 to get a δ′ so that if κ_{j+1} ≤ δ′ κ_1, then A_i + B_i + C_i + D_i − E_i ≥ 0 holds for 1 ≤ i ≤ j. Now, in view of Lemma 12, taking δ_{j+1} = min{δ′, 1/√3} gives A_i + B_i + C_i + D_i − E_i ≥ 0 for j ≤ i ≤ n.

Lower and upper bounds on Gauss curvature. The proofs of the following two lemmas are similar to the proofs of [38, Lemmas 4.1, 4.2], with some a and b depending only on p, c_1, c_2, ϕ. The lower bound for K on [0, δ] for a small enough δ > 0 follows from the short-time existence of the flow. The lower bound for K on [δ, t_1] follows from the inequality K (3.16). Taking α large enough yields a contradiction.

Proposition 9. The solution to (1.1) satisfies lim_{t→T} max h_{K_t} = ∞.

For sufficiently small δ′ and λ = 1, the simple estimates [34, Equ. (4.19)] can literally be taken from [34, Lemma 4.2], starting with [34, Equ. (4.10)].

(5.5) is non-negative for δ′ sufficiently small. Hence the lemma is true in the case λ = 1. For λ > 1, the series of elementary estimates [34, Equ. (4.22)-(4.27)] gives, after having adapted ǫ if necessary and having chosen δ′ sufficiently small again,
W⁴ A_i ≥ w²_{ii} σ^{ii}_n Σ_{a≤λ} w²_{aa;i} − (w_{ii} C_ǫ/δ²) Σ_{a>λ} w²_{aa;i};

combining this last inequality with (5.2) for 1 ≤ i ≤ λ yields (5.6).

References

[1] J. Ai, K.S. Chou, J. Wei, "Self-similar solutions for the anisotropic affine curve shortening problem." Calc. Var. Partial Differential Equations 13(2001): 311-337.
[2] A.D. Aleksandrov, "On the theory of mixed volumes. III. Extensions of two theorems of Minkowski on convex polyhedra to arbitrary convex bodies." Mat. Sb. (N.S.) 3(1939): 167-174.
[3] A.D. Aleksandrov, "Smoothness of the convex surface of bounded Gaussian curvature." C.R. (Dokl.) Acad. Sci. USSR 36(1942): 195-199.
[4] A.D. Aleksandrov, "On the surface area measure of convex bodies." Mat. Sb. (N.S.) 6(1983): 27-46.
[5] B. Andrews, P. Guan, L. Ni, "Flow by powers of the Gauss curvature." Adv. in Math. 299(2016): 174-201.
[6] B. Andrews, "Singularities in crystalline curvature flows." Asian J. Math. 6(2002): 101-121.
[7] B. Andrews, "Contraction of convex hypersurfaces in Euclidean space." Calc. Var. Partial Differential Equations 2(1994): 151-171.
[8] B. Andrews, "Monotone quantities and unique limits for evolving convex hypersurfaces." Int. Math. Res. Not. 1997(1997): 1001-1031.
[9] B. Andrews, "Evolving convex curves." Calc. Var. Partial Differential Equations 7(1998): 315-371.
[10] B. Andrews, "Gauss curvature flow: The fate of the rolling stones." Invent. Math. 138(1999): 151-161.
[11] B. Andrews, "Motion of hypersurfaces by Gauss curvature." Pacific J. Math. 195(2000): 1-34.
[12] B. Andrews, "Classification of limiting shapes for isotropic curve flows." J. Amer. Math. Soc. 16(2003): 443-459.
[13] B. Andrews, X. Chen, "Surfaces moving by powers of Gauss curvature." Pure and Appl. Math. Quarterly 8(2012): 825-834.
[14] K. Böröczky, E. Lutwak, D. Yang, G. Zhang, "The logarithmic Minkowski problem." J. Amer. Math. Soc. 26(2013): 831-852.
[15] S. Brendle, K. Choi, P. Daskalopoulos, "Asymptotic behavior of flows by powers of the Gaussian curvature." Acta Math. 219(2017): 1-16.
[16] E. Calabi, "Improper affine hyperspheres of convex type and a generalization of a theorem by K. Jorgens." Michigan Math. J. 5(1958): 105-126.
[17] L. Caffarelli, L. Nirenberg, J. Spruck, "The Dirichlet problem for nonlinear second order elliptic equations I. Monge-Ampère equations." Comm. Pure Appl. Math. 37(1984): 369-402.
[18] W. Chen, "Lp Minkowski problem with not necessarily positive data." Adv. in Math. 201(2006): 77-89.
[19] S.Y. Cheng, S.T. Yau, "On the regularity of the solution of the n-dimensional Minkowski problem." Comm. Pure Appl. Math. 29(1976): 495-516.
[20] K.S. Chou, X.J. Wang, "A logarithmic Gauss curvature flow and the Minkowski problem." Ann. Inst. H. Poincaré Anal. Non Linéaire 17(2000): 733-751.
[21] K.S. Chou, X.J. Wang, "The Lp-Minkowski problem and the Minkowski problem in centroaffine geometry." Adv. in Math. 205(2006): 33-83.
[22] B. Chow, "Deforming convex hypersurfaces by the nth root of the Gaussian curvature." J. Differential Geom. 22(1985): 117-138.
[23] B. Chow, R. Gulliver, "Aleksandrov reflection and nonlinear evolution equations, I: The n-sphere and n-ball." Calc. Var. Partial Differential Equations 4(1996): 249-264.
[24] B. Chow, D.H. Tsai, "Geometric expansion of convex plane curves." J. Differential Geom. 44(1996): 312-330.
[25] B. Chow, D.H. Tsai, "Expansion of convex hypersurface by non-homogeneous functions of curvature." Asian J. Math. 1(1997): 769-784.
[26] J. Dou, M. Zhu, "The two dimensional Lp Minkowski problem and nonlinear equations with negative exponents." Adv. in Math. 230(2012): 1209-1221.
[27] W. Fenchel, B. Jessen, "Mengenfunktionen und konvexe Körper." Danske Vid. Selskab. Mat.-fys. Medd. 16(1938): 1-31.
[28] M. Gage, Y. Li, "Evolving plane curves by curvature in relative geometries I." Duke Math. J. 72(1993): 441-466.
[29] M. Gage, Y. Li, "Evolving plane curves by curvature in relative geometries II." Duke Math. J. 75(1994): 79-98.
[30] C. Gerhardt, "Curvature problems." Series in Geometry and Topology, vol. 39, International Press, Sommerville (2006).
[31] C. Gerhardt, "Non-scale-invariant inverse curvature flows in Euclidean space." Calc. Var. Partial Differential Equations 49(2014): 471-489.
[32] P. Guan, L. Ni, "Entropy and a convergence theorem for Gauss curvature flow in high dimension." J. Eur. Math. Soc. 19(2017): 3735-3761.
[33] P. Guan, C.S. Lin, "On equation det(u_ij + δ_ij u) = u^p f." Preprint No. 2000-7, NCTS in Tsing-Hua University, 2000.
[34] P. Guan, C. Ren, Z. Wang, "Global C² estimates for convex solutions of curvature equations." Comm. Pure Appl. Math. LXVIII(2015): 1287-1325.
[35] Y. Huang, Q.P. Lu, "On the regularity of the Lp Minkowski problem." Adv. in Appl. Math. 50(2013): 268-280.
[36] D. Hug, "Curvature relations and affine surface area for a general convex body and its polar." Results Math. 29(1996): 233-248.
[37] M.N. Ivaki, "Deforming a hypersurface by Gauss curvature and support function." J. Funct. Anal. 271(2016): 2133-2165.
[38] M.N. Ivaki, "An application of dual convex bodies to the inverse Gauss curvature flow." Proc. Amer. Math. Soc. 143(2015): 1257-1271.
[39] M.N. Ivaki, A. Stancu, "Volume preserving centro-affine normal flows." Comm. Anal. Geom. 21(2013): 671-685.
[40] M. Jiang, L. Wang, J. Wei, "2π-periodic self-similar solutions for the anisotropic affine curve shortening problem." Calc. Var. Partial Differential Equations 41(2011): 535-565.
[41] M.Y. Jiang, "Remarks on the 2-dimensional Lp-Minkowski problem." Adv. Nonlinear Stud. 10(2010): 297-313.
[42] N.V. Krylov, M.V. Safonov, "Certain properties of parabolic equations with measurable coefficients." Izv. Akad. Nauk USSR Ser. Mat. 40(1981): 161-175; English transl., Math. USSR Izv. 16(1981): 151-164.
[43] H. Lewy, "On the existence of a closed convex surface realising a given Riemannian metric." Proc. Nat. Acad. Sci. USA 24(1938): 104-106.
[44] H. Lewy, "On differential geometry in the large, I (Minkowski's problem)." Trans. Amer. Math. Soc. 43(1938): 258-270.
[45] E. Lutwak, "The Brunn-Minkowski-Firey theory I: mixed volumes and the Minkowski problem." J. Differential Geom. 38(1993): 131-150.
[46] E. Lutwak, "The Brunn-Minkowski-Firey theory II: affine and geominimal surface areas." Adv. in Math. 118(1996): 244-294.
[47] E. Lutwak, V. Oliker, "On the regularity of solutions to a generalization of the Minkowski problem." J. Differential Geom. 41(1995): 227-246.
[48] E. Lutwak, D. Yang, G. Zhang, "On the Lp-Minkowski problem." Trans. Amer. Math. Soc. 356(2004): 4359-4370.
[49] J. Lu, X.J. Wang, "Rotationally symmetric solutions to the Lp-Minkowski problem." J. Differential Equations 254(2013): 983-1005.
[50] J. McCoy, "The surface area preserving mean curvature flow." Asian J. Math. 7(2003): 7-30.
[51] H. Minkowski, "Allgemeine Lehrsätze über die konvexen Polyeder." Nachr. Ges. Wiss. Göttingen (1897): 198-219.
[52] H. Minkowski, "Volumen und Oberfläche." Math. Ann. 57(1903): 447-495.
[53] L. Nirenberg, "The Weyl and Minkowski problems in differential geometry in the large." Comm. Pure Appl. Math. 6(1953): 337-394.
[54] A.V. Pogorelov, "Regularity of a convex surface with given Gaussian curvature." Mat. Sb. 31(1952): 88-103 (Russian).
[55] A.V. Pogorelov, "A regular solution of the n-dimensional Minkowski problem." Dokl. Akad. Nauk. SSSR 199(1971): 785-788; English transl., Soviet Math. Dokl. 12(1971): 1192-1196.
[56] R. Schneider, "Convex bodies: the Brunn-Minkowski theory." Vol. 151, Cambridge University Press, 2014.
[57] A. Stancu, "Uniqueness of self-similar solutions for a crystalline flow." Indiana Univ. Math. J. 45(1996): 1157-1174.
[58] A. Stancu, "On the number of solutions to the discrete two-dimensional L0-Minkowski problem." Adv. in Math. 180(2003): 290-323.
[59] A. Stancu, "The discrete planar L0-Minkowski problem." Adv. in Math. 167(2002): 160-174.
[60] A. Stancu, "Centro-affine invariants for smooth convex bodies." Int. Math. Res. Not. 2012(2012): 2289-2320.
[61] K. Tso, "Deforming a hypersurface by its Gauss-Kronecker curvature." Comm. Pure Appl. Math. 38(1985): 867-882.
[62] O. Schnürer, "Surfaces expanding by the inverse Gauss curvature flow." J. Reine Angew. Math. 600(2006): 117-134.
[63] D.H. Tsai, "Behavior of the gradient for solutions of parabolic equations on the circle." Calc. Var. Partial Differential Equations 23(2005): 251-270.
[64] V. Umanskiy, "On the solvability of the two-dimensional Lp-Minkowski problem." Adv. in Math. 225(2010): 3214-3228.
[65] J. Urbas, "Complete noncompact self-similar solutions of Gauss curvature flows I. Positive powers." Math. Ann. 311(1998): 251-274.
[66] J. Urbas, "Complete noncompact self-similar solutions of Gauss curvature flows II. Negative powers." Adv. Differential Equations 4(1999): 323-346.
[67] G. Zhu, "The centro-affine Minkowski problem for polytopes." J. Differential Geom. 101(2015): 159-174.
[68] G. Zhu, "The Lp Minkowski problem for polytopes for 0 < p < 1." J. Funct. Anal. 269(2015): 1070-1094.
[69] G. Zhu, "The logarithmic Minkowski problem for polytopes." Adv. in Math. 262(2014): 909-931.
WWW DATABASE OF MODELS OF ACCRETION DISKS IRRADIATED BY THE CENTRAL STAR

P. D'Alessio, B. Merín, N. Calvet, L. Hartmann, B. Montesinos

Revista Mexicana de Astronomía y Astrofísica, 2004 (arXiv: astro-ph/0412331)

Key words: ASTRONOMICAL DATABASE: DISK MODELS - STARS: PRE-MAIN SEQUENCE - STARS: ACCRETION DISKS

ABSTRACT

We announce the release of a catalog of physical models of irradiated accretion disks around young stars, based on the modelling techniques by D'Alessio et al. The WWW catalog includes ∼ 3000 disk models for different central stars, disk sizes, inclinations, dust contents and mass accretion rates. For any of them, radial profiles of disk physical parameters and synthetic spectral energy distributions can be browsed and downloaded to compare with observations. It can be accessed at http://www-cfa.harvard.edu/youngstars/dalessio/ (US).

INTRODUCTION

The old idea that stars are born surrounded by disks, which may form planetary systems, has found strong observational support in the last couple of decades. The properties of these disks are quantified from the comparison between different observations and models.
For instance, disk mass accretion rates have been inferred from the analysis of the short wavelength excess, modeled as produced by accretion shocks at the stellar surface (Hartigan, Edwards & Ghandour 1995; Gullbring et al. 1998; Hartmann et al. 1998). Accretion rates and inner disk radii have been estimated from models of different line emission profiles thought to form in the magnetospheric flows connecting the disks to their central stars (Muzerolle et al. 1998a, b and c; Muzerolle et al. 2001). Kinetic information on disks, central star masses, details of the vertical temperature distribution and molecular abundances are quantified from detailed analysis of different molecular lines (e.g., Dutrey et al. 1996; Dutrey, Guilloteau & Guelin 1997; Guilloteau & Dutrey 1998; Simon, Dutrey & Guilloteau 2000; Najita et al. 2000; Dartois, Dutrey & Guilloteau 2003; Aikawa et al. 2002, 2003; Qi et al. 2003; Carr, Tokunaga & Najita 2004). Other disk properties, such as masses, degree of flaring of the disk surface and dust properties, are inferred from the spectral energy distributions (SEDs) of young stars, from near IR to radio frequencies, using different kinds of disk models (e.g., Kenyon & Hartmann 1987; Beckwith et al. 1990; Calvet et al. 1991, 1992; Malbet & Bertout 1991; Beckwith & Sargent 1991; Chiang & Goldreich 1997; D'Alessio et al. 1998, 1999, 2001; Dullemond, Dominik & Natta 2001; Dullemond, van Zadelhoff & Natta 2002; Dullemond 2002; Malbet, Lachaume & Monin 2001; Lachaume, Malbet & Monin 2003). These models go from simple power-law descriptions of the disk mass surface density, temperature and opacity, to detailed models where different heating mechanisms are included, and where the radiative transfer and dust opacity are calculated with different degrees of sophistication. Disks have been also imaged at near IR (e.g., Stapelfeldt et al. 1998; Koresko 1998; Malbet et al. 1998; Weinberger et al. 1999; Padgett et al.
1999;Tuthill et al. 2002;Colavita et al. 2003), mid IR (e.g., McCabe, Duchêne, & Ghez 2003) and millimeter wavelengths (Dutrey et al. 1996;Wilner, Ho & Rodríguez 1996;Rodríguez et al. 1998;Mannings & Sargent 2000). Analysis of these observations with different models has given information about the radial and vertical structures of the disks. In general, models with some degree of complexity are required to fit multi-wavelength observations of a given object (e.g., Lay, Carlstrom & Hills 1997;Wilner & Lay 2000). In the present paper we describe a database with a series of models of disks around pre-main sequence stars. These models, which are described in detail in Merín (2004), are self-consistently calculated given the properties of the central star, the disk and its dust, using the methods developed by D' Alessio et al. (1998Alessio et al. ( , 1999Alessio et al. ( and 2001. We make these models available with the hope that they can be useful for fitting observations of young stars, helping to extract more information than simpler and usually faster approaches. In a forthcoming paper (Merín et al. 2005), relationships between disk, star and dust parameters and observable quantities such as spectral indices, colors, etc., will be presented and discussed in detail, to enable a faster search for the best model fit to the observations of a given disk or a survey of disks. In addition, comparison of models in this database with other modeling efforts will be helpful to test the underlying assumptions in each set of models. Finally, we hope that this database will help increase our knowledge of the intrinsic properties of the disks around young stars. DISK MODELS The disk models from D' Alessio et al. (1998Alessio et al. ( , 1999Alessio et al. ( and 2001 have the following characteristics: • Energy is transported by radiation, convection and a turbulent flux, the first one being the most important mechanism given the disk parameters we have explored in this work. 
• The disk mass surface density distribution is a consequence of conservation of the angular momentum flux, given the functional form of the viscosity coefficient and the disk mass accretion rate.
• The gas and dust are in thermal balance, and are heated by viscous dissipation, stellar irradiation, and ionization by energetic particles (from cosmic rays and radioactive decay). For the set of parameters of the models in the database, viscous dissipation is important in regions close to the star and at the disk midplane, while stellar irradiation is important for most of the disk.
• The fraction of the stellar radiative flux intercepted by the disk is evaluated at the surface where the mean radial optical depth to the stellar radiation is unity. This surface is self-consistently calculated given the disk structure and the properties of its dust, and the intercepted flux is used as a boundary condition in the integration of the radiative transfer equation.

The models are based on the following simplifying assumptions:

• The disk is in steady state, with a mass accretion rate Mdot = dM/dt.
• The disk is geometrically thin: H/R << 1, where H is the gas scale height of the disk and R is the radial distance. With this assumption, the vertical and radial structures can be calculated separately.
• The viscosity coefficient is given by ν = α H c_s, following the α-prescription of Shakura & Sunyaev (1973), where c_s is the local sound speed and α is the viscosity parameter, assumed to be constant throughout the disk. The turbulent flux of energy is calculated consistently with the α-prescription, assuming a Prandtl number Pr = 1.
• Dust and gas are well mixed in the entire disk, i.e., the dust-to-gas mass ratio of each dust ingredient and the grain size distribution are both taken to be constant in those regions where the temperature is lower than the sublimation temperature.
• Dust and gas are in thermal equilibrium and a unique temperature is calculated for both components.
• The radiation field is considered in two separate regimes: stellar radiation (UV, optical and near IR) and disk radiation (IR to radio wavelengths), and we use mean opacities calculated with appropriate weighting functions to describe the interaction of both fields with the disk material.
• The radiative transfer in each regime is done by solving the first two moments of the radiative transfer equation.
• The inner disk is truncated at the dust destruction radius, and the inner cylindrical surface ("the wall") emits as a blackbody of 1400 K (Muzerolle et al. 2003). This is a new feature added to the models, which was not included in the models described in the previous papers (D'Alessio et al. 1998, 1999, 2001).

We follow the prescriptions in D'Alessio et al. (2003) to calculate the emission from the wall, taking into account the contributions of the stellar and accretion luminosities to the position of the dust destruction radius, and the dependence of the emitting area on inclination. More specifically, we compute the inner disk radii with eq. (3) from Muzerolle et al. (2003), where we take qχ_d/κ_d = 2.5, which gives the best fits to the excess near-IR continuum in Classical T Tauri disks. Then, we assume a fixed ratio between the height of the wall and the gas scale height at the dust destruction radius, z_wall = 4H. However, we list the disk and wall contributions separately, allowing the user to adopt different heights or radii for the wall.

Comparison with observations

The synthetic SED of a model calculated for typical stellar parameters, mass accretion rate, disk radius, inclination angle and a dust grain size distribution with millimeter-sized grains fits the median observed SED of the Classical T Tauri Stars in the Taurus Molecular Cloud (see D'Alessio et al. 2001). Also, Merín et al. (2004a) used the models to fit the detailed SEDs of two HAeBe stars (namely HD 34282 and HD 141569) and showed that not only the age is responsible for the disk evolution, but that the metallicity could also play an important role. Allen et al. (2004) have constructed color-color diagrams for different clusters using observations from IRAC-Spitzer. They find three distinct regions in the diagrams: one corresponding to stars without IR excess, another for Classical T Tauri Stars, and another for embedded objects. The synthetic IRAC colors of a subset of models from this library (T_* = 4000 K and age = 1 Myr) agree very well with those of observed Classical T Tauri Stars (see also Sicilia-Aguilar et al. 2004; Hartmann et al. 2004).

It is important to mention that it is very difficult, if not impossible, to constrain the model parameters by considering only the SED of a given object. The main problem is that the SED is given by the monochromatic flux emerging from the whole disk, i.e., it is a spatially integrated quantity. Thus, detailed information on the spatial distribution of the emergent intensity, which is more directly related to the disk structure, is hidden in the flux. Fortunately, different parts of the SED are sensitive to different combinations of disk parameters, but more than one of these combinations can produce the same emergent flux from the disk in a particular wavelength range. For instance, the near IR SED is dominated by the emission of the wall at the dust sublimation radius. For a given sublimation temperature, different silicate compositions, grain sizes, and total (stellar plus accretion shock) luminosities result in different values of the sublimation radius, but different inclination angles and wall heights can be assumed to obtain a similar wall SED. In principle, a SED which spans a wide range of wavelengths (from UV to radio frequencies) might help to disentangle some of the disk properties when a self-consistent model is used.
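The luminosity dependence of the sublimation (wall) radius mentioned above can be sketched numerically. The following is a minimal illustration, not the database code: it uses the generic radiative-equilibrium scaling R ∝ [(L_* + L_acc)(1 + qχ_d/κ_d)/(16πσT_sub⁴)]^(1/2) with the backwarming factor qχ_d/κ_d = 2.5 quoted in the text; the exact normalization is an assumption of this sketch rather than a reproduction of eq. (3) of Muzerolle et al. (2003).

```python
import math

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.828e26      # solar luminosity [W]
AU = 1.496e11         # astronomical unit [m]

def wall_radius_au(l_star, l_acc, t_sub=1400.0, backwarming=2.5):
    """Dust-destruction (wall) radius in AU for given stellar and
    accretion luminosities (in L_sun) and sublimation temperature.

    The prefactor follows a generic radiative-equilibrium scaling and is
    an assumption of this sketch, not taken from the paper.
    """
    l_tot = (l_star + l_acc) * L_SUN
    r = math.sqrt(l_tot * (1.0 + backwarming)
                  / (16.0 * math.pi * SIGMA_SB * t_sub**4))
    return r / AU

# The wall radius grows with accretion luminosity, as in Table 1
# (K7 star, L_* = 1.60 L_sun, four accretion luminosities):
radii = [wall_radius_au(1.60, lacc) for lacc in (0.009, 0.087, 0.869, 8.694)]
```

For the lowest accretion rate this gives roughly 0.1 AU, of the same order as the 0.11 AU listed in Table 1 for this star.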
For instance, the continuum at mid-IR emerges from optically thick zones of the disk. In this spectral range, the emergent flux depends on the disk mass accretion rate (if the accretion luminosity is similar to or larger than the stellar luminosity), the disk inclination angle, and the grain composition and size distribution (which determines the height of the irradiation surface). The 10 and 18 µm silicate bands reflect the silicate composition and grain sizes in the atmosphere; thus a self-consistent model should have the same type of grains producing the observed features and absorbing the stellar radiation which determines the disk vertical temperature distribution and its mid-IR emergent continuum. However, not all the degeneracy can be removed by modeling the SED alone, especially if the dust at the midplane in the outer disk has a different size distribution than the dust in the atmosphere of the inner disk, which is expected if dust is settling and growing in disks. All possible degeneracies can be removed only when the SED and images of a self-consistent model are compared to multi-frequency high angular resolution observations. In this first release of the catalog we are making available only SEDs, but intensity distributions at different wavelengths can be calculated upon request for a given set of model parameters.

WEB-BASED MODEL LIBRARY

We have constructed a grid of models for different central stars and a range of values for the physical parameters of the disk and its dust. For the central stars, we chose 17 stars with spectral types from K7 to B9 (T_eff = 4000-10000 K, respectively) and ages of 1 and 10 Myr. Stellar parameters were taken from the pre-main sequence tracks of Siess et al. (2000). For the disk, we considered four values of the accretion rate, namely Mdot = 10^-6, 10^-7, 10^-8, 10^-9 M_sun/yr, a typical viscosity parameter α = 0.01, three values of the disk radius R_disk = 100, 300, 800 AU, and two inclination angles i = 30, 60 degrees.
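Counting the parameter combinations quoted here, together with the dust parameters enumerated in the following paragraph, gives the approximate catalog size. This bookkeeping sketch is illustrative only and is not the model-generation code:

```python
from itertools import product

# Parameter values quoted in the text (dust values from the dust
# description: two size-distribution slopes and six maximum grain sizes).
mdot = [1e-6, 1e-7, 1e-8, 1e-9]          # M_sun/yr
r_disk = [100, 300, 800]                 # AU
incl = [30, 60]                          # degrees
p_slope = [3.5, 2.5]                     # n(a) ~ a^-p
a_max = ['1um', '10um', '100um', '1mm', '1cm', '10cm']

per_star = len(list(product(mdot, r_disk, incl, p_slope, a_max)))
# 4 * 3 * 2 * 2 * 6 = 288 combinations per central star; for 17 stars the
# full grid would hold 4896 entries, of which ~3000 are in the catalog.
```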
For the dust we use the abundances proposed by Pollack et al. (1994), a grain size distribution given by n(a) ∝ a^-p with p = 3.5, 2.5, a minimum size typical of interstellar grains (a_min = 0.005 µm), and six different maximum grain sizes, a_max = 1, 10, 100 µm, 1 mm, 1 cm, 10 cm.

Columns (1) to (6) of Table 1 show the stellar effective temperatures, spectral types, ages, radii, masses and luminosities of the central stars in the grid of disk models; column (7) lists the mass accretion rates of the disks, column (8) the accretion luminosity, and columns (9) and (10) the wall radius, in astronomical units and in stellar radii. Notice that, for a given central star, the wall radius increases with mass accretion rate, reflecting the fact that we include the irradiation from the accretion shocks at the stellar surface as an additional heating source of the dust (see Muzerolle et al. 2003).

The disk models were computed in parallel on Linux PCs at the Harvard-Smithsonian CfA (MA, US), LAEFF (Madrid, Spain) and the Linux cluster nostromo at CRyA (Morelia, México), and used a total CPU time of approximately 8000 hours.

CONCLUDING REMARKS

The WWW catalog currently contains more than 3000 model SEDs and disk structures. We hope this library of models will be a valuable tool for analyzing the SEDs of young stars surrounded by accretion disks. It is a dynamical website; we plan to continue filling the catalog with emission maps and new disk models with different degrees of dust settling toward the midplane.

We would like to acknowledge the corrections and suggestions made by an anonymous referee, which have helped to improve the present manuscript. This work was supported by NASA through grant AR-09524.01-A from the Space Telescope Science Institute, and by NASA Origins of Solar Systems grant NAG5-9670. The numerical calculations were performed on the Linux cluster at CRyA-UNAM, acquired through CONACYT grant 36571-E to Enrique Vázquez-Semadeni.
PD acknowledges grants from CONACyT and PAPIIT, DGAPA, UNAM, México. B. Merín wishes to acknowledge the INTA for its financial support with a graduate fellowship.

1 Centro de Radioastronomía y Astrofísica, UNAM, Morelia, México.
2 Laboratorio de Astrofísica Espacial y Física Fundamental, INTA, Madrid, Spain.
3 Harvard-Smithsonian Center for Astrophysics, Cambridge, USA.
4 Instituto de Astrofísica de Andalucía, Granada, Spain.

TABLE 1. PARAMETERS FOR THE CENTRAL STARS AND DISKS IN THE LIBRARY OF DISK MODELS.

T_eff  SpType  Age    L_*      R_*      M_*      Mdot        L_acc    R_wall  R_wall
(K)            (Myr)  (L_sun)  (R_sun)  (M_sun)  (M_sun/yr)  (L_sun)  (AU)    (R_*)
------------------------------------------------------------------------------------
4000   K7      1      1.60     2.64     0.70     10^-9       0.009    0.11     8.65
                                                 10^-8       0.087    0.11     8.88
                                                 10^-7       0.869    0.13    10.68
                                                 10^-6       8.694    0.27    21.58
4000   K7      10     0.31     1.16     0.80     10^-9       0.022    0.05     8.97
                                                 10^-8       0.217    0.06    11.31
                                                 10^-7       2.167    0.13    24.52
                                                 10^-6       21.67    0.39    73.05
4500   K4      1      3.76     3.20     1.40     10^-9       0.014    0.16    10.92
                                                 10^-8       0.137    0.17    11.15
                                                 10^-7       1.375    0.19    12.80
                                                 10^-6       13.75    0.35    23.63
4500   K4      3      1.60     2.08     1.34     10^-9       0.020    0.11    11.06
                                                 10^-8       0.202    0.11    11.67
                                                 10^-7       2.024    0.16    16.54
                                                 10^-6       20.24    0.39    40.61
4500   K4      10     0.60     1.28     1.10     10^-9       0.027    0.07    11.18
                                                 10^-8       0.270    0.08    13.17
                                                 10^-7       2.700    0.15    25.65
                                                 10^-6       27.00    0.44    74.19
5000   K1      1      14.86    5.14     3.00     10^-9       0.018    0.32    13.56
                                                 10^-8       0.184    0.33    13.64
                                                 10^-7       1.834    0.34    14.37
                                                 10^-6       18.34    0.48    20.26
5000   K1      10     1.72     1.75     1.40     10^-9       0.025    0.11    13.64
                                                 10^-8       0.251    0.12    14.50
                                                 10^-7       2.514    0.17    21.25
                                                 10^-6       25.14    0.43    53.52
6000   G0      1      59.10    7.13     3.50     10^-9       0.015    0.65    19.49
                                                 10^-8       0.154    0.65    19.51
                                                 10^-7       1.542    0.65    19.74
                                                 10^-6       15.42    0.73    21.88
6000   G0      10     5.91     2.25     1.60     10^-9       0.020    0.20    19.56
                                                 10^-8       0.223    0.21    19.89
                                                 10^-7       2.234    0.24    22.92
                                                 10^-6       22.34    0.45    42.70
7000   F1      1      130.39   7.78     4.00     10^-9       0.016    0.96    26.53
                                                 10^-8       0.161    0.96    26.54
                                                 10^-7       1.615    0.97    26.69
                                                 10^-6       16.15    1.02    28.12
7000   F1      10     11.02    2.20     1.70     10^-9       0.024    0.28    27.30
                                                 10^-8       0.243    0.28    27.57
                                                 10^-7       2.428    0.31    30.13
                                                 10^-6       24.28    0.50    48.81
8000   A6      1      165.08   6.70     4.0      10^-9       0.019    1.08    34.71
                                                 10^-8       0.188    1.08    34.71
                                                 10^-7       1.879    1.09    34.90
                                                 10^-6       18.79    1.14    36.63
8000   A6      10     12.62    1.85     1.9      10^-9       0.032    0.30    34.75
                                                 10^-8       0.322    0.30    35.15
                                                 10^-7       3.227    0.33    38.89
                                                 10^-6       32.27    0.56    65.46
9000   A2      1      215.39   6.05     4.0      10^-9       0.021    1.23    43.84
                                                 10^-8       0.207    1.23    43.86
                                                 10^-7       2.077    1.24    44.05
                                                 10^-6       20.77    1.29    45.91
9000   A2      3      71.00    3.47     2.7      10^-9       0.024    0.71    43.90
                                                 10^-8       0.244    0.71    43.94
                                                 10^-7       2.445    0.72    44.64
                                                 10^-6       24.45    0.82    50.89
9000   A2      10     17.08    1.70     2.0      10^-9       0.037    0.34    43.99
                                                 10^-8       0.370    0.35    44.41
                                                 10^-7       3.697    0.38    48.46
                                                 10^-6       26.97    0.62    78.16
10000  B9.5    1      251.94   5.30     4.0      10^-9       0.024    1.33    54.13
                                                 10^-8       0.237    1.33    54.15
                                                 10^-7       2.371    1.34    54.38
                                                 10^-6       23.71    1.40    56.62
10000  B9.5    10     29.20    1.80     2.3      10^-9       0.040    0.45    54.30
                                                 10^-8       0.402    0.46    54.63
                                                 10^-7       4.015    0.48    57.87
                                                 10^-6       40.15    0.70    83.62

REFERENCES

Allen, L. E., Calvet, N., D'Alessio, P., Merín, B., Hartmann, L., et al. 2004, ApJS, 154, 363
Aikawa, Y., van Zadelhoff, G. J., van Dishoeck, E. F., & Herbst, E. 2002, A&A, 386, 622
Aikawa, Y., Momose, M., Thi, W., van Zadelhoff, G., Qi, C., Blake, G. A., & van Dishoeck, E. F. 2003, PASJ, 55, 11
Beckwith, S. V. W., Sargent, A. I., Chini, R. S., & Guesten, R. 1990, AJ, 99, 924
Beckwith, S. V. W., & Sargent, A. I. 1991, ApJ, 381, 250
Calvet, N., Patino, A., Magris, G. C., & D'Alessio, P. 1991, ApJ, 380, 617
Calvet, N., Magris, G. C., Patino, A., & D'Alessio, P. 1992, Revista Mexicana de Astronomía y Astrofísica, 24, 27
Carr, J. S., Tokunaga, A. T., & Najita, J. 2004, ApJ, 603, 213
Calvet, N., & Gullbring, E. 1998, ApJ, 509, 802
Chiang, E. I., & Goldreich, P. 1997, ApJ, 490, 368
Chiang, E. I., & Goldreich, P. 1999, ApJ, 519, 279
Colavita, M., et al. 2003, ApJ, 592, L83
D'Alessio, P., Cantó, J., Calvet, N., & Lizano, S. 1998, ApJ, 500, 411
D'Alessio, P., Calvet, N., Hartmann, L., Lizano, S., & Cantó, J. 1999, ApJ, 527, 893
D'Alessio, P., Calvet, N., & Hartmann, L. 2001, ApJ, 553, 321
D'Alessio, P. 2003, IAU Symposium, 221
Dartois, E., Dutrey, A., & Guilloteau, S. 2003, A&A, 399, 773
Dullemond, C. P., Dominik, C., & Natta, A. 2001, ApJ, 560, 957
Dullemond, C. P., van Zadelhoff, G. J., & Natta, A. 2002, A&A, 389, 464
Dullemond, C. P. 2002, A&A, 395, 853
Dutrey, A., Guilloteau, S., Duvert, G., Prato, L., Simon, M., Schuster, K., & Menard, F. 1996, A&A, 309, 493
Dutrey, A., Guilloteau, S., & Guelin, M. 1997, A&A, 317, L55
Guilloteau, S., & Dutrey, A. 1998, A&A, 339, 467
Gullbring, E., Hartmann, L., Briceño, C., & Calvet, N. 1998, ApJ, 492, 323
Hartmann, L., Calvet, N., Gullbring, E., & D'Alessio, P. 1998, ApJ, 495, 385
Hartmann, L., et al. 2004, ApJ, in preparation
Hartigan, P., Edwards, S., & Ghandour, L. 1995, ApJ, 452, 736
Kenyon, S. J., & Hartmann, L. 1987, ApJ, 323, 714
Koresko, C. D. 1998, ApJ, 507, L145
Lachaume, R., Malbet, F., & Monin, J.-L. 2003, A&A, 400, 185
Lay, O. P., Carlstrom, J. E., & Hills, R. E. 1997, ApJ, 489, 917
McCabe, C., Duchêne, G., & Ghez, A. M. 2003, ApJ, 588, L113
Malbet, F., & Bertout, C. 1991, ApJ, 383, 814
Malbet, F., et al. 1998, ApJ, 507, L149
Malbet, F., Lachaume, R., & Monin, J.-L. 2001, A&A, 379, 515
Mannings, V., & Sargent, A. I. 2000, ApJ, 529, 391
Merín, B. 2004, PhD Thesis, Universidad Autónoma de Madrid
Merín, B., Montesinos, B., Eiroa, C., Solano, E., Mora, A., D'Alessio, P., Calvet, N., et al. 2004, A&A, 419, 301
Merín, B., D'Alessio, P., Calvet, N., Hartmann, L., & Montesinos, B. 2005, in preparation
Muzerolle, J., Hartmann, L., & Calvet, N. 1998, AJ, 116, 2965
Muzerolle, J., Hartmann, L., & Calvet, N. 1998, AJ, 116, 455
Muzerolle, J., Calvet, N., & Hartmann, L. 1998, ApJ, 492, 743
Muzerolle, J., Calvet, N., & Hartmann, L. 2001, ApJ, 550, 944
Muzerolle, J., Calvet, N., Hartmann, L., & D'Alessio, P. 2003, ApJL, 597, L149
Najita, J. R., Edwards, S., Basri, G., & Carr, J. 2000, Protostars and Planets IV, 457
Natta, A., Prusti, T., Neri, R., Wooden, D., Grinin, V. P., & Mannings, V. 2001, A&A, 371, 186
Padgett, D. L., Brandner, W., Stapelfeldt, K. R., Strom, S. E., Terebey, S., & Koerner, D. 1999, AJ, 117, 1490
Pollack, J. B., Hollenbach, D., Beckwith, B., Simonelli, D. P., Roush, R., & Fong, W. 1994, ApJ, 421, 615
Qi, C., Kessler, J. E., Koerner, D. W., Sargent, A. I., & Blake, G. A. 2003, ApJ, 597, 986
Rodríguez, L. F., D'Alessio, P., Wilner, D. J., Ho, P. T. P., et al. 1998, Nature, 395, 355
Sicilia-Aguilar, A., et al. 2004, ApJ, in preparation
Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593
Simon, M., Dutrey, A., & Guilloteau, S. 2000, ApJ, 545, 1034
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Stapelfeldt, K. R., Krist, J. E., Menard, F., Bouvier, J., Padgett, D. L., & Burrows, C. J. 1998, ApJ, 502, L65
Tuthill, P. G., Monnier, J. D., Danchi, W. C., Hale, D. D. S., & Townes, C. H. 2002, ApJ, 577, 826
Weinberger, A. J., Becklin, E. E., Schneider, G., Smith, B. A., Lowrance, P. J., Silverstone, M. D., Zuckerman, B., & Terrile, R. J. 1999, ApJ, 525, L53
Wilner, D. J., Ho, P. T. P., & Rodríguez, L. F. 1996, ApJ, 470, L117
Wilner, D. J., & Lay, O. P. 2000, Protostars and Planets IV, 509

Paola D'Alessio: Centro de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, Campus Morelia, Apartado Postal 3-72, 58090 Morelia, Michoacán, México ([email protected]).
Nuria Calvet and Lee Hartmann: Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA ([email protected], [email protected]).
Fermions and Disorder in Ising and Related Models in Two Dimensions

V. N. Plechko
Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia

arXiv:1008.4961; DOI: 10.1134/S1063779610070166
(29 August 2010)

Abstract. The aspects of phase transitions in the two-dimensional Ising models modified by quenched and annealed site disorder are discussed in the framework of a fermionic approach based on the reformulation of the problem in terms of integrals with anticommuting Grassmann variables.

Introduction

The two-dimensional (2D) Ising model (2DIM) plays an important role in the theory of phase transitions and critical phenomena due to the analytic results available (in the pure case) over the whole temperature range [1,2,3,4]. In this report, we review the new mathematical methods of analysis and the results obtained so far for the 2DIM modified by site disorder, with application of anticommuting (Grassmann) integrals. The Ising model by itself, in its original formulation, is a lattice model of a ferromagnet given by a set of Ising spins σ_mn = ±1 interacting with their nearest neighbours along the lattice bonds [1,2,3,4]. Modern approaches to Ising models are, however, largely based on the fermionic path-integral reformulation of the problem in terms of integrals with anticommuting Grassmann variables [5,6,7,8,9,10]. The advantage of the use of Grassmann variables is that they are canonical variables, as distinct from Ising spins, so that one can pass to the momentum space for fermions [5,6,9,10]. In the pure case, the fermionization of the 2DIM yields a Gaussian fermionic integral for the partition function Z, which in essence means the exact solution of the problem [5,6,8,9]. Formulations of this kind also admit the interpretation of the 2D Ising model as a lattice quantum field theory (QFT) problem [11,12,13,14].
In particular, the pure 2DIM on a rectangular lattice may be represented by a Majorana action with two-component massive fermions on a lattice [14]. By doubling the number of fermions, one can pass as well to the Dirac action [14]. The effects of disorder in the 2DIM have been extensively studied during the last decades, both theoretically and in precise Monte Carlo simulations [11]-[38]. The disordered versions of the 2DIM may assume either a random modification of the interaction along the lattice bonds [11,12,15,17,18,30,31], or an admixture of random nonmagnetic impurities at the lattice sites [14,19,22,25,26,27,28]; also see [32,33,34,35,36]. In each of these cases one may be interested, motivated by physical considerations and possible applications, in either the quenched or the annealed version of disorder. In the quenched version, the impurities are assumed to be frozen over the sample. In this case their distribution does not depend on temperature and other tuning parameters, like the magnetic field, and one has to average the free energy −βF = ln Z, rather than the partition function Z itself, over the impurities [11,12,13,14]. In the annealed version, the impurities may be created and annihilated by a variation of external parameters, and their concentration is governed by the temperature and the associated chemical potentials [32,33,36,38]. In this case, one has to average over all states in Z itself [33,34,35,36]. In essence, for annealed site disorder, the dilute site can be viewed as being represented by an additional (zero) component of the Ising spin. The resulting model is also known as the spin-1 Ising model, or the Blume-Capel model [32,33,36,38]. The basic variable in the Blume-Capel model is S_mn = 0, ±1. The modifications introduced by disorder of any kind typically result in the appearance of additional non-Gaussian terms in the fermionic action.
Despite the non-Gaussian action in the fermionic integral for Z, precise results can still be derived for disordered 2D Ising models [11,12,13,14,36]. In what follows, we consider only the generic case of random-site disorder (site dilution), introduced by adding some amount of nonmagnetic impurities into the sample, which may be either quenched or annealed, and discuss the consequences that can be derived from the fermionic integral representations for the partition functions of these models.

The quenched site dilute Ising model

The basic variable in the pure 2DIM is the dichotomic Ising spin σ_mn = ±1. The spins are placed at the sites of a regular two-dimensional lattice and interact with their nearest neighbours along the lattice bonds. The disordered version (quenched site dilution) assumes that some sites may be nonmagnetic at random. It is convenient to introduce such sites by adding a variable y_mn = 0, 1 at each site mn, corresponding to the magnetic moment of the Ising spin at that site [14]. The resulting hamiltonian is:

H\{y\,|\,\sigma\} = -\sum_{mn}\big[\, J_1\, y_{mn} y_{m+1\,n}\, \sigma_{mn}\sigma_{m+1\,n} + J_2\, y_{mn} y_{m\,n+1}\, \sigma_{mn}\sigma_{m\,n+1} \,\big],   (1)

where J_{1,2} are the ferromagnetic exchange energies; the lattice sites are labeled by discrete coordinates mn, with m, n = 1, 2, ..., L running in the horizontal and vertical directions, respectively; we let L² → ∞ at the final stages. For fixed disorder, the partition function and free energy are defined by the canonical equations: Z{y} = Σ_(σ) exp(−βH{y|σ}) = exp(−βF{y}), where the sum is taken over all possible spin configurations provided by σ_mn = ±1 at each site. The hamiltonian modulus in the Gibbs exponential is:

-\beta H\{y\,|\,\sigma\} = \sum_{mn}\big[\, b_1\, y_{mn} y_{m+1\,n}\, \sigma_{mn}\sigma_{m+1\,n} + b_2\, y_{mn} y_{m\,n+1}\, \sigma_{mn}\sigma_{m\,n+1} \,\big],   (2)

where b_{1,2} = βJ_{1,2}, and β = 1/kT is the inverse temperature. For a typical bond weight from Z we write: exp(b yy′σσ′) = cosh(b yy′) + yy′σσ′ sinh(b), since σσ′ = ±1 and yy′ = 0, 1.
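The bond-weight identity just stated is elementary to verify exhaustively, since both σσ′ = ±1 and yy′ = 0, 1 take finitely many values. A quick numerical check (the coupling value b is arbitrary):

```python
import math

b = 0.7  # any coupling beta*J

# exp(b*y*y'*s*s') = cosh(b*y*y') + y*y'*s*s'*sinh(b)
# holds for all s, s' = +-1 and y, y' = 0, 1:
for s in (-1, 1):
    for sp in (-1, 1):
        for y in (0, 1):
            for yp in (0, 1):
                lhs = math.exp(b * y * yp * s * sp)
                rhs = math.cosh(b * y * yp) + y * yp * s * sp * math.sinh(b)
                assert abs(lhs - rhs) < 1e-12
```

For yy′ = 0 both sides equal 1, while for yy′ = 1 the identity reduces to the familiar exp(±b) = cosh b ± sinh b.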
The partition function can then be written in the form Z{y} = R{y} Q{y}, where R{y} is a nonsingular spin-independent prefactor, formed by a product of factors cosh(b yy′), while Q{y} is the reduced partition function:

Q\{y\} = \mathrm{Sp}_{(\sigma)} \prod_{mn} \big(1 + t_1\, y_{mn} y_{m+1\,n}\, \sigma_{mn}\sigma_{m+1\,n}\big) \big(1 + t_2\, y_{mn} y_{m\,n+1}\, \sigma_{mn}\sigma_{m\,n+1}\big),   (3)

where t_{1,2} = tanh b_{1,2}, and we assume a properly normalized spin averaging, such that Sp(1) = 1 and Sp(σ_mn) = 0 at each site. In the given case, since we are interested in quenched disorder, we have to average over y_mn = 0, 1 the free energy −β ln Z{y} rather than the partition function Z{y} itself. The prefactor R{y} provides only an additive nonsingular contribution ln R{y} to ln Z{y} and will be ignored in what follows. The problem thus reduces to the averaging of −βF_Q = ln Q{y} over y_mn = 0, 1 at each site.¹ The known device to avoid averaging the logarithm is the replica trick: [−βF_Q{y}] = [ln Q{y}] = [(1/N)(Q^N{y} − 1)]_{N→0}, where [...] stands for the average over the impurities. In this scheme, one takes N identical copies of the original partition function and averages Q^N{y}, with the formal limit N → 0 to be performed at the final stages. The simplest distribution for the averaging over the impurities is assumed in what follows: w(y_mn) = p δ(1 − y_mn) + (1 − p) δ(y_mn), where p is the probability that any given site, chosen at random, is occupied by a normal Ising spin, while 1 − p is the probability that the given site is dilute. The averaging of any function A(y_mn) then gives: [A(y_mn)] = p A(1) + (1 − p) A(0). The partition function with fixed disorder (3) can be transformed into a Gaussian fermionic integral following the method of mirror-ordered factorization of the density matrix [8,9,14].
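The replica identity and the two-point dilution average introduced above can be checked numerically on a toy example. The numbers q1 and q0 below are arbitrary positive values chosen for illustration, not taken from the paper:

```python
import math

# Toy check of the replica identity  [ln Q] = lim_{N->0} ([Q^N] - 1)/N
# for a single two-valued "disorder" variable: Q takes the value q1 with
# probability p and q0 with probability 1 - p.
p, q1, q0 = 0.8, 2.5, 1.3

quenched = p * math.log(q1) + (1 - p) * math.log(q0)   # [ln Q]

def replica(n):
    avg_qn = p * q1**n + (1 - p) * q0**n               # [Q^N]
    return (avg_qn - 1.0) / n

# For small N, replica(N) converges to the quenched average [ln Q],
# while the annealed quantity ln([Q]) differs (Jensen's inequality):
annealed = math.log(p * q1 + (1 - p) * q0)
```

Here `replica(1e-7)` agrees with `quenched` to many digits, illustrating why the formal limit N → 0 reproduces the quenched free energy rather than the annealed one.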
Introducing a pair of fermionic (Grassmann) variables a mn , a * mn , we write for the horizontal weight: 1 + t 1 y mn y m+1n σ mn σ m+1n = da * mn da mn e amna * mn (1 + a mn y mn σ mn ) (1 + t 1 a * mn y m+1n σ m+1n ) .(4) In a conventional notation, the horizontal weight is now presented as a product of two factors, A mn A * m+1n , with decoupled spins, taken under the Gaussian averaging. 2 In a similar way, one prepares the factorized vertical weights in the form B mn B * mn+1 . At next stage, we have to arrange the factors in their global products in order the elimination of spin variables be possible [8,9,14]. The final point is that we have to average over σ mn = ±1 the product of four factors with the same spin like A * mn B * mn A mn B mn at each site, thus passing to a purely fermionic expression. This results the integral [14]: Q{y} = mn db * mn db mn da * mn da mn exp mn a mn a * mn + b mn b * mn + + y 2 mn a mn b mn + t 1 t 2 a * m−1n b * mn−1 + (t 1 a * m−1n + t 2 b * mn−1 )(a mn + b mn ) ,(5) where a mn , a * mn , b mn , b * mn are Grassmann variables. The disorder parameters y 2 mn = 0, 1 are still free parameters in the above integral, while Ising degrees being already eliminated. In turn, integrating out a part of fermionic variables from (5), namely the variables a mn , b mn , we obtain the reduced integral for Q in terms of a * mn , b * mn [14]. Changing notation, a * mn , b * mn → c mn , −c mn , the integral becomes: Q {y} = mn dc mn dc mn y 2 mn exp mn y −2 mn c mncmn + + (c mn +c mn ) (t 1 c m−1n − t 2c mn−1 ) − y 2 mn t 1 t 2 c m−1ncmn−1 ,(6) where c mn ,c mn are again Grassmann variables, and we assume: y 2 mn exp ( y −2 mn c mncmn ) = y 2 mn + c mncmn , with y 2 mn = 0, 1. The integrals (5) and (6) are still the exact expressions for Q{y} originally defined in (3). 
Taking y_{mn} = 1 at all sites, we obtain the 2DIM integral for the pure case:

    Q{1} = ∫ ∏_{mn} dc_{mn} dc̄_{mn} exp { ∑_{mn} [ c_{mn} c̄_{mn} + (c_{mn} + c̄_{mn})(t_1 c_{m−1n} − t_2 c̄_{mn−1}) − t_1 t_2 c_{m−1n} c̄_{mn−1} ] }.   (7)

In particular, the evaluation of the integral (7) by transformation to momentum space yields Onsager's expression for Z and ln Z of the standard rectangular lattice [14]. An advantage of the reduced representation with two variables per site, as in (6) and (7), is also that it explicitly illuminates the Majorana-Dirac structure of the 2DIM already at the lattice level. This can be most easily seen in the pure case.³

² Let us recall that Grassmann variables (nonquantum fermionic fields) are purely anticommuting fermionic symbols. Given a set of Grassmann variables a_1, a_2, . . . , a_N, we have a_i a_j + a_j a_i = 0 and a_j² = 0. The rules of integration over Grassmann variables (the fermionic path integral) were originally introduced by F.A. Berezin in the QFT context [10]. The elementary rules of integration for one variable are [10]: ∫ da_j · a_j = 1, ∫ da_j · 1 = 0. In a multidimensional integral, the differential symbols da_1, da_2, . . . , da_N again anticommute with each other and with the variables. Gaussian fermionic integrals are in general related to determinants and Pfaffians. In the field-theoretical language, the fermionic form in the exponential under the integral is typically called the action. For more comments on Gaussian fermionic integrals in a related context, see also [5,6,9,10,14,36].

To prepare the N-replicated integral (6), with the same set of y_{mn} in each copy, it is convenient first to multiply (6) by the factor δ(y_{mn} − 1) + δ(y_{mn} − 0) = 1, which realizes the decomposition over the states y_{mn} = 0, 1. We assume that y_{mn} = 1 is realized with probability p, and y_{mn} = 0 with probability 1 − p.
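Footnote 2 can be made concrete with a small program. The following is an illustrative sketch that I am adding (not part of the original text): a minimal Grassmann algebra with Berezin integration, used to check that the one-mode Gaussian integral ∫ da* da e^{λ a a*} equals λ and that a two-mode Gaussian integral gives det A. The class name and the convention that the innermost (rightmost) differential acts first are my own choices.

```python
import math

class Grassmann:
    """Element of a finite Grassmann algebra: {sorted generator tuple: coefficient}."""
    def __init__(self, terms=None):
        self.terms = dict(terms or {})

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1.0})

    @staticmethod
    def const(c):
        return Grassmann({(): float(c)})

    def __add__(self, other):
        out = dict(self.terms)
        for mono, c in other.terms.items():
            out[mono] = out.get(mono, 0.0) + c
        return Grassmann(out)

    def __mul__(self, other):
        if isinstance(other, (int, float)):
            return Grassmann({m: c * other for m, c in self.terms.items()})
        out = {}
        for m1, c1 in self.terms.items():
            for m2, c2 in other.terms.items():
                if set(m1) & set(m2):
                    continue  # nilpotency: a repeated generator gives zero
                merged = list(m1) + list(m2)
                sign = 1  # parity of the permutation sorting the concatenation
                for i in range(len(merged)):
                    for j in range(i + 1, len(merged)):
                        if merged[i] > merged[j]:
                            sign = -sign
                key = tuple(sorted(merged))
                out[key] = out.get(key, 0.0) + sign * c1 * c2
        return Grassmann(out)

    __rmul__ = __mul__

def gexp(x, nmax=8):
    """exp(x) as a terminating power series; x has no constant term."""
    result, power = Grassmann.const(1.0), Grassmann.const(1.0)
    for k in range(1, nmax + 1):
        power = power * x
        if not any(abs(c) > 1e-15 for c in power.terms.values()):
            break
        result = result + power * (1.0 / math.factorial(k))
    return result

def berezin(x, j):
    """Berezin integral over generator j: keep monomials containing j,
    commute it to the front (sign (-1)^position) and strip it."""
    out = {}
    for mono, c in x.terms.items():
        if j in mono:
            k = mono.index(j)
            rest = tuple(g for g in mono if g != j)
            out[rest] = out.get(rest, 0.0) + ((-1) ** k) * c
    return Grassmann(out)

# One-mode Gaussian integral: ∫ da* da exp(λ a a*) = λ.
a, astar = Grassmann.gen(0), Grassmann.gen(1)
one_mode = berezin(berezin(gexp(0.7 * (a * astar)), 0), 1).terms.get((), 0.0)

# Two-mode Gaussian integral gives det A.
A = [[2.0, 1.0], [3.0, 4.0]]
a1, s1, a2, s2 = (Grassmann.gen(i) for i in range(4))  # a1, a1*, a2, a2*
S = A[0][0]*(a1*s1) + A[0][1]*(a1*s2) + A[1][0]*(a2*s1) + A[1][1]*(a2*s2)
val = gexp(S)
for j in range(4):  # ∫ da2* da2 da1* da1, innermost differential first
    val = berezin(val, j)
det_val = val.terms.get((), 0.0)  # -> 5.0 = det A
```

The same machinery, extended to four generators per site, is what underlies the lattice identities (4)-(7).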
The averaging of the N-replicated integral (6) over the disorder within the N-replica scheme finally yields an interacting theory presented by the integral [14]:

    [Q^N{y}]_{av} = ∫ ∏_{mn} ∏_{α=1}^{N} dc̄^{(α)}_{mn} dc^{(α)}_{mn} ∏_{mn} [ p ∏_{α=1}^{N} e^{S^{(α)}_{mn}} + (1 − p) ∏_{α=1}^{N} c^{(α)}_{mn} c̄^{(α)}_{mn} ]
      = p^{L²} ∫ ∏_{mn} ∏_{α=1}^{N} dc̄^{(α)}_{mn} dc^{(α)}_{mn} exp { ∑_{mn} [ ∑_{α=1}^{N} S^{(α)}_{mn} + ((1 − p)/p) ∏_{α=1}^{N} c^{(α)}_{mn} c̄^{(α)}_{mn} e^{−S^{(α)}_{mn}} ] },   (8)

where S^{(α)}_{mn} is the replicated Gaussian action from (6) for the pure case, see (7). The effect of dilution is introduced here through the second, non-Gaussian term, with bare coupling constant g_0 ∝ (1 − p)/p. The continuum-limit field theory for weak site dilution (RS 2DIM) that follows from the exact lattice integral (8) is commented on in more detail in [14]. This corresponds to the low-momentum sector of the exact lattice theory associated with (6) and (8). To extract the effective low-momentum (long-wave) action, one has to distinguish explicitly the high- and low-momentum lattice fermionic modes in the exact lattice action (8). Integrating out the high-momentum modes in the first order of perturbation theory then yields the N-colored Gross-Neveu model (N → 0) with action [14]:

    S_{G-N} = ∫ d²x { ∑_{α=1}^{N} [ m_N ψ^{(α)}_1 ψ^{(α)}_2 + ½ ψ^{(α)}_1 (∂_1 + i ∂_2) ψ^{(α)}_1 + ½ ψ^{(α)}_2 (−∂_1 + i ∂_2) ψ^{(α)}_2 ] + g_N ( ∑_{α=1}^{N} ψ^{(α)}_1 ψ^{(α)}_2 )² },   (9)

    m_N = (1 − t_1 − t_2 − t_1 t_2) / [2 (t_1 t_2)_c] + A^N ((1 − p)/p) BA / [2 (t_1 t_2)_c],   g_N = A^N ((1 − p)/p) B²A² / [4 (t_1 t_2)_c],

where ψ_1, ψ_2 are the anticommuting Majorana components, and m_N and g_N are the effective mass and charge, respectively. The parameters A and B are certain lattice fermionic averages (definite numbers) explicitly calculated in [14]. Since the replica limit N → 0 is assumed at the final stage, one can put N = 0 and A^N = 1 in the mass and charge already in (9). The Gaussian part of (9) is the replicated Majorana action corresponding to the pure case, with the mass term modified by disorder.
³ (continued) Under the substitution c_{m−1n} → c_{mn} − ∂_m c_{mn}, c̄_{mn−1} → c̄_{mn} − ∂_n c̄_{mn}, where ∂_m, ∂_n are lattice derivatives (momenta), the pure-case integral (7) yields a Majorana-like action S = m̄ c_{mn} c̄_{mn} + . . . with a mass term and a kinetic part [14]. Evidently, the mass parameter is m̄ = 1 − t_1 − t_2 − t_1 t_2. The condition m̄ = 0 defines the critical point of the 2DIM in the pure case [14]; it may be rewritten as well in the form sinh(2b_1) sinh(2b_2) = 1. The disordered phase corresponds to positive mass, while the ordered phase corresponds to negative mass.

The condition of zero effective mass, m_N = 0, gives the coexistence curve (T_c, p) for quenched site dilution, which is exact for small concentrations of vacancies, p → 1. The analysis of that curve at strong and moderate dilution, which is encoded in the exact integrals (5) and (8), has not yet been performed in detail. This will probably call for advanced methods of approximation, such as lattice RG, or for variational approaches of the Hartree-Fock-Bogoliubov type. The effective continuum-limit N = 0 Gross-Neveu model similar to (9), but with different m_N and g_N, was originally derived and analyzed by DD-SSL as an effective theory near T_c for weak bond dilution [11,12,13,24,28]. The DD-SSL predictions for weak bond dilution, based on the renormalization-group (RG) analysis of their effective N = 0 Gross-Neveu model in the low-momentum sector, also taking into account fine symmetry effects related to the Kramers-Wannier duality and the CFT interpretation of the pure 2DIM, are a double-logarithmic singularity in the specific heat and logarithmic corrections to the pure-case power laws in the other thermodynamic functions as T → T_c [11,12,13,24,28].
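As a quick numerical check of the pure-case criticality condition (a sketch I am adding, not part of the original text), one can solve m̄(b, b) = 0 for the isotropic lattice and compare with the closed-form Onsager point b_c = ½ ln(1 + √2), i.e. T_c = 2/ln(1 + √2) ≈ 2.269 in units of J:

```python
import math

def mbar(b1, b2):
    """Lattice mass parameter m̄ = 1 - t1 - t2 - t1*t2, with t_i = tanh(b_i)."""
    t1, t2 = math.tanh(b1), math.tanh(b2)
    return 1.0 - t1 - t2 - t1 * t2

# Isotropic case b1 = b2 = b: solve mbar(b, b) = 0 by bisection
# (m̄ > 0 in the disordered phase, m̄ < 0 in the ordered phase).
lo, hi = 1e-6, 5.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mbar(mid, mid) > 0.0:
        lo = mid
    else:
        hi = mid
b_c = 0.5 * (lo + hi)

# Onsager form of the same condition: sinh(2 b_c)^2 = 1, hence
# b_c = (1/2) ln(1 + sqrt(2)) and T_c = 2 / ln(1 + sqrt(2)) ≈ 2.269 (J = 1).
exact = 0.5 * math.log(1.0 + math.sqrt(2.0))
```

Both forms of the condition agree, since 1 − 2t − t² = 0 gives t = √2 − 1, which is exactly tanh(½ ln(1 + √2)).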
The derivation of the effective action for site dilution in the N = 0 Gross-Neveu form (9) thus supports the idea of a double-logarithmic singularity in the specific heat and only logarithmic corrections to the pure-case power laws in the other functions (as in the DD-SSL scheme) also for random-site 2D Ising ferromagnets, for weak quenched dilution [14]. For more details and a recent discussion of the effects of quenched disorder in the RB and RS versions of the 2DIM, along theoretical and numerical (Monte Carlo) lines, see also [19,22,23,24,25,26,27,28,29,30,31]. In conclusion, we note that the fermionic integrals (5), (6) and (8) are still exact lattice expressions for Z (or its reduced version Q). Accordingly, one can try other methods of averaging, as well as other tools of analysis of fermionic theories of the random-site 2DIM, directly on the lattice, starting from these exact fermionic integrals.

The Blume-Capel model

The Blume-Capel (BC) model is a classical spin-1 model originally introduced to study phase transitions in specific magnetic materials with a possible admixture of non-magnetic states. It is a model with a tricritical point on the critical line in the (T_c, Δ_0) plane [32,33,34,38]. The Blume-Capel model can also be viewed as the annealed site-dilute version of the ordinary Ising model [36]. The hamiltonian is:

    H = − ∑_{m=1}^{L} ∑_{n=1}^{L} [ J_1 S_{mn} S_{m+1n} + J_2 S_{mn} S_{mn+1} ] + Δ_0 ∑_{m=1}^{L} ∑_{n=1}^{L} S²_{mn},   (10)

where S_{mn} = 0, ±1 is the BC spin-1 variable associated with the mn lattice site (m, n = 1, 2, 3, . . . , L). As distinct from the quenched-disorder case, the zero-spin or vacancy state of S_{mn} = 0, ±1 is now to be considered as one of the three possible states of the spin variable [32,33,35,36]. The hamiltonian modulus in the Gibbs exponential is:

    −βH = ∑_{m=1}^{L} ∑_{n=1}^{L} [ K_1 S_{mn} S_{m+1n} + K_2 S_{mn} S_{mn+1} ] + Δ ∑_{m=1}^{L} ∑_{n=1}^{L} S²_{mn},   (11)

with K_{1,2} = βJ_{1,2} and Δ = −βΔ_0.
The decomposition S_{mn} = y_{mn} σ_{mn} that was used in the quenched case is still possible, but in order to eliminate the Ising degrees of freedom by transforming Z into a fermionic integral it is now more convenient to use the gauge transformation S_{mn} → σ_{mn} S_{mn} under the averaging, since the product of cosh factors (the factor R{y}) must be included in Z anyhow, before averaging. The Boltzmann factors from the Gibbs exponential associated with (11) can be written as (extended) polynomials in the variables S_{mn} = 0, ±1. This polynomial interpretation is important for the fermionization [36]. The partition function becomes (with λ_i = sinh K_i, λ′_i = cosh K_i − 1):

    Z = Sp_(S) { ∏_{m=1}^{L} ∏_{n=1}^{L} e^{Δ S²_{mn}} (1 + λ_1 S_{mn} S_{m+1n} + λ′_1 S²_{mn} S²_{m+1n}) (1 + λ_2 S_{mn} S_{mn+1} + λ′_2 S²_{mn} S²_{mn+1}) }.   (12)

The factorization of the local bond weights can again be performed by analogy with the Ising case, but now we have to add the even part of the polynomial to the Gaussian exponential in the measure [36]. For the horizontal weights, with Grassmann variables a_{mn}, ā_{mn}, we write:

    1 + λ_1 S_{mn} S_{m+1n} + λ′_1 S²_{mn} S²_{m+1n} = ∫ dā_{mn} da_{mn} exp{ (1 + λ′_1 S²_{mn} S²_{m+1n}) a_{mn} ā_{mn} } (1 + a_{mn} S_{mn}) (1 + λ_1 ā_{mn} S_{m+1n}),   (13)

and the vertical bond Boltzmann weights can be factorized similarly. The factorization (13) makes it possible to pass to a purely fermionic expression for Z in a few steps.
The final fermionic integral for Z appears in the form [36]:

    Z = (2 e^{Δ} cosh K_1 cosh K_2)^{L²} ∫ ∏_{m=1}^{L} ∏_{n=1}^{L} dā_{mn} da_{mn} db̄_{mn} db_{mn} exp { ∑_{m=1}^{L} ∑_{n=1}^{L} [ a_{mn} ā_{mn} + b_{mn} b̄_{mn} + a_{mn} b_{mn} + (t_1 ā_{m−1n} + t_2 b̄_{mn−1})(a_{mn} + b_{mn}) + t_1 t_2 ā_{m−1n} b̄_{mn−1} + g_0 a_{mn} ā_{mn} b_{mn} b̄_{mn} exp(−γ_1 a_{m−1n} ā_{m−1n} − γ_2 b_{mn−1} b̄_{mn−1} − t_1 t_2 ā_{m−1n} b̄_{mn−1}) ] },   (14)

with parameters (where Δ = −βΔ_0):

    g_0 = e^{−Δ} / (2 cosh K_1 cosh K_2),   γ_i = 1 − 1/cosh K_i = 1 − √(1 − t_i²).   (15)

The fermionic integral (14) is still an exact expression, even for finite lattices, provided we assume free boundary conditions both for spins and fermions. The exponential in the interaction term can be expanded into a series, which yields a finite polynomial in the Grassmann variables. For instance, for a particular exponential factor we find exp(−γ_1 a_{m−1n} ā_{m−1n}) = 1 − γ_1 a_{m−1n} ā_{m−1n}, and the other factors can be expanded analogously. The B-C model is thus presented in (14) as a fermionic theory with a free-fermion (Gaussian) part and polynomial interaction terms in the action; the highest term in the interaction polynomial is of order 8 in the fermions. The overall coupling constant of the interaction term is g_0 ∝ exp{βΔ_0}, so increasing g_0 means increasing dilution. There are also the additional parameters γ_{1,2} in the interaction polynomial. These parameters come from properly accounting for the weights related to the factors cosh(b_1 S_{mn} S_{m+1n}) cosh(b_2 S_{mn} S_{mn+1}) in Z{S} for the Blume-Capel model; there are no analogues of the γ_{1,2} terms in the quenched case. In fact, these terms with γ_{1,2} are responsible for the existence of the tricritical point in the Blume-Capel model. The elimination of variables like a_{mn}, b_{mn} is now not possible in the action (14), at least straightforwardly, as distinct from the quenched case (cf. (6)-(7)), precisely because of the presence of combinations like a_{mn} a_{m−1n} and b_{mn} b_{mn−1} in the γ_{1,2} terms in the non-Gaussian part of the lattice action (14).
Despite the non-Gaussian representation for Z, it is still possible to extract physical information by taking the continuum limit (low-momentum sector) of the BC lattice action (14) and analyzing it using tools from quantum field theory. The details of constructing the effective two-component fermionic action at low momenta can be found in [36]. The resulting action includes a Gaussian part, with the mass term modified by disorder, and a four-fermion interaction of the form (ψψ̄ | ∂_x ψ ∂_y ψ̄). The condition of zero effective mass already gives the equation for the BC line of phase transitions (critical line) in the (T_c, Δ_0) plane, while the effect of the interaction is merely to modify the kinetic terms in the action [36]. This also provides grounds to estimate the position of the tricritical point [36]. These effects are briefly commented on below. The critical line is given by the condition that the mass term in the effective BC action vanishes:

    m_{BC} = 1 − t_1 − t_2 − t_1 t_2 + g_0 = 0,

where g_0 is given in (15). Following [36], we now consider the isotropic lattice case, with t_1 = t_2 = t and K_1 = K_2 = K. The critical line is given by m_{BC} = 1 + g_0 − 2t − t² = 0, with t = tanh K and K = βJ, where β = 1/T. In the notation K = J/T → 1/T and Δ_0/J → Δ_0, the criticality condition becomes:

    tanh²(1/T) + 2 tanh(1/T) − 1 = e^{Δ_0/T} / [2 cosh²(1/T)],   (16)

which may also be written in the form:

    sinh(2/T) = 1 + ½ e^{Δ_0/T},   (17)

which in turn admits an explicit solution for Δ_0 as a function of T = T_c:

    Δ_0 = T ln[ 2 sinh(2/T) − 2 ].   (18)

The inverse dependence of T_c on Δ_0 can be evaluated numerically by solving any of the above equations, which are all equivalent to the condition of zero mass in the effective continuum-limit theory that follows from (14). This yields the critical line for the BC model shown in Fig. 1 of [36].
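Equations (16)-(18) are easy to check numerically. The sketch below is my own illustration (function names are arbitrary): it solves Eq. (17) for T_c by bisection and uses Eq. (18) for the inverse direction. The values reproduce the "Eqs. (16)-(18)" column of Table 1, e.g. T_c(0) ≈ 1.6740 and T_c(1.5) ≈ 1.1162.

```python
import math

def tc_blume_capel(delta0, tol=1e-12):
    """Solve Eq. (17), sinh(2/T) = 1 + exp(delta0/T)/2, for T_c by bisection.
    The critical line exists only for delta0 < 2."""
    f = lambda T: math.sinh(2.0 / T) - 1.0 - 0.5 * math.exp(delta0 / T)
    lo, hi = 0.05, 5.0  # f > 0 at small T, f < 0 at large T (for delta0 < 2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta0_of_tc(T):
    """Eq. (18): the explicit inverse, Delta_0 = T ln(2 sinh(2/T) - 2)."""
    return T * math.log(2.0 * math.sinh(2.0 / T) - 2.0)

tc_at_zero = tc_blume_capel(0.0)  # -> ~1.6740, cf. Table 1
```

The two functions are mutual inverses on the critical line, which gives a simple internal consistency check of (17) and (18).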
The critical line starts with maximal T_c at its left end, at Δ_0 = −∞, which corresponds to the pure case (g_0 = 0), and goes lower as the dilution increases, with increasing Δ_0 and g_0 ∝ exp[(Δ_0 − 2)/T]. The critical line finally terminates at Δ_0 = 2 at zero temperature. There is no ordered phase at stronger dilution, as can also be deduced from (16)-(18). The theoretical critical line is compared with the results of recent Monte Carlo simulations for the B-C model in Fig. 1 of [36]. The agreement is found to be very good (typically within 1% accuracy) over the whole temperature range [36]. The available numerical data for (T_c, Δ_0) are also presented (in part) in Table 1, which lists Δ_0 together with T_c from Ref. [33], Ref. [35], and Eqs. (16)-(18); note that a small variation of Δ_0 causes more significant changes in T_c(Δ_0) in the region near Δ_0 = 2, as is to be expected from (16)-(18). The position of the tricritical point on the (T_c, Δ_0) line can likewise be estimated from the condition that the kinetic (stiffness) coefficient in the effective B-C action associated with (14) vanishes [36]. The Hartree-Fock-Bogoliubov method has been applied to decouple the four-fermion interaction term in the effective action and extract the corrections to the kinetic part [36]. The singular point where the kinetic coefficient vanishes was found at (T*_t, Δ*_{0,t}) ≃ (0.42158, 1.9926), in reasonably good agreement with the Monte Carlo results for the position of the tricritical point: (T_t, Δ_{0,t}) ≃ (0.610, 1.9655) [33] and (T_t, Δ_{0,t}) ≃ (0.609(3), 1.966(2)) [35]. It is important, in general, that the B-C fermionic integral with the non-Gaussian action (14) finally predicts the existence of a tricritical point on the B-C critical line at strong dilution, close to the termination point of that line at Δ_0 = 2.
The method of constructing the critical line from the condition of zero mass in the fermionic integral for Z has recently been extended by Fortin and Clusel [37] to the general set of spin-S Ising models (S = 1/2, 1, 3/2, 2, 5/2, . . . ) [37]. The standard 2DIM and the Blume-Capel model are the first two representatives of this set. The agreement of the theoretical predictions for the critical line with the available Monte Carlo data for the spin-S models was again found to be very good, even despite the highly complicated polynomial structures, with many fermions, arising for the higher spin-S Ising models in the kinetic part of the action [37]. These features may probably be understood as evidence for well-expressed clusterization processes at low temperatures in such models, including the generic case of the spin-1 Blume-Capel model.

Conclusions

Integrals with anticommuting (Grassmann) variables provide effective tools for analyzing pure and disordered Ising-like spin models in two dimensions. Ising spin glasses, geometrically disordered lattices, and regularly diluted models can also be analyzed along these lines. In a more general context, it may be noted that there are a few other important physical problems involving spins and fermions in two dimensions. The most prominent are the quantum Hall effect and high-T_c superconductivity in oxide cuprates and related substances. It is interesting that fine tuning in the spin-fermion correspondence, as well as the effects of disorder, seemingly play a role in both cases. It is therefore important to better understand the ordering phenomena in such systems in terms of fermions.
Table 1: Numerical values of the critical points (T_c(Δ_0), Δ_0) in the Blume-Capel model: comparison of the results of Monte Carlo simulations [33,35] and Eqs. (16)-(18).

    Δ_0      T_c, Ref. [33]   T_c, Ref. [35]   T_c, Eqs. (16)-(18)
    0.0      1.695            1.714(2)         1.6740
    0.5      1.567            1.584(1)         1.5427
    1.0      1.398            1.413(1)         1.3695
    1.5      1.150            1.155(1)         1.1162
    1.87     0.800            0.800(3)         0.7712
    1.95     0.650            0.651(2)         0.6135
    1.962    0.620            0.619(1)         0.5776
    1.969    0.600            0.596(5)         0.5531

¹ The situation is different in the annealed case (the Blume-Capel model), where the zero state provided by y_{mn} = 0, 1 is rather to be considered as the zero component of the BC spin S_{mn} = 0, ±1, so that one has to average over all three states of S_{mn} = 0, ±1 directly in Z, inside the logarithm. Accordingly, the cosh factors (the product R{y}) are now to be preserved under the averaging in Z. These factors in fact add new degrees of freedom for the clusterization of the magnetic sites at low temperatures, and are eventually responsible for the appearance of the tricritical point in the Blume-Capel model at strong dilution [36]. In the fermionic language, the tricritical point is associated with the vanishing of the kinetic (stiffness) coefficient in the Blume-Capel fermionic action at strong dilution [36].

³ In the pure case, the lattice Majorana-like action for the 2DIM readily follows from (7) by the substitution of lattice derivatives for the shifted variables; see the discussion around Eq. (7).

References

[1] L. Onsager, in: Critical Phenomena in Alloys, Magnets and Superconductors, ed. by R.E. Mills, E. Ascher, R.I. Jaffe (McGraw-Hill, New York, 1971), p. XIX-XXIV.
[2] T.D. Schultz, D.C. Mattis, E.H. Lieb, Rev. Mod. Phys. 36, 856 (1964).
[3] E.W. Montroll, R.B. Potts, J.C. Ward, J. Math. Phys. 4, 308 (1963).
[4] C.A. Hurst and H.S. Green, J. Chem. Phys. 33, 1059 (1960).
[5] F.A. Berezin, Russ. Math. Surveys 24, No. 3, 1 (1969).
[6] S. Samuel, J. Math. Phys. 21, 2806 (1980).
[7] C. Itzykson, Nucl. Phys. B 210, 448 (1982).
[8] V.N. Plechko, Sov. Phys. Doklady 30, 271 (1985).
[9] V.N. Plechko, Theor. Math. Phys. 64, 748 (1985).
[10] F.A. Berezin, The Method of Second Quantization (Academic Press, New York, 1966).
[11] V.S. Dotsenko and V.S. Dotsenko, Adv. Phys. 32, 129 (1983).
[12] B.N. Shalaev, Phys. Rep. 237, 129 (1994).
[13] G. Jug and B.N. Shalaev, Phys. Rev. B 54, 3442 (1996).
[14] V.N. Plechko, Phys. Lett. A 239, 289 (1998).
[15] A. Roder, J. Adler, and W. Janke, Phys. Rev. Lett. 80, 4697 (1998).
[16] G. Mazzeo and R. Kühn, Phys. Rev. E 60, 3823 (1999).
[17] J.-K. Kim, Phys. Rev. B 61, 1246 (2000).
[18] K. Ziegler, Nucl. Phys. B 344, 499 (1990).
[19] L.N. Shchur and O.A. Vasilyev, Phys. Rev. E 65, 016107 (2001).
[20] V.N. Plechko and I.K. Sobolev, Phys. Lett. A 157, 335 (1991).
[21] V.N. Plechko and I.K. Sobolev, Physica A 197, 323 (1993).
[22] W. Selke, L.N. Shchur, and O.A. Vasilyev, Physica A 259, 388 (1998).
[23] H.J. Luo, L. Schülke, and B. Zheng, Phys. Rev. E 64, 036123 (2001).
[24] M. Picco, A. Honecker, and P. Pujol, J. Stat. Mech.: Theory Exp. (2006) P09006.
[25] P.H.L. Martins and J.A. Plascak, Phys. Rev. E 76, 012102 (2007).
[26] M. Hasenbusch, F.P. Toldin, A. Pelissetto, and E. Vicari, Phys. Rev. B 78, 011110 (2008).
[27] R. Kenna and J.J. Ruiz-Lorenzo, Phys. Rev. E 78, 031134 (2008).
[28] A. Gordillo-Guerrero, R. Kenna, and J.J. Ruiz-Lorenzo, AIP Conf. Proc. 1198, p. 42-54 (2009); also see arXiv:0909.3774.
[29] N.G. Fytas, A. Malakis, and I.A. Hadjiagapiou, J. Stat. Mech.: Theory Exp., P11009 (2008).
[30] A. Malakis, A. Nihat Berker, I.A. Hadjiagapiou, and N.G. Fytas, Phys. Rev. E 79 (2009).
[31] X.T. Wu and J.Y. Zhao, Phys. Rev. B 80, 104402 (2009).
[32] M. Blume, V.J. Emery, and R.B. Griffiths, Phys. Rev. A 4, 1071 (1971).
[33] P.D. Beale, Phys. Rev. B 33, 1717 (1986).
[34] Y. Deng, W. Guo, and H.W. Blöte, Phys. Rev. E 72, 016101 (2005).
[35] C.J. Silva, A.A. Caparica, and J.A. Plascak, Phys. Rev. E 73, 036702 (2006).
[36] M. Clusel, J.-Y. Fortin, and V.N. Plechko, J. Phys. A: Math. Theor. 41, 405004 (2008).
[37] J.-Y. Fortin and M. Clusel, Phys. Rev. B 78, 172402 (2008).
[38] I.D. Lawrie and S. Sarbach, in: Phase Transitions and Critical Phenomena, edited by C. Domb and J.L. Lebowitz (Academic Press, London, 1984), Vol. 9, p. 1.
Early prediction of the risk of ICU mortality with Deep Federated Learning

Núria Lladós Armengol
Dept. of Computer & Systems Sciences, Stockholm University, Stockholm, Sweden

Abstract: Intensive Care Units usually carry patients with a serious risk of mortality. Recent research has shown the ability of Machine Learning to indicate the patients' mortality risk and point physicians toward individuals with a heightened need for care. Nevertheless, healthcare data is often subject to privacy regulations and can therefore not be easily shared in order to build Centralized Machine Learning models that use the combined data of multiple hospitals. Federated Learning is a Machine Learning framework designed for data privacy that can be used to circumvent this problem. In this study, we evaluate the ability of deep Federated Learning to predict the risk of Intensive Care Unit mortality at an early stage. We compare the predictive performance of Federated, Centralized, and Local Machine Learning in terms of AUPRC, F1-score, and AUROC. Our results show that Federated Learning performs equally well as the centralized approach and is substantially better than the local approach, thus providing a viable solution for early Intensive Care Unit mortality prediction. In addition, we show that the prediction performance is higher when the patient history window is closer to discharge or death. Finally, we show that using the F1-score as an early stopping metric can stabilize and increase the performance of our approach for the task at hand.

DOI: 10.48550/arxiv.2212.00554
arXiv: 2212.00554
PDF: https://export.arxiv.org/pdf/2212.00554v2.pdf
Keywords: Federated Learning · Early Mortality Prediction · Recurrent Neural Networks · Multivariate Time Series · Intensive Care Unit

Introduction

Intensive Care Units (ICUs) usually treat patients with a heightened mortality risk.
A study conducted on patients admitted to 167 ICUs from 17 European countries during four weeks in 2011 and 2012 registered that, out of 5,834 patients, 1,113 (19%) died in the ICU alone and a total of 1,397 patients (24%) died in the whole hospital [3]. Mortality increases even further when considering patients admitted to an ICU during 2020 and 2021: among 1,686 patients admitted to the ICU with COVID-19, the mortality rate was 30% [1]. The ICU is also one of the units where medical errors are most likely to occur, given the complexity of care. ICU patients are severely ill and usually subject to multiple complex interventions and treatments, thus leading to a high risk of an adverse outcome. In order to enable clinicians to take action and prevent such an outcome, the risk of patients' mortality has to be predicted not only as accurately as possible but also as early as possible; we refer to this concept as early ICU mortality prediction. ICUs are therefore equipped with advanced diagnostic and therapeutic resources in order to enable quick responses to changes in patients' health. Traditionally, hospitals use the collected Electronic Health Records (EHRs) to assess individual mortality risks in the ICU with the help of scores like APACHE [13] and SAPS [5]. Recent research in the field of Machine Learning (ML) has shown improvements over these scores in terms of predictive performance [2,11]. The results of Awad et al. [2] clearly showcase the performance improvement of classical ML methods over clinical severity scores for early ICU mortality prediction. Similarly, Johnson and Mark [11] show that classical ML models trained using data from the first 24 h of the ICU stay can outperform clinical scores. These ML models rely on large amounts of data in order to learn correlations between current patient data and the risk of mortality within a specific time window in the future.
Conventionally, ML approaches use all the available data to train the model in a centralized manner. However, as patient data are usually subject to privacy regulations, like the European GDPR, the data cannot simply be shared between hospitals to train Centralized ML (CML) models. Alternatively, locally available data could be used by each hospital to set up its own independent Local ML (LML) early warning system. However, this paradigm could suffer from a lack of sufficient training data and would not consider the heterogeneity of patients across multiple medical centers. A promising alternative to LML or CML is presented by Federated Learning (FL). Instead of exchanging data, this ML framework relies on training many local models of identical structure at the location of the data, which are then combined into a global model. In our case, the data are stored at different hospitals, which we refer to as clients. Apart from ensuring privacy by design, FL enables parallel training using the clients as computational units [14,17]. FL can be integrated well into hospital facilities that store patients' information in the form of EHRs. An advantage of EHRs is that data are collected in the form of Multivariate Time Series (MTS), which means that each feature consists of a stream of values changing over time. Deep Learning (DL) architectures like Recurrent Neural Networks (RNNs) represent powerful tools for dealing with this kind of data, as they also take into account the history encoded by the time-series data. One major challenge when training a DL model is to find a trade-off between training the neural network enough to learn the features of the training set and stopping before it overfits the training data. One possible technique to find the optimal number of training epochs is Early Stopping (ES): when the validation performance of the model begins to degrade, indicating overfitting, the training process stops.
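The early-stopping rule just described can be sketched in a few lines. This is an illustration I am adding, not code from the paper; `EarlyStopper` and the patience value are hypothetical names and choices:

```python
class EarlyStopper:
    """Patience-based early stopping on a validation metric.
    mode='max' suits scores such as F1 or AUPRC; mode='min' suits the loss."""
    def __init__(self, patience=3, mode="max"):
        self.patience, self.mode = patience, mode
        self.best, self.bad_epochs = None, 0

    def step(self, value):
        """Record one epoch's validation metric; return True to stop training."""
        improved = (self.best is None or
                    (value > self.best if self.mode == "max" else value < self.best))
        if improved:
            self.best, self.bad_epochs = value, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Monitoring a (toy) validation F1-score over epochs:
stopper = EarlyStopper(patience=2, mode="max")
f1_per_epoch = [0.41, 0.48, 0.52, 0.51, 0.50, 0.49]
stopped_at = next(i for i, f1 in enumerate(f1_per_epoch) if stopper.step(f1))
# -> stops at epoch index 4: two consecutive epochs without improving on 0.52
```

With `mode="min"` the same class monitors a validation loss instead, which is the other stopping criterion compared later in the paper.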
Different metrics, such as the loss or the F1-score, can be used to trigger ES; their suitability depends on the task. Some approaches in the literature use RNNs for binary classification on MTS from the ICU; however, they do not tackle the problem in combination with FL. Pattalung et al. [16] compare different RNN architectures to create benchmark values for ICU mortality prediction tasks on different publicly accessible databases. The authors take MTS data over a 48 h period in order to predict whether a patient dies at the end of this period. Ge et al. [6] combine Logistic Regression and LSTM for early ICU mortality prediction after 48 h of ICU admission. Their results show that DL achieves higher accuracy than Logistic Regression in identifying patients at high risk of death. Despite the benefits of using an FL setup, few papers address the problem of mortality prediction with FL. In this study, we build upon the research of Mondrejevski et al. [15], who propose FLICU, an FL workflow for mortality prediction tasks in the ICU. However, their paper does not focus on predicting ICU mortality at an early stage; it is rather a retrospective study. This paper tackles the current limitations by combining FL with early prediction of ICU mortality. The main contributions are summarized below:

1. We propose a workflow for predicting early ICU mortality using deep FL.
2. We analyze the predictive performance of our proposed solution on different time windows and data cohorts.
3. We compare the predictive performance of the FL setup (with 2, 4, and 8 clients) against CML and LML.
4. We compare two early-stopping criteria during training: loss and F1-score.

Method

In this paper, we propose a workflow for early ICU mortality prediction that comprises three main phases: (i) Data Preparation, (ii) Window Selection, and (iii) Modeling. A schematic view of our workflow is shown in Fig. 1a.
(b) Patient data are available from the admission to the ICU, t_adm, to death or discharge from the ICU, t_end. Training data are sampled during Δt_data (from t_adm to t_eval). This leaves a prediction window Δt_pred, during which data are unknown to the model. (c) Data are split into F equally sized folds. Over F iterations, each of the folds is used F − 1 times for training and validation and once for testing. For FL and LML, we further split the combined training and validation data into the number of clients. These partitions are, in turn, divided into training and validation data.

Problem Formulation

Our basic research problem can be formulated as a binary classification problem for early ICU mortality prediction: given a patient cohort D consisting of n patients, we aim to estimate the real class label y_i ∈ {0, 1} for each patient i ∈ {1, . . . , n} by predicting a label ŷ_i. The label y_i denotes whether patient i died or survived the ICU stay, and ŷ_i indicates the patient's predicted mortality risk. The label ŷ_i is estimated based on an MTS feature stream X_i(t) = [x_{i1}(t), x_{i2}(t), . . . , x_{im}(t)], which consists of m features (e.g., vital signs or laboratory values). Each x_{ij}(t), j ∈ {1, . . . , m}, represents the value of a univariate time series at time t. Since the main focus of this study is early prediction, the observations X_i(t) that are used are collected only during the first hours of the patient's ICU stay. Furthermore, we assume that the patients are randomly distributed over K ICUs/hospitals. D_k is the local cohort of patients (i.e., the set of locally available X_i(t)) in hospital k. Thus, D = D_1 ∪ D_2 ∪ . . . ∪ D_K is the global dataset of n patients, with D_a ∩ D_b = ∅ for all (a, b) ∈ [1, K] × [1, K] with a ≠ b. In this paper, we compare the efficiency of CML, LML, and FL on this problem.
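The aggregation step that distinguishes FL from CML and LML can be illustrated by a minimal weighted-averaging sketch in the spirit of FedAvg. This is my own toy example, not the paper's implementation; the function name and toy parameters are hypothetical:

```python
def fedavg(client_weights, client_sizes):
    """One aggregation round: average each model parameter across clients,
    weighted by the size of each client's local dataset."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum(w[p] * (n / total) for w, n in zip(client_weights, client_sizes))
        for p in range(n_params)
    ]

# Two hospitals (clients) holding toy flattened model parameters:
global_params = fedavg(client_weights=[[1.0, 2.0], [3.0, 6.0]],
                       client_sizes=[100, 300])
# -> [2.5, 5.0]: the larger client contributes 3/4 of the average
```

In a full FL round, each client would first train its local copy on its own patients, send only the parameters (never the data) to the server, and receive the aggregated global model back.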
Data Preparation

In the first phase of our workflow, we prepare the data for predictive modeling, following five steps (based on the FLICU workflow [15]):

(i) Patient selection: Initially, we select patients with an ICU stay and filter the cohort according to the criteria below:
1. We dismiss all but the first ICU stay of each patient.
2. We dismiss patients with data recorded for less than ∆t_min.
3. We dismiss patients staying longer than ∆t_max.
The windows ∆t_min and ∆t_max allow us to define the patient cohorts. ∆t_min denotes the minimum length of MTS data per patient and guarantees that all patients have at least the required history length. The upper bound ∆t_max limits the prediction window in order to dismiss patients that stay in the ICU for an extensive period of time.

(ii) Feature selection: Subsequently, we extract vital signs and lab values in the form of MTS. We extract vitals explicitly connected to ICU stays and consider the time of the first vital of each patient to be the time of admission to the ICU. If no vitals are recorded for an ICU stay, we use the admission time logged by the hospital staff. Lab values are sampled during the patient's whole ICU stay, from ICU admission to ICU discharge or death.

(iii) Re-sampling: Vital signs and lab values are re-sampled to fixed-length intervals. The length of these intervals depends on the data collection frequency in the hospitals and may differ for vitals and labs. If more than one measurement falls in the same interval, the median is used for aggregation.

(iv) Imputation: Missing values are treated with forward and then backward imputation, starting from the beginning of the ICU stay. Features never observed during a patient's ICU stay are replaced with −1.

(v) Labeling: Finally, if a patient dies in the ICU, we assign the class label y_i = 1 (death). Otherwise, we assume that the patient survived the ICU and assign the label y_i = 0 (discharge).
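Steps (iii) and (iv) can be sketched with a small, hypothetical helper (not the authors' code): values are bucketed into fixed intervals with median aggregation, then forward- and backward-filled, and channels with no observations at all become −1.

```python
import math
from statistics import median

def preprocess_channel(samples, interval_h, total_h):
    """Re-sample (t_hours, value) pairs into fixed bins (median aggregation),
    then forward- and backward-impute; an all-missing channel becomes -1."""
    n_bins = math.ceil(total_h / interval_h)
    bins = [[] for _ in range(n_bins)]
    for t, v in samples:
        b = min(int(t // interval_h), n_bins - 1)
        bins[b].append(v)
    series = [median(b) if b else None for b in bins]
    for i in range(1, n_bins):                      # forward fill
        if series[i] is None:
            series[i] = series[i - 1]
    for i in range(n_bins - 2, -1, -1):             # backward fill
        if series[i] is None:
            series[i] = series[i + 1]
    return [v if v is not None else -1 for v in series]
```

For example, heart-rate samples at 0.5 h, 0.7 h, and 2.2 h re-sampled to 1 h bins over 4 h yield a median in bin 0, a gap filled forward into bin 1, and the last value carried forward into bin 3.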
Window Selection

In the second phase of our workflow, we select the patient history window that is used as input to the predictive models, defined as ∆t_data ≤ ∆t_min. ∆t_data starts at the time of ICU admission, t_adm, and ends at the time of evaluation, t_eval (see Fig. 1b). Since the main focus of this study is early ICU mortality prediction, we aim to predict the label ŷ_i ahead of the patient's time of ICU discharge or death; thus ∆t_data ends before the end of the ICU stay, t_end. More precisely, there is a prediction window ∆t_pred ≥ 0 that starts at t_eval and ends at t_end.

Modeling

To avoid bias induced by data partitioning, we use F-fold cross-validation as the model evaluation technique. We split the data into F partitions, where each split is used as a testing dataset once, while the remaining folds are split into training and validation sets for each client, respectively (see Fig. 1c). To allocate the data points across the clients (in LML and FL), we assume K horizontal, stratified splits. This means that the data in each hospital k ∈ {1, . . . , K} has the same number of patients, |D_k| = n/K, with the same class distribution, and that each patient's records are kept in only one hospital at a time. Before being passed into the model, all data streams are normalized according to the global minima and maxima found in the available training and validation data. To deal with class imbalance during training, class weights are added to the training data. This method down-weighs the impact of classes with more examples (in our case, the discharged patients) and increases the importance of classes with fewer examples during error back-propagation. Thus, applying class weights enables us to achieve optimization-level class balance. The weight w_c for each class c is calculated according to the following formula:

w_c = (# of samples in dataset) / (# of samples in class c).
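As an illustration, the class-weight formula above amounts to the following (a hypothetical helper, assuming the formula as stated):

```python
from collections import Counter

def class_weights(labels):
    """Weight each class by total sample count over per-class count."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / n_c for c, n_c in counts.items()}
```

With 95 discharges and 5 deaths, the minority (death) class receives a weight of 20.0, i.e., each death contributes roughly nineteen times more to the loss than a discharge.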
This setup is applied to the three configurations of our predictive model: FL, CML, and LML. As the two main gated RNN architectures, LSTM [8] and GRU [4], have been shown to perform equally well on tasks comparable to ours [15,16], we focus on the less resource-intensive GRU in this study. Our basic DL model architecture (similar to [15] and [16]) consists of two parallel input layers, one for the vital signs and one for the laboratory variables, each followed by three recurrent layers of 16 GRUs. Subsequently, we perform batch normalization and combine the resulting outputs using two fully connected layers. Finally, we add a sigmoid layer for estimating the patient's risk of ICU mortality with a value between 0 and 1. Matching this binary output, we use binary cross-entropy as the loss function L(·).

In CML, the model f_CML(·, θ) is trained on the whole training data D, where θ is the set of DL weights defining the function of the model. The goal is to find the optimal set of weights θ_CML that minimizes the error between y_i and ŷ_i = f_CML(X_i(t), θ) for a given X_i(t). More formally:

θ_CML = argmin_{θ*} L(D, θ*)    (1)

After each epoch of training, we evaluate the predictive performance of f_CML(·, θ) on the validation data using a previously selected metric M(ŷ, y), comparing a vector of predictions ŷ with the vector of true class labels y. In order to avoid overfitting the training data, we stop the training and reset θ to the time of the best score s* if s = M(·) does not improve for a predefined number of epochs P. We refer to this as early stopping (ES) with patience P.

In LML, each hospital k trains a local model f_LML_k(·, θ_k), using only the data in the local dataset D_k, where ŷ_i = f_LML_k(X_i(t), θ_k).
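The ES rule with patience P described above can be sketched as follows (a minimal illustration; the `EarlyStopper` name and interface are our assumptions, not the authors' code):

```python
class EarlyStopper:
    """Track a validation score and signal a stop after `patience` epochs
    without improvement. mode='min' suits the loss, mode='max' the F1-score."""
    def __init__(self, patience=30, mode="min"):
        self.patience, self.mode = patience, mode
        self.best, self.best_epoch, self.epoch = None, 0, 0

    def step(self, score):
        """Record one epoch's validation score; return True when training should stop."""
        self.epoch += 1
        better = (self.best is None
                  or (score < self.best if self.mode == "min" else score > self.best))
        if better:
            self.best, self.best_epoch = score, self.epoch
        return self.epoch - self.best_epoch >= self.patience
```

On stopping, the weights θ would be reset to those saved at `best_epoch`, matching the reset-to-s* behavior described above.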
As before, the goal is to minimize the prediction error by producing an optimal set of weights θ_LML_k for each client k:

θ_LML_k = argmin_{θ*} L(D_k, θ*)    (2)

Similarly to CML, we calculate local validation scores s_k = M(ŷ_k, y_k) on each client k in every epoch and monitor them for ES.

For the FL approach, we use a slightly modified version of Federated Averaging (FedAvg) [14]:
(i) First, all local models of the K hospitals are initialized with the same set of starting weights θ_0.
(ii) By performing local training for E local epochs, each of the participating hospitals derives an updated set of weight parameters θ_k. The fraction of participating hospitals per round is C. The local objective is similar to Equation 2.
(iii) After local training, a global model f_FL(·, θ) is created by averaging all θ_k into one set of parameters θ.
(iv) These averaged weights are then sent back to the hospitals, which overwrite their local weights with the new ones.
(v) Afterwards, an ES score s is calculated by evaluating the local models on the respective validation set of each client k and averaging the results: s = Σ_{k=1}^{K} (n_k/n) M(ŷ_k, y_k).
The above is repeated from step (ii) until the validation scores suggest an optimal set of weights θ_FL and ES is activated. Here, the objective is to optimize the global model f_FL(·, θ) by calculating:

θ_FL = argmin_{θ*} Σ_{k=1}^{K} (n_k/n) F_k(θ*)    (3)

where F_k(θ*) = (1/n_k) L(D_k, θ*) and n_k = |D_k|.

Empirical Evaluation

Data Description

In this paper, we use the MIMIC-III (version 1.4) clinical dataset provided by PhysioNet [7]. This dataset provides de-identified data collected from ICU patients at the Beth Israel Deaconess Medical Center in Boston, Massachusetts. The data was collected from 46,476 patients during the years 2001 to 2012 [9,10]. The database includes patient information such as demographics, vital sign measurements, and laboratory test results.
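Looking back at the modeling phase for a moment, the weighted averaging in step (iii) of the FedAvg procedure can be sketched as follows (a minimal NumPy illustration under the stated n_k/n weighting, not the authors' implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter lists into one global parameter set,
    weighting each client k by its data share n_k / n (FedAvg [14]).

    client_weights: list over clients; each entry is a list of numpy arrays
                    (one array per model parameter tensor).
    client_sizes:   list of local dataset sizes n_k.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum((n_k / total) * w[p] for w, n_k in zip(client_weights, client_sizes))
        for p in range(n_params)
    ]
```

With equal client sizes this reduces to a plain mean of the local weights; with unequal sizes, larger hospitals pull the global model proportionally harder.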
Initially, we select the patients based on the criteria described in Section 2.2, and in addition, we dismiss patients from the Neonatal Intensive Care Unit (NICU) and Pediatric Intensive Care Unit (PICU). For this study, we use two different cohorts of patients: (i) Cohort 1 is identified by ∆t_min = 24 h and ∆t_max = 72 h, and we compare different ∆t_data ∈ [8 h, 16 h, 24 h]; and (ii) Cohort 2 is identified by ∆t_min = 48 h and ∆t_max = 96 h, and we compare ∆t_data ∈ [8 h, 16 h, 24 h, 32 h, 40 h, 48 h]. Both cohorts use ∆t_max = ∆t_min + 48 h. For simplicity, we therefore refer to the cohorts by their ∆t_min only.

For the pre-processing and feature selection, we follow the approach from [15,16]. Initially, we extract demographic information, such as gender and age, that is used for describing the cohorts. We also extract 7 vital signs and 16 lab values, shown in Tab. 1, in the form of MTS. In this paper, vital signs are re-sampled in 1 h intervals, while we use a sampling interval of 8 h for lab values.

Table 1: Vital and Laboratory Values
Vital signs (1 per hour): heart rate, systolic blood pressure, diastolic blood pressure, mean blood pressure, respiratory rate, core temperature, blood oxygen saturation (SpO2)
Lab values (1 per 8 hours): albumin, blood urea nitrogen (BUN), bilirubin, lactate, bicarbonate, band neutrophils (bands), chloride, creatinine, glucose, hemoglobin, hematocrit, platelets, potassium, partial thromboplastin time (PTT), sodium, white blood cells

Finally, we extract the label indicating whether a patient died or not based on the column deathtime in table admissions of MIMIC-III. If a deathtime is recorded for patient i during the ICU stay, we assign the label y_i = 1. Otherwise, we assume that the patient survived the ICU and assign the label y_i = 0. As shown in Tab. 2, the class distribution is heavily imbalanced, as there are many more patients in the discharge class than in the death class.

Model Training and Evaluation

In this study, we use F = 5 folds for stratified cross-validation, where we evaluate CML, LML, and FL on the same testing data splits within each cross-validation round.
In each round, the data of the remaining four folds is further partitioned for the LML and FL models, as those are trained with different numbers of clients K ∈ {2, 4, 8}, whilst all remaining data are used in CML. The available data in each scenario (either all remaining data in CML, or each client's data in LML and FL) are split into 80% training and 20% validation (see Fig. 1c).

We use Adaptive Moment Estimation (ADAM) [12] as the optimizer for updating the network weights in CML, FL, and LML. Additionally, we apply an initial learning rate η of 0.01, which is reduced by 50% every five epochs. Lastly, we use ES by monitoring the loss or F1-score on the validation set with patience P = 30 and the maximum number of epochs set to 100. In case the F1-score is undefined, i.e., recall and precision are both zero, we set it to −1. The local minibatch size B depends on the number of clients K participating in each configuration: we use B = 512/K for performance reasons. For FL, we use E = 1 local epoch, and the fraction of clients computing in each round is C = 1, meaning that all clients participate in each iteration.

To compare the performance of the different settings, we use the evaluation metrics AUPRC, AUROC, Precision, Recall, and F1-score. AUROC is chosen as it is commonly used to assess the performance of ICU mortality prediction. However, AUPRC and F1-score are more suitable for highly imbalanced classes, which is the case for our problem. The entire code produced in this paper is publicly available on GitHub (see footnote 1).

Results & Discussion

As previously described, we evaluate the ability of our proposed FL workflow to predict the risk of ICU mortality at an early stage. We compare it with the CML and LML approaches on two cohorts of patients (∆t_min = 24 h and 48 h), using two ES metrics (loss and F1-score) and different time windows ∆t_data. The results for cohort ∆t_min = 24 h, using the minimum loss for ES, are shown in Tab. 3.
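The training schedule just described (initial learning rate 0.01 halved every five epochs, local minibatch size B = 512/K) can be summarized as follows (hypothetical helpers mirroring the stated hyperparameters):

```python
def learning_rate(epoch, eta0=0.01, drop=0.5, every=5):
    """Initial learning rate eta0, reduced by 50% every five epochs."""
    return eta0 * drop ** (epoch // every)

def local_batch_size(K, base=512):
    """Local minibatch size B = 512 / K for K participating clients."""
    return base // K
```

So, for example, with K = 8 clients each client trains with minibatches of 64 samples, and the learning rate drops to 0.0025 from epoch 10 onward.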
Overall, the results show that our model performs well in the task of early ICU mortality prediction. For example, for FL with ∆t_data = 24 h, we obtain an average AUROC of 0.90 ± 0.01 and an AUPRC of 0.47 ± 0.04. It is important to highlight that, due to the class imbalance of 4.4% towards the positive class, the baseline value for the AUPRC is 0.044. In addition, we draw three conclusions from Tab. 3:
(i) While an increasing number of clients results in a decrease in predictive performance over all the metrics in LML, the performance of FL remains close to that of CML, regardless of the number of clients.
(ii) With growing ∆t_data, and correspondingly shrinking ∆t_pred, the model performance increases.
(iii) The relation between precision and recall is very unstable, which shows in the fluctuation of the averages and the high standard deviations.
While (i) clearly shows that FL has the potential to improve on LML for early ICU mortality prediction, (ii) and (iii) are further explored in the following experiments.

Studying the influence of the size of ∆t_data and ∆t_pred. To assess whether the performance improves with an increasing ∆t_data, decreases with an increasing ∆t_pred, or both, we compare the test scores of the two cohorts (∆t_min = 24 h and ∆t_min = 48 h). The comparison is shown in Fig. 2. The figure shows that cohort ∆t_min = 24 h achieves higher performance than cohort ∆t_min = 48 h at the same ∆t_data (Fig. 2a). Nevertheless, both cohorts' performance increases alongside ∆t_data. When comparing the performance over the length of ∆t_pred, we see that with a rising ∆t_pred, all the curves decrease in a similar manner (Fig. 2b). This behavior can be seen over different models and metrics and is a strong indicator that the size of ∆t_pred is more important than the size of ∆t_data, since ∆t_pred is bigger in cohort ∆t_min = 48 h than in cohort ∆t_min = 24 h for the same ∆t_data.
This means that even for a small ∆t_data, 8 h or just 6 h as Awad et al. [2] demonstrate, prediction should be possible if ∆t_pred is small enough.

Stabilizing precision & recall. To better understand the interplay between precision and recall, we examine their fluctuations in the CML model, as it represents the most basic case. Its learning curve in Fig. 3 shows that there is a trade-off between precision and recall: precision increases during the first 12 epochs and then declines, while recall increases steadily. We can also see that the loss shows a very flat minimum while precision is still stabilizing and recall has just begun to increase. This means that minimal changes in the loss's progression can greatly impact the model's precision and recall. The F1-score, however, shows a more defined maximum. In order to stabilize precision and recall, we re-train the models whose performance is shown in Tab. 3, using the F1-score as the ES metric. As the F1-score is the harmonic mean of precision and recall, using it for stopping the training should create a model with an optimal balance between the two. The results are shown in Tab. 4. A comparison of Tab. 3 and Tab. 4 shows that ES with the highest F1-score produces better and more stable results for all models. For example, for FL with ∆t_data = 24 h, we now obtain an average F1-score of 0.46 ± 0.03 instead of 0.32 ± 0.15 (as shown in Tab. 3), in addition to marginally higher AUROC and AUPRC values.

Conclusion

We present an FL workflow that allows for early ICU mortality prediction. Our results show that FL performs equally well as the CML approach and substantially better than LML, especially as the number of clients increases. While the performance remains stable in FL with 2, 4, and 8 clients, in LML the performance decreases considerably with an increasing number of clients.
These findings are based not only on the AUROC score, which is widely used in the literature but ill-suited for the heavily imbalanced data in this problem, but also on the more meaningful AUPRC and F1-score. Furthermore, our results indicate, in agreement with the literature [2,16], that the size of the prediction window is much more important for the performance in early prediction tasks than the length of the patient history during which data is collected. Thus, the results show better predictive performance when the patient history window is closer to the end of the ICU stay. Lastly, we show that using the F1-score as an ES metric can stabilize and increase the predictive performance in tasks like ours.

Nevertheless, this study also creates the basis for future work. Since we limit ourselves to horizontal and stratified client splits for comparability reasons, it is necessary to re-evaluate our findings in more realistic settings. In addition, our FL workflow performs considerably well in predicting ICU mortality at an early stage using the MIMIC-III dataset; however, the generalizability of the approach needs to be tested beyond this specific dataset. Furthermore, it would be interesting to explore how far ahead of death or discharge ICU mortality can reasonably be predicted, and thereby expand on our findings.

Fig. 1: (a) Schematic workflow, (b) time-window representation, (c) data splits.

Fig. 2: Mean AUPRC of the two cohorts (∆t_min = 24 h and ∆t_min = 48 h), plotted (a) over ∆t_data and (b) over ∆t_pred. The dots represent the mean values over the 5-fold cross-validation iterations; the vertical bars show the standard deviation. The scores were calculated on the test sets, while the ES metric is the loss.

Fig. 3: Learning progress of CML (∆t_min = 24 h). The curves represent the mean values of the validation scores over the 5-fold cross-validation iterations using loss as the ES metric; the borders of the shaded areas mark the standard deviation. The best loss and F1 values recognized by ES fall within the red and purple regions.

Table 2: Cohort Sizes and Class Distribution

               Cohort 1 [∆t_min = 24 h]                      Cohort 2 [∆t_min = 48 h]
               Deaths       Discharges       Total          Deaths      Discharges      Total
Patients       804 (4.4%)   17,477 (95.6%)   18,281         547 (5.4%)  9,496 (94.6%)   10,043
Male           420 (2.3%)   10,075 (55.1%)   10,495         287 (2.9%)  5,255 (52.3%)   5,542
Female         384 (2.1%)   7,402 (40.5%)    7,786          260 (2.6%)  4,241 (42.2%)   4,501
Age 0 to 29    18 (0.1%)    892 (4.9%)       910            19 (0.2%)   421 (4.2%)      440
Age 30 to 59   177 (1.0%)   5,777 (31.6%)    5,954          133 (1.3%)  2,863 (28.5%)   2,996
Age 60 to 89   525 (2.9%)   9,920 (54.3%)    10,445         352 (3.5%)  5,692 (56.7%)   6,044
Age 90+        84 (0.5%)    888 (4.9%)       972            43 (0.4%)   520 (5.2%)      563

Table 3: ES with min. loss (∆t_min = 24 h). Columns: CML; FL with 2, 4, and 8 clients; LML with 2, 4, and 8 clients.

∆t_data = 8 h; avg. ∆t_pred = 35.2 h
score      CML          FL-2         FL-4         FL-8         LML-2        LML-4        LML-8
AUROC      0.87 ± 0.02  0.86 ± 0.01  0.86 ± 0.01  0.87 ± 0.01  0.85 ± 0.01  0.82 ± 0.01  0.81 ± 0.01
AUPRC      0.36 ± 0.03  0.37 ± 0.02  0.37 ± 0.02  0.37 ± 0.03  0.34 ± 0.04  0.31 ± 0.03  0.29 ± 0.02
F1         0.29 ± 0.15  0.38 ± 0.01  0.36 ± 0.05  0.38 ± 0.04  0.34 ± 0.07  0.34 ± 0.02  0.25 ± 0.04
precision  0.62 ± 0.21  0.38 ± 0.07  0.39 ± 0.11  0.41 ± 0.08  0.53 ± 0.14  0.41 ± 0.06  0.48 ± 0.08
recall     0.24 ± 0.14  0.41 ± 0.07  0.42 ± 0.14  0.39 ± 0.09  0.29 ± 0.10  0.30 ± 0.07  0.17 ± 0.04

∆t_data = 16 h; avg. ∆t_pred = 27.2 h
AUROC      0.89 ± 0.01  0.88 ± 0.01  0.88 ± 0.01  0.88 ± 0.01  0.87 ± 0.01  0.85 ± 0.01  0.82 ± 0.01
AUPRC      0.44 ± 0.04  0.41 ± 0.04  0.41 ± 0.04  0.42 ± 0.03  0.40 ± 0.04  0.35 ± 0.05  0.32 ± 0.02
F1         0.42 ± 0.01  0.35 ± 0.07  0.41 ± 0.02  0.38 ± 0.08  0.29 ± 0.05  0.29 ± 0.12  0.22 ± 0.02
precision  0.51 ± 0.10  0.58 ± 0.13  0.47 ± 0.05  0.46 ± 0.07  0.55 ± 0.18  0.50 ± 0.14  0.55 ± 0.09
recall     0.38 ± 0.07  0.27 ± 0.09  0.37 ± 0.04  0.34 ± 0.10  0.21 ± 0.03  0.22 ± 0.10  0.14 ± 0.02

∆t_data = 24 h; avg. ∆t_pred = 19.2 h
AUROC      0.89 ± 0.01  0.90 ± 0.01  0.90 ± 0.01  0.89 ± 0.01  0.89 ± 0.01  0.87 ± 0.01  0.83 ± 0.01
AUPRC      0.48 ± 0.03  0.48 ± 0.03  0.47 ± 0.04  0.45 ± 0.04  0.46 ± 0.03  0.41 ± 0.05  0.37 ± 0.03
F1         0.10 ± 0.11  0.33 ± 0.17  0.34 ± 0.07  0.30 ± 0.17  0.22 ± 0.12  0.26 ± 0.07  0.32 ± 0.05
precision  0.66 ± 0.38  0.70 ± 0.18  0.75 ± 0.06  0.72 ± 0.16  0.51 ± 0.18  0.47 ± 0.10  0.59 ± 0.15
recall     0.05 ± 0.07  0.27 ± 0.16  0.22 ± 0.05  0.23 ± 0.14  0.15 ± 0.08  0.19 ± 0.06  0.23 ± 0.05

Table 4: ES with max. F1 (∆t_min = 24 h). Same columns as Tab. 3.

∆t_data = 8 h; avg. ∆t_pred = 35.2 h
score      CML          FL-2         FL-4         FL-8         LML-2        LML-4        LML-8
AUROC      0.87 ± 0.01  0.87 ± 0.01  0.87 ± 0.01  0.87 ± 0.01  0.86 ± 0.01  0.84 ± 0.01  0.80 ± 0.01
AUPRC      0.39 ± 0.04  0.38 ± 0.04  0.37 ± 0.04  0.37 ± 0.03  0.36 ± 0.05  0.32 ± 0.04  0.28 ± 0.02
F1         0.40 ± 0.03  0.38 ± 0.02  0.39 ± 0.04  0.39 ± 0.04  0.39 ± 0.03  0.36 ± 0.03  0.35 ± 0.02
precision  0.42 ± 0.07  0.38 ± 0.05  0.35 ± 0.06  0.35 ± 0.08  0.37 ± 0.05  0.37 ± 0.05  0.35 ± 0.04
recall     0.41 ± 0.08  0.41 ± 0.11  0.46 ± 0.08  0.47 ± 0.09  0.43 ± 0.04  0.36 ± 0.03  0.34 ± 0.03

∆t_data = 16 h; avg. ∆t_pred = 27.2 h
AUROC      0.89 ± 0.01  0.88 ± 0.01  0.88 ± 0.01  0.88 ± 0.01  0.87 ± 0.01  0.85 ± 0.01  0.82 ± 0.01
AUPRC      0.44 ± 0.05  0.41 ± 0.04  0.41 ± 0.04  0.42 ± 0.03  0.40 ± 0.05  0.37 ± 0.04  0.33 ± 0.03
F1         0.44 ± 0.02  0.41 ± 0.01  0.40 ± 0.05  0.43 ± 0.03  0.41 ± 0.02  0.39 ± 0.03  0.37 ± 0.02
precision  0.46 ± 0.05  0.41 ± 0.04  0.43 ± 0.06  0.43 ± 0.04  0.38 ± 0.08  0.38 ± 0.05  0.41 ± 0.03
recall     0.42 ± 0.05  0.42 ± 0.07  0.40 ± 0.11  0.43 ± 0.06  0.46 ± 0.08  0.42 ± 0.06  0.33 ± 0.04

∆t_data = 24 h; avg. ∆t_pred = 19.2 h
AUROC      0.90 ± 0.01  0.90 ± 0.01  0.90 ± 0.00  0.89 ± 0.01  0.89 ± 0.01  0.87 ± 0.01  0.83 ± 0.01
AUPRC      0.50 ± 0.04  0.49 ± 0.04  0.47 ± 0.04  0.47 ± 0.03  0.47 ± 0.04  0.43 ± 0.04  0.37 ± 0.03
F1         0.49 ± 0.04  0.48 ± 0.03  0.46 ± 0.03  0.44 ± 0.03  0.45 ± 0.02  0.42 ± 0.02  0.40 ± 0.02
precision  0.55 ± 0.05  0.52 ± 0.05  0.49 ± 0.07  0.45 ± 0.07  0.48 ± 0.08  0.39 ± 0.04  0.46 ± 0.02
recall     0.46 ± 0.08  0.46 ± 0.05  0.44 ± 0.05  0.46 ± 0.11  0.46 ± 0.10  0.48 ± 0.09  0.35 ± 0.03

1 https://github.com/randlbem/Early ICU mortality prediction with deep FL.git

arXiv:2212.00554v2 [cs.LG] 5 Dec 2022

References
1. Auld, S.C., Harrington, K.R.V., Adelman, M.W., Robichaux, C.J., Overton, E.C., Caridi-Scheible, M., Coopersmith, C.M., Murphy, D.J., and the Emory COVID-19 Quality and Clinical Research Collaborative: Trends in ICU mortality from coronavirus disease 2019: A tale of three surges. 50(2), 245-255 (2022)
2. Awad, A., Bader-El-Den, M., Briggs, J., McNicholas, J., El-Sonbaty, Y.: Predicting hospital mortality for intensive care unit patients: Time-series analysis. Health Informatics Journal 26(2), 1043-1059 (2020)
3. Capuzzo, M., Volta, C., Tassinati, T., Moreno, R., Valentin, A., Guidet, B., Iapichino, G., Martin, C., Perneger, T., Combescure, C., Poncet, A., Rhodes, A., Oeyen, S., Matejovic, M., Toft, P., Wrigge, H., et al.: Hospital mortality of adults admitted to intensive care units in hospitals with and without intermediate care units: A multicentre European cohort study. Critical Care 18(5) (2014)
4. Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches (2014)
5. Gall, J.R., Lemeshow, S., Saulnier, F.: A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. JAMA: The Journal of the American Medical Association 270(24), 2957-2963 (1993)
6. Ge, W., Huh, J.W., Rang, Y.P., Lee, J., Kim, Y.H., Turchin, A.: An interpretable ICU mortality prediction model based on logistic regression and recurrent neural networks with LSTM units. In: AMIA Annual Symposium. vol. 1, pp. 460-469. American Medical Informatics Association (2018)
7. Goldberger, A.L., Amaral, L.A., Glass, L., Hausdorff, J.M., Ivanov, P.C., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.K., Stanley, H.E.: PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23), E215-220 (2000)
8. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735 (1997)
9. Johnson, A.E.W., Pollard, T.J., Lehman, L.W.H., Feng, M., Ghassemi, M., Moody, B., Celi, L.A., Mark, R.G., Shen, L., Szolovits, P.: MIMIC-III, a freely accessible critical care database. Scientific Data 3 (2016)
10. Johnson, A.E.W., Pollard, T.J., Mark, R.G.: MIMIC-III clinical database (version 1.4). PhysioNet (2016)
11. Johnson, A.E., Mark, R.G.: Real-time mortality prediction in the intensive care unit. In: AMIA Annual Symposium Proceedings. vol. 2017, p. 994. American Medical Informatics Association (2017)
12. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
13. Knaus, W., Wagner, D., Draper, E., Zimmerman, J., Bergner, M., Bastos, P., Sirio, C., Murphy, D., Lotring, T., Damiano, A., Harrell Jr., F.: The APACHE III prognostic system: Risk prediction of hospital mortality for critically ill hospitalized adults. Chest 100(6), 1619-1636 (1991)
14. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data (2016)
15. Mondrejevski, L., Miliou, I., Montanino, A., Pitts, D., Hollmen, J., Papapetrou, P.: FLICU: A federated learning workflow for intensive care unit mortality prediction. In: 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), pp. 32-37. IEEE Computer Society, Los Alamitos, CA, USA (2022)
16. Na Pattalung, T., Ingviya, T., Chaichulee, S.: Feature explanations in recurrent neural networks for predicting risk of mortality in intensive care patients. Journal of Personalized Medicine 11(9) (2021)
17. Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310-1321. CCS '15, Association for Computing Machinery, New York, NY, USA (2015)
Context-Dependent Anomaly Detection with Knowledge Graph Embedding Models
Nathan Vaska, Kevin Leahy, Victoria Helus
Abstract: Increasing the semantic understanding and contextual awareness of machine learning models is important for improving robustness and reducing susceptibility to data shifts. In this work, we leverage contextual awareness for the anomaly detection problem. Although graph-based anomaly detection has been widely studied, context-dependent anomaly detection remains an open problem with little current research. We develop a general framework for converting a context-dependent anomaly detection problem to a link prediction problem, allowing well-established techniques from this domain to be applied. We implement a system based on our framework that utilizes knowledge graph embedding models and demonstrates the ability to detect outliers using context provided by a semantic knowledge base. We show that our method can detect context-dependent anomalies with a high degree of accuracy, and that current object detectors can detect enough classes to provide the needed context for good performance within our example domain.
doi: 10.1109/case49997.2022.9926631
pdf: https://arxiv.org/pdf/2203.09354v2.pdf
arXiv: 2203.09354
I. INTRODUCTION

Machine learning approaches today have achieved impressive and at times superhuman performance at a variety of tasks, such as game playing, pattern recognition, classification, and more [1], [2], [3], [4]; however, such performance is often limited to a narrow domain, with system capability degrading rapidly as data distributions shift away from the training distribution [5], [6] or as inputs are perturbed and corrupted [7], [8]. As machine learning systems proliferate, gain greater influence over decision making, and increasingly integrate into daily life, it becomes imperative to broaden model capabilities.
In contrast, humans maintain an awareness of context that reduces susceptibility to these problems; we reason about the relationships between objects, determine if something belongs, and adjust our beliefs given what we perceive. Aspiring to systems that are expected to explore and navigate through unseen environments or operate in shifting domains demands a higher-level understanding; this motivates the exploration of techniques that incorporate contextual relationships as a prior.

(Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work. Authors are with MIT Lincoln Laboratory, Lexington, MA 02420. Corresponding author: [email protected])

Consider an embodied agent placed in a house and given a high-level instruction such as "I am watching TV, please bring me a spoon [9] for my food, and tell me if you see anything strange." While this may sound simple to a human, a high degree of implicit knowledge is required to ensure successful comprehension and execution of such an instruction. An agent would have to understand the instruction (language processing), flag unknown objects (uncertainty quantification), recognize where it is (scene classification), understand what doesn't belong in its current location (anomaly detection), figure out where a spoon would be and how to get there, and where you would be watching TV and how to get there (reasoning), and physically reach the necessary locations (navigation).

We choose to focus initially on the anomaly detection component of this problem. Detecting anomalies within a scene requires a more generalized contextual understanding than current narrow approaches can offer; a conceptual system would operate as shown in Fig. 1. For example, while a teapot may not be out of place in a kitchen or a living room, it would be considered odd in a bathroom, even if the teapot itself is not strange.
Given the lack of much prior work on context-dependent anomaly detection [10], [11], the design space of models addressing this problem is largely unexplored. In this work we treat the context-dependent anomaly detection problem as an extension of the link prediction task (also known as graph completion) [12], [13], a popular problem that has multiple benchmarks [14], [15] and applications in disease monitoring, recommendation systems, drug discovery, social media suggestions, and more. Our contributions in this work are the following:

• We develop a general framework for mapping any context-dependent anomaly detection problem for which a knowledge graph exists or can be constructed to the link prediction task.
• We apply our method to a specific domain, identifying anomalous objects within household scenes, and show that our method achieves good performance while remaining efficient and interpretable.
• We show that our method could be implemented on an embodied agent equipped with widely-available object detectors.

Fig. 1. A notional deployed system. The object detector feeds labels into an anomaly detection system denoted in green, which is the focus of this paper. Notation shown is explained in Section III.

II. RELATED WORK

A. Anomaly Detection

Anomaly detection is the task of detecting outliers from a given data distribution, and is relevant in many fields and applications, including cybersecurity, finance, healthcare, surveillance, and more. Deep learning methods have demonstrated promising performance in detecting anomalies [10]; yet, most existing techniques focus on point anomalies (an individual instance that lies far from the rest of the instances), rather than the more challenging group anomalies (an individual instance may be considered normal, but a collection of instances is anomalous compared to the other instances) or contextual anomalies (an instance can be normal or abnormal depending on its current context) [10], [11].
Performing context-dependent anomaly detection usually involves reducing the problem to a point anomaly detection problem, where multiple models may be conditioned on specific contexts, or training a model on previous data (often sequential or time-series) to predict expected behavior that can then be flagged as anomalous or not [11], [16]. In contrast, our method takes advantage of the full context and does not require historical patterns to train on or compare to. Another interesting and relevant early approach demonstrates the ability to detect abnormal nodes in applications that can be modeled as bipartite graphs [17]; currently, detecting anomalies in graphs (anomalous nodes, edges, or subgraphs) is a widely researched problem [12], [18], often used in applications such as social networks and transportation networks. While these approaches look for anomalies in already completed graphs, our focus on extending the link prediction task assumes an incomplete graph.

B. Visual Question Answering

Visual question answering (VQA) is the task of providing an answer to a natural language question based on image data [19]. Much like the embodied problem posed in the introduction, even answering questions with a simple "yes" or "no" requires a nontrivial implicit thought process. Unlike other related language and image tasks (e.g., image captioning), VQA often demands a specificity in answers that can only be accomplished through the type of reasoning that requires human-aligned knowledge and understanding. Unlike the problem we are considering, VQA solvers are usually focused on answering more specific questions that often call out features in an image (e.g., "how many balls are to the left of the blue cube?"), rather than more general questions such as "is there anything unusual in this scene?"
Several VQA datasets have been developed [19], [20], [21], [22], [23] and include sentence descriptions of the images or annotated ground truth of the scenes and objects. For our work, we leverage one of these datasets for its annotations and relationship-rich scenes to train instead for context-dependent anomaly detection.

C. Vision-and-Language Navigation

Similar to VQA, vision-and-language navigation (VLN) seeks to connect language processing capabilities to navigation toward a goal. This is also currently a very active field of research [24], although most current systems require highly detailed instructions (e.g., "go down the hallway, turn at your second left, enter the first door to the right where there is a bed in the room") rather than high-level instructions (e.g., "go to the bedroom"). In addition, generalizing to unseen environments is also an open problem [9]. Extending this to an environment that even a human has not seen and asking an agent to use context to follow high-level instructions is an even more difficult task. Our work connects to this task through its leveraging of common-sense knowledge, a feature which will likely be necessary for generalization to unseen environments.

III. PROBLEM FORMULATION

To utilize a standard link prediction model for context-dependent anomaly detection on an input, the input must be framed as a graph. The following section provides a general framework for this conversion, after which we provide an example of the process applied to a specific domain.

A. Entities and Relationships

We introduce the idea of an entity, which represents any element or attribute in an environment of interest. We frame the anomaly detection problem as one of correctly predicting relationships between entities. An anomaly occurs when an entity appears among a set of entities with which it doesn't have a relationship in common.
To formalize this notion of anomalous entities, we attach several properties to each of our entities to help characterize our problem. Let T be the set of types of entities. Types represent the broad classes of entities that we wish to reason about; individual entities of the same type are distinguishable by labels. Let Y be the set of true labels that the entities represent. Subsets Y_t ⊆ Y are divided by entity type such that ∪_{t∈T} Y_t = Y. Let R be the set of types of relationships between two entity types (self-relations included). Then let E be the set of entities that represent an input, such that e_i ∈ E, e_i := (y_i, t_i, {R_{t_i,t_j}}), where y_i ∈ Y_{t_i} is the known true label of the entity, t_i is the known type of the entity, and R_{t_i,t_j} is the known set of possible relationships between entity types t_i, t_j ∈ T. This set provides the context of the input.

Example. To ground this formulation in a more tangible problem, we apply it to detecting anomalous objects within household scenes. Since we wish to reason about objects in scenes, let T = {t_o, t_s} be the set of types of possible entities: object or scene. Then Y is the full set of possible labels across all entity types (any object or scene label), with Y_{t_s} ⊂ Y being the set of possible scene (t_s) labels in our domain (e.g., "office," "bathroom," "kitchen"). Similarly, Y_{t_o} ⊂ Y is the set of possible object (t_o) labels in our domain (e.g., "toaster," "desk," "oven"). In this example, objects can belong in a scene or be associated with other objects. Let R = {r_AL, r_LN} be the set of the possible relationships "At Location" and "Located Near" that each link l can represent. Then R_{t_o,t_s} = {r_AL} and R_{t_o,t_o} = {r_LN}. We do not include any scene-to-scene relations, so R_{t_s,t_s} = {∅}. One might expect the "toaster" and "oven" objects to share the "Located Near" relationship, and each of them to share the "At Location" relationship with "kitchen."
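The formulation above can be sketched in a few lines of code. This is a minimal, hypothetical rendering of the paper's notation (the `Entity` class, the `ALLOWED` table, and all labels are illustrative names, not part of the described system): each entity carries a label y_i and a type t_i, and a lookup returns R_{t_i,t_j}, the relation types permitted between two entity types.

```python
from dataclasses import dataclass

# Types T = {object, scene}; relations R = {AL: "At Location", LN: "Located Near"}.
# R_{t_i,t_j} is encoded as a table keyed by ordered type pairs.
ALLOWED = {
    ("object", "scene"): {"AL"},
    ("object", "object"): {"LN"},
    ("scene", "scene"): set(),  # no scene-to-scene relations
}

@dataclass(frozen=True)
class Entity:
    label: str   # y_i, e.g. "toaster" or "kitchen"
    etype: str   # t_i in T

def allowed_relations(a: Entity, b: Entity) -> set:
    """Return R_{t_i,t_j}, the relation types permitted between two entities."""
    return ALLOWED.get((a.etype, b.etype), set())

toaster = Entity("toaster", "object")
oven = Entity("oven", "object")
kitchen = Entity("kitchen", "scene")

print(allowed_relations(toaster, kitchen))  # {'AL'}
print(allowed_relations(toaster, oven))     # {'LN'}
```

An anomaly query then reduces to scoring the allowed relation types between a candidate entity and the rest of the input set.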
On the other hand, "desk" is unlikely to share any of these relationships.

B. Knowledge Graphs and Link Prediction

A knowledge graph, as defined in [25], is "a graph of data intended to accumulate and convey knowledge of the real world, whose nodes represent entities of interest and whose edges represent relations between these entities." Knowledge graphs allow information from multiple data sources to be flexibly integrated into a cohesive schema, making them a powerful tool for describing context and contextual anomalies. Furthermore, their interpretability offers an advantage when there is a need to understand a system's outputs. We define a directed graph G = (E, L) as a set of entities E (also known as vertices or nodes) that are connected by a set of relationships (i.e., links) L ⊆ E × E, where each l ∈ L connects what is referred to as a "head" entity and a "tail" entity. Knowledge graph entities can also contain related characteristics referred to as attributes. Attributes can vary depending on entity type (e.g., an object entity might have an attribute describing weight, but an audio entity would not) and can be modeled as a relationship or an entity. We do not include attributes in our example, but the framework can easily accommodate them.

The link prediction problem takes an incomplete graph (a graph with at least one pair of vertices that do not have a connecting edge) as its training data. It assumes that the observed links are a subset of the true links that should exist; the goal is to predict the existence of the nonexistent true link(s). A link prediction model is a model f : E × E → ℝ that is trained to map potential links to a score, where a high score is indicative of a true link and a low score a corrupted link (i.e., a link that should not exist). Corrupted links are generated by permuting a true link's head or tail entity such that the resulting (head, relation, tail) triplet does not exist in the training data.
For a complete description of training a link prediction model, see [14]. Given a trained link prediction model f, for a new link l = (e_i, e_j) ∉ L we can calculate a score f(e_i, e_j), where i ≠ j. If f(e_i, e_j) is high then link l is likely to exist, while if f(e_i, e_j) is low then l is unlikely to exist.

Problem. Given a set of n entities [e_1, e_2, ..., e_n] ∈ E, of which one entity is known to have a truly weak or nonexistent relationship with the other entities, identify this entity as anomalous.

IV. APPROACH

We use the link prediction model to score possible links in order to perform context-dependent anomaly detection on an input. We can consider a particular object's strength of relationship with the set of other objects O and the set of possible scenes S as follows: given o_i ∈ O, for all o_j ∈ O with i ≠ j (or for all s_j ∈ S_m) and for all r ∈ {R_{t_i,t_j}}, a set of scores can be calculated for each relationship type, Z_r = {f_r(o_i, o_j)} (or {f_r(o_i, s_j)} for object-scene relations). As these are multiple separate scores and the goal is to have one final anomaly score to assess a candidate object, the sets of relationship scores are aggregated to give an overall anomaly score Z = g(h_{r_1}(Z_{r_1}), h_{r_2}(Z_{r_2}), ..., h_{r_n}(Z_{r_n})), where g is a domain-specific aggregation function and each h_r, for r ∈ {R_{t_i,t_j}}, is a relationship-specific aggregation function. Higher values of Z indicate that entity o_i is more likely to be anomalous based on the context given by O and S. Note that individual relationship types may have either a positive or negative weight in g; a negative weight for a relationship type implies that entities with weak links of that type are more likely to be anomalous, and vice versa. This allows any type of relationship to be included in the graph, as long as the sign of its contribution to the anomaly score is constant across all entities.
Additionally, note that different types of anomalies can be detected by pairing the same graph and link prediction model with multiple aggregation functions g. Given a model f trained to predict "At Location" and "Located Near" links (r_AL and r_LN, respectively), the following procedure can be used to compute object and scene context scores and perform anomaly inference.

A. Object Context

Let O be the set of entities that are objects in a given input (i.e., o_i = (y_i, t_o, {R_{t_o,t_o}, R_{t_o,t_s}}) ∈ O), where the input's object labels y_i are known. For a particular set O of room objects and an injected anomalous object, given an anomaly candidate o_ca ∈ O from the set, we can calculate a "Located Near" score f_{r_LN}(o_ca, o_i) between the anomaly candidate and every other object in the set. The average of these scores, z_o, is the "object score." It is a measure of how likely the anomaly candidate is to be found in the same scene as the other objects in the set and can be calculated by:

z_o(o_ca) = (1/|O|) Σ_{o ∈ O\{o_ca}} f_{r_LN}(o_ca, o)    (1)

B. Scene Context

Next, we are interested in predicting the input's scene based on the set of objects, so that the predicted scene can be used as additional contextual information to help identify the anomaly. Let S be the set of all possible scene-type entities, such that s = (y, t_s, {R_{t_o,t_s}}) ∈ S for all y ∈ Y_{t_s}, and let S_m be the set of entities that represent the m most likely scenes for a given input (i.e., s_i = (y_i, t_s, {R_{t_o,t_s}}) ∈ S_m and |S_m| = m). We can calculate a "scene score" via the following method. First, generate an "At Location" score f_{r_AL}(o_i, s_j) between each object (including the anomaly candidate) and each scene from the set of scene candidates S. Then, average the scores across objects to provide a compatibility score between each scene type and the set of objects.
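The object score of Eq. 1 can be sketched as follows. The `f_LN` table of "Located Near" scores is made up for illustration (in the described system these would come from the trained link prediction model), and the sketch averages over the |O| − 1 other objects rather than dividing by |O| as in Eq. 1; the constant factor does not change the ranking of candidates.

```python
# Hypothetical precomputed "Located Near" scores between object pairs.
f_LN = {
    ("toaster", "oven"): 0.9, ("toaster", "sink"): 0.8, ("toaster", "desk"): 0.1,
    ("oven", "sink"): 0.85, ("oven", "desk"): 0.05, ("sink", "desk"): 0.2,
}

def score_LN(a, b):
    # the score table is symmetric up to key order in this sketch
    return f_LN.get((a, b), f_LN.get((b, a), 0.0))

def z_o(candidate, objects):
    """Average 'Located Near' score between the candidate and all other objects."""
    others = [o for o in objects if o != candidate]
    return sum(score_LN(candidate, o) for o in others) / len(others)

objects = ["toaster", "oven", "sink", "desk"]
# "desk" has weak links to the kitchen objects, so its object score is lowest
scores = {o: z_o(o, objects) for o in objects}
print(min(scores, key=scores.get))  # desk
```

A low object score marks a candidate as poorly connected to its surroundings, which (with the negative weight in Eq. 4) raises its anomaly score.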
Select the top m scenes based on compatibility score as likely candidates and call this set S_m, which can be written as:

S_m = argmax_{S' ⊆ S, |S'| = m} Σ_{s ∈ S'} (1/|O|) Σ_{o ∈ O} f_{r_AL}(o, s)    (2)

Similar to above, the "At Location" scores from the scene types in S_m are then averaged to get the scene score z_s, which indicates how likely the current anomaly candidate object is to be found in the most likely scene types for the given set O:

z_s(o_ca) = (1/m) Σ_{s ∈ S_m} f_{r_AL}(o_ca, s)    (3)

Selecting the wrong scenes for S_m is always a risk, and possibly a highly impactful one: something like a chainsaw may be innocuous in a basement, but alarming in an office. Selecting more than one scene for S_m helps mitigate the chance that incorrect scene predictions will solely be used to generate the scene context. Since there are often several scene types with similar traits (e.g., garden, yard), using multiple scenes as context is not expected to overly dilute the information provided by correct scene type predictions.

C. Aggregating Context and Computing Anomaly Score

The overall anomaly score z for the current anomaly candidate is calculated by taking the negative weighted sum of z_o and z_s, where α ∈ [0, 1] is a weighting parameter. Note that as "At Location" and "Located Near" scores are always generated by the same model and averaged over their variable inputs, they do not require any additional normalization:

z(o_ca) = −α · z_o(o_ca) − (1 − α) · z_s(o_ca)    (4)

An anomaly score can be calculated for each object in the set as an anomaly candidate; the anomalous object o_a can then be predicted by simply taking an argmax over all anomaly scores as per Eq. 5:

o_a = argmax_{o ∈ O} z(o)    (5)

An advantage of this procedure is its computational efficiency when sufficient memory is available.
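Eqs. 2 through 5 can be sketched end to end. The `f_AL` table of "At Location" scores is invented for illustration (real scores would come from the trained model), and for brevity the demo sets α = 0 so only scene context contributes; in the described system z_o from Eq. 1 would be passed in as well.

```python
# Hypothetical "At Location" scores f_AL[(object, scene)].
f_AL = {
    ("toaster", "kitchen"): 0.95, ("toaster", "office"): 0.1,
    ("oven", "kitchen"): 0.9,     ("oven", "office"): 0.05,
    ("desk", "kitchen"): 0.1,     ("desk", "office"): 0.95,
}

def top_scenes(objects, scenes, m):
    """Eq. 2: the m scenes most compatible with the full object set."""
    compat = {s: sum(f_AL[(o, s)] for o in objects) / len(objects) for s in scenes}
    return sorted(scenes, key=compat.get, reverse=True)[:m]

def z_s(candidate, S_m):
    """Eq. 3: average 'At Location' score against the likely scenes."""
    return sum(f_AL[(candidate, s)] for s in S_m) / len(S_m)

def anomaly_score(candidate, z_o_val, S_m, alpha):
    """Eq. 4: negative weighted sum of object and scene context."""
    return -alpha * z_o_val - (1 - alpha) * z_s(candidate, S_m)

objects = ["toaster", "oven", "desk"]
S_m = top_scenes(objects, ["kitchen", "office"], m=1)
print(S_m)  # ['kitchen']
# Eq. 5: the candidate with the highest anomaly score is flagged.
scores = {o: anomaly_score(o, 0.0, S_m, alpha=0.0) for o in objects}
print(max(scores, key=scores.get))  # desk
```

With "kitchen" selected as context, "desk" receives the weakest "At Location" score and is flagged as the anomaly.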
Calculating the link scores for every relation type is the most computationally heavy component, taking O(L|R||E|²) time, where L represents the time complexity of the link scoring method for the chosen link prediction model. However, once calculated, the scores can be reused indefinitely for any number of inputs. Assuming that the inference method consists of simple sum and average operations on the score table, as is the case for our example domain, the time complexity of performing anomaly inference on a single input is O(|R||E_i|²) and on many inputs is O(N|R||E_i|²), where N is the number of inputs and E_i is the largest set of entities considered in any input. Assuming that |R| ≪ N and |E_i| ≪ N, the runtime on many inputs reduces to O(N). In practice we found that, using a standard CPU, we were able to complete inference on our largest anomaly dataset on the order of minutes. The results of our process are detailed in Section VI.

Remark. While we did not explore this feature in depth in this work, because an explicit anomaly score is calculated between each object and scene, our model allows the factors which led to a particular object being classified as anomalous to be examined more closely than is possible with a standard neural network classification architecture.

V. EXPERIMENTAL METHODS

A. Model

We used a family of knowledge graph embedding (KGE) models for our link prediction models in our experiments [13], [26], [27]. KGE models are the first choice for graph completion; since graphs can contain millions of links, model efficiency is extremely important. KGE models address this problem by learning an N-dimensional vector to represent each graph entity. Link prediction is treated as a comparison between entity vectors, often using another set of relationship vectors to differentiate between different graph relations.
As this comparison is highly computationally efficient, KGE models can process even large knowledge graphs within a reasonable period of time. Additionally, since the training of non-neural KGE models often produces a descriptive set of entity embeddings, it also provides more insight into the learning process than other methods. Popular models are also easily accessible through the TorchKGE package [28], which we used in our experiments.

B. Data

Although there are general knowledge graphs available, to the best of our knowledge there are none that contain a large amount of realistic object and scene information with both "At Location" and "Located Near" relations. Furthermore, there are no publicly available datasets containing realistic anomalies for common household scenes. The following sections detail our methodology for creating these datasets.

1) Object-Scene Dataset Methodology: A subset of the Visual Genome [22] dataset was used as the main source of data for the object-scene knowledge graph. Visual Genome is a heavily annotated image dataset that is used to train AI systems to have a greater understanding of the relationships between objects. It contains about 108,000 images of real-world scenes, of which 8,419 relate to household scenes within twenty-eight unbalanced categories. The full set of scenes considered and their distribution is shown in Fig. 2. Critically, Visual Genome images are labeled with both their scene type and the set of objects found in the scene.
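The "comparison between entity vectors" that KGE models perform can be illustrated with a TransE-style scorer, one common KGE scoring scheme (the paper does not say which of its KGE variants this matches, and the toy embeddings below are constructed by hand rather than trained; TorchKGE provides trainable versions of such models).

```python
import numpy as np

# Each entity and relation gets an N-dimensional embedding; a link
# (head, relation, tail) is scored by how well head + relation
# approximates tail. All values here are illustrative.
rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for e in ["toaster", "kitchen", "office"]}
relations = {"AL": rng.normal(size=dim)}

# Make "toaster + AL ≈ kitchen" true by construction for the demo.
entities["kitchen"] = entities["toaster"] + relations["AL"] + 0.01 * rng.normal(size=dim)

def score(h, r, t):
    """Higher is better: negative L2 distance between h + r and t."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

print(score("toaster", "AL", "kitchen") > score("toaster", "AL", "office"))  # True
```

Because scoring is a single vector operation, a full table of link scores over all entity pairs stays cheap even for large graphs, which is what makes the precomputed-score-table inference described in Section IV practical.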
These labels allowed us to quickly extract semantic information about the relationships between objects, other objects, and scene types from the images: if two objects were found in the same image, a "Located Near" link between the two objects was added to the knowledge graph, and if an object was found in a picture labeled as a specific scene type, an "At Location" link was added. To augment the graph, we also incorporated "At Location" and "Located Near" links from ConceptNet, a large and publicly available knowledge graph with 34 million assertions over a variety of link types [29]. While ConceptNet only contributed an additional 151 "At Location" links, a small fraction of our final knowledge graph, ConceptNet also served as an object filter on the raw Visual Genome object sets, as Visual Genome's human annotations are largely unsupervised and include misspellings, pluralizations, and other nonsensical objects. By only including objects that exist in both ConceptNet and Visual Genome, we were able to remove the majority of these spurious objects. The final graph consists of nodes representing both objects and scenes, "At Location" links connecting objects to scenes where they are commonly found, and "Located Near" links connecting two objects that are often found in the same scene.

In addition to the "Full" dataset, we created two filtered versions of the knowledge graph. The first graph filtered object entities by relative scene co-occurrence and relative object co-occurrence frequency to eliminate object-scene and object-object connections that appear only infrequently in the data. This subset was intended to provide a less noisy dataset compared to the full, unfiltered dataset at the cost of fewer training examples, and will be referred to as the "Filtered" dataset.
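The co-occurrence rule described above can be sketched as follows. The image annotations are invented stand-ins for Visual Genome data, and the triple format `(head, relation, tail)` is an illustrative choice, not the paper's storage format.

```python
from itertools import combinations

# Each annotated image carries a scene label and its object set.
images = [
    {"scene": "kitchen", "objects": ["toaster", "oven", "sink"]},
    {"scene": "office", "objects": ["desk", "chair"]},
]

def extract_links(images):
    """Co-occurrence in an image yields 'Located Near' (LN) links between
    object pairs and 'At Location' (AL) links from objects to the scene."""
    links = set()
    for img in images:
        for obj in img["objects"]:
            links.add((obj, "AL", img["scene"]))
        # sort so each unordered pair yields one canonical triple
        for a, b in combinations(sorted(img["objects"]), 2):
            links.add((a, "LN", b))
    return links

links = extract_links(images)
print(("toaster", "AL", "kitchen") in links)  # True
print(("chair", "LN", "desk") in links)       # True
```

The resulting triple set is exactly the shape of training data a link prediction model consumes, with the ConceptNet vocabulary filter applied to the object labels beforehand.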
The second filtered knowledge graph was created by removing all entities and associated links that did not have a corresponding class in the Google Open Images [30] object detection dataset; this subset was intended to more closely match widely-available object detector performance in terms of unique entities that are available for detection. We will refer to this dataset as the "Detector" dataset. Details on all datasets can be found in Table I.

2) Anomaly Dataset Methodology: There are no publicly available resources detailing anomalous objects for scene types, so we again utilized Visual Genome images to build our anomaly dataset. We defined an anomalous object for a given scene type to be an object that did not appear in a single image of that scene type within Visual Genome's household scene images. We then identified the set of objects found within each scene type and removed them from the full set of objects to create each scene's anomalous object set. For each annotated image of a specific scene type within Visual Genome, anomalous scenes were created by appending each object from the corresponding anomalous object list for that scene to the list of labeled objects in that annotated image. We will refer to the full anomaly dataset created through this process as the "Out-of-Scene" anomaly dataset, or "Out" for short.

However, given the limited number of images available for some scene types, we were concerned that some non-anomalous objects would be incorrectly added as anomalies in the testing set. To address this concern, we filtered out any objects that occurred in multiple rooms from the set of anomalous objects. As an example, assume we do not have very many images of kitchens in our training set, and none of the images we do have contain chicken meat, which is non-anomalous in a kitchen.
However, our training set does contain many images of dining rooms and backyards, and chicken meat appears in images from both of these scene types (e.g., at the table in the dining room, on a grill in the backyard); we then choose to omit the "chicken" object from the set of anomalous objects used to create the testing set. Since objects that are only found in one room are more likely to be strongly associated with that room, this process of removing possible spurious anomalies made this "Unique Out-of-Scene" ("Unique Out") anomaly dataset a less challenging but more realistic benchmark. The disadvantage of this method is that we do end up filtering out objects that occur in multiple rooms but may very well be anomalous in a different scene type (e.g., chicken meat in the bathroom is an anomaly, but since "chicken" has been excluded from the set of possible anomalous objects, this will not be a datapoint in the test set). Details on both datasets can be found in Table II.

Following this process, we were able to generate 934,202 image and anomaly combinations for the full Out-of-Scene dataset and 577,907 image and anomaly combinations for the Unique Out-of-Scene anomaly dataset. Fig. 3 shows an example of an anomaly datapoint drawn from the Unique Out-of-Scene anomaly dataset.

Remark. It is important to note that our anomaly datasets are not adding anomalies at the image level. Directly injecting objects into images without generating artifacts is both an extremely difficult task and unnecessary for the evaluation of our technique.

When testing the models trained on the Filtered and Detector datasets, which did not include the full set of objects, any anomaly datapoints where an out-of-dataset object was used as the anomaly were removed from the dataset. Datapoints that simply contained an out-of-dataset object were left in the anomaly dataset, with said out-of-dataset objects ignored during testing.
Additionally, any scenes with fewer than five objects remaining after filtering were also removed, to ensure a minimum amount of difficulty in each anomaly datapoint.

3) Data Splitting: In many machine learning tasks, the training data and the evaluation objective are both tied to the same data modality, making it straightforward to create validation and test sets by randomly removing examples from the overall dataset. However, in our domain the training data was a knowledge graph and the evaluation data consisted of sets of objects containing anomalies. This setup prevented us from directly sampling validation and testing examples from the training dataset. Instead, we chose to separate our training, validation, and testing data at the Visual Genome image level. We split the corresponding Visual Genome images into training, validation, and testing sets using an 80/10/10 split for each scene type, and then combined the per-scene sets. This guaranteed that at least a few images of each scene type would be included in the validation and testing sets; since there are a few extremely common classes and a few uncommon classes, a purely random split could have left some classes missing from the training and validation splits. Finally, the training knowledge graph was extracted from only the separated training images.

C. Model Training and Comparisons

For training models, our experiment procedures were as follows. First, we trained KGE models on our three object-scene knowledge graphs (Full, Filtered, and Detector), varying the type of KGE model used and searching over other training hyperparameters. See the Appendix for details on the search space, selected models, and link prediction metrics. The inference hyperparameters, namely the number of scene contexts considered, m, and the weighting between scene context and object context, α, were tuned during the validation set evaluation.
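The per-scene 80/10/10 split can be sketched as below. The synthetic data and function name are illustrative; the point is that splitting within each scene type keeps every scene represented in all three sets even when class frequencies are highly unbalanced, which a purely random split would not guarantee.

```python
import random

def stratified_split(images_by_scene, seed=0):
    """Split each scene's images 80/10/10, then combine the per-scene sets."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for scene, imgs in images_by_scene.items():
        imgs = imgs[:]
        rng.shuffle(imgs)
        n = len(imgs)
        n_train, n_val = int(0.8 * n), int(0.1 * n)
        train += imgs[:n_train]
        val += imgs[n_train:n_train + n_val]
        test += imgs[n_train + n_val:]
    return train, val, test

# Synthetic, deliberately unbalanced scene counts: 100 kitchens, 10 attics.
data = {"kitchen": [f"k{i}" for i in range(100)], "attic": [f"a{i}" for i in range(10)]}
train, val, test = stratified_split(data)
print(len(train), len(val), len(test))  # 88 11 11
```

Even the rare "attic" class lands one image in each of the validation and test sets, whereas a random 10% sample of the pooled 110 images could easily miss it entirely.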
As mentioned above, there is ongoing work on solving the link prediction problem and finding graph-based anomalies [12], and we take advantage of this framework; however, given that context-dependent anomaly detection is not a widely studied problem, there are no public datasets for this domain that can be used as benchmarks. Therefore, we did not include comparisons to other techniques in this paper.

VI. EXPERIMENTAL RESULTS AND DISCUSSION

A. Anomaly Detection

When the detection criterion was relaxed (i.e., counting the true anomaly in the top three anomaly scores), we found that the detection rate jumped to 96.5% and 99.0%, respectively. Given that there were on average 20.55 objects in each anomaly scene, our model's performance was substantially greater than that of a random model.

1) Qualitative Analysis of Anomaly Detection Performance: When examining the decisions our models made, a few encouraging qualitative trends emerge. Objects that were correctly detected as anomalies in one scene were also correctly identified as non-anomalous in other inputs. Table IV shows an example of "broccoli" being correctly flagged in "driveway" and correctly not flagged in "garden." Most of the errors the model made contained anomalies that fall into two categories: abstract objects like "text" or "holes," and food-related objects like "meat" and "cheese." Fig. 4 includes the objects that most frequently escaped detection for top-5 accuracy when using the Unique Out-of-Scene dataset. Since both categories can be found in multiple scene types in the real world, their inclusion as anomalies likely stems from our limited data and automated anomaly generation procedure rather than a direct limitation of the model. Further support for this idea can be found in Table V, which shows examples of the other objects that were predicted to have higher anomaly scores than the anomalous object.
These examples were randomly selected to avoid introducing bias, and similarly show that datapoints where the model was unable to flag the anomalous object contained other seemingly more anomalous objects (e.g., boards in hallways over escalators in hallways). While the model does make some genuine mistakes (e.g., rating straps as more similar to garden than pasture), overall this indicates that our model's performance might increase on an anomaly dataset constructed under more human supervision.

2) Context Trade-off: During the tuning of α, we consistently found that our models achieved the best performance when we maximized the contribution of object context, with performance decreasing steadily as the contribution of the scene context increased. Fig. 5 shows this trend for our best performing model, which held regardless of the number of scenes m that were considered. To further explore this observation, we calculated the model's accuracy on scene prediction and found that it achieved a top-1 accuracy of 58.9% and a top-5 accuracy of 80.4%. While significantly lower than the accuracy achieved by state-of-the-art computer-vision-based models, these results indicate that there is usable information coming from the scene context, and it is possible that future work will be able to better utilize this information.

Fig. 4. The 10 anomalies missed most frequently by the best performing Full model on the Unique Out-of-Scene anomaly dataset, as a percentage of the total anomalies that went undetected. Combined, they account for about 25% of the total undetected anomalies. Note that most objects in the set are either food related or abstract.

Fig. 5. Decrease in performance as scene context is weighted more heavily relative to object context. The trend holds regardless of the number of scene contexts considered, but faster decreases in performance are seen with a higher number of contexts. The highest performance is achieved when scene context is completely ignored.
3) Performance When Trained on Detector Dataset: The best model trained on the Detector dataset achieved a top 1 accuracy of 97.0% and a top 3 accuracy of 99.7% on the more challenging Out-of-Scene anomaly dataset. While these accuracy values are higher than those of the model trained on the Full dataset, the Detector-based anomaly dataset is a less difficult benchmark, as there are only 9.31 objects per scene on average compared to the 20.55 objects per scene in the Full dataset. However, since the Detector-based set is a subset of the Full set, models trained on the Full dataset can be tested against the Detector (and Filtered) anomaly datasets to provide a baseline for comparison.

4) Performance When Trained on Filtered Dataset: The models trained on the Filtered dataset performed significantly worse than the models trained on the Full, noisy dataset, with top 1 accuracy decreasing by 21.5% and 25.4% on the corresponding Out-of-Scene and Unique Out-of-Scene test sets. The most likely explanation of this result is that the detrimental effect of the reduced amount of training data from filtering out noisy links outweighed any benefit gained from removing spurious correlations from the dataset. Interestingly, this decrease in performance was significantly larger than the decrease between the Full and Detector models. This is particularly notable since the Detector training dataset has a similar number of training links to the Filtered dataset, which implies that there may be a set of filters that reduces the amount of training data required without overly lowering performance.

VII. CONCLUSION AND FUTURE WORK

In this paper, we demonstrate the use of a KGE-based method for context-dependent anomaly detection that is scalable, efficient, and interpretable.
We successfully show that this method can identify anomalies in a single domain, household scenes, when given a relatively small amount of labeled image training data, and show that widely available object detector datasets provide enough classes to apply our method. Despite this success, there are several clear avenues along which our methods can be extended. Immediate work will include adding an object detector to tackle the challenge of building a knowledge graph directly from, or performing inference on, noisy image data, as well as incorporating online updates. We are also exploring adding new relational links and implementing a graph neural network in the pipeline to leverage richer information about the graph structure and learn more complex relationships. This, paired with the object detector, will allow us to observe performance on point and group anomalies as well. Additionally, while our methodology for developing anomaly datasets was sufficient for this work, it could be improved with additional human oversight or better filtering techniques. Context-dependent anomaly detection is still in its infancy as a research direction, but it is an interesting problem for continued exploration.

APPENDIX

Table VII shows the full space of hyperparameters and models that were tested in this work. Table VIII shows the best hyperparameter combinations for each training dataset, and Table IX shows the performance of each model on several common link prediction metrics. Note that the metrics are not comparable between datasets, as they depend on the size of the dataset.

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering. © 2022 Massachusetts Institute of Technology. Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above.

Fig. 2. Number of Visual Genome images per scene type. Note that scenes from bathroom, kitchen, and balcony far outnumber scenes from other categories.

Fig. 3. Example drawn from the Detector version of the Unique Out-of-Scene anomaly dataset. Blue represents the scene type, green the labeled objects, and red the possible anomalies for this scene type, of which two examples are shown. To form an individual anomaly datapoint, one anomaly is selected (i.e., squash) and links are hypothesized to exist between the scene, each object, and the anomaly.

TABLE I: Training knowledge graph link counts and distribution between link types for each dataset.

TABLE II: Anomaly dataset information for each dataset and anomaly filter.

          Average    Number of Datapoints
Dataset   Objects    Out        Unique Out
Full      20.55      934,202    577,907
Filtered  19.31      374,187    136,937
Detector  9.31       96,817     47,002

Table III provides an overview of our results. Our best performing model on the Full dataset correctly identified 80.7% of Out-of-Scene anomalies and 88.3% of Unique Out-of-Scene anomalies.
Relaxing the accuracy criterion to top 3 accuracy (the frequency with which the model ranked the true anomaly in the top three anomaly scores), we found that the detection rates jumped to 96.5% and 99.0%, respectively. Given that there were on average 20.55 objects in each anomaly scene, our model's performance was substantially greater than a random model.

TABLE III: Anomaly prediction performance for each model and anomaly dataset.

          Top 1 Accuracy        Top 3 Accuracy
Model     Out    Unique Out     Out    Unique Out
Full      80.7   88.3           96.5   99.0
Filtered  62.5   66.8           88.0   91.1
Detector  97.0   99.3           99.7   99.9

TABLE IV: Example of correct identification of anomaly and non-anomaly.

True Scene   Anomaly Prediction   Other Objects in Input
driveway     broccoli             window, line, road, car
garden       milk                 rosemary, jalapeno, broccoli, pile

Table VI shows the accuracy values from this comparison. There is only a 1.50% difference in top 1 accuracy between the two models on the Detector Out-of-Scene dataset (the Full model achieves 95.5% and the Detector model, shown in Table III, achieves 97.0%), and no difference in top 3 accuracy (99.7% for both the Full and Detector models). The very slight changes in performance between the Full model and the Detector model indicate that our methodology would be viable using state-of-the-art object detectors on raw image data, at least in terms of the raw number of object types available.

TABLE V: Examples of anomaly datapoints misidentified.

True Scene   True Anomaly   Higher Anomaly Score Objects
hallways     boards         mounted, elderly, cameras, escalator, terminal
office       drink          mounted, sit, thermostat, hanging, printer
garden       straps         flown, polo, wind, streamers, pasture

TABLE VI: Anomaly prediction performance of the Full model when tested on the Filtered and Detector Out-of-Scene and Unique anomaly datasets.
Notice that the top 1 accuracy shown here on the Detector-based Out-of-Scene dataset is 95.5%, compared to the Detector model's performance on the same dataset of 97% (shown in Table III).

                  Out                    Unique Out
                  Filtered   Detector    Filtered   Detector
Top 1 Accuracy    84.0       95.5        92.2       98.8
Top 3 Accuracy    96.9       99.7        99.4       99.9

TABLE VII: Model and hyperparameter search space used for training link prediction models.

Parameter Type            Values
Model                     TransE, TransR, TransD, ComplEx, Analogy
Learning Rate             5e-3, 1e-3, 5e-4
Learning Rate Schedule    None, Linear, 1Cycle
Object Embedding Size     25, 50, 75, 100, 500, 700, 800, 1000
Relation Embedding Size   25, 50, 75, 100, 150
Epochs                    10, 50, 100, 200, 300, 400, 500, 1000, 2000

TABLE VIII: Model type and hyperparameters that resulted in the best performing model for each dataset.

Parameter Type            Full     Filtered   Detector
Model                     TransD   TransR     TransD
Learning Rate             5e-3     1e-3       1e-4
Learning Rate Schedule    Linear   None       None
Object Embedding Size     75       300        400
Relation Embedding Size   75       150        100
Epochs                    500      1000       1000

TABLE IX: Best model performance on link prediction metrics for each dataset.

Metric Type           Full    Filtered   Detector
Filtered Hits @ 10    35.9    92.6       96.4
Filtered Mean Rank    154.8   4.69       3.33
Filtered MRR          0.144   0.429      0.439

ACKNOWLEDGMENTS

The authors would like to thank Dr. Zachary Serlin for his time and helpful feedback, and Drs. Rajmonda Caceres, Lori Layne, and Sung-Hyun Son for their support.

REFERENCES

[1] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026-1034.
[2] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. P. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, pp. 484-489, 2016.
[3] F. Fuchs, Y. Song, E. Kaufmann, D. Scaramuzza, and P. Dürr, "Superhuman performance in Gran Turismo Sport using deep reinforcement learning," IEEE Robotics and Automation Letters, vol. 6, pp. 4257-4264, 2021.
[4] S. F. Dodge and L. Karam, "A study and comparison of human and deep learning recognition performance under visual distortions," in 2017 26th International Conference on Computer Communication and Networks (ICCCN), 2017, pp. 1-7.
[5] P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Yasunaga, R. L. Phillips, I. Gao, et al., "WILDS: A benchmark of in-the-wild distribution shifts," in International Conference on Machine Learning. PMLR, 2021, pp. 5637-5664.
[6] J. G. Moreno-Torres, T. Raeder, R. Alaiz-Rodríguez, N. V. Chawla, and F. Herrera, "A unifying view on dataset shift in classification," Pattern Recognition, vol. 45, no. 1, pp. 521-530, Jan. 2012. [Online]. Available: https://doi.org/10.1016/j.patcog.2011.06.019
[7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," CoRR, vol. abs/1312.6199, 2014.
[8] D. Hendrycks and T. G. Dietterich, "Benchmarking neural network robustness to common corruptions and perturbations," ArXiv, vol. abs/1903.12261, 2019.
[9] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. Reid, S. Gould, and A. van den Hengel, "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[10] G. Pang, C. Shen, L. Cao, and A. van den Hengel, "Deep learning for anomaly detection," ACM Computing Surveys (CSUR), vol. 54, pp. 1-38, 2021.
[11] V. Chandola, A. Banerjee, and V.
Kumar, "Anomaly detection: A survey," ACM Comput. Surv., vol. 41, Jul. 2009.
[12] L. Akoglu, H. Tong, and D. Koutra, "Graph based anomaly detection and description: a survey," Data Mining and Knowledge Discovery, vol. 29, pp. 626-688, 2014.
[13] A. Rossi, D. Firmani, A. Matinata, P. Merialdo, and D. Barbosa, "Knowledge graph embedding for link prediction," ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 15, pp. 1-49, 2020.
[14] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko, "Translating embeddings for modeling multi-relational data," in NIPS, 2013.
[15] K. Toutanova and D. Chen, "Observed versus latent features for knowledge base and text inference," Jul. 2015.
[16] I. Bozcan and E. Kayacan, "Context-dependent anomaly detection for low altitude traffic surveillance," in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 224-230.
[17] J. Sun, H. Qu, D. Chakrabarti, and C. Faloutsos, "Neighborhood formation and anomaly detection in bipartite graphs," in Fifth IEEE International Conference on Data Mining (ICDM'05), 2005.
[18] X. Ma, J. Wu, S. Xue, J. Yang, Q. Z. Sheng, and H. Xiong, "A comprehensive survey on graph anomaly detection with deep learning," ArXiv, vol. abs/2106.07178, 2021.
[19] A. Agrawal, J. Lu, S. Antol, M. Mitchell, C. L. Zitnick, D. Parikh, and D. Batra, "VQA: Visual question answering," International Journal of Computer Vision, vol. 123, pp. 4-31, 2015.
[20] K. Yi, C. Gan, Y. Li, P. Kohli, J. Wu, A. Torralba, and J. B. Tenenbaum, "CLEVRER: Collision events for video representation and reasoning," ArXiv, vol. abs/1910.01442, 2020.
[21] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick, "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1988-1997.
[22] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. S. Bernstein, and L. Fei-Fei, "Visual Genome: Connecting language and vision using crowdsourced dense image annotations," International Journal of Computer Vision, vol. 123, pp. 32-73, 2016.
[23] T.-Y. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.
[24] W. Wu, T. Chang, and X. Li, "Visual-and-language navigation: A survey and taxonomy," ArXiv, vol. abs/2108.11544, 2021.
[25] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. de Melo, C. Gutiérrez, S. Kirrane, J. E. L. Gayo, R. Navigli, S. Neumaier, A.-C. N. Ngomo, A. Polleres, S. M. Rashid, A. Rula, L. Schmelzeisen, J. Sequeda, S. Staab, and A. Zimmermann, "Knowledge graphs," ACM Computing Surveys (CSUR), vol. 54, pp. 1-37, 2021.
[26] M. Wang, L. Qiu, and X. Wang, "A survey on knowledge graph embeddings for link prediction," Symmetry, vol. 13, no. 3, p. 485, Mar. 2021. [Online]. Available: http://dx.doi.org/10.3390/sym13030485
[27] D. Q. Nguyen, "An overview of embedding models of entities and relationships for knowledge base completion," ArXiv, vol. abs/1703.08098, 2017.
[28] A. Boschin, "TorchKGE: Knowledge graph embedding in Python and PyTorch," ArXiv, vol. abs/2009.02963, 2020.
[29] R. Speer, J. Chin, and C.
Havasi, "ConceptNet 5.5: An open multilingual graph of general knowledge," in AAAI, 2017.
[30] I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, S. Kamali, M. Malloci, J. Pont-Tuset, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy, "OpenImages: A public dataset for large-scale multi-label and multi-class image classification." Dataset available from https://storage.googleapis.com/openimages/web/index.html, 2017.
Digital Asset Valuation: A Study on Domain Names, Email Addresses, and NFTs

Kai Sun ([email protected])

arXiv:2210.10637. DOI: 10.48550/arxiv.2210.10637. PDF: https://export.arxiv.org/pdf/2210.10637v1.pdf

ABSTRACT

Existing works on valuing digital assets on the Internet typically focus on a single asset class. To promote the development of automated valuation techniques, preferably those that are generally applicable to multiple asset classes, we construct DASH, the first Digital Asset Sales History dataset that features multiple digital asset classes spanning from classical to blockchain-based ones. Consisting of 280K transactions of domain names (DASH DN), email addresses (DASH EA), and non-fungible token (NFT)-based identifiers (DASH NFT), such as Ethereum Name Service names, DASH advances the field in several aspects: the subsets DASH DN, DASH EA, and DASH NFT are the largest freely accessible domain name transaction dataset, the only publicly available email address transaction dataset, and the first NFT transaction dataset that focuses on identifiers, respectively. We build strong conventional feature-based models as the baselines for DASH. We next explore deep learning models based on fine-tuning pre-trained language models, which have not yet been explored for digital asset valuation in the previous literature. We find that the vanilla fine-tuned model already performs reasonably well, outperforming all but the best-performing baselines. We further propose improvements to make the model more aware of the time sensitivity of transactions and the popularity of assets. Experimental results show that our improved model consistently outperforms all the other models across all asset classes on DASH.
CCS CONCEPTS

• Computing methodologies → Neural networks; Machine learning approaches; Natural language processing; • Applied computing → Economics

KEYWORDS

digital asset, valuation, domain name, email address, non-fungible token, transaction, dataset, machine learning, language model
1 INTRODUCTION

Since its birth, the Internet has generated many digital assets, such as domain names, and works on their monetary appraisal date back over two decades [18]. Most research on automated digital asset valuation focuses on a single asset class [1,4,6,9,10,13,17,24,28,29]. Existing valuation methods rely heavily on expert knowledge and asset- and market-specific feature engineering, whose cost reduces the potential for broadly applying the methods. Moreover, for many digital asset classes, there are no common testbeds or even no freely accessible data for studying valuation techniques, which raises the difficulty of making direct comparisons between methods, further limiting the progress in this research area.

With the goal of advancing the development of automated valuation methods, preferably those that are broadly applicable to multiple digital asset classes, we construct DASH, a Digital Asset Sales History dataset containing transactions of multiple representative asset classes. Specifically, DASH consists of the sales history of domain names (DASH DN), email addresses (DASH EA), and non-fungible token (NFT)-based identifiers (DASH NFT), such as Ethereum Name Service (ENS) names (Section 3). Assets of the classes featured in DASH are challenging to assess due to their non-fungibility [26]. The valuation methods for these asset classes can potentially benefit from being collectively studied, as they share the inherent property of being some form of unique identifier.

To establish baseline performance on DASH, we design a set of features that apply to all the studied asset classes and build three conventional feature-based regression models (Section 4.2). We next explore approaches based on fine-tuning pre-trained language models (LMs) (Section 4.3).
We find that the vanilla fine-tuned model performs reasonably well: without leveraging any handcrafted features or explicit expert knowledge, it surpasses two out of three conventional models on the average performance over all subsets of DASH. We further propose two improvements: (i) make the model more aware of the time sensitivity of transactions using a two-stage fine-tuning approach; (ii) append to the input sequence external knowledge about the popularity of assets. Experiments demonstrate that our improved model substantially reduces the mean squared logarithmic error (MSLE) by 4.2% on average on the test set of DASH compared to the best-performing conventional model (Section 5).

Our main contributions are as follows.
• We introduce DASH, the first digital asset transaction dataset that features multiple asset classes spanning from classical to blockchain-based ones. To our knowledge, (i) DASH DN is the largest freely accessible domain name transaction dataset; (ii) DASH EA is the only publicly available email address transaction dataset; and (iii) DASH NFT is the first NFT transaction dataset that focuses on identifiers.
• We propose conventional feature-based models and deep learning models for DASH. In contrast to all previous works, we present the first study that leverages pre-trained language models for digital asset valuation and demonstrates that fine-tuning a pre-trained language model can deliver performance superior to conventional models.
• We conduct a comprehensive ablation study and a detailed error analysis of the proposed models on the DASH dataset. We also discuss variants of our models and the limitations of the work. The dataset and code will be available at https://dataset.org/dash/.

2 RELATED WORK

2.1 Datasets

Domain Names. [1,4,6,12,28]. The data released by [13] is the only publicly accessible dataset we know of.
Their released dataset consists of 1,335 domain names, which are all .com domains, along with binary labels indicating whether the price is high or low based on pre-defined thresholds. The exact sales price and date are not available. In comparison, DASH DN has orders of magnitude more sales with the exact sales price and date, covering over 500 different domain extensions. Some platforms (e.g., NameBio) provide online domain name sales search services, but they do not support exporting data in batch freely.

Blockchain-Based Assets. Recent works have released several datasets regarding NFT transactions [10,14,17]. These datasets primarily focus on the NFTs' image objects [10,17], traits assigned by the NFT creators [14], and mentions in social media [10].

2.2 Valuation Approaches

We primarily discuss works on valuing unique individual assets rather than predicting the price of fungible assets (e.g., the Bitcoin to USD price [7]) or estimating the aggregate price of asset collections (e.g., the median price of four-letter .com domain names, or the average price of Bored Ape Yacht Club (BAYC) NFTs [9]) over time. Various digital asset valuation methods, from theoretical to empirical, have been developed over the years [1,4,6,10,13-15,17,24,26,28,29]. Most methods employed in recent research are conventional feature-based machine learning models [4,10,13,17]. They show the superior performance of random forest, eXtreme Gradient Boosting (XGBoost) [2], and Adaptive Boosting (AdaBoost) [8] for the valuation of domain names or NFTs compared with other feature-based models given the same feature set. A few works on NFT valuation leverage deep learning models, but the models are mainly used as the image encoder to encode image-based NFTs [10,17].

2.3 Fine-Tuning Pre-Trained Language Models

The past few years have witnessed significant progress on various natural language understanding problems with the help of fine-tuning pre-trained high-capacity language models [5,22].
More recently, applications of language model fine-tuning to problems beyond natural language understanding have emerged, such as automated theorem proving [21] and playing chess [23]. We follow this thread and explore the application to digital asset valuation, which is underexplored in previous research.

3 DATA

3.1 Collection Methodology

3.1.1 Data Sources. We collect digital asset transaction data from a variety of data sources, summarized in the following.
• Domain names: We track the domain name auctions hosted by sedo.com (Sedo) and flippa.com (Flippa). Additionally, we track the publicly disclosed buy-it-now sales and sales by negotiation completed on Flippa.
• Email addresses: We track the email address auctions hosted by fglt.net (FGLT).
• NFTs: We track sales of NFT-based identifiers, including ENS, Unstoppable Domains, and Decentraland Names, reported by opensea.io (OpenSea), x2y2.io (X2Y2), and looksrare.org (LooksRare).

3.1.2 Filtering and Normalization. We filter out bundle sales and sales whose price is zero. For auctions, we only keep the "reserve met" and "no reserve" ones that have at least one bid. As ENS names are stored in a hashed form [30], we filter out ENS name transactions whose unhashed name is unknown. We convert the sales price to the dominant currency used in transactions of the corresponding asset class (i.e., USD, CNY, and ETH for domain names, email addresses, and NFTs, respectively) using the exchange rate at the transaction time.

3.1.3 Suspicious Transaction Detection. We notice suspicious transactions in the collected data. For example, the ENS name oneboy.eth has been traded between two Ethereum addresses over 800 times, indicating that a single agent likely controls the two addresses and that these sales are likely bogus.
To reduce the potential negative impact of such transactions, we adopt the following strategies: (i) for each transaction t of an NFT n from address u to address v, we remove t if both u and v are involved in (but not necessarily together) at least two other transactions of n; (ii) for each transaction t of an email address e, we remove t if another transaction of e happens after t within seven days. We do not apply similar strategies to domain names because Sedo and Flippa have relatively high commission fees, making it expensive to initiate suspicious transactions.

The resulting transactions of domain names, email addresses, and NFTs constitute DASH DN, DASH EA, and DASH NFT, respectively, and DASH = DASH DN ∪ DASH EA ∪ DASH NFT. Each transaction in DASH comprises an asset identifier, a transaction date, and the corresponding sales price, along with the meta information of the asset, which consists of the asset class and asset collection (if applicable). Note that the studied assets in DASH do not have name collisions so far, so an asset identifier alone can unambiguously represent an asset. The meta information offers additional information to distinguish between different assets that share the same identifier in case there are name collisions in the future.

3.2 Data Statistics

For convenience, we first formally define the name and suffix of an asset, which will be referred to in the rest of this paper: given an asset from DASH, its name refers to the substring of the identifier from the beginning to the first delimiter (exclusive), and its suffix refers to the substring after the first delimiter (exclusive). The delimiter is the at sign for DASH EA and the dot for DASH DN and DASH NFT. For example, the name and suffix of example.eth are example and eth, respectively; the name and suffix of [email protected] are email and example.com, respectively.

We summarize the statistics of DASH in Table 1. We see a considerable sales price range with a large standard deviation in every subset of DASH.
Besides, most assets in DASH have only one transaction, and there are numerous distinct suffixes. These altogether provide evidence that DASH is a very challenging dataset for digital asset valuation. We plot Figure 1 to show an overall view of transaction distributions across time. Compared with DASH_DN and DASH_EA, the transactions in DASH_NFT are less evenly distributed in terms of volume over time.

APPROACHES

4.1 Task Formulations

In this work, we enforce that transactions in the development set are newer (resp. older) compared with the training (resp. test) set. This distinguishes our setup from many previous works, where training data can be newer than evaluation data [4, 10]. Specifically, we split the data by date, with the latest 5% for testing, the next latest 5% for development, and the earliest 90% for training. We summarize the split in Table 2. As the date range in the split differs between asset classes, to prevent time leakage, we train separate models for DASH_DN, DASH_EA, and DASH_NFT.

For consistency and clarity, we now introduce several notations and give a formal definition of the task. Given a set of transactions D = {(a_1, p_1, t_1), (a_2, p_2, t_2), ...}, where a_i, p_i, and t_i represent the asset identifier, price, and time of the i-th transaction, respectively, and D ∈ {DASH_DN, DASH_EA, DASH_NFT}, we partition D into D_train, D_dev, and D_test such that for all (a, p, t) ∈ D_train, (a', p', t') ∈ D_dev, and (a'', p'', t'') ∈ D_test, we have t ≤ t' ≤ t''. The task is to learn from D_train a model that takes as input an asset identifier and outputs an estimate of the price that is as accurate as possible, measured by MSLE on D_dev and D_test.

Non-Neural Models

4.2.2 Feature-Based Regression Models. Following previous empirical studies (Section 2.2), we develop conventional feature-based models using random forest, XGBoost, and AdaBoost.
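The chronological split and the evaluation metric can be sketched as below. The function names are ours, and the exact MSLE formula is an assumption (we use the common log1p variant; the paper does not spell out whether +1 smoothing is applied).

```python
import math

def chronological_split(txns, test_frac=0.05, dev_frac=0.05):
    """txns: list of (asset, price, time) tuples.
    Newest 5% -> test, next newest 5% -> dev, earliest 90% -> train,
    mirroring the paper's split."""
    ordered = sorted(txns, key=lambda x: x[2])
    n = len(ordered)
    n_test = max(1, int(n * test_frac))
    n_dev = max(1, int(n * dev_frac))
    train = ordered[: n - n_dev - n_test]
    dev = ordered[n - n_dev - n_test : n - n_test]
    test = ordered[n - n_test :]
    return train, dev, test

def msle(y_true, y_pred):
    """Mean squared logarithmic error (log1p variant, an assumption)."""
    return sum((math.log(p + 1) - math.log(t + 1)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)
```

Splitting strictly by time (rather than randomly) is what enforces the t ≤ t' ≤ t'' ordering in the formal definition above.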
Inspired by previous studies on drop catching and squatting [16, 30], we design the following feature set reflecting the intrinsic value of identifiers, which applies to all the asset classes studied in this work.

• Length: the length of the asset name in characters.
• Suffix: the asset suffix, represented by a one-hot vector.
• Character: four binary features indicating (i) whether the asset name only contains alphabet letters, (ii) whether the asset name contains hyphens, (iii) whether all the characters in the asset name are numeric, and (iv) whether the asset name contains non-ASCII characters, respectively.
• Number of tokens: we tokenize the asset name using morphological analysis [25] and use the number of tokens as a feature.
• Vocabulary: two binary features indicating (i) whether the asset name is a word and (ii) whether the asset name is an adult word, respectively.

5.1.1 Non-Neural Models. We implement random forest and AdaBoost using [19] and XGBoost using [2]. All hyperparameters take the default values.

Neural Models. We implement vanilla mBERT and mBERT+ based on Transformers [27]. We use the multilingual uncased BERT-Base released by [5]. For vanilla mBERT, we fine-tune for three epochs. As for mBERT+, we set N to 3,000 and fine-tune for one epoch in the first fine-tuning stage and three epochs in the second fine-tuning stage. We set the learning rate and batch size to 2 × 10^-5 and 64, respectively. All unspecified hyperparameters take the default values [5].

Main Results

We report in Table 3 the performance of all models introduced in Section 4. XGBoost consistently performs the best among conventional feature-based models across all subsets of DASH. Vanilla mBERT, which does not leverage any handcrafted features or explicit expert knowledge, outperforms AdaBoost and random forest in average performance, showing the potential of language model fine-tuning for digital asset valuation.
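The surface-level features above (length, suffix one-hot, character-class flags) can be computed as in the hypothetical sketch below; the token, vocabulary, trademark, and TLD-count features need external resources (Polyglot, GloVe, USPTO filings, DNS Census) and are omitted here.

```python
def name_features(name, suffix, suffix_vocab):
    """Sketch of the handcrafted identifier features.
    suffix_vocab: ordered list of known suffixes for the one-hot encoding."""
    return {
        "length": len(name),
        "suffix_one_hot": [1 if suffix == s else 0 for s in suffix_vocab],
        # four character-class flags from the paper's feature set
        "alpha_only": int(name.isascii() and name.isalpha()),
        "has_hyphen": int("-" in name),
        "numeric_only": int(name.isdigit()),
        "has_non_ascii": int(not name.isascii()),
    }
```

In practice the dictionary would be flattened into a fixed-length numeric vector before being fed to random forest, XGBoost, or AdaBoost.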
We see a significant reduction in MSLE relative to vanilla mBERT when employing the proposed improvements. Compared with XGBoost, the improved model mBERT+ substantially reduces the MSLE by 4.2% (i.e., 1.442 vs. 1.505) on average on the test set of DASH (p-value < 0.005).

To measure the contribution of different components, we conduct ablation tests, where we remove one component from XGBoost or mBERT+ at a time. As shown in Table 4, the suffix, TLD count, and length features contribute the most to the performance of XGBoost. Every component of mBERT+ heavily impacts the overall performance. Specifically, compared with a single-stage fine-tuning on the entire (resp. the most recent portion of) training data, the two-stage fine-tuning reduces the MSLE by 0.145 (resp. 0.224) on average. Incorporating the TLD count in the input sequence contributes an average decrease of 0.172 in MSLE. Furthermore, the MSLE increases by 0.207 on average if we replace the pre-trained mBERT weights with randomly initialized weights.

Error Analysis

We perform an error analysis of XGBoost, vanilla mBERT, and mBERT+ on the development set to understand their differences and identify their limitations.

Name Length. We report the model performance with respect to different name length groups in Figure 2. We observe a clear difference in performance over different name length groups for all asset classes. Notably, all models for DASH_DN and DASH_NFT can give relatively accurate predictions when given an asset of name length four, and all models for DASH_EA and DASH_NFT perform relatively well when given an asset of name length five. XGBoost demonstrates a notable advantage over the other models in valuing email addresses longer than seven characters.

Suffix. We compare in Figure 3 the model performance for different suffixes.
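The per-group error analysis (by name length, suffix, or character set) amounts to bucketing squared log errors by a grouping key. A minimal sketch, with a hypothetical helper name `msle_by_group` and the same log1p-style MSLE assumption as before:

```python
from collections import defaultdict
import math

def msle_by_group(samples, key):
    """samples: list of (name, true_price, pred_price) tuples.
    Returns {group: MSLE over that group}, e.g. key=len groups by
    name length as in the paper's error analysis."""
    buckets = defaultdict(list)
    for name, t, p in samples:
        buckets[key(name)].append((math.log(t + 1) - math.log(p + 1)) ** 2)
    return {g: sum(v) / len(v) for g, v in buckets.items()}
```

Swapping `key=len` for a suffix- or character-set extractor reproduces the other two groupings.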
While, unsurprisingly, uncommon suffixes that fall under the "other" groups are relatively hard to assess for all asset classes, perhaps surprisingly, models perform worse than average on some common suffixes, including "net" in DASH_DN and "126.com" in DASH_EA. Although mBERT+ achieves the lowest MSLE in most groups, XGBoost considerably outperforms mBERT+ in a few groups, such as "other" in DASH_NFT.

Name Character Set. We present the model performance grouped by the character set of the asset name in Figure 4. Interestingly, the relative difficulty in valuing number-only names varies greatly across asset classes. Besides, for all models and all asset classes, the appraisal of letter-only names is more challenging compared with names that fall into the "other" groups. XGBoost once again outperforms mBERT+ in several groups.

Further Discussions

Model Ensemble. Since the error analysis in Section 5.3 indicates that XGBoost and mBERT+ are complementary in many aspects, we combine them by taking the geometric mean of their predictions. As shown in Table 5, the ensemble model achieves an MSLE reduction of 5.9% on average compared with mBERT+.

Pre-Trained Language Models. We study the impact of different pre-trained language models on the performance of neural models. We choose the following language models for comparison: English uncased BERT-Base [5] (BERT), XLM-R-Base [3] (XLM-R), and FNet-Base [11] (FNet). We denote the vanilla fine-tuned model and the improved model with the pre-trained language model replaced by M ∈ {BERT, XLM-R, FNet} as vanilla M and M+, respectively. To minimize time-leakage risk, we report the performance on DASH_NFT, whose transactions in the development and test sets all happen after the release dates of the employed language models. As shown in Table 6, our improved model consistently outperforms the corresponding vanilla model regardless of the pre-trained language model used.
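The geometric-mean ensemble is a one-liner; for two models it reduces to the square root of the product of the predictions, which is equivalent to averaging them in log-price space (a natural fit for a log-scale metric like MSLE). A sketch under assumed names:

```python
import math

def ensemble(pred_a, pred_b):
    """Combine two per-asset price predictions (e.g. XGBoost and
    mBERT+) by their element-wise geometric mean."""
    return [math.sqrt(a * b) for a, b in zip(pred_a, pred_b)]
```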
Surprisingly, the relative strength of these pre-trained models differs dramatically between natural language understanding and digital asset valuation: XLM-R does not outperform mBERT in asset valuation, though XLM-R is superior to mBERT in multilingual natural language processing [3]; similarly, BERT does not outperform FNet, though FNet sacrifices some performance for speed compared with BERT [11]. Moreover, we find that monolingual pre-trained LMs can achieve performance close to multilingual pre-trained LMs (e.g., FNet vs. mBERT).

Non-Uniform Sample Weights. For the non-uniform sample weight experiment reported in Table 7, we set the sample weights of the newest transactions to two times the weights of the other transactions in the training data. We observe no substantial difference in average performance between XGBoost with and without non-uniform sample weights.

Comparison with a Commercial Model. We compare our models with GoValue, a state-of-the-art proprietary automated domain valuation tool from the industry (Section 2.2). Because (i) GoValue only gives a valuation result when the estimated price is between 100 USD and 25,000 USD, (ii) GoValue supports valuing neither internationalized domain names (IDNs) nor third-level domain names, and (iii) GoValue does not support bulk appraisal and has a limited query quota, we use a modified setting for the experiment, specified in the following.

• We randomly sample from the test set of DASH_DN 100 transactions in which the domain name is neither an IDN nor a third-level domain name and the price is between 100 USD and 25,000 USD.
• We call GoValue and our models to predict the price of every domain name in the 100 samples. If the predicted price is below 100 (resp. above 25,000) USD, we change the prediction to 100 (resp. 25,000).

As shown in Table 8, all our conventional feature-based models and neural models significantly outperform GoValue by a large margin (all p-values < 1 × 10^-5).
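The second bullet of the modified protocol is a simple clamp of every model's prediction to GoValue's supported price range, sketched below (function name ours):

```python
def clamp_prediction(price_usd, low=100, high=25_000):
    """Clip a predicted price to GoValue's supported [100, 25000] USD
    range so all systems are scored under the same constraint."""
    return min(max(price_usd, low), high)
```

Applying the same clamp to every system keeps the comparison symmetric: no model is penalized for predictions outside the range the commercial tool can express.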
Note that although the result is highly suggestive of the superiority of our models, it may not be conclusive, because the comparison is arguably unfair to both GoValue and our models. Specifically, on the one hand, GoValue leverages unrivaled amounts of data available only to GoDaddy for model training; on the other hand, the distribution of the test samples is likely closer to that of the training set of DASH_DN than to GoValue's training data. Nevertheless, the comparison can hardly be improved due to a lack of access to both the modeling details and the training data of GoValue.

Limitations and Future Work

The data and models presented in this work focus on digital assets represented as text, without touching other modalities (e.g., images). Nevertheless, such a uniform representation makes our work a reasonably suitable starting point for studying general techniques that apply to multiple asset classes. We leave the study of valuing assets represented in other modalities for future work. We are aware of some proprietary external knowledge sources (e.g., Google Trends^8) that contain potentially useful additional knowledge for building more accurate valuation models. However, we choose not to employ them in this paper to avoid dependence on proprietary business services and leave the study of leveraging them for future research. While this choice limits the best performance our models can attain, we believe it does not influence the main contributions of this paper and helps improve the accessibility and reproducibility of this work.

CONCLUSION

We present DASH, the first digital asset sales history dataset featuring multiple asset classes, including domain names, email addresses, and NFTs. We propose several valuation models for DASH, including conventional feature-based models and deep learning models, all applicable to multiple asset classes.
We conduct comprehensive experiments to evaluate the proposed models on DASH and, for the first time, demonstrate that fine-tuning a pre-trained model can beat conventional models in digital asset valuation.

4.2.1 Value Baseline. This simple model predicts a constant value that minimizes the MSLE on the training set. More formally, the constant value is the geometric mean price p̃ defined as

p̃ = exp( (1 / |D_train|) Σ_{(a, p, t) ∈ D_train} log(p) ).

Figure 2: Performance comparison in MSLE by name length on the development set. The percentage of each length group is in parentheses.

Figure 3: Performance comparison in MSLE by suffix on the development set. The percentage of each suffix group is in parentheses.

Table 2: Data splitting (Txns: transactions).

• Trademark: a binary feature indicating whether the asset name appears in any trademark applications.
• Top-level domain (TLD) count: the number of TLDs under which the asset name is registered.

Feature Extraction. We employ Polyglot^5 for morphological analysis. For vocabulary feature extraction, we use the vocabulary of GloVe.840B.300D [20] and a self-collected adult word list^6. We look up trademark applications from April 1884 to December 2018 released by the United States Patent and Trademark Office. We leverage the DNS Census 2013^7 to obtain the TLD count.

4.3 Neural Models

4.3.1 Vanilla mBERT. We follow the framework of fine-tuning a pre-trained high-capacity language model [22] and use multilingual BERT (mBERT) [5] as the pre-trained model. Given an asset, we concatenate the name and suffix of the asset with the classification token [CLS] and separator token [SEP] in mBERT as the input sequence [CLS] name [SEP] suffix [SEP] to mBERT, with a linear layer on top of the final hidden state for [CLS] in the input sequence.

4.3.2 mBERT+. We improve vanilla mBERT in two effective yet easy-to-implement ways: (i) Since transaction data is time sensitive, we propose a two-stage fine-tuning approach to highlight the relatively new transactions during training. Specifically, in the first stage, we fine-tune the pre-trained mBERT on all transactions in D_train; we then fine-tune the resulting model again in the second stage on the newest N transactions in D_train. (ii) We propose a modification to the input sequence of vanilla mBERT to leverage external knowledge that helps approximate the popularity of the asset name but is not readily available in the pre-trained representation. Concretely, the modified input sequence is [CLS] name [SEP] suffix [SEP] c [SEP], where c is a string of digits representing the TLD count (defined in Section 4.2).

5 EXPERIMENTS AND DISCUSSIONS

5.1 Implementation Details

Table 3: Performance in MSLE.

Table 4: Ablation tests on the development set.

Method                    DASH_DN          DASH_EA          DASH_NFT         Average
                          MSLE     ∆       MSLE     ∆       MSLE     ∆       MSLE     ∆
XGBoost                   2.218    -       1.253    -       1.081    -       1.517    -
  -length                 2.244  +0.026    1.431  +0.178    1.302  +0.221    1.659  +0.142
  -suffix                 2.645  +0.427    1.834  +0.581    1.154  +0.073    1.878  +0.361
  -character              2.223  +0.005    1.424  +0.171    1.173  +0.092    1.607  +0.090
  -# of tokens            2.227  +0.009    1.280  +0.027    1.089  +0.008    1.532  +0.015
  -vocabulary             2.225  +0.007    1.293  +0.040    1.093  +0.012    1.537  +0.020
  -trademark              2.231  +0.013    1.297  +0.044    1.091  +0.010    1.540  +0.023
  -TLD count              2.548  +0.330    1.449  +0.196    1.151  +0.070    1.716  +0.199
mBERT+                    2.106    -       1.212    -       1.007    -       1.442    -
  -LM pre-training        2.203  +0.097    1.570  +0.358    1.173  +0.166    1.649  +0.207
  -1st stage fine-tuning  2.411  +0.305    1.422  +0.210    1.165  +0.158    1.666  +0.224
  -2nd stage fine-tuning  2.438  +0.332    1.215  +0.003    1.107  +0.100    1.587  +0.145
  -TLD count              2.396  +0.290    1.382  +0.170    1.064  +0.057    1.614  +0.172

Table 5: Performance of an ensemble of XGBoost and mBERT+ in MSLE (↓: decreased MSLE compared with mBERT+). All MSLE reductions are statistically significant (all p-values < 5 × 10^-4).
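As a concrete illustration of the vanilla mBERT and mBERT+ input sequences described in Section 4.3, the sketch below builds them as plain strings (in practice a tokenizer inserts the special tokens itself; the function names are ours):

```python
def mbert_input(name, suffix):
    """Vanilla mBERT input: [CLS] name [SEP] suffix [SEP]."""
    return f"[CLS] {name} [SEP] {suffix} [SEP]"

def mbert_plus_input(name, suffix, tld_count):
    """mBERT+ input: the TLD count appended as a digit string c,
    giving [CLS] name [SEP] suffix [SEP] c [SEP]."""
    return f"[CLS] {name} [SEP] {suffix} [SEP] {tld_count} [SEP]"
```

The only architectural addition on top of this sequence is a linear regression head over the final hidden state of [CLS].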
Motivated by the effectiveness of the proposed two-stage fine-tuning approach, we investigate whether we can improve the conventional model by emphasizing the relatively new transactions during training as well. We present the experimental results in Table 7, where the newest transactions receive twice the sample weight of the other transactions.

Table 6: Performance comparison of models with different pre-trained language models in MSLE on DASH_NFT.

                 Dev     Test
Vanilla mBERT    1.111   1.312
Vanilla BERT     1.193   1.331
Vanilla XLM-R    1.147   1.163
Vanilla FNet     1.176   1.084
mBERT+           1.007   1.066
BERT+            1.075   1.249
XLM-R+           1.058   1.100
FNet+            1.046   1.033

Table 7: Performance of XGBoost with non-uniform sample weights (↑/↓/=: increased/decreased/unchanged MSLE compared with XGBoost).

            Dev               Test
DASH_DN     2.199 (↓0.019)    1.931 (↓0.009)
DASH_EA     1.282 (↑0.029)    1.518 (↑0.021)
DASH_NFT    1.070 (↓0.011)    1.056 (↓0.021)
Average     1.517 (=0.000)    1.502 (↓0.003)

Table 8: Performance comparison using the modified setting (Section 5.4.4) on 100 samples from DASH_DN.

Footnotes:
5 https://github.com/aboSamoor/polyglot
6 We will release the adult word list along with the code.
7 https://archive.org/details/DNSCensus2013
8 https://trends.google.com/

REFERENCES
[1] Zsolt Bikadi, Sapumal Ahangama, and Eszter Hazai. 2017. Prediction of Domain Values: High throughput screening of domain names using Support Vector Machines. arXiv preprint cs.CY/1707.00906. https://doi.org/10.48550/ARXIV.1707.00906
[2] Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, New York, NY, USA, 785-794. https://doi.org/10.1145/2939672.2939785
[3] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL, Online, 8440-8451. https://doi.org/10.18653/v1/2020.acl-main.747
[4] Emrullah Delibaş. 2019. Domain Name Valuation: Characteristics & Price Exposed! Master's thesis. Istanbul Şehir University.
[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, Volume 1 (Long and Short Papers). ACL, Minneapolis, Minnesota, 4171-4186. https://doi.org/10.18653/v1/N19-1423
[6] Sebastian Dieterle and Ralph Bergmann. 2014. A Hybrid CBR-ANN Approach to the Appraisal of Internet Domain Names. In Case-Based Reasoning Research and Development, Luc Lamontagne and Enric Plaza (Eds.). Springer International Publishing, Cham, 95-109. https://doi.org/10.1007/978-3-319-11209-1_8
[7] Aniruddha Dutta, Saket Kumar, and Meheli Basu. 2020. A Gated Recurrent Unit Approach to Bitcoin Price Prediction. Journal of Risk and Financial Management 13, 2 (2020). https://doi.org/10.3390/jrfm13020023
[8] Yoav Freund and Robert E. Schapire. 1995. A desicion-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, Paul Vitányi (Ed.). Springer, Berlin, Heidelberg, 23-37. https://doi.org/10.1007/3-540-59119-2_166
[9] Shrey Jain, Camille Bruckmann, and Chase McDougall. 2022. NFT Appraisal Prediction: Utilizing Search Trends, Public Market Data, Linear Regression and Recurrent Neural Networks. arXiv preprint q-fin.ST/2204.12932. https://doi.org/10.48550/ARXIV.2204.12932
[10] Arnav Kapoor, Dipanwita Guhathakurta, Mehul Mathur, Rupanshu Yadav, Manish Gupta, and Ponnurangam Kumaraguru. 2022. TweetBoost: Influence of Social Media on NFT Valuation. In Companion Proceedings of the Web Conference 2022 (WWW '22). ACM, New York, NY, USA, 621-629. https://doi.org/10.1145/3487553.3524642
[11] James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2022. FNet: Mixing Tokens with Fourier Transforms. In Proceedings of NAACL-HLT 2022. ACL, Seattle, United States, 4296-4313. https://doi.org/10.18653/v1/2022.naacl-main.319
[12] Thies Lindenthal. 2014. Valuable words: The price dynamics of internet domain names. Journal of the Association for Information Science and Technology 65, 5 (2014), 869-881. https://doi.org/10.1002/asi.23012
[13] Jian Liu, Xiangdong Zeng, Adam Ghandar, and Georgios Theodoropoulos. 2019. Data Driven Domain Appraisal: Extracting Information from Short Dense Texts. In 2019 IEEE Symposium Series on Computational Intelligence (SSCI). 2489-2496. https://doi.org/10.1109/SSCI44817.2019.9002904
[14] Amin Mekacher, Alberto Bracci, Matthieu Nadini, Mauro Martino, Laura Alessandretti, Luca Maria Aiello, and Andrea Baronchelli. 2022. Heterogeneous rarity patterns drive price dynamics in NFT collections. Scientific Reports 12, 1 (2022), 13890. https://doi.org/10.1038/s41598-022-17922-5
[15] Aron Meystedt. 2015. What is my URL worth? Placing a value on premium domain names. Valuation Strategies 19, 2 (2015), 10-17, 48.
[16] Najmeh Miramirkhani, Timothy Barron, Michael Ferdman, and Nick Nikiforakis. 2018. Panning for Gold.Com: Understanding the Dynamics of Domain Dropcatching. In Proceedings of the 2018 World Wide Web Conference (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 257-266. https://doi.org/10.1145/3178876.3186092
[17] Matthieu Nadini, Laura Alessandretti, Flavio Di Giacinto, Mauro Martino, Luca Maria Aiello, and Andrea Baronchelli. 2021. Mapping the NFT revolution: market trends, trade networks, and visual features. Scientific Reports 11, 1 (2021), 20902. https://doi.org/10.1038/s41598-021-00053-8
[18] Xuan-Thao Nguyen. 2001. Cyberproperty and Judicial Dissonance: The Trouble with Domain Name Classification. George Mason Law Review 10, 2 (2001), 183-214.
[19] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, 85 (2011), 2825-2830.
[20] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). ACL, Doha, Qatar, 1532-1543. https://doi.org/10.3115/v1/D14-1162
[21] Stanislas Polu and Ilya Sutskever. 2020. Generative Language Modeling for Automated Theorem Proving. arXiv preprint cs.LG/2009.03393. https://doi.org/10.48550/ARXIV.2009.03393
[22] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Technical Report.
[23] Andreas Stöckl. 2021. Watching a Language Model Learning Chess. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021). INCOMA Ltd., Held Online, 1369-1379.
[24] Jih Hsin Tang, Min Chu Hsu, Ting Yuan Hu, and Hsing-Hua Huang. 2014. A general domain name appraisal model. Journal of Internet Technology 15, 3 (2014), 427-431. https://doi.org/10.6138/JIT.2014.15.3.11
[25] Sami Virpioja, Peter Smit, Stig-Arne Grönroos, and Mikko Kurimo. 2013. Morfessor 2.0: Python Implementation and Extensions for Morfessor Baseline. Technical Report. 38 pages.
[26] Roberto Moro Visconti. 2020. The Valuation of Digital Intangibles: Technology, Marketing and Internet. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-36918-7
[27] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. ACL, Online, 38-45. https://doi.org/10.18653/v1/2020.emnlp-demos.6
[28] Zu-guang Wu and Hai-yi He. 2009. Domain Name Valuation Model Constructing and Emperical Evidence. In 2009 International Conference on Multimedia Information Networking and Security, Vol. 2. 201-204. https://doi.org/10.1109/MINES.2009.153
[29] Zu-guang Wu, Guo-hua Zhu, Rui Huang, and Bin Xia. 2009. Domain Name Valuation Model Based on Semantic Theory and Content Analysis. In 2009 Asia-Pacific Conference on Information Processing, Vol. 2. 237-240. https://doi.org/10.1109/APCIP.2009.194
[30] Pengcheng Xia, Haoyu Wang, Zhou Yu, Xinyu Liu, Xiapu Luo, and Guoai Xu. 2021. Ethereum Name Service: the Good, the Bad, and the Ugly. arXiv preprint cs.CR/2104.05185. https://doi.org/10.48550/ARXIV.2104.05185
[ "https://github.com/aboSamoor/polyglot6" ]
On Vanishing Near Corners of Transmission Eigenfunctions

Eemeli Blåsten and Hongyu Liu

arXiv:1701.07957 · doi:10.1016/j.jfa.2017.08.023

Abstract. Let Ω be a bounded domain in R^n, n ≥ 2, and V ∈ L^∞(Ω) be a potential function. Consider the following transmission eigenvalue problem for nontrivial v, w ∈ L^2(Ω) and k ∈ R_+,

    (∆ + k^2)v = 0 in Ω,
    (∆ + k^2(1 + V))w = 0 in Ω,
    w − v ∈ H^2_0(Ω),  ‖v‖_{L^2(Ω)} = 1.

We show that the transmission eigenfunctions v and w carry the geometric information of supp(V). Indeed, it is proved that v and w vanish near a corner point on ∂Ω in a generic situation where the corner possesses an interior angle less than π and the potential function V does not vanish at the corner point. This is the first quantitative result concerning the intrinsic properties of transmission eigenfunctions and enriches the classical spectral theory for the Dirichlet/Neumann Laplacian. We also discuss its implications for inverse scattering theory and invisibility.

Mathematics Subject Classification (2010): 35P25, 58J50, 35R30, 81V80

* Also called the relative scattering operator. The unitary scattering operator is the identity plus the former.
24 Oct 2017

Keywords: spectral; interior transmission eigenfunction; corner; vanishing and localizing; non-scattering

Introduction

Let Ω be a bounded domain in R^n, n ≥ 2, and V ∈ L^∞(Ω) be a potential function. Consider the following (interior) transmission eigenvalue problem for v, w ∈ L^2(Ω),

    (∆ + k^2)v = 0 in Ω,
    (∆ + k^2(1 + V))w = 0 in Ω,        (1.1)
    w − v ∈ H^2_0(Ω),  ‖v‖_{L^2(Ω)} = 1.

If the system (1.1) admits a pair of nontrivial solutions (v, w), then k is referred to as an (interior) transmission eigenvalue and (v, w) is the corresponding pair of (interior) transmission eigenfunctions. Note in particular that nothing is imposed a priori on the boundary values of v or w individually. In this paper, we are mainly interested in the real eigenvalues, k ∈ R_+, which are physically relevant.
The study of the transmission eigenvalue problem has a long history and is of significant importance in scattering theory. The transmission eigenvalue problem is non-elliptic and non-self-adjoint, so its study is mathematically interesting and challenging. In the literature, the existing results are mainly concerned with the spectral properties of the transmission eigenvalues, including their existence, discreteness and infiniteness, and Weyl laws; see for example [4,7,11,24,29,31,32] and the recent survey [8]. There are few results concerning the intrinsic properties of the transmission eigenfunctions; we are only aware of the proof in [4,31] that the set of generalized transmission eigenfunctions is complete in L^2. In this paper, we are concerned with the vanishing properties of interior transmission eigenfunctions. It is shown that in admissible geometric situations, transmission eigenfunctions which can be approximated suitably by Herglotz waves will vanish at corners of the support of the potential V. To the best of our knowledge, this is the first quantitative result on intrinsic properties of transmission eigenfunctions. As expected, these carry geometric information about the support of the underlying potential V, with other interesting consequences and implications in scattering theory, which we shall discuss in more detail in Section 7.

The location of vanishing of eigenfunctions is an important area of study in the classical spectral theory for the Dirichlet/Neumann Laplacian. Two important topics are nodal sets and eigenfunction localization. The former is the set of points in the domain where the eigenfunction vanishes. For the latter, an eigenfunction is said to be localized if most of its L^2-energy is contained in a subdomain which is a fraction of the total domain. Considerable effort has been spent on nodal sets and localization in the classical spectral theory; we refer to the recent survey [19].
For the curious, we briefly mention basic facts about them, all of which are completely open for transmission eigenfunctions. Nodal sets are C^∞-curves whose intersections form equal angles. By the celebrated Courant nodal line theorem, the nodal set of the m-th eigenfunction divides the domain into at most m nodal domains. Localization seems to be a more recent topic, even though some examples have been known for a long time. One such example is the whispering gallery modes, which come from Lord Rayleigh's study of whispering waves in St Paul's Cathedral in London during the late 19th century. These eigenfunctions concentrate their energy near the boundary of a spherical or elliptical domain. Other well-known localized modes are called bouncing ball modes and focusing modes [9,25]. It is worth noting that the Laplacian does not possess localized eigenfunctions on rectangular or equilateral triangular domains [28]. However, localization does appear for the classical eigenvalue problem in a certain sense when the angle is reflex [27]. We also refer to [22] for more relevant examples.

In the case of the transmission eigenvalue problem, peculiar and intriguing phenomena are observed, in that both vanishing and localization of transmission eigenfunctions may occur near corners of the support of the potential. Indeed, in an upcoming numerical paper [3], we show that if the interior angle of a corner is less than π, then the transmission eigenfunctions vanish near the corner, whereas if the interior angle is bigger than π, then the transmission eigenfunctions localize near the corner. In this paper, we rigorously justify the vanishing property of the transmission eigenfunctions in a certain generic situation. This turns out to be a highly technical matter.
In fact, even in the classical spectral theory, the intrinsic properties of the eigenfunctions are much more difficult to study than those of the eigenvalues, and they remain a fascinating topic for a lot of ongoing research. Nevertheless, we would also like to mention that with the help of highly accurate computational methods, we can present a more detailed numerical investigation in [3] including the vanishing/localizing order as well as its relationship to the angle of the corner. We believe that the vanishing and localizing properties of transmission eigenfunctions are closely related to the analytic continuation of the eigenfunctions. Indeed in the recent papers [5,14,15,20,30], it is shown that transmission eigenfunctions cannot be extended analytically to a neighbourhood of a corner. The failure of the analytic continuation of transmission eigenfunctions can be used via an absurdity argument in [20] to show the uniqueness in determining the polyhedral support of an inhomogeneous medium by a single far-field pattern in the inverse scattering theory. By further quantifying the aforementioned analytic continuation property of transmission eigenfunctions, sharp stability estimates were established in [1] in determining the polyhedral support of an inhomogeneous medium by a single far-field pattern. Those uniqueness and stability results already indicate that the intrinsic properties of transmission eigenfunctions carry geometric information of the underlying potential function V . Furthermore in [1], as an interesting consequence of the quantitative estimates involved, a sharp lower bound can be derived for the far-field patterns of the waves scattered from polyhedral potentials associated with incident plane waves. In this paper, we can significantly extend this result by establishing a similar quantitative lower bound associated with incident Herglotz waves. 
On the other hand, it is known [6] that the scattered waves created by incident waves that are Herglotz approximations to transmission eigenfunctions will have an arbitrarily small far-field energy. This critical observation apparently indicates that the transmission eigenfunctions must vanish near the corner point. We shall give more relevant discussion of our results in Section 7, connecting our study to inverse scattering problems and invisibility cloaking. The rest of the paper is organized as follows. We will recall scattering theory and define notation in Section 2. All of the background and admissibility assumptions are contained therein. We state our main results mathematically in Section 3, and then proceed to prove them in Section 5 and Section 6 using results from Section 4. Preliminaries In this section we recall background theory, lay some definitions and fix notation. We will start by describing acoustic scattering theory for penetrable scatterers. This will be referred to as "background assumptions" in theorems. After that we recall what is the interior transmission problem and some of its known facts. Finally we define which potentials are admissible for our theorems. 2.1. Background assumptions. Whenever we say that "let the background assumptions hold" we mean that everything in this section should hold, unless stated otherwise. We will recall the fundamentals of acoustic scattering theory. For more details in the three dimensional case we refer the readers to [12]. We will consider only scatterers of finite diameter that are contained in a large origin-centered ball, the domain of interest, B R = B(0, R) = {x ∈ R n | |x| < R} where R > 1 is fixed. Let V ∈ L ∞ (B R ) be a bounded potential function representing the medium parameter of the scatterer. We shall consider scattering of a fixed frequency by fixing the wavenumber k ∈ R + . The scatterer V is illuminated by an incident wave, which in this paper is chosen to be any Herglotz wave. 
These are superpositions of plane waves that can be written as

    u^i(x) = ∫_{S^{n−1}} e^{ikθ·x} g(θ) dσ(θ),        (2.1)

where the kernel g ∈ L^2(S^{n−1}). We say that u^i is normalized if ‖g‖_{L^2(S^{n−1})} = 1. The field u^i is called incident because it satisfies the equation (∆ + k^2)u^i = 0, which corresponds to a background unperturbed by the presence of V. Unless V is transparent to u^i, the illumination of V by u^i creates a unique scattered wave u^s ∈ H^2_loc(R^n) such that

    (∆ + k^2(1 + V))u = 0 in R^n,
    u = u^i + u^s,                                      (2.2)
    lim_{r→∞} r^{(n−1)/2} (∂_r u^s − ik u^s) = 0.

Here u is the total field which, as a superposition of the incident and scattered fields, represents the physically observable field. The third condition, where r = |x|, says that u^s satisfies the Sommerfeld radiation condition, which can be interpreted as having u^s propagate from V to infinity instead of the other way around.

A property of the scattered field is that as one zooms out, the potential V starts to look more and more like a point source in a certain sense. This means that far away, u^s looks like the Green's function of ∆ + k^2, modulated by a far-field pattern u^s_∞. More precisely, as |x| → ∞, u has the expansion

    u(x) = u^i(x) + e^{ik|x|} / |x|^{(n−1)/2} · u^s_∞(x/|x|; u^i) + O(1/|x|^{n/2}),

where for a fixed u^i the far-field pattern is a real-analytic map u^s_∞ : S^{n−1} → C (it is also called the scattering amplitude).

2.2. The interior transmission problem. Direct scattering theory is all about the study of the map (u^i, V) ↦ u^s_∞. Given a potential V, the far-field operator* maps the Herglotz kernel g of u^i to the far-field pattern u^s_∞. In inverse scattering one is interested in recovering meaningful information about the scatterer V from full or partial information of the far-field operator. A number of algorithms in inverse scattering, such as the linear sampling [10] and factorization [21] methods, fail at wavenumbers where the far-field operator has a non-trivial kernel.
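As a concrete illustration of (2.1), the Herglotz superposition can be evaluated numerically. The following sketch is ours, not part of the paper: the trapezoidal quadrature, the grid parameters and the sample kernel g are all illustrative choices. It evaluates a 2D Herglotz wave and then checks by finite differences that the superposition indeed satisfies (∆ + k^2)u^i = 0.

```python
import cmath
import math

def herglotz_wave(x, y, k, g, n_quad=400):
    """Evaluate the 2D Herglotz wave of (2.1),
        u_i(x) = \int_{S^1} e^{ik theta.x} g(theta) dsigma(theta),
    by the trapezoidal rule in the angle phi parameterising
    theta = (cos phi, sin phi); the rule is spectrally accurate
    for smooth periodic integrands."""
    h = 2.0 * math.pi / n_quad
    total = 0.0 + 0.0j
    for j in range(n_quad):
        phi = j * h
        dot = math.cos(phi) * x + math.sin(phi) * y   # theta . x
        total += cmath.exp(1j * k * dot) * g(phi)
    return total * h

def helmholtz_residual(x, y, k, g, eps=1e-3):
    """Five-point finite-difference check that (Delta + k^2) u_i ~ 0,
    i.e. that the superposition of plane waves is an incident field."""
    u = lambda a, b: herglotz_wave(a, b, k, g)
    lap = (u(x + eps, y) + u(x - eps, y) + u(x, y + eps) + u(x, y - eps)
           - 4.0 * u(x, y)) / eps ** 2
    return abs(lap + k ** 2 * u(x, y))
```

With the constant kernel g ≡ 1 one recovers u^i(0) = ∫_{S^1} dσ = 2π, and the Helmholtz residual stays at the level of the finite-difference truncation error.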
In such a case there is an incident wave u^i for which V does not cause a detectable change in the far-field, and thus by Rellich's lemma and unique continuation u^i does not scatter at all: supp u^s ⊂ Ω. If this happens we call k a non-scattering energy (or wavenumber) and say that V is transparent to u^i, or that u^i is non-scattering. It is known that there are radially symmetric potentials which are transparent to certain incident waves [16].

If u^i is non-scattering and we restrict it to the supporting set Ω, then the following interior transmission problem has a non-trivial solution (v, w) ∈ L^2(Ω) × L^2(Ω),

    (∆ + k^2)v = 0 in Ω,              (2.3)
    (∆ + k^2(1 + V))w = 0 in Ω,       (2.4)
    w − v ∈ H^2_0(Ω),                 (2.5)

namely v = u^i|_Ω and w = u|_Ω. When this non-elliptic, non-self-adjoint eigenvalue problem has a solution, we call k a transmission eigenvalue. The functions v and w are referred to as the transmission eigenfunctions. If V is radially symmetric, then v in (2.3) extends to the whole of R^n as a Herglotz function, and hence in this case transmission eigenvalues and non-scattering energies coincide [13]. This observation hinted for a long time that these sets of numbers coincide in general. However, it was a red herring: a series of papers on corner scattering [5,14,15,30] showed that in the presence of a certain type of corner or edge singularity in the potential V there are no non-scattering energies, despite the well-known fact that such a scatterer always has an infinite discrete set of transmission eigenvalues. We remark that the problem (2.3)-(2.5) has been studied heavily [8]. Many properties of the transmission eigenvalues are known; despite this, almost nothing was known about the eigenfunctions themselves before this paper.

2.3. Herglotz approximation. We introduced the Herglotz wave function in (2.1), which shall be used to approximate the transmission eigenfunction v satisfying (2.3).
We briefly recall the following result concerning the Herglotz approximation for subsequent use.

Theorem 2.1 (Theorem 2 in [36]). Let W_k denote the space of all Herglotz wave functions of the form (2.1). For Ω ⊂ R^n a C^0-domain, define

    U_k(Ω) := {u ∈ C^∞(Ω); (∆ + k^2)u = 0}  and  W_k(Ω) := {u|_Ω ; u ∈ W_k}.

Then W_k(Ω) is dense in U_k(Ω) ∩ L^2(Ω) with respect to the topology induced by the L^2-norm.

2.4. Admissible potentials. As part of our proof of the vanishing of transmission eigenfunctions at corners we will show lower bounds for the far-field pattern u^s_∞. That is, we shall consider the scattering from a corner and make use of the corner singularity in the potential. To save notational burden we collect these a-priori assumptions in this section. We shall only consider polygonal or hypercuboidal scatterers V for simplicity. In essence V will be defined as a Hölder-continuous function ϕ restricted to a polygonal domain Ω; see below. As the arguments are local, the results will hold qualitatively for any potential V for which V|_U = χ_Ω ϕ|_U for some open set U and such that there is a reasonable path from U to infinity.

Definition 2.2. Recalling the notation B_R from Section 2.1, we say that the potential V is (qualitatively) admissible if
(1) V = χ_Ω ϕ, where χ_Ω(x) = 1 if x ∈ Ω and χ_Ω(x) = 0 otherwise;
(2) Ω ⊂ B_R is an open convex polygon in 2D or a cuboid in higher dimensions;
(3) ϕ ∈ C^α(R^n) for some α > 0 in 2D and α > 1/4 in higher dimensions;
(4) ϕ ≠ 0 at some vertex of Ω.

    N = max{M ∈ Z | ∃ C < ∞ : |f(x)| ≤ C|x − x_c|^M near x_c}.

If the set is unbounded from above we say that f has order ∞. If the set is empty, f has order −∞.

for any normalized incident Herglotz wave u^i which is of order N ≤ N̄ at x_c and whose Taylor expansion there begins with P_N. Here ‖P_N‖ = ∫_{S^{n−1}} |P_N(θ)| dσ(θ).

Theorem 3.2. Let n ∈ {2, 3} and V be a qualitatively admissible potential.
Assume that k > 0 is a transmission eigenvalue: there exist v, w ∈ L^2(Ω) such that

    (∆ + k^2)v = 0 in Ω,
    (∆ + k^2(1 + V))w = 0 in Ω,
    w − v ∈ H^2_0(Ω),  ‖v‖_{L^2(Ω)} = 1.

If v can be approximated in the L^2(Ω)-norm by a sequence of Herglotz waves with uniformly L^2(S^{n−1})-bounded kernels, then

    lim_{r→0} (1 / m(B(x_c, r))) ∫_{B(x_c,r)} |v(x)| dx = 0,

where x_c is any vertex of Ω such that ϕ(x_c) ≠ 0.

Remark 3.3. A sequence of Herglotz waves v_j with uniformly bounded kernels has uniformly bounded L^2-norms on any fixed bounded set. However, the converse is not true, as one sees by inspecting a sequence of spherical harmonics g_j = Y^0_j. In other words, the condition we have here is rather technical. See Section 7 for more relevant discussion.

4. Auxiliary results

In this section, we collect three auxiliary propositions that follow without too much effort from our previous results in [1] concerning corner scattering. We add a proposition showing that in the presence of transmission eigenfunctions, incident waves creating arbitrarily small far-field patterns can be generated. Finally, another proposition gives a lower bound for the Laplace transform of a harmonic polynomial. The latter is necessary for quantitative estimates involving incident Herglotz waves in corner scattering. In comparison, we note that the paper [1] is mainly concerned with corner scattering associated with incident plane waves. This is a less general version of Proposition 5.10 in our previous paper. We will also need a "converse" result estimating the far-field pattern by the near-field. In more detail, we will build incident waves with arbitrarily small far-field patterns in the presence of a transmission eigenfunction (cf. [6]): there is C = C(V, k) < ∞ such that if v_j ∈ L^2_loc is an incident wave with ‖v − v_j‖_{L^2(Ω)} < ε, then the produced far-field pattern satisfies ‖v^s_{j∞}‖_{L^2(S^{n−1})} < Cε.

Proof.
Let v i 0 be the zero-extension of v to the whole R n , and let v s 0 be the radiating solution to (∆ + k 2 (1 + V ))v s 0 = −k 2 V v i 0 . Also let ν s 0 be the zero-extension of w − v ∈ H 2 0 (Ω) to R n . By standard scattering theory (e.g. Chapter 8 in [12]) we see that v s 0 = ν s 0 since (∆ + k 2 (1 + V ))v s 0 = −k 2 V v i 0 = −k 2 V v = (∆ + k 2 (1 + V ))ν s 0 in R n and both satisfy the Sommerfeld radiation condition trivially. Hence the far-field pattern of v s 0 is zero. Since v j approximates v in L 2 (Ω), and V is supported on Ω, we have −k 2 V v j approximating −k 2 V v i 0 in R n . Let v s j be the scattered wave arising from the incident wave v j and potential V . Then, again from standard scattering theory, its far-field pattern approximates the farfield pattern of v s 0 , i.e. zero. The operators involved are all bounded, so v s j∞ L 2 (S n−1 ) < C V,k ε. We also recall the existence of complex geometrical optics solutions. Proposition 4.3. Let n ∈ {2, 3}, k > 0 and let V be a qualitatively admissible potential. Then there is p = p(V, n) ≥ 2 and c = c(V, R, k, n) < ∞ with the following properties: if ρ ∈ C n satisfies ρ · ρ + k 2 = 0 and |ℑρ| ≥ c (n+1)/2 then there is ψ ∈ L p (R n ) such that u 0 (x) = e ρ·x (1 + ψ(x)) solves (∆ + k 2 (1 + V ))u 0 = 0 in R n , and ψ L p (R n ) ≤ c |ℑρ| −n/p−β for some β = β(V, n) > 0. In addition there is the norm estimate ψ H 2 (B 2R ) ≤ c |ρ| 2 . Proposition 4.3 specializes Proposition 7.6 from [1]. Also, mainly by Corollary 6.2 from that same paper, together with the use of Taylor's theorem on the real-analytic incident wave u i , we can show Proposition 4.4. Let n ∈ {2, 3} and let the background assumptions hold with u i a normalized Herglotz wave. Let V = χ Ω ϕ be a qualitatively admissible potential. Choose coordinates such that the origin is a vertex of Ω where ϕ = 0. Let N ∈ N be such that ∂ γ u i (0) = 0 for |γ| < N and set P N (x) = |γ|=N ∂ γ u i (0) γ! x γ . 
Let ρ ∈ C^n be such that it satisfies the assumptions of Proposition 4.3, |ℜρ| ≥ max(1, k) and ℜρ·x ≤ −δ_0|x||ℜρ| for some δ_0 > 0 and any x ∈ Ω. Then

    c |∫_C e^{ρ·x} P_N(x) dx| ≤ |ℜρ|^{−N−n−min(1,α,β)} + |ℜρ|^3 sup_{∂(C∩B(0,h))} {|u^s|, |∇u^s|}.        (4.3)

Next is the turn of a lower bound for the Laplace transform of homogeneous harmonic polynomials of arbitrary degree. The proof is a compactness argument based on the non-vanishing proofs from [5] and [30]. We recall that the norm for homogeneous polynomials is

    ‖P‖ = ∫_{S^{n−1}} |P(θ)| dσ(θ),

and that the space of homogeneous harmonic polynomials of degree N is

    P_N = {P : C^n → C | ∆P ≡ 0, P(x) = Σ_{|γ|=N} c_γ x^γ}.

Let the angle of C be at most 2α_m < π and let α_m + α_d < π/2. Then there is τ_0 > 0 and c > 0, both depending only on C, N, n, α_m + α_d, with the following properties: if P ∈ P_N then there is a curve τ ↦ ρ(τ) ∈ C^n satisfying ρ(τ)·ρ(τ) + k^2 = 0, τ = |ℜρ(τ)|, ℜρ(τ)·x ≤ −cos(α_m + α_d)|ℜρ(τ)||x| for all x ∈ C, and such that if τ ≥ τ_0 then

    |∫_C e^{ρ(τ)·x} P(x) dx| ≥ c ‖P‖ / |ℜρ(τ)|^{N+n}.        (4.4)

Proof. We identify P_N with a subset of C^m, where m = #{γ ∈ N^n | |γ| = N} = (N+n−1)!/(N!(n−1)!), by mapping P ∈ P_N to the point corresponding to its coefficients listed in some fixed order (e.g. by the lexical order of the multi-indices γ). This induces a topology on P_N which makes it a complete metric space. The space P_N ∩ {‖P‖ = 1} is compact.

We first consider the easier case of a complex vector satisfying ζ·ζ = 0 instead of ρ·ρ + k^2 = 0. Write δ_0 = cos(α_m + α_d) and set

    R_{C,δ_0} = {ζ ∈ C^n | ζ·ζ = 0, |ℜζ| = 1, ℜζ·x ≤ −δ_0|ℜζ||x| ∀x ∈ C}.

Also, write L P(ζ) = ∫_C exp(ζ·x) P(x) dx for P ∈ P_N and ζ ∈ R_{C,δ_0}. We claim first that

    sup_{ζ ∈ R_{C,δ_0}} |L P(ζ)| ≥ c ‖P‖ for all P ∈ P_N        (4.5)

for some constant c = c(N, C, δ_0) > 0. By dividing P by ‖P‖ and using the linearity of L we may assume that ‖P‖ = 1. If (4.5) did not hold, then for any j ∈ N there is P_j ∈ P_N with ‖P_j‖ = 1 such that |L P_j(ζ)| < j^{−1} for any ζ ∈ R_{C,δ_0}.
Since P N ∩ { P = 1} is compact there is P ∞ ∈ P N , P ∞ = 1 and a subsequence P j ℓ → P ∞ . Let ζ ∈ R C,δ 0 . It is easily seen that |L( P j ℓ − P ∞ )(ζ)| ≤ (N + n − 1)!δ 1−N −n 0 P j ℓ − P ∞ → 0 as ℓ → ∞. Hence |LP ∞ (ζ)| = 0 for any complex vector ζ ∈ R C,δ 0 , but this contradicts the Laplace transform lower bounds from [5] and [30]. Thus the lower bound (4.5) holds, but for vectors satisfying ζ · ζ = 0. Let us build ρ(τ ) by using a ζ from the previous paragraph. Let P ∈ P N be arbitrary and take ζ ∈ R C,δ 0 such that |LP (ζ)| ≥ c P /2. For τ > 0 set ρ(τ ) = τ ℜζ + i √ τ 2 + k 2 ℑζ. Then ρ(τ )/τ → ζ as τ → ∞ and moreover ρ(τ ) · ρ(τ ) + k 2 = 0, and ℜρ(τ ) · x ≤ −δ 0 |ℜρ(τ )| |x| for x ∈ C. When τ is large enough we will have |LP (ρ(τ )/τ )| ≥ c P /4. The proof is as follows: set f (r) = exp((ℜζ + irℑζ) · x). Then f (1) = exp(ζ · x) and f 1 + k 2 /τ 2 = exp(ρ(τ ) · x/τ ). By the mean value theorem f (1) − f 1 + k 2 /τ 2 ≤ sup 1<r< √ 1+k 2 /τ 2 |f ′ (r)| 1 + k 2 /τ 2 − 1 . But note that 1 + k 2 /τ 2 − 1 = τ −1 k 2 / τ + √ τ 2 + k 2 ≤ k/τ . Also f ′ (r) = iℑζ · xf (r) and since |ℜζ| = |ℑζ| = 1 we get |f ′ (r)| ≤ |x| exp(−δ 0 |x|). In other words f (1) − f 1 + k 2 /τ 2 ≤ k τ |x| e −δ 0 |x| . Finally we see the claim: LP (ζ) − LP ρ(τ ) τ = C f (1) − f 1 + k 2 /τ 2 P (x)dx ≤ k τ C e −δ 0 |x| |x| |P (x)| dx = P k τ ∞ 0 e −δ 0 r r 1+N +n−1 dr = (N + n)!δ −N −n 0 kτ −1 P , and so |LP (ρ(τ )/τ )| > c P /4 if τ > 4(N + n)!δ −N −n 0 k/c. A change of variables gives then LP (ρ(τ )/τ ) = τ N +n LP (ρ(τ )) and so the proposition is proven. Bound for far-field pattern with incident Herglotz wave Proof of Theorem 3.1. Let S = S(V, k) be such that u s H 2 (B 2R ) ≤ S whenever the incident wave is a normalized Herglotz wave. Let u i be a normalized incident wave and u s the corresponding scattered wave. Let u i be of order N ∈ N at the vertex x c , which we may take as being the origin, and on which ϕ = 0. Moreover let P N be its N-th degree homogeneous Taylor polynomial at0. 
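As an aside, the change-of-variables scaling L P(ρ(τ)) = τ^{−(N+n)} L P(ρ(τ)/τ) that closes the proof of Proposition 4.5 can be checked numerically in a toy case. Everything concrete in the following sketch is our own illustrative assumption: the quadrant cone C = (0,∞)², the harmonic polynomial P(x, y) = x (N = 1, n = 2), and a purely real ρ (dropping the k-dependent imaginary part of the curve ρ(τ)).

```python
import math

def cone_laplace(tau, n_grid=400):
    """Approximate L P(rho) = \int_C e^{rho.x} P(x) dx for the quadrant
    cone C = (0, inf)^2, the harmonic polynomial P(x, y) = x and the
    real vector rho = -(tau/sqrt 2)(1, 1), which satisfies
    Re(rho).x <= -cos(pi/4) |Re(rho)| |x| on C.
    For these choices the integrand is separable, so the double
    integral factors into two 1D midpoint-rule sums."""
    a = tau / math.sqrt(2.0)
    L = 40.0 / tau                # truncation radius: the tail carries e^{-aL} ~ 0
    h = L / n_grid
    xs = [(i + 0.5) * h for i in range(n_grid)]
    sx = h * sum(x * math.exp(-a * x) for x in xs)   # ~ \int_0^inf x e^{-ax} dx = 1/a^2
    sy = h * sum(math.exp(-a * y) for y in xs)       # ~ \int_0^inf e^{-ay} dy = 1/a
    return sx * sy                # = 1/a^3 = 2*sqrt(2) * tau^{-(N+n)}, N+n = 3
```

Doubling τ should divide the integral by 2^{N+n} = 2^3 = 8, matching the |ℜρ(τ)|^{−(N+n)} rate appearing in (4.4); for this separable choice the exact value is 2√2·τ^{−3}.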
Note that this polynomial is harmonic because (∆ + k 2 )u i = 0. Firstly combine (4.4), (4.3) and (4.1) to get c P N ≤ |ℜρ(τ )| − min(1,α,β) + |ℜρ(τ )| N +n+3 ln ln S u s ∞ L 2 (S n−1 ) when u s ∞ ≤ ε m and τ ≥ τ 0 , with constants depending on V, N, n, k, α m + α d , S. The estimate above depends monotonically on each individual constant. Fix N ∈ N and set Then if N ≥ N the estimate holds with these new constants and N in the exponent instead of N (since |ℜρ(τ )| = τ ≥ 1). In other words c N P N ≤ |ℜρ(τ )| − min(1,α,β) + |ℜρ(τ )| N +n+3 ln ln S u s ∞ L 2 (S n−1 ) (5.1) when u s ∞ ≤ ε m,N and τ ≥ τ 0,N and u i is of order N ≤ N at0. Write γ = min(1, α, β) and R = ln ln(S/ u s ∞ L 2 (S n−1 ) ). The righthand side of (5.1) has a global minimum at the point τ m = (γR/(N + n + 3)) 1/(N +n+3+γ) , and the minimal value there is given by c(N , n, γ)R −γ/(N +n+3+γ) . Hence if τ m ≥ τ 0,N , we may set τ = τ m in (5.1) and solve for the norm of the far-field pattern. We then have u s ∞ L 2 (S n−1 ) ≥ S exp exp c P N −ℓ (5.2) where the exponent ℓ ≥ 2(N + n + 4) and c < ∞ may be chosen to depend only on V, n, k, N . The other case, namely τ m < τ 0,N reduces to u s ∞ L 2 (S n−1 ) > S/(exp exp c) for some c = c(V, n, k, N ). Vanishing of the interior transmission eigenfunction at corners Proof of Theorem 3.2. Let us start by taking a sequence of incident Herglotz waves v j (x) = S n−1 exp(ikθ · x)g j (θ)dσ(θ) approximating the interior transmission eigenfunction v in the L 2 (Ω)norm; see Theorem 2.1. We may assume for example that v − v j L 2 (Ω) < 2 −j . By Proposition 4.2 we have the estimate v s j∞ L 2 (S n−1 ) < C V,k 2 −j (6.1) for the corresponding far-field pattern. The assumption on v allows us to have g j L 2 (S n−1 ) ≤ G < ∞ for all j. Let x c ∈ ∂Ω be a vertex such that ϕ(x c ) = 0. Our goal is to estimate the integral of |v| in B(x c , r) ∩ Ω. We will achieve that by estimating the corresponding integrals of v j . Let us denote B = B(x c , r) for convenience. 
Let N j be the order of v j at x c , so ∂ α v j (x c ) = 0 for |α| < N j . Then by the smoothness of v j we have N j ∈ N ∪ {∞}. By its real- analyticity we have N j < ∞. Fix N ∈ N. If N j ≥ N, then v L 1 (B∩Ω) ≤ v − v j L 1 (B∩Ω) + v j L 1 (B) ≤ C Ω 2 −j + C N,v j r N +n . The theorem would follow if N j ≥ 1 for an inifinite sequence of j's and sup j C N,v j < ∞ for these. Let us study v j L 1 in more detail. Again, assuming N j ≥ N, by Taylor's theorem v j (x) = |α|=N ∂ α v j (x c ) α! (x − x c ) α + R v j ,N,xc (x). Set P j,N (x) = |α|=N ∂ α v j (x c )x α /α!, and so v j (x) = P j,N (x − x c ) + R v j ,N,xc (x). Define P j,N = S n−1 |P j,N (θ)| dσ(θ). Then P j,N (· − x c ) L 1 (B) = P j,N N + n r N +n and |R v j ,N,xc (x)| ≤ |β|=N +1 |x − x c | N +1 β! max |γ|=N +1 max |y−xc|≤1 |∂ γ v j (y)| ≤ C N,n |x − x c | N +1 max |γ|=N +1 max |y−xc|≤1 S n−1 k N +1 |θ γ | |g j (θ)| dσ(θ) ≤ C N,k,n |x − x c | N +1 g j L 2 (S n−1 ) . In other words v j L 1 (B) ≤ C N,k,n,G ( P j,N + r)r N +n if v j has order N j ≥ N at x c since we had assumed the uniform bound g j L 2 (S n−1 ) ≤ G. Thus v L 1 (B∩Ω) ≤ C Ω 2 −j + C N,k,n,G ( P j,N + r)r N +n (6.2) whenever N j ≥ N. Fix N = 1 now. At least one of the following is true: 1) there is a subsequence of v j for which N j ≥ 1, or 2) there is a subsequence for which N j = 0. In the former case we note that P j,1 ≤ C n,k,G < ∞ by the Herglotz wave formula for v j , and thus (6.2) implies that v has order 1 at x c ; a stronger result than in the theorem. So consider case 2) from now on. We may assume that N j = 0 for all j since we are in case 2). We will use Theorem 3.1. To use (3.1) we need to have normalized incident Herglotz waves, a property which is not necessarily true for v j . However note that v j / g j L 2 (S n−1 ) is normalized. We have v j L 2 (Ω) ≥ v L 2 (Ω) − v − v j L 2 (Ω) > 1 − 2 −j and v j L 2 (Ω) ≤ S n−1 e ikθ·x L 2 (Ω,x) |g j (θ)| dσ(θ) ≤ m(Ω)σ(S n−1 ) g j L 2 (S n−1 ) . 
In other words g j L 2 (S n−1 ) ≥ 1/ 2 m(Ω)σ(S n−1 ) > 0 when j ≥ 1. We also know that v j has order 0 at x c . Hence by Theorem 3.1 v s j∞ ≥ S g j L 2 (S n−1 ) exp exp c min(1, P j,0 g j L 2 (S n−1 ) ) −ℓ ≥ S/ 2 m(Ω)σ(S n−1 ) exp exp c min(1, P j,0 G ) −ℓ for all j. By (6.1) and the above we see that P j,0 → 0 as j → ∞. By having N = 0 in (6.2) and taking the limit j → ∞ we see that v L 1 (B) ≤ C k,n,G r n+1 . Hence lim r→0 1 m(B) B |v(x)| dx = 0. Discussion In this paper, we are concerned with the transmission eigenvalue problem, a type of non elliptic and non self-adjoint eigenvalue problem. We derive intrinsic properties of transmission eigenfunctions by showing that they vanish near corners at the support of the potential function involved. This is proved by an indirect approach, connecting to the wave scattering theory. Indeed, we first show that by using the Herglotz-approximation of a transmission eigenfunction as an incident wave field, the generated scattered wave can have an arbitrarily small energy in its far-field pattern. On the other hand, we establish that with an incident Herglotz wave the scattered far-field pattern has a positive lower bound depending on the Herglotz wave's order of vanishing at a corner. This hints that the transmission eigenfunction should vanish near the corner point. Nevertheless, the rigorous justification of the vanishing property is a highly nontrivial procedure. To our best knowledge, Theorem 3.2 is the first result in the literature on the intrinsic properties of transmission eigenfunctions. The vanishing behaviour obviously carries geometric information of the support of the involved potential function V . Indeed, in inverse scattering theory, an important problem arising in practical application is to infer knowledge of V by measurements of the far-field pattern u s ∞ x |x| ; u i (cf. [12,23,26,[33][34][35]). There is relevant study on determining the transmission eigenvalues using knowledge of u s ∞ x |x| ; u i (cf. 
[8]). Clearly, it would be interesting and useful as well to determine the corresponding eigenfunctions from the inverse scattering point of view. Indeed, as suggested by Theorem 3.2, if the unknown function V is supported in a convex polyhedral domain, then one might use the vanishing property of the corresponding transmission eigenfunction to determine the vertices of the polyhedral support of V . As mentioned earlier, in the upcoming numerical paper [3], we shall show that the vanishing order is related to the angle of the corner and the vanishing behaviour also occurs at the edge singularities of supp(V ). Hence, one can use these intrinsic properties of transmission eigenfunctions to determine the polyhedral support of an unknown function V . This is beyond the aim and scope of the present article and we shall investigate this interesting issue in our upcoming papers. We will comment on the requirement of uniformly bounded Herglotz kernels of Theorem 3.2. It is a technical condition and very difficult to relate directly to Theorem 2.1. This study is a first step in the research of intrinsic properties of transmission eigenfunctions and we have brought a new phenomenon into attention. This observation was derived from the apparent contradiction of the well-known Theorem 2.1 and our new Theorem 3.1. In addition, the upcoming numerical study [3] gives evidence that this vanishing phenomenon is true more generally. Also in another upcoming paper (Proposition 3.5 in [2]) we study corner scattering with more general incident waves, namely waves in H 2 that do not need to be defined outside a small interior neighbourhood of a corner of Ω. That result suggests that the condition of approximation by uniformly bounded kernels can be swapped out for the condition that v restricted to Ω ∩ B(x c , ε) is in H 2 . In other words, if a transmission eigenfunction is smooth enough near a corner, then it must vanish at that corner. 
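As a toy illustration of the conclusion of Theorem 3.2, one can compute the corner average m(B(x_c, r))^{−1} ∫_{B(x_c,r)∩Ω} |v| dx for a model function. The following sketch is purely illustrative and not from the paper: the quadrant corner, the midpoint grid, and the sample v (which vanishes to order 2 at the corner and is not an actual transmission eigenfunction) are all our own choices.

```python
import math

def corner_average(v, r, n_grid=200):
    """Midpoint-rule approximation of the average of |v| over
    B(0, r) intersected with the quadrant Omega = {x, y > 0}, with the
    corner x_c at the origin. As in Theorem 3.2, the normalisation is
    by the full ball measure m(B(0, r)) = pi r^2."""
    h = r / n_grid
    total = 0.0
    for i in range(n_grid):
        x = (i + 0.5) * h
        for j in range(n_grid):
            y = (j + 0.5) * h
            if x * x + y * y < r * r:      # inside the ball; x, y > 0 by construction
                total += abs(v(x, y))
    return total * h * h / (math.pi * r * r)

# Model function vanishing to order 2 at the corner (not an eigenfunction).
v = lambda x, y: x * y * math.cos(x + y)
```

Since this v vanishes to order 2 at the origin, the averages scale like r^2: halving r divides the corner average by about 4, and the averages tend to 0 as r → 0, mimicking the vanishing behaviour asserted in Theorem 3.2.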
We shall further explore this interesting issue in forthcoming papers. Finally, we would like to mention that Theorem 3.1 is of significant interest in its own right, particularly for invisibility cloaking (cf. [17, 18]). Indeed, it generalises our earlier corner scattering result in [1], where the incident wave fields are confined to be plane waves. It suggests that if the support of the underlying scatterer possesses corner singularities, then in principle, for any incident fields, invisibility cannot be achieved. On the other hand, it also suggests that if one intends to diminish the scattering effect, then the incident wave field should be chosen such that it vanishes to a high order at the corner point. This is another interesting topic worthy of further investigation, especially the corresponding extension to anisotropic scatterers.

2.5. Function order. An important concept in corner scattering is the so-called function order. This determines how flat the function is at a certain point, or in other words, what the order of the first nontrivial homogeneous polynomial in its Taylor expansion at that point is.

Definition 2.3. Let $f$ be a complex-valued function defined in an open neighbourhood of $x_c \in \mathbb{R}^n$. We say that $f$ has order $N$ at $x_c$ if

Remark 2.4. If $f$ is smooth then it has order $N < \infty$ at $x_c$ if and only if $\partial^\alpha f(x_c) = 0$ for $\alpha \in \mathbb{N}^n$, $|\alpha| < N$ and $\partial^\beta f(x_c) \ne 0$ for some $\beta \in \mathbb{N}^n$, $|\beta| = N$. When $N = \infty$ the second condition is ignored: there are smooth functions vanishing to infinite order, e.g. $\exp(-1/|x|^2)$. Smooth functions always have non-negative order.

3. Statement of the main results

Theorem 3.1. Let $n \in \{2, 3\}$ and let the background assumptions hold. If $V$ is qualitatively admissible with $\varphi(x_c) \ne 0$ at a vertex $x_c$ of $\Omega$, and $N \in \mathbb{N}$, then there are $c, \ell < \infty$ depending on $V, n, k, N$ and $S = S(V, k) \ge 1$ such that
$$\|u^s_\infty\|_{L^2(S^{n-1})} \ge S \exp\!\big(-\exp\big(c \min(1, P_N)^{-\ell}\big)\big) \tag{3.1}$$

Proposition 4.1.
Let the background assumptions hold with $n \in \{2, 3\}$, $V$ qualitatively admissible, $u^i$ a normalized Herglotz wave, and let $S \ge 1$. Then there is $\varepsilon_m(S, k, R) > 0$ such that if $\|u^s\|_{H^2(B_{2R})} \le S$ and $\|u^s_\infty\|_{L^2(S^{n-1})} \le \varepsilon_m$ then $\sup_{x \in \partial\Omega} \big(|u^s(x)| + |\nabla u^s(x)|\big) \le \dots$ for some $c = c(V, S, k, R) < \infty$.

Proposition 4.2. Let the background assumptions hold with $V$ supported in $\Omega$, and assume that $(v, w) \in L^2(\Omega) \times L^2(\Omega)$ is a pair of transmission eigenfunctions on a bounded domain $\Omega$. There is $\dots$ (4.3) where $\mathcal{C}$ is the open cone generated by $\Omega$ at the origin, $h = h(\Omega)$ is the minimal distance from any vertex of $\Omega$ to any of its non-adjacent edges, and the constant $c > 0$ depends on $V, N, \delta_0$ and $k$.

Proposition 4.5. Let $n \in \{2, 3\}$ and let $\mathcal{C} \ne \emptyset$ be either an open orthant (3D) or an oblique open cone (2D). For $N \in \mathbb{N}$ set $\varepsilon_{m,N} = \min_{N' \le N} \varepsilon_{m,N'}$, $\tau_{0,N} = \max_{N' \le N} \tau_{0,N'}$, $c_N = \min_{N' \le N} c_{N'}$.

Acknowledgement

We are grateful to Professor Fioralba Cakoni for helpful discussion on

References

Blåsten, E. and Liu, H.: On corners scattering stably, nearly non-scattering interrogating waves, and stable shape determination by a single far-field pattern, arXiv:1611.03647.
Blåsten, E. and Liu, H.: Recovering piecewise constant refractive indices by a single far-field pattern, arXiv:1705.00815.
Blåsten, E., Li, X., Liu, H. and Wang, Y.: On vanishing and localization near cusps of transmission eigenfunctions: a numerical study, preprint, 2017.
Blåsten, E.
and Päivärinta, L.: Completeness of generalized transmission eigenstates, Inverse Problems, 29 (2013), 104002.
Blåsten, E., Päivärinta, L. and Sylvester, J.: Corners always scatter, Comm. Math. Phys., 331 (2014), 725-753.
Cakoni, F.: private discussion, 2016.
Cakoni, F., Gintides, D. and Haddar, H.: The existence of an infinite discrete set of transmission eigenvalues, SIAM J. Math. Anal., 42 (2010), 237-255.
Cakoni, F. and Haddar, H.: Transmission eigenvalues in inverse scattering theory, in [35], 529-578.
Chen, G., Morris, P. and Zhou, J.: Visualization of special eigenmodes shapes of a vibrating elliptical membrane, SIAM Rev., 36 (1994), 453-469.
Colton, D. and Kirsch, A.: A simple method for solving inverse scattering problems in the resonance region, Inverse Problems, 12 (1996), 383-393.
Colton, D., Kirsch, A. and Päivärinta, L.: Far-field patterns for acoustic waves in an inhomogeneous medium, SIAM J. Math. Anal., 20 (1989), 1472-1482.
Colton, D. and Kress, R.: Inverse Acoustic and Electromagnetic Scattering Theory, 2nd Ed., Springer, New York, 1998.
Colton, D. and Monk, P.: The inverse scattering problem for time-harmonic acoustic waves in an inhomogeneous medium, The Quarterly Journal of Mechanics and Applied Mathematics, 41 (1988), 97-125.
Elschner, J. and Hu, G.: Corners and edges always scatter, Inverse Problems, 31 (2015), 015003, 1-17.
Elschner, J. and Hu, G.: Acoustic scattering from corners, edges and circular cones, arXiv:1603.05186.
Gell-Redman, J. and Hassel, J.: Potential scattering and the continuity of phase-shifts, Math. Res. Lett., 19 (2012), 719-729.
Greenleaf, A., Kurylev, Y., Lassas, M. and Uhlmann, G.: Invisibility and inverse problems, Bulletin A. M. S., 46 (2009), 55-97.
Greenleaf, A., Kurylev, Y., Lassas, M. and Uhlmann, G.: Cloaking devices, electromagnetic wormholes and transformation optics, SIAM Review, 51 (2009), 3-33.
Grebenkov, D. and Nguyen, B.: Geometrical structure of Laplacian eigenfunctions, SIAM Rev., 55 (2013), 601-667.
Hu, G., Salo, M. and Vesalainen, E.: Shape identification in inverse medium scattering, SIAM J. Math. Anal., 48 (2016), 152-165.
Kirsch, A. and Grinberg, N.: The factorization method for inverse problems, Oxford Lecture Series in Mathematics and its Applications, Vol 36, Oxford University Press, Oxford, 2008.
Heilman, S. and Strichartz, R.: Localized eigenfunctions: here you see them, there you don't, Notices of the AMS, 5 (2010).
Isakov, V.: Inverse Problems for Partial Differential Equations, 2nd Ed., Springer-Verlag, New York, 2006.
Lakshtanov, E. and Vainberg, B.: Applications of elliptic operator theory to the isotropic interior transmission eigenvalue problem, Inverse Problems, 29 (2013), 104003.
Keller, J. and Rubinow, S.: Asymptotic solution of eigenvalue problems, Ann. Phys., 9 (1960), 24-75.
Nachman, A.: Reconstructions from boundary measurements, Ann. of Math. (2), 128 (1988), 531-576.
Nazarov, S.: Localization near the corner point of the principal eigenfunction of the Dirichlet problem in a domain with thin edging, Siberian Mathematical Journal, 52 (2011), 274-290.
Nguyen, B.: Localization of Laplacian Eigenfunctions in Simple and Irregular Domains, Mathematical Physics [math-ph], Ecole Polytechnique X, 2012. English. <pastel-00764806>
Päivärinta, L. and Sylvester, J.: Transmission eigenvalues, SIAM J. Math. Anal., 40 (2008), 738-753.
Päivärinta, L., Salo, M. and Vesalainen, E.: Strictly convex corners scatter, Rev. Mat. Iberoamericana, in press.
Robbiano, L.: Spectral analysis of the interior transmission eigenvalue problem, Inverse Problems, 29 (2013), 104001.
Rynne, B. and Sleeman, B.: The interior transmission problem and inverse scattering from inhomogeneous media, SIAM J. Math. Anal., 22 (1991), 1755-1762.
Sylvester, J. and Uhlmann, G.: A global uniqueness theorem for an inverse boundary value problem, Ann. of Math. (2), 125 (1987), 153-169.
Uhlmann, G.: Visibility and invisibility, ICIAM 07-6th International Congress on Industrial and Applied Mathematics, Eur. Math. Soc., Zürich, (2009), 381-408.
Uhlmann, G. (ed.): Inverse Problems and Applications: Inside Out II, MSRI Publications, Vol. 60, Cambridge University Press, 2013.
Weck, N.: Approximation by Herglotz wave functions, Math. Meth. Appl. Sci., 27 (2004), 155-162.
GelSight Fin Ray: Incorporating Tactile Sensing into a Soft Compliant Robotic Gripper

Sandra Q Liu and Edward H Adelson
Massachusetts Institute of Technology

DOI: 10.1109/robosoft54090.2022.9762175 | arXiv:2204.07146
Abstract: To adapt to constantly changing environments and be safe for human interaction, robots should have compliant and soft characteristics as well as the ability to sense the world around them. Even so, the incorporation of tactile sensing into a soft compliant robot, like the Fin Ray finger, is difficult due to its deformable structure. Not only does the frame need to be modified to allow room for a vision sensor, which enables intricate tactile sensing, but the robot must also retain its original mechanically compliant properties. However, adding high-resolution tactile sensors to soft fingers is difficult since many sensorized fingers, such as GelSight-based ones, are rigid and function under the assumption that changes in the sensing region come only from tactile contact and not from finger compliance. A sensorized soft robotic finger needs to be able to separate its overall proprioceptive changes from its tactile information. To this end, this paper introduces the novel design of a GelSight Fin Ray, which embodies both the ability to passively adapt to any object it grasps and the ability to perform high-resolution tactile reconstruction, object orientation estimation, and marker tracking for shear and torsional forces. Having these capabilities allows soft and compliant robots to perform more manipulation tasks that require sensing. One such task the finger is able to perform successfully is a kitchen task: wine glass reorientation and placement, which is difficult to do with external vision sensors but is easy with tactile sensing.
The development of this sensing technology could also potentially be applied to other soft compliant grippers, increasing their viability in many different fields.

I. INTRODUCTION

Compliant and soft robotic fingers are capable of providing both robust adaptability for grasping a multitude of different objects as well as safety for day-to-day interaction with humans. Due to their material properties and passivity, they are able to easily grasp fragile and soft objects, can accommodate various-sized objects with unique shapes, and do not require energy-extensive actuation. Overall, they provide more universality in their grasping capabilities compared to their rigid, power-expensive robotic counterparts and can be more robust and durable.

Recent progress in soft, compliant robotics has enabled an increase in the development of safe and reliable human-robot interactions, with specific focus towards agricultural purposes, robotic surgery, prosthetics, and aiding the elderly to age with dignity [1], [2], [3]. Many of these fields require robots to have a degree of compliance, similar to the compliance human hands have, whether it is the ability to use less force when interacting with soft, ripe fruit or helping a human without accidentally hurting them in the case of a system failure.

Fig. 1. In a), the GelSight Fin Ray gripper is holding a ball glass mason jar with its tactile sensing region along the indented letters "MASON", while b) denotes the 90° counterclockwise-rotated raw tactile image data, where the top of the image generally denotes the tip of the finger, and c) is its corresponding uncalibrated tactile reconstruction, in which most of the "S" letter and parts of the "O" and "A" letters are visible.

In particular, as the average age of the human population increases, it is important to develop a safe, but extremely capable, home robotic assistant that can assist with chores in the kitchen. One such task is the ability to reorient dishware (i.e.
wineglasses) and safely place them on a table so that they do not break, which could cause harm to others. For a robotic gripper to have such capabilities, it is not sufficient to only be compliant. Robotic grippers must also be able to utilize sensing, such as tactile sensing, which can aid in manipulation [4]. Tactile sensing can provide information about surface roughness, object orientation, contact force, and slip measurements [5]. Specifically, this type of sensing can more effectively determine how a soft compliant gripper is grasping an object and reorient it so that it can be safely placed on a table without a catastrophic failure. Having tactile sensing is essential for building at-home robots that can safely aid the elderly and improve their overall quality of life as they age. It is important to combine the adaptive advantages of a soft compliant gripper with the general-usage manipulation abilities that are provided by tactile sensing.

This paper presents the following contributions:

• A novel design of a fully-functioning flexible, elongated tactile sensing surface with an embedded vision sensor for tactile reconstruction, orientation estimation, and slip detection;
• A novel design of a sensorized GelSight Fin Ray inspired gripper with tactile sensing (Fig. 1);
• An algorithm that allows the gripper to be handed a thin object, determine its orientation, and gently set it down upright using only tactile sensing.

II. RELATED WORK

A. Compliant/Fin Ray Inspired Grippers

Recent work in passive adaptive grippers has led to the development of various iterations of a 3D printable Fin Ray Effect gripper, which has the ability to gently comply to an object it comes in contact with via the fin-like structure of the fingers [6]. The idea was originally inspired by the study of fish fin bone structure and its passive, deformable geometry [7].
This geometry allows Fin Ray fingers to conform to a surface, increasing the grasping contact surface area to allow a more secure grasp [8], [9]. These types of fingers, unlike other soft robotic fingers, are relatively simple to manufacture and easy to modify to incorporate various materials or control schemes [10]. Furthermore, their design does not require actuation to securely grasp objects, unlike many other rigid and soft robotic grippers, which allows Fin Ray fingers to potentially be used in cleaner or limited-energy environments [11]. These qualities are important for performing manipulation in an unstructured home space, and as a result, a lot of research has been done to optimize the Fin Ray finger design.

To streamline the Fin Ray struts and its design to more optimally grasp a large variety of objects, both Shan et al. and Deng et al. developed mathematical models and simulations to assess and optimize its grasping capabilities [12], [13]; see also Crooks et al. [14]. Another solution to improve the grasp quality of a Fin Ray gripper involved adding electroadhesion pads to the tactile surface of the fingers [15]. Despite the progress made towards optimizing the Fin Ray finger design, not much work has been done to sensorize it. Although Yang et al. proposed a Fin Ray finger that incorporates an embedded force sensor for detecting the finger state and measuring contact force [16], it does not provide the higher-detailed tactile sensing necessary for more complex manipulation tasks.

B. Soft Robot Tactile Sensing

In general, most of the work done with soft robotic tactile sensing has been through strain sensors [17], [18], which are only able to provide information on contact force and cannot give additional tactile information. To mitigate these shortcomings, vision-based tactile sensing can be used instead. Embedded tactile sensing via camera sensors can provide high-resolution information about the contact area that will not be obscured by outside elements.
However, the majority of vision-based tactile sensors are currently still built only for rigid fingers or fingertips. Even when embedded vision-based sensing is used for soft robotic finger applications, as it is in the work by Hofer et al. and Oliveira et al., the camera is only used to sense proprioception or force deformations [19], [20]. A vision-based proprioceptive and tactile sensing approach is also used for an exoskeleton-covered soft robotic finger [21]. However, the high-resolution tactile sensing provided by the camera is not used to its full potential and cannot provide any of the tactile reconstruction or marker tracking heuristics that the GelSight sensor can [22]. The GelSight sensor is a vision-based tactile sensor which uses a patterned soft gel as a sensing medium, tri-colored LEDs directed from three different directions, and an embedded camera that tracks the high-resolution tactile information. To the authors' knowledge, currently no other soft robotic gripper has incorporated the full functionality of a GelSight sensor, allowing tactile reconstruction, object orientation detection, and marker tracking. The design introduced in this paper also allows a GelSight sensor to be elongated and flexible, which captures a larger tactile surface area and enables compliant grippers to utilize GelSight sensor technology.

III. METHODS

A. Sensor Design

Overall, the sensor design involves three major components: finger design, tactile sensing pad manufacturing, and illumination. The resulting finger was attached to a parallel gripper configuration. Fig. 2 shows the entire manufacturing process of the finger.

Finger Design

The struts of the finger were designed based on the design proposed by Elgeneidy et al. [23]. To create a finger that could house and allow usage of a vision sensor while retaining the structural features of a Fin Ray, the middle part of the finger was hollowed out.
This removal of the middle support material caused the finger to become too flexible, so a harder material (1.75 mm TPU, Overture, printed on a Prusa MK3S) was used for the finger struts, while an even more rigid material (Onyx, Markforged) was attached to the back of the finger as support. To circumvent the unwieldy removal of TPU support material from the main finger print, the struts portion of the finger was separated into two parts along its length. Doing so allowed the parts to be printed with the outer section against the build plate, enabling them to be printed with no support material and eliminating extensive post-manufacturing processes while also shortening the printing process. Since the sensor utilizes a camera, it was important that any ambient lighting be obscured so that it did not interfere with the tactile sensor. However, any material added to the finger also had to be stretchy so that it would not interfere with the compliant characteristics of the finger. To solve this issue, a laser-cut piece of black, stretchy cloth (spandex/nylon) was adhered to the inside of the struts and across the bottom half of the open part of the finger before the three 3D printed parts were pieced together. The bottom half was covered since most of the manipulation and grasp tasks are performed near the tip of the finger, where the finger is more compliant. The dark cloth obstructed outside environmental lighting while its elasticity allowed the finger to function as it would without the addition of the cloth. Afterwards, the two halves of the finger were glued onto the rigid base with cyanoacrylate glue.

Tactile Sensing Pad Manufacturing

The tactile sensing silicone pad was cast as a rectangular-profile piece with dimensions of 2.5 mm by 18 mm, with a 25 mm curved profile along one of the 18 mm surfaces to serve as the tactile sensing surface. The mold was 3D printed using the Markforged Onyx material.
To ensure that the tactile curved surface was smooth, a piece of plastic mylar sheet of 6 mil (0.15 mm) thickness was superglued onto the inner curved mold surface. A platinum-catalyzed translucent silicone (XP-565, Silicones, Inc.) was combined with a plasticizer (LC1550 Phenyl Trimethicone, Lotioncrafter) in a ratio of 11 to 3 (1 part XP-565 activator to 10 parts base to 3 parts plasticizer) for the silicone pad. This mixture was used to ensure a clear, transparent sensing region for the camera sensor, and the plasticizer was added to increase the softness and robustness of the silicone pad. After degassing the silicone mixture and pouring it into a pre-prepared mold, the mold was left to cure for 24 hours at room temperature. Once the silicone pad was cured, a thin layer of silicone paint was brushed on the curved side of the silicone with a foam brush. This paint layer formed a semi-specular tactile sensing surface. To create this painted surface, which allows the sensor to more easily pick up small details and dimmer illumination in the tactile sensing surface, a mixture of 1 part silicone ink catalyst to 10 parts gray silicone ink base (Raw Materials Inc.) to 2.5 parts 4 µm aluminum cornflakes (Schlenk) to 30 parts NOVOCS Gloss (Smooth-On, Inc.) was used. After allowing the painted surface to fully cure, 1 mm circular dots spaced 4 mm apart were laser engraved (Epilog Laser Cutter System) on the painted surface, effectively removing the painted layer. A layer of black silicone paint (1 part catalyst to 10 parts black paint to 30 parts NOVOCS Gloss) was then sprayed on using an airbrush, creating patterned black dots on the silver-gray tactile surface. The dots allow for the use of marker tracking to determine motion or slippage of a grasped object.
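Because the recipes above are given as parts by weight (e.g. 1 : 10 : 2.5 : 30 for the gray paint), a tiny helper makes them easy to scale to any batch size. The calculator and the dictionary keys below are illustrative, not code from the paper:

```python
def batch_masses(parts, total_g):
    """Scale a parts-by-weight recipe to a target batch mass in grams."""
    total_parts = sum(parts.values())
    return {name: total_g * p / total_parts for name, p in parts.items()}

# Gray sensing-surface paint from the text: catalyst : base : flake : solvent.
gray_paint = {"ink_catalyst": 1.0, "gray_ink_base": 10.0,
              "aluminum_flake": 2.5, "novocs_gloss": 30.0}
masses = batch_masses(gray_paint, 87.0)  # 87 g batch -> 2 g catalyst, 20 g base, ...
```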
One of the manufactured silicone pads was then glued to a piece of 0.5 mm thick acrylic to provide the silicone with a rigid backing and prevent the silicone from folding in on itself when it comes in contact with an object. A mixture of 1 part Silpoxy (Smooth-On, Inc.) to 10 parts NOVOCS Gloss was used so that the silicone could bond to the acrylic. The thinness of the acrylic ensures that the finger has overall compliance and can still bend when it comes into contact with an object. Acrylic was also chosen as the rigid base of the silicone pad for its ease of manufacturing and its reasonably close index match with silicone, which helps to mitigate light interference at the sensing surface caused by refraction. This acrylic piece was then attached to the 3D printed finger.

Illumination

To illuminate the finger, the sides of the acrylic were respectively covered with a layer of red and green fluorescent acrylic paint (Liquitex BASICS ACRYLIC), and a line of blue LEDs (450 nm) was adhered to one end of the acrylic piece to illuminate the silicone pad and cause the paints to fluoresce. A short strip of TPU filament was superglued to its back and along the empty middle section of the finger to hold the blue LEDs in place. Together with the fluorescent paints, the blue LEDs help to provide a tri-color sensing region, similar to how the GelSight wedge and digger fingers are illuminated [24], [25]. The presence of all three colors helps the sensor to perform depth reconstruction of objects the gripper comes into contact with. The camera is set opposite the tactile surface near the tip at the back of the finger so that it can view the most compliant parts of the gripper as well as where the majority of the tactile interactions will occur. To mitigate the saturation of the blue LEDs as viewed by the camera, a yellow filter is placed atop the camera sensor. The filter helps make the less vivid red and green colors visible to the camera.
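The tri-color illumination is what later makes depth recovery possible: shading under the three colored lights yields estimates of the surface gradient, which are then integrated into a height map (the "Poisson reconstruction" step mentioned in Section III-B). A minimal FFT-based Poisson integrator, assuming periodic boundaries and already-estimated gradient maps (an illustrative stand-in, not the paper's implementation), might look like:

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    # Solve  laplacian(z) = d(gx)/dx + d(gy)/dy  in the Fourier domain,
    # recovering the height map z up to an additive constant.
    H, W = gx.shape
    fx = np.fft.fftfreq(W)[None, :]   # cycles per sample, x (column) direction
    fy = np.fft.fftfreq(H)[:, None]   # cycles per sample, y (row) direction
    dx, dy = 2j * np.pi * fx, 2j * np.pi * fy
    denom = dx**2 + dy**2
    denom[0, 0] = 1.0                 # avoid division by zero at the DC term
    Z = (dx * np.fft.fft2(gx) + dy * np.fft.fft2(gy)) / denom
    Z[0, 0] = 0.0                     # fix the free constant (zero-mean height)
    return np.real(np.fft.ifft2(Z))
```

For smooth periodic surfaces this recovers the height map exactly up to floating-point error, which makes it easy to unit-test with analytic gradients.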
Because the illumination of the finger as viewed by the camera changes as the finger flexes, separate yellow fluorescent dots (Liquitex BASICS ACRYLIC) were painted on both sides of the acrylic surface to provide a reference for the shape of the finger. The color was deliberately chosen so that it could easily be visible in the dim illumination and so as not to interfere with the tactile sensing. These dots are separate from the black silicone dots used for marker tracking, which depict the shear information of the silicone surface.

Gripper Assembly

The fully assembled finger was then attached to the parallel WSG 50-110 (Weiss) gripper using an Onyx 3D printed part designed to fit as a replacement for the traditional Weiss gripper fingers.

B. Software

Similar to a GelSight sensor, the Fin Ray finger is able to perform tactile sensing, measure the orientation of an object it comes into contact with, and track markers for slip and twist detection [22]. All images were captured using a Raspberry Pi 160° field of view (FOV) camera before being cropped slightly so that only the tactile sensing portion of the image was shown. A wide-FOV camera was used so that the sensor can see along the entire length of the sensing region.

Tactile Sensing/Reference Image

The main challenge with incorporating tactile sensing into the Fin Ray finger is that the finger deforms when any object comes in contact with it. As a result, the camera sees both the proprioceptive change when the finger bends and the minute changes in the silicone pad corresponding to the tactile surface interactions. Tracking only the fine-detailed tactile information requires knowledge of the proprioceptive state of the finger, or the bending positions of the acrylic. As such, yellow fluorescent dots were painted on the sides of the acrylic so that the motion of the finger could be tracked and used as a reference. This reference was then used to isolate the small-scale tactile information from the GelSight sensor.
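A minimal sketch of this reference-matching idea: given precomputed reference dot centers (one set per finger flex state) and the dot centers detected in the live frame, select the reference frame whose dots lie closest. The function names and the summed nearest-neighbor metric are assumptions for illustration, not the paper's exact code:

```python
import numpy as np

def frame_distance(live_centers, ref_centers):
    # Sum over live dots of the distance to the nearest reference dot.
    d = np.linalg.norm(live_centers[:, None, :] - ref_centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def best_reference(live_centers, all_ref_centers):
    # Index of the proprioceptive reference frame that best matches the
    # currently observed dot pattern.
    scores = [frame_distance(live_centers, r) for r in all_ref_centers]
    return int(np.argmin(scores))
```

Subtracting the selected reference image from the live image then leaves mostly the small-scale tactile signal, as described above.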
A reference video was created with the finger where a human hand flexed the sides of the finger which are not visible to the camera sensor. As such, the camera was able to capture the different illumination schemes without any tactile interference. Using the video, the dots were thresholded out via HSV color segmentation, opening and closing operations were performed on the thresholded image, and the individual dot centers were extracted from each frame of the reference videos using a Python OpenCV contour finding algorithm and converted into a Numpy array for better latency [26], [27]. Subsequent tactile images were then compared to the reference images using the dot segmentation method to find the image that most closely matched the current image. The general algorithm flow is shown in Fig. 3.

Fig. 3. Algorithm flow for obtaining the reference point matrix and live reconstruction of the raw tactile image. The reference point matrix is precalculated using a video taken of the finger without any tactile interference. The matrix is sparse and contains only one nonempty element per column. Subsequent raw tactile images are then compared to the reference point matrix via the dot centers, a difference image is taken, and an uncalibrated reconstruction image is calculated. Pictured in the raw tactile image is the head of a screw that was pressed into the sensing region at a slight angle.

Object Orientation

After matching the tactile image with a reference image, difference images were taken and then Poisson reconstruction was used to calculate uncalibrated depth/reconstruction images [28]. The contour finding algorithm was then used on thresholded versions of the depth images, and principal component analysis was used to find the orientation of the object that was being grasped.

Marker Tracking

Marker tracking was performed using the difference image between the luminosity channel of the LAB color space and its corresponding median-blurred image.
This method allowed the markers to be segmented out of each image. Arrows were then drawn from each reference-image marker to its closest marker in the actual image. If the distance between them was significant, the arrow length was scaled by a factor of three to emphasize the shear and twist motion of the object in contact.

IV. EXPERIMENT

A. Experimental Setup

Software was run via Python and ROS [29] and incorporated object orientation estimation and marker tracking of the tactile surface. The camera was run on a Raspberry Pi board and images were streamed to the computer using mjpg-streamer. The fingers were attached to the Weiss gripper, which was mounted on a UR5 arm. Although both fingers of the parallel gripper were Fin Rays, only one of them was sensorized. To test the compliance and gripping ability of the fingers, a range of test objects was handed to the gripper to be grasped: a mini screwdriver, a plastic strawberry, a plastic lemon, a plastic orange, a Rubik's cube, a wine glass, a ball glass mason jar, and a deformable, squishy acrylic paint tube (Fig. 4). To test the orientation estimation and marker tracking, the gripper was programmed to be handed a wine glass stem, measure its orientation from the tactile data, and rotate the gripper so that the wine glass would be oriented upright. Afterwards, the gripper would set the wine glass down until the total magnitude, or shear force, detected by the marker tracking algorithm exceeded a threshold, indicating that the glass had touched the table. A wine glass was chosen because it is difficult to segment transparent objects using vision, and thus tactile sensing greatly simplifies the reorientation and placement problem.

B. Results

Grasping

The Fin Ray fingers were able to comply to and grasp all of the test objects, which covered a variety of sizes and shapes.
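The marker-arrow logic above can be sketched in a few lines. The factor-of-three scaling comes from the text; the nearest-neighbour association and the `min_len` significance threshold are assumptions introduced for illustration:

```python
import numpy as np

def marker_arrows(ref_markers, cur_markers, scale=3.0, min_len=2.0):
    """Associate each reference marker with its closest current marker and
    build display arrows; arrows longer than min_len pixels are scaled by
    `scale` to emphasize shear/twist (min_len is a hypothetical threshold).
    """
    arrows = []
    for p in ref_markers:
        # closest current marker to this reference marker
        q = cur_markers[np.argmin(np.linalg.norm(cur_markers - p, axis=1))]
        v = q - p
        if np.linalg.norm(v) > min_len:   # significant motion: exaggerate it
            v = v * scale
        arrows.append((p, p + v))
    return arrows

def total_shear(arrows):
    """Summed arrow magnitude, used as the contact/shear threshold signal."""
    return float(sum(np.linalg.norm(end - start) for start, end in arrows))
```

A signal like `total_shear` is the kind of scalar that the placement experiment below compares against a fixed threshold to detect table contact.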
For curved objects, the Fin Ray fingers were able to wrap around the curvature of the grasped objects, while for the more linear objects, such as the screwdriver and the Rubik's cube, the fingers acted more like a rigid, parallel gripper. For the squishy paint tube, the Fin Ray fingers were able to comply around the object without breaking or collapsing the paint tube. Although the grips on heavy and asymmetrically weighted objects were subject to slippage, the grasp became sturdier once the force the gripper could apply was increased. This difficulty in grasping particular objects may be partially due to the slight inadvertent twisting motion that the fingers sometimes exhibited because of the hollowed-out structure inside, which most likely decreased the gripping strength of the fingers. Despite the addition of the acrylic mount, which did limit the twisting motion, there was still a minute amount of twist in the finger itself. Furthermore, the acrylic piece could also have created challenges in grasping, since it may have limited the flexibility of the finger and required more force to ensure greater compliance of the finger to the object it was grasping. There was thus a tradeoff between the twisting motion of the structure and its compliant flexibility; however, this tradeoff did not hinder the compliance or grasping abilities of the Fin Ray finger for any of the objects in the testing set. In summary, the GelSight Fin Ray gripper was able to grasp a multitude of objects, showing that the fingers retain their compliance, which makes them useful for a large variety of grasping tasks that need robust, universal grasping combined with tactile sensing.

Tactile Sensing

The tactile surface of the sensor is able to detect minute details, such as some of the threads of an M2.5 screw and the indents on the outside of an M4 heated insert.
All of the reference images were able to provide good reconstruction images, despite the slightly nonuniform fluorescing quality of the red and green paints. The raw tactile images and reconstruction images are shown in Fig. 5. Generally, the dot-center matching algorithm was quite robust, but it struggled more when the GelSight Fin Ray fingers wrapped around heavier objects such as the large mason jar. The extra weight caused the fingers to twist slightly and shifted the raw tactile image down by a nontrivial amount, causing some noise in the uncalibrated reconstruction image, since it was difficult to include twisting motion in the reference images without interfering with the tactile sensing region. Regardless, the marker tracking was able to sense the shear motion due to twisting under gravity and was able to sense twist and shearing motions along the center of the tactile surface (Fig. 6). Due to the proximity of the black dots to the fluorescent dots, there were cases where some of the black dots could not be perfectly tracked. This phenomenon also occurred for dots in some of the regions at the top of the raw tactile image, which are furthest from the illumination of the blue LEDs. Having black dots also made the algorithm track a few black spots at the outer edges of the tactile sensing region, which added some noise into the system as well. Nevertheless, this noise did not affect the marker tracking overall, and it can be reduced in future design iterations of the GelSight Fin Ray.

Wine Glass Reorientation

Without tactile sensing, the task of reorienting and setting down a wine glass without tipping it over or crashing it into the table is complicated by its transparency, which makes it hard to see. With the GelSight Fin Ray, however, wine glass reorientation was generally successful (Fig. 7).
Out of the 10 trials that were performed, the algorithm succeeded in 7: the GelSight Fin Ray finger allowed the gripper to reorient and set the wine glass down without it tipping over. In the three failed trials, the grasp was not secure enough and the wine glass slipped slightly out of the Fin Ray grasp, causing the image reconstruction, and in turn the reorientation step, to fail. Even in those cases, however, the gripper was still able to detect when the wine glass came in contact with the table and stop itself from pushing the glass into the table further and potentially breaking it.

Fig. 7. Successful implementation of wine glass reorientation. A wine glass is handed to the gripper at an angle. Using the raw tactile image, the orientation of the wine glass stem is found through the reconstruction algorithm. Using this angle, the UR5 arm reorients the gripper so that it is holding the wine glass right side up. The arm then sets the wine glass down until the shear force from the contact goes above a certain threshold, at which point the gripper releases the wine glass on the table without it tipping over. Note that the shear force detected by the marker tracking algorithm is along the major direction of the stem, indicating that the wine glass is upright.

To conclude, the marker tracking portion of the software, despite some noise in the system, was more robust than the reconstruction algorithm. This robustness most likely arises because the marker tracking algorithm does not require firm contact between the silicone sensing surface and the object: bare minimum contact suffices as long as the top part of the silicone pad is visibly displaced by external forces. Since the silicone pad is soft, its top portion is easily deformed, making the marker tracking portion of the tactile sensing inherently more robust.
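The reorientation-and-placement sequence can be sketched as a PCA step plus a simple control loop. The PCA on thresholded depth-image coordinates follows the description in the Software section; the callback names, step size, and shear threshold are hypothetical placeholders for the actual ROS/UR5 interfaces:

```python
import numpy as np

def stem_angle(points):
    """Orientation of the contact region: angle of the principal axis of the
    thresholded depth-image pixel coordinates (PCA on an (N, 2) array)."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    major = eigvecs[:, np.argmax(eigvals)]
    if major[0] < 0:                      # fix the sign ambiguity of eigh
        major = -major
    return float(np.arctan2(major[1], major[0]))

def reorient_and_place(get_stem_angle, rotate_gripper, lower_arm,
                       get_shear, open_gripper, shear_thresh=5.0, step=0.002):
    """Hypothetical control loop: undo the measured stem angle, then lower
    the arm in small steps until tactile shear signals table contact."""
    rotate_gripper(-get_stem_angle())     # bring the glass upright
    while get_shear() < shear_thresh:     # no table contact sensed yet
        lower_arm(step)
    open_gripper()                        # release once contact is detected
```

Keeping the contact test on the tactile shear signal, rather than on a vision estimate of the table height, is what makes the transparent-object case tractable.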
The future addition of a neural network could also improve the accuracy of the tactile reconstruction and marker tracking.

V. CONCLUSION AND DISCUSSION

State estimation via tactile sensing is an important issue in soft robotic hand manipulation. By better perceiving their own state and outside tactile stimuli, soft compliant robots can interact more effectively with the environment around them and help increase the quality of life of the people they are trying to aid. The incorporation of tactile sensing also has the potential to allow soft compliant robots to mimic or outperform human capabilities in reconstructing the world around them through tactile manipulation. Such abilities may ultimately allow soft robots to improve their performance in medical fields (e.g. surgical robots or prosthetics), during human-robot interactions, or on rescue missions in unknown situations. In particular, Fin Ray fingers allow manipulation in limited-energy environments thanks to their simple actuation, making them useful in many different settings. However, many existing sensors that provide very intricate tactile information are rigid, compact, and not suitable for use in a soft robot. This paper introduced a novel flexible, elongated GelSight sensor that can be used in conjunction with a Fin Ray inspired finger, allowing it to perform reconstruction, object orientation estimation, and marker tracking. The combination of the two technologies allows the Fin Ray finger to perform a greater variety of manipulation tasks, including the successful reorientation and placement of a transparent wine glass without any catastrophic failures. A few of the limitations are inherent to the twisting motion of the hollowed-out finger, which is traded off against the rigidity of the acrylic at the back of the sensing region.
This limitation caused the Fin Ray finger to require a nontrivial amount of force to comply to more curved objects. Additionally, if the grasped object was too heavy, or if the gripping force was too low for a smaller object to leave a clear imprint on the silicone pad, the gripper would occasionally struggle with reconstruction and object orientation estimation. These issues can be addressed with future optimizations of the hollowed-out finger design. Furthermore, although the tactile sensing software performed well most of the time, there were some issues caused by a lack of perfectly matched reference images; these could potentially be resolved with the addition of machine learning. Finally, the reconstruction images were uncalibrated and could not be used as a height map of the tactile sensing region. Although a height map was not required for any of the tasks in this paper, future work could involve implementing a neural network to produce accurate height-map reconstructions from the raw tactile images of the GelSight Fin Ray. Other future directions involve incorporating the GelSight Fin Ray into a 3-fingered gripper, which could leverage both the compliant grasping characteristics of the Fin Ray and its tactile sensing capabilities to select and pick ripe fruit without bruising them, reducing food waste, or to manipulate more dishes in a dishwasher placement task, working toward a home-assistant robot that could allow the elderly to age with dignity.

VI. ACKNOWLEDGEMENTS

Toyota Research Institute and the Office of Naval Research provided funds to support this work. The authors would also like to thank the members of the MIT International Design Center, Chris Haynes, Will Lutz, and Bill McKenna, for their help in manufacturing the finger; Alex Alspach for providing a fully 3D printed Fin Ray prototype; Branden Romero, Megha H.
Tippur, Shaoxiong Wang, and Jerry Zhang for their helpful discussions that ranged from figures to coding to UR5 arms to design considerations; and finally, Alan Papalia for his assistance with ROS and machining.

Fig. 2. Manufacturing process flow of the GelSight Fin Ray finger, which includes finger design, tactile sensing pad manufacturing, and illumination.

Fig. 3. Algorithm flow for obtaining the reference point matrix and live reconstruction of the raw tactile image.

Fig. 4. Objects used for the experimental setup. They include a mini screwdriver, plastic fruits, a Rubik's cube, a wine glass, a mason jar, and a squishy acrylic paint tube.

Fig. 5. Tactile sensing image reconstruction. The left-most images show the objects (M4 heated insert, M2.5 screw) that were placed on the tactile sensing surface, the middle images show the raw tactile data, and the right-most images display the reconstruction based on the raw data.

Fig. 6. Marker tracking with the GelSight Fin Ray. The yellow arrows denote the tracked position from the reference markers in the found reference image to the markers in the current image. In both images, the fingers are gripping a smooth acrylic cylinder. The left image (a) shows shear along the cylinder from an external force, while the right image (b) shows a torsional external force being applied to the cylinder.

Crooks et al.
worked on adapting the original Fin Ray design to ensure a preferred bending direction and to increase the grip force of the fingers [10], while Elgeneidy et al. worked on improving the design of a NinjaFlex Fin Ray finger through experimental design testing methods.

[1] M. Cianchetti, C. Laschi, A. Menciassi, and P. Dario, "Biomedical applications of soft robotics," Nature Reviews Materials, vol. 3, pp. 143-153, 2018.
[2] G. Chowdhary, M. Gazzola, G. Krishnan, C. Soman, and S. Lovell, "Soft robotics as an enabling technology for agroforestry practice and research," Sustainability, vol. 11, p. 6751, 2019.
[3] F. Schmitt, O. Piccin, L. Barbé, and B. Bayle, "Soft robots manufacturing: A review," Frontiers in Robotics and AI, vol. 5, p. 84, 2018.
[4] H. Wang, M. Totaro, and L. Beccai, "Toward perceptive soft robots: Progress and challenges," Advanced Science, vol. 5, no. 9, p. 1800541, 2018.
[5] C. Chi, X. Sun, N. Xue, T. Li, and C. Liu, "Recent progress in technologies for tactile sensors," Sensors, vol. 18, p. 948, 2018.
[6] FESTO, "MultiChoiceGripper," 2014. [Online]. Available: https://www.festo.com/net/SupportPortal/Files/333986/Festo MultiChoiceGripper en.pdf
[7] J. L. Tangorra, G. V. Lauder, I. W. Hunter, R. Mittal, P. G. A. Madden, and M. Bozkurttas, "The effect of fin ray flexural rigidity on the propulsive forces generated by a biorobotic fish pectoral fin," Journal of Experimental Biology, vol. 213, no. 23, pp. 4043-4054, 2010.
[8] J. Shintake, V. Cacucciolo, D. Floreano, and H. Shea, "Soft robotic grippers," Advanced Materials, vol. 30, no. 29, p. 1707035, 2018.
[9] M. H. Ali, A. Zhanabayev, S. Khamzhin, and K. Mussin, "Biologically inspired gripper based on the fin ray effect," in 2019 5th International Conference on Control, Automation and Robotics (ICCAR), 2019, pp. 865-869.
[10] W. Crooks, G. Vukasin, M. O'Sullivan, W. Messner, and C. Rogers, "Fin ray effect inspired soft robotic gripper: From the RoboSoft Grand Challenge toward optimization," Frontiers in Robotics and AI, vol. 3, 2016.
[11] W. Crooks, S. Rozen-Levy, B. Trimmer, C. Rogers, and W. Messner, "Passive gripper inspired by Manduca sexta and the Fin Ray effect," International Journal of Advanced Robotic Systems, vol. 14, 2017.
[12] X. Shan and L. Birglen, "Modeling and analysis of soft robotic fingers using the fin ray effect," The International Journal of Robotics Research, vol. 39, 2020.
[13] Z. Deng and M. Li, "Learning optimal fin-ray finger design for soft grasping," Frontiers in Robotics and AI, vol. 7, 2021.
[14] K. Elgeneidy, P. Lightbody, S. Pearson, and G. Neumann, "Characterising 3D-printed soft fin ray robotic fingers with layer jamming capability for delicate grasping," in 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), 2019, pp. 143-148.
[15] R. Chen, R. Song, Z. Zhang, L. Bai, F. Liu, P. Jiang, D. Sindersberger, G. J. Monkman, and J. Guo, "Bio-inspired shape-adaptive soft robotic grippers augmented with electroadhesion functionality," Soft Robotics, 2019.
[16] Y. Yang, K. Jin, H. Zhu, G. Song, H. Lu, and L. Kang, "A 3D-printed fin ray effect inspired soft robotic gripper with force feedback," Micromachines, vol. 12, no. 10, 2021.
[17] C. Cheng, Y. Yan, M. Guan, J. Zhang, and Y. Wang, "Tactile sensing with a tendon-driven soft robotic finger," ArXiv, vol. abs/2107.02546, 2021.
[18] G. Soter, A. Conn, H. Hauser, and J. Rossiter, "Bodily aware soft robots: integration of proprioceptive and exteroceptive sensors," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 2448-2453.
[19] M. Hofer, C. Sferrazza, and R. D'Andrea, "A vision-based sensing approach for a spherical soft robotic arm," Frontiers in Robotics and AI, vol. 8, 2021.
[20] J. Oliveira, "Design and experiments on an inflatable link robot with a built-in vision sensor," Mechatronics, vol. 65, 2020.
[21] Y. She, S. Q. Liu, P. Yu, and E. Adelson, "Exoskeleton-covered soft finger with vision-based proprioception and tactile sensing," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 10075-10081.
[22] S. Dong, W. Yuan, and E. H. Adelson, "Improved GelSight tactile sensor for measuring geometry and slip," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 137-144.
[23] K. Elgeneidy, A. Fansa, I. Hussain, and K. Goher, "Structural optimization of adaptive soft fin ray fingers with variable stiffening capability," in 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), 2020, pp. 779-784.
[24] S. Wang, Y. She, B. Romero, and E. Adelson, "GelSight Wedge: Measuring high-resolution 3D contact geometry with a compact robot finger," 2021.
[25] R. Patel, R. Ouyang, B. Romero, and E. Adelson, "Digger Finger: GelSight tactile sensor for object identification inside granular media," in Experimental Robotics, B. Siciliano, C. Laschi, and O. Khatib, Eds. Cham: Springer International Publishing, 2021, pp. 105-115.
[26] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.
[27] C. R. Harris, K. J. Millman, S. J. van der Walt, et al., "Array programming with NumPy," Nature, vol. 585, no. 7825, pp. 357-362, 2020.
[28] J. Doerner, "Fast poisson reconstruction in python," https://gist.github.com/jackdoerner/b9b5e62a4c3893c76e4c, 2014.
[29] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng, "ROS: an open-source robot operating system," vol. 3, 2009.
[]
[ "Transport anisotropy in biaxially strained La 2/3 Ca 1/3 MnO 3 thin films", "Transport anisotropy in biaxially strained La 2/3 Ca 1/3 MnO 3 thin films" ]
[ "J Klein \nII. Physikalisches Institut\nUniversität zu Köln\nZülpicher Str. 7750937KölnGermany\n", "J B Philipp \nWalther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany\n", "G Carbone \nMax-Planck-Institut für Metallforschung\nHeisenbergstr. 170569StuttgartGermany\n", "A Vigliante \nMax-Planck-Institut für Metallforschung\nHeisenbergstr. 170569StuttgartGermany\n", "L Alff \nWalther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany\n", "R Gross \nWalther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany\n" ]
[ "II. Physikalisches Institut\nUniversität zu Köln\nZülpicher Str. 7750937KölnGermany", "Walther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany", "Max-Planck-Institut für Metallforschung\nHeisenbergstr. 170569StuttgartGermany", "Max-Planck-Institut für Metallforschung\nHeisenbergstr. 170569StuttgartGermany", "Walther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany", "Walther-Meissner-Institut\nBayerische Akademie der Wissenschaften, Walther-Meissner Str. 885748GarchingGermany" ]
[]
Due to the complex interplay of magnetic, structural, electronic, and orbital degrees of freedom, biaxial strain is known to play an essential role in the doped manganites. For coherently strained La 2/3 Ca 1/3 MnO3 thin films grown on SrTiO3 substrates, we measured the magnetotransport properties both parallel and perpendicular to the substrate and found an anomaly of the electrical transport properties. Whereas metallic behavior is found within the plane of biaxial strain, for transport perpendicular to this plane an insulating behavior and non-linear current-voltage characteristics (IVCs) are observed. The most natural explanation of this anisotropy is a strain induced transition from an orbitally disordered ferromagnetic state to an orbitally ordered state associated with antiferromagnetic stacking of ferromagnetic manganese oxide planes.
10.1103/physrevb.66.052414
[ "https://export.arxiv.org/pdf/cond-mat/0207730v1.pdf" ]
119,500,237
cond-mat/0207730
9400d78f17e7b800164179bd2e1decfd662a292d
Transport anisotropy in biaxially strained La 2/3 Ca 1/3 MnO 3 thin films
31 Jul 2002

J. Klein, II. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln, Germany
J. B. Philipp, Walther-Meissner-Institut, Bayerische Akademie der Wissenschaften, Walther-Meissner Str. 8, 85748 Garching, Germany
G. Carbone, Max-Planck-Institut für Metallforschung, Heisenbergstr. 1, 70569 Stuttgart, Germany
A. Vigliante, Max-Planck-Institut für Metallforschung, Heisenbergstr. 1, 70569 Stuttgart, Germany
L. Alff, Walther-Meissner-Institut, Bayerische Akademie der Wissenschaften, Walther-Meissner Str. 8, 85748 Garching, Germany
R. Gross, Walther-Meissner-Institut, Bayerische Akademie der Wissenschaften, Walther-Meissner Str. 8, 85748 Garching, Germany

(Dated: received June 04, 2002)

Due to the complex interplay of magnetic, structural, electronic, and orbital degrees of freedom, biaxial strain is known to play an essential role in the doped manganites. For coherently strained La 2/3 Ca 1/3 MnO3 thin films grown on SrTiO3 substrates, we measured the magnetotransport properties both parallel and perpendicular to the substrate and found an anomaly of the electrical transport properties. Whereas metallic behavior is found within the plane of biaxial strain, for transport perpendicular to this plane an insulating behavior and non-linear current-voltage characteristics (IVCs) are observed. The most natural explanation of this anisotropy is a strain induced transition from an orbitally disordered ferromagnetic state to an orbitally ordered state associated with antiferromagnetic stacking of ferromagnetic manganese oxide planes.

It is well known that the physics of the doped perovskite manganites is determined by a complex interplay of structural, magnetic, electronic, and orbital degrees of freedom.
While the classical double exchange model can qualitatively explain the transition from a paramagnetic insulating to a ferromagnetic metallic state [1], for a more complete understanding of the physics of the manganites electron-lattice coupling has to be included [2]. Recently, Millis et al. have pointed out that uniform compression, as realized by hydrostatic pressure, increases the electron hopping amplitude favoring a ferromagnetic metallic state [3]. In contrast, biaxial strain, as realized in epitaxial thin films grown on substrates with significant lattice mismatch, enhances the Jahn-Teller distortions favoring an insulating state due to the tendency of the electrons to become localized [3]. Fang et al. have calculated the phase diagram of the almost tetragonal doped manganite La 1−x Sr x MnO 3 as a function of biaxial strain by studying the instabilities of the ferromagnetic state [4]. Their results are in agreement with experiments on biaxially strained La 1−x Sr x MnO 3 thin films [5]. In their work, it has been shown that the orbitally disordered ferromagnetic state (F) is unstable against orbitally ordered states with layer (A) and chain (C) type antiferromagnetism. In turn, it is expected that these different magnetic states are associated with different magnetotransport behavior via the double exchange mechanism. The further investigation of the validity of the strain phase diagram, of the corresponding magnetotransport properties, and of the extendability to doped manganites with a strong tilt of the MnO 6 octahedra [6] and to phenomena such as charge ordering, as found for example in La 1−x Ca x MnO 3, is of great interest to gain more insight into the physics of these materials and its dependence on lattice distortions.
For the purpose of this study, it is important to verify the coherency of the strained state of the doped manganite, in order to be able to determine properly the intrinsic properties within the biaxial strain phase diagram, and to exclude effects of nonuniform strain distribution or relaxation effects [7,8]. In this Letter we present a careful study of the structural, electronic, and magnetic properties of coherently strained La 2/3 Ca 1/3 MnO 3 (LCMO) thin films and LCMO-La 2/3 Ba 1/3 MnO 3 (LBMO) heterostructures on SrTiO 3 substrates. In addition to previous studies, transport properties have been measured both parallel and perpendicular to the plane of biaxial strain. The key result of our study is that biaxial strain results in highly anisotropic transport properties: Whereas insulating behavior and non-linear IVCs are observed perpendicular to the biaxially strained plane (parallel to the c axis), the in-plane transport is metallic below the Curie temperature T C . The saturation magnetization of biaxially strained LCMO is strongly reduced compared to the bulk material or the less strained LBMO films. It is shown that this behavior is not due to interface effects between different layers [9], but is an intrinsic property of the biaxially strained LCMO arising most likely from a strain induced orbital ordering. Another important result is that in strained LCMO a low-resistance state can be induced by applying either a magnetic field or a high current density. A similar behavior is observed, for example, in Pr 0.7 Ca 0.3 MnO 3 and Nd 0.5 Sr 0.5 MnO 3 and has been attributed to a magnetic-field- or current-induced (local) melting of a charge- or orbitally-ordered ground state [10,11,12]. We have grown LCMO films and LBMO-LCMO-LBMO heterostructures on SrTiO 3 (a ≃ 3.905 Å) substrates using pulsed laser deposition [13,14].
The lattice mismatch between the SrTiO 3 substrate and LCMO (a bulk ≃ 3.864 Å in pseudocubic notation) is −1.2%, resulting in in-plane tensile strain, whereas the mismatch between SrTiO 3 and LBMO (a bulk ≃ 3.910 Å) is only about 0.13%, resulting in very small compressive strain. Note that in the LBMO-LCMO-LBMO heterostructures the LBMO layers provide low resistance contacts to the (ultra)thin LCMO films, allowing for a homogeneous current feed for transport perpendicular to the film. Furthermore, effects of a possible surface "dead layer" [15] are avoided in the multilayer structure. Layer-by-layer growth of the films was confirmed by a high pressure reflection high energy electron diffraction (RHEED) system showing clear growth oscillations [16,17]. As has already been stressed, it is important to prove the coherency of the strained state. The coherent film thickness was determined from Laue oscillations in θ − 2θ x-ray scans. It was found that both LCMO and LBMO films grow coherently strained on SrTiO 3 substrates up to a thickness of at least 60 nm, consistent with literature [9,18]. The out-of-plane (c axis) and in-plane (a axis) lattice parameters of the LCMO thin films grown on SrTiO 3 have been determined from the (002) and (103) reflections. The in-plane film lattice parameters coincide throughout the whole layer with the substrate lattice parameters. That is, the in-plane lattice parameter of LCMO is enlarged, while the out-of-plane lattice constant is reduced, leading to a tetragonal lattice distortion with c/a ≈ 0.985. The tetragonal distortion can be viewed as a Jahn-Teller like distortion resulting in an increased Jahn-Teller splitting of the Mn e g levels and, in turn, in a tendency of the electrons to become localized. In order to measure the electrical transport properties perpendicular to the film plane, mesa structures (see inset of Fig. 1) were patterned using optical lithography and Ar ion beam etching. The mesas have a typical area of several µm 2 .
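The arithmetic behind these numbers can be sketched as follows: the out-of-plane lattice parameter follows from the (00l) reflection via Bragg's law, and the misfit from the bulk and substrate lattice constants. The Cu Kα wavelength and the example 2θ value are assumptions for illustration, not values quoted in the paper:

```python
import math

CU_KALPHA = 1.5406  # Å; Cu Kα1 assumed, the paper does not state the radiation

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law, λ = 2 d sin(θ), solved for the lattice spacing d."""
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def c_from_00l(two_theta_deg, l=2, wavelength=CU_KALPHA):
    """Out-of-plane lattice parameter from a (00l) reflection: c = l·d."""
    return l * d_spacing(two_theta_deg, wavelength)

def misfit(a_substrate, a_bulk):
    """Biaxial lattice mismatch (a_sub − a_bulk)/a_bulk; note that sign and
    reference-length conventions differ between papers."""
    return (a_substrate - a_bulk) / a_bulk
```

For instance, under these assumptions a (002) reflection near 2θ ≈ 47.3° corresponds to c ≈ 3.84 Å, which together with the substrate-locked in-plane parameter a = 3.905 Å gives c/a ≈ 0.98, of the order of the tetragonal distortion reported above.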
Fig. 1 shows the resistance vs temperature, R(T), curves for LBMO-LCMO-LBMO mesa structures with different thicknesses d_LCMO of the LCMO layer. For comparison, a sample without an LCMO layer (d_LCMO = 0) is shown. It is evident that the resistance increases with increasing d_LCMO. We estimate the involved resistivities ρ_c to be of the order of several Ωm at 10 K. Note that, due to the nonlinear IVCs, it is difficult to obtain a meaningful ρ_c or thickness dependence ρ_c(d). In the inset we show the sheet resistance as a function of d_LCMO. The strong increase of the resistance with increasing layer thickness can partially be due to tunneling through the barrier. The important point is that the resistivity does not show any tendency to saturate, meaning that the resistance is intrinsic to the barrier and not to the interface. Furthermore, the metallic R(T) behavior below T_C that is observed for the sample with d_LCMO = 0 turns into a semiconducting or insulating R(T) behavior with increasing d_LCMO. The low-temperature plateau (below 60 K) of the resistivity corresponds to a plateau in the temperature dependence of the magnetization (see upper panel of Fig. 3), as expected for double-exchange materials. This thickness dependence clearly shows that the insulating R(T) behavior is neither due to the patterning process, nor due to a contact resistance between the gold contact layer and the manganite film, nor due to interface effects. One can conclude that the insulating behavior for transport along the c axis observed at low temperatures is an intrinsic property of the coherently strained LCMO thin films. We note that in the work of Bibes et al., interfaces between magnetic LCMO and nonmagnetic SrTiO3 are shown to favor phase segregation [9]. Inhomogeneous transport properties were found in La0.7Ca0.3MnO3/SrTiO3 heterostructures and interpreted as arising from magnetic interface disorder [19]. In our case, the interfaces are between two differently doped manganites.
While it remains to be investigated whether interdiffusion is present at such interfaces, the effects observed here are related to the magnetic order of the whole thin-film layer. In Fig. 2 we show the IVCs of several mesa structures with different values of d_LCMO measured in the c-axis direction, i.e., with the current perpendicular to the biaxially strained plane. While for low enough current densities V ∝ I, all mesas show highly nonlinear IVCs at higher current densities, following a V ∝ I^n dependence with n ≈ 0.2−0.3. In contrast, the IVCs measured with the current in-plane are ohmic. For comparison, mesa structures have been patterned into single LBMO films (d_LCMO = 0), which are well lattice-matched to the SrTiO3 substrate. For these samples, ohmic IVCs have been obtained both for the current applied in-plane and out-of-plane. From this, and from the fact that the measured voltage clearly increases with increasing d_LCMO (see the discussion above), preparation or interface effects can be excluded as the origin of the nonlinear IVCs. Hence, we can conclude that, for current perpendicular to the plane of biaxial strain, the insulating behavior of the coherently strained LCMO films is associated with nonlinear current transport. We note that the nonlinearity becomes weaker with increasing T and vanishes at the T_C of the LCMO thin films, which ranges between 100 and 150 K. T_C was determined from magnetization measurements of the bilayer/trilayer films before mesa patterning. This strongly suggests that the electronic anisotropy is coupled to the magnetic properties of the LCMO films. In agreement with previous experiments, we found a strain dependence of the saturation magnetization M_S of the LCMO films. Zandbergen et al. reported M_S ≃ 2.5 µB/Mn atom at 5 K for a 6 nm thick, coherently strained LCMO film on SrTiO3 [20]. Consistently, we have measured a value of M_S ≃ 2.2 µB/Mn atom at 5 K for a 57.5 nm thick LCMO film (see inset of Fig. 2).
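The exponent n of the V ∝ I^n dependence quoted above can be extracted from any two points of a measured IVC, since V ∝ I^n is a straight line of slope n on a log-log plot. A minimal sketch with a synthetic IVC (the prefactor and current values are illustrative, not measured data):

```python
import math

# Sketch: estimating the exponent n of a V ∝ I^n dependence from two
# IVC points, as used above to characterize the nonlinear c-axis
# transport.  On a log-log plot V ∝ I^n has slope n.
def power_law_exponent(i1, v1, i2, v2):
    """Slope of log V vs log I between two IVC points."""
    return math.log(v2 / v1) / math.log(i2 / i1)

# Synthetic check with an assumed sublinear IVC, V = c * I**0.25
# (illustrative numbers only):
c = 2.0e3
i1, i2 = 1e-6, 1e-4
n = power_law_exponent(i1, c * i1**0.25, i2, c * i2**0.25)
# n recovers 0.25, within the n ≈ 0.2-0.3 range reported above
```
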
In contrast, almost strain-free LBMO films on SrTiO3 show the expected saturation magnetization of about 3.67 µB/Mn atom. These findings are in agreement with the phase diagram predicted by Fang et al. [4], where for tensile strain an instability towards an A-type antiferromagnetic state is predicted. Another interesting observation is that the size of the hysteresis loop in the M(H) curves also depends on strain. While for almost strain-free LBMO films on SrTiO3 a coercive field of µ0Hc ≃ 10 mT is observed at 5 K, a much larger value of µ0Hc ≃ 70 mT is measured for the strained LCMO films. Fig. 3 gives an overview of the field, current, and temperature dependence of the resistance and magnetization in strained LCMO films by showing R(T) curves recorded at different applied fields and currents. Applying high magnetic fields results in a strong suppression of the resistance at all T. Also, due to the nonlinearity of the IVCs, the measured resistance is reduced when the applied current is increased below about 150 K. We note that for T ≲ 150 K the resistance is dominated by the LCMO layer; only around 250 K is it dominated by the LBMO layer. Fig. 3 clearly shows that with increasing field the nonlinearity becomes weaker and that the onset temperature of the nonlinear behavior is shifted from about 100 K at 0 T to 50 K at 8 T. For comparison, the inset of Fig. 3 shows the in-plane ρ(T) curves of an LCMO film (d_LCMO = 57.5 nm). Clearly, a metallic ρ(T) behavior is observed below the peak temperature. The same film shows an insulating R(T) behavior for current perpendicular to the plane. To further study the effect of the applied current, we have measured IVCs for successive current sweeps with increasing amplitude, as shown in Fig. 4. After zero-field cooling, the current is first increased to 10 µA (curve a), corresponding to a current density of about 100 A/cm².
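The quoted "10 µA ≈ 100 A/cm²" figure is a simple unit conversion using the mesa cross section; the sketch below uses the 9 µm² mesa area given in the caption of Fig. 4 (for the other mesas, with areas of several µm², the density is of the same order):

```python
# Sketch: converting the applied mesa current to a current density,
# checking the "10 µA corresponds to about 100 A/cm²" statement above.
I_app = 10e-6                # applied current (A)
area_um2 = 9.0               # mesa area (µm²), value given for Fig. 4
area_cm2 = area_um2 * 1e-8   # 1 µm² = 1e-8 cm²
j = I_app / area_cm2         # current density (A/cm²)
# j is about 1.1e2 A/cm², i.e. "about 100 A/cm²"
```
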
On decreasing the current again (curve b), a lower voltage is measured at the same current values, i.e., the applied current of 10 µA has switched the sample to a state with lower resistance. Applying a high magnetic field (8 T) has the same effect as applying a high current (1 mA). After applying a field of 8 T, the measured IVCs are stable and the resistance is independent of the applied current. Thus, both a high magnetic field and a high current density induce a state with reduced resistivity. We now address the possibility of a phase-separated state in fully strained LCMO. We note that in the case of inhomogeneously strained or relaxed films (e.g., due to island growth), it is plausible to assume a phase-separated state with ferromagnetically and antiferromagnetically ordered clusters, as has been discussed recently for LCMO thin films grown on LaAlO3 [7,8]. A similar phenomenon has been observed for bulk Pr0.7Ca0.3MnO3 samples [11] as well as for LCMO bulk and thin-film samples [21]. However, phase separation cannot explain the transport anisotropy present in our samples; it would instead be expected to lead to direction-independent behavior. All our experimental observations can be naturally described by the assumption of strain-induced orbital order, as predicted by Fang et al. [4]. For tensile strain (c/a < 1, as present in our samples), a transition is expected from the conventional double-exchange-mediated, orbital-disordered F state to the orbital-ordered A state, which is composed mainly of d_{x²−y²} states. Whereas in the F state the spins are aligned parallel in adjacent planes, in the A state an antiparallel orientation of the ferromagnetically ordered planes is present. That is, the gradual transition from the F to the A state is accompanied by a strong reduction of the saturation magnetization, in agreement with the strongly reduced value obtained in our experiments.
Furthermore, the A-type antiferromagnetic state can be metallic only within the ferromagnetically ordered plane, but is insulating in the perpendicular direction. This again is in agreement with our experimental observations. That is, our data can be interpreted in terms of a strain-induced orbital-ordering effect at fixed doping. Evidently, a sufficiently high current density or a high magnetic field makes it possible to switch between the competing states, with both favoring the F state; this can be interpreted as a (local) melting of the orbital order by the injection of highly spin-polarized carriers. In summary, we have investigated coherently strained LCMO films. The biaxial strain was found to induce anisotropic transport properties at low temperatures, with metallic and insulating behavior for current in- and out-of-plane, respectively. It has been shown that this behavior is not due to inhomogeneous interface effects or phase separation. We suggest strain-induced orbital ordering as the origin of the observed behavior, in agreement with theoretical predictions. We have also shown that a low-resistance state can be induced by applying a high current density. This effect may be of interest with respect to magnetoelectronic devices. This work was supported by the Deutsche Forschungsgemeinschaft. One of us, J. K., acknowledges support by the Graduiertenkolleg 549. PACS numbers: 68.55.-a, 75.30.Vn, 75.70.Cn. To appear in Physical Review B (Brief Report).
Figure 1: Resistance vs temperature curves of mesa-type LBMO-LCMO-LBMO heterostructures as sketched in the inset. The thickness dLCMO of the LCMO layer, the size of the mesa, and the measurement current are given next to the corresponding curves. The inset shows the sheet resistance vs dLCMO at 10 K.
Figure 2: IVCs measured along the c-axis direction at 4.2 K for two LBMO-LCMO-LBMO mesa structures with dLCMO = 7.2 nm and 9.6 nm.
Also shown are the IVCs for mesa structures patterned into a LBMO-LCMO bilayer with dLCMO = 30 nm and a single LCMO film with a mesa height of 37 nm. The inset shows the magnetization vs applied field curve of a 57.5 nm thick LCMO film at 5 K.
Figure 3: Resistance vs temperature curves of a LBMO-LCMO-LBMO (dLCMO = 7.2 nm) heterostructure measured for current perpendicular to the plane at different applied magnetic fields and currents (0 T: 0.1 and 10 µA; 4 T: 0.1, 10, and 100 µA; 8 T: 0.1, 10, and 100 µA; the resistance decreases with increasing current). The sample configuration is shown in the inset of Fig. 4. The inset shows the ρ(T) curves of an LCMO film for current applied in-plane. In the upper panel the magnetization m is shown.
Figure 4: IVCs of a LBMO-LCMO-LBMO mesa structure (dLCMO = 6.0 nm) measured for current perpendicular to the plane at 5 K and zero field. The different curves have been obtained in successive current sweeps with increasing amplitude. The mesa area was 9 µm².
[1] C. Zener, Phys. Rev. 82, 403 (1951); P. W. Anderson and H. Hasegawa, Phys. Rev. 100, 675 (1955).
[2] A. J. Millis, P. B. Littlewood, and B. I. Shraiman, Phys. Rev. Lett. 74, 5144 (1995); A. J. Millis, B. I. Shraiman, and R. Mueller, Phys. Rev. Lett. 77, 175 (1996).
[3] A. J. Millis, T. Darling, and A. Migliori, J. Appl. Phys. 83, 1588 (1998).
[4] Z. Fang, I. V. Solovyev, and K. Terakura, Phys. Rev. Lett. 84, 3169 (2000).
[5] Y. Konishi, Z. Fong, M. Izumi, T. Manako, M. Kasai, H. Kuwahara, M. Kawasaki, K. Terakura, and Y. Tokura, J. Phys. Soc. Jpn. 68, 3790 (1999).
[6] A. Vigliante, U. Gebhardt, A. Rühm, P. Wochner, F. S. Razavi, and H. U. Habermeier, Europhys. Lett. 54, 619 (2001).
[7] Amlan Biswas, M. Rajeswari, R. C. Srivastava, Y. H. Li, T. Venkatesan, R. L. Greene, and A. J. Millis, Phys. Rev. B 61, 9665 (2000).
[8] Amlan Biswas, M. Rajeswari, R. C. Srivastava, T. Venkatesan, R. L. Greene, Q. Lu, A. L. de Lozanne, and A. J. Millis, Phys. Rev. B 63, 184424 (2001).
[9] M. Bibes, Ll. Balcells, S. Valencia, J. Fontcuberta, M. Wojcik, E. Jedryka, and S. Nadolski, Phys. Rev. Lett. 87, 067210 (2001).
[10] A. Asamitsu, Y. Tomioka, H. Kuwahara, and Y. Tokura, Nature 388, 50 (1997).
[11] M. Fiebig, K. Miyano, Y. Tomioka, and Y. Tokura, Science 280, 1925 (1998).
[12] Ayan Guha, A. K. Raychaudhuri, A. R. Raju, and C. N. R. Rao, Phys. Rev. B 62, 5320 (2000).
[13] R. Gross, J. Klein, B. Wiedenhorst, C. Höfener, U. Schoop, J. B. Philipp, M. Schonecke, F. Herbstritt, L. Alff, Yafeng Lu, A. Marx, S. Schymon, S. Thienhaus, and W. Mader, in Superconducting and Related Oxides: Physics and Nanoengineering IV, edited by D. Pavuna and I. Bosovic, SPIE Conf. Proc. 4058 (2000), pp. 278-294.
[14] J. Klein, J. B. Philipp, L. Alff, and R. Gross, phys. stat. sol. (a) 189, 617 (2002).
[15] J. Z. Sun, D. W. Abraham, R. A. Rao, and C. B. Eom, Appl. Phys. Lett. 74, 3017 (1999).
[16] G. J. H. M. Rijnders, G. Koster, D. H. A. Blank, and H. Rogalla, Appl. Phys. Lett. 70, 1888 (1997).
[17] J. Klein, C. Höfener, L. Alff, and R. Gross, J. Magn. Magn. Mater. 211, 9 (2000); see also Supercond. Sci. Technol. 12, 1023 (1999).
[18] M. Izumi, Y. Konishi, T. Nishihara, S. Hayashi, M. Shinohara, M. Kawasaki, and Y. Tokura, Appl. Phys. Lett. 73, 2497 (1998).
[19] Moon-Ho Jo, N. D. Mathur, J. E. Evetts, M. G. Blamire, M. Bibes, and J. Fontcuberta, Appl. Phys. Lett. 75, 3689 (1999).
[20] H. W. Zandbergen, S. Freisem, T. Nojima, and J. Aarts, Phys. Rev. B 60, 10259 (1999).
[21] M. Fäth, S. Freisem, A. A. Menovsky, Y. Tomioka, J. Aarts, and J. A. Mydosh, Science 285, 1540 (1999).
[]
[ "Light absorption by weakly rough metal surfaces", "Light absorption by weakly rough metal surfaces" ]
[ "Z S Gevorkian \nAlikhanyan National Laboratory\nAlikhanian Brothers St. 20036YerevanArmenia\n\nInstitute of Radiophysics and Electronics\nAshtarak-2 0203Armenia\n", "L S Petrosyan \nDepartment of Physics\nJackson State University\n39217JacksonMississippiUSA\n", "T V Shahbazyan \nDepartment of Physics\nJackson State University\n39217JacksonMississippiUSA\n" ]
[ "Alikhanyan National Laboratory\nAlikhanian Brothers St. 20036YerevanArmenia", "Institute of Radiophysics and Electronics\nAshtarak-2 0203Armenia", "Department of Physics\nJackson State University\n39217JacksonMississippiUSA", "Department of Physics\nJackson State University\n39217JacksonMississippiUSA" ]
[]
We study light absorption by weakly rough metal surfaces with the roughness amplitude and correlation length smaller than the skin depth in metal. We develop a systematic perturbative approach for calculation of the absorptance in such systems and find that roughness-related absorptance variations are determined by an interplay between several system parameters which can result, in particular, in a greater absorption for smaller roughness amplitudes. We show that, for small-scale roughness, the absorptance variations are mainly caused by roughness-induced increase in effective volume of the surface layer, in which the incident light is predominantly absorbed. We argue that such absorptance fluctuations between different samples, even though not related to any electron scattering processes, can appear as sample-to-sample variations of the Drude scattering rate reported in recent measurements of the metal dielectric function.
10.1103/physrevb.106.205302
[ "https://export.arxiv.org/pdf/2208.06796v2.pdf" ]
251,564,722
2208.06796
24552f9e56e8510d177f79d68c754a4b2c6e9955
Light absorption by weakly rough metal surfaces Z S Gevorkian Alikhanyan National Laboratory Alikhanian Brothers St. 20036YerevanArmenia Institute of Radiophysics and Electronics Ashtarak-2 0203Armenia L S Petrosyan Department of Physics Jackson State University 39217JacksonMississippiUSA T V Shahbazyan Department of Physics Jackson State University 39217JacksonMississippiUSA Light absorption by weakly rough metal surfaces arXiv:2208.06796v2 [cond-mat.mes-hall] 9 Nov 2022 We study light absorption by weakly rough metal surfaces with the roughness amplitude and correlation length smaller than the skin depth in metal. We develop a systematic perturbative approach for calculation of the absorptance in such systems and find that roughness-related absorptance variations are determined by an interplay between several system parameters which can result, in particular, in a greater absorption for smaller roughness amplitudes. We show that, for small-scale roughness, the absorptance variations are mainly caused by roughness-induced increase in effective volume of the surface layer, in which the incident light is predominantly absorbed. We argue that such absorptance fluctuations between different samples, even though not related to any electron scattering processes, can appear as sample-to-sample variations of the Drude scattering rate reported in recent measurements of the metal dielectric function. INTRODUCTION Roughness is a property of materials that persists even for the best prepared samples. Numerous papers are devoted to the scattering of electromagnetic waves from rough surfaces and its applications (see [1] for a recent review). 
It is well understood [2-8] that, if the characteristic size of the roughness parameters is comparable with the wavelength of the incident light λ, multiple scattering from surface imperfections can lead to a diffuse angular distribution of scattered light and to various other effects, such as enhanced backscattering and weak localization [9-13]. In the opposite case of weak roughness, when the characteristic roughness size is much smaller than the wavelength, the diffuse component of the scattered light is suppressed and the incident light is predominantly specularly reflected from the metal surface. While reflection and scattering from rough metal surfaces have been extensively studied, much less attention has been paid to the effect of roughness on the absorption of electromagnetic waves in the metal [14]. However, this is an issue of considerable importance, since even a weak roughness can substantially affect measurements of the metal dielectric function, especially of its imaginary part [15]. Precise knowledge of the dielectric function is crucial, e.g., for understanding the electronic structure of metals, chemical bonding, and optical properties [16-21]. The dielectric function also determines many important parameters in plasmonics [22], such as the surface plasmon propagation length, the plasmon radiation rate, and nonradiative losses [23-25]. Early works on the absorption of electromagnetic waves in rough metals were focused on the absorption enhancement due to the excitation of plasmon polaritons on rough surfaces [26-34]. The experimental paper [35] confirmed the dominance of plasmon polaritons for absorption in silver films for relatively short wavelengths λ < 400 nm and large-scale roughness with the root-mean-square (rms) amplitude δ and correlation length a exceeding 100 nm. These findings were later supported by numerical calculations as well [36].
However, for the weak-roughness case (δ < 15 nm), the experiment [35] reported a discrepancy between the absorptance A calculated as A = 1 − R, where R is the measured total reflectance from the rough metal surface, and the absorptance A measured directly using photothermal deflection spectroscopy [37]. Notably, no such discrepancy was reported for large-scale roughness, which is characterized by a much stronger absorption. The purpose of this paper is to develop a consistent perturbative approach to light absorption by weakly rough metal surfaces characterized by a Gaussian random profile function h with rms amplitude δ and correlation length a. In contrast to the light-scattering problem, here the relevant length scale characterizing the incident light is the metal skin depth d, which, in the frequency region we consider, is much smaller (by a factor of 20-50) than the light wavelength λ. Accordingly, all three length scales δ, a, and d are assumed to be much smaller than the wavelength λ, so that the absorption is governed by three dimensionless parameters: δ/a, δ/d, and a/d. By a "weakly rough" surface we imply that the first two parameters are small but the last one can be arbitrary; the perturbation expansion we employ is carried out to order δ², and the final expressions for the absorptance are evaluated numerically. As we show in this paper, the interplay between these parameters is highly nontrivial and can, in particular, lead to a non-monotonic dependence of the absorptance on the roughness amplitude δ, as reported in the experiment [35]. We also obtain analytic expressions for small-scale roughness, when the third parameter is small as well. A serious limitation of the perturbative approach to such systems is that a small variation of the air-metal interface profile function h results in an abrupt and significant change in the electric field distribution, due to the very large difference in magnitude between the air and metal permittivities [38].
We address this issue by extending the boundary conditions for the unperturbed fields to the actual surface profile, and show that an accurate treatment of this problem significantly reduces the first-order absorptance corrections, which otherwise are excessively large. Furthermore, we find that, for small-scale roughness (a/d ≪ 1), the main roughness-related effect comes from the increase in the effective volume of the surface layer, in which the incident light is predominantly absorbed, rather than from light scattering from surface imperfections. In this regime, the absorptance variation relative to the smooth-surface absorptance is estimated as ∆A ∝ δ²/a², which can fluctuate substantially between different samples characterized by small δ but a larger spread of a. We argue that such absorptance fluctuations can appear as sample-to-sample variations of the Drude scattering rate, reported in recent measurements of the complex dielectric function [15], even though they are not related to any electron scattering processes in metals. The paper is organized as follows. In Sec. II, we set out our perturbation approach to the absorption by weakly rough metal surfaces. In Sec. III, we calculate various contributions to the absorptance and derive the asymptotic expressions for the case of small-scale roughness. In Sec. IV, we discuss the results of our numerical calculations for silver films, and in Sec. V we present our concluding remarks. PERTURBATION APPROACH TO LIGHT ABSORPTION BY WEAKLY ROUGH SURFACES We consider a monochromatic electric field polarized along the y axis that is incident normally upon the metal occupying the infinite region z < h(y) (see Fig. 1). We consider here the one-dimensional roughness case, as it captures the essential features of weakly rough metals [1], and assume that h(y) is a Gaussian random field with zero average, ⟨h(y)⟩ = 0, and correlation function
W(|y − y′|) = ⟨h(y)h(y′)⟩ = δ² e^{−(y−y′)²/2a²}.
(1) Here, δ is the roughness rms amplitude and a is its correlation length, both of which are assumed to be much smaller than the incident light wavelength λ. The fields above and below the air-metal interface are determined by the Maxwell equations
∇ × E = ik₀B,  ∇²E − ∇(∇·E) + k₀² ε(r, ω)E = 0,  (2)
where k₀ = ω/c is the wave vector in the air, ω is the wave frequency, and the system dielectric permittivity ε(r, ω) has the form
ε(r, ω) = Θ[z − h(y)] + ε(ω)Θ[h(y) − z] ≈ ε₀(z, ω) + ε₁(r, ω).  (3)
Here,
ε₀(z, ω) = { 1 for z > 0;  ε(ω) for z < 0 }  (4)
is the permittivity for a smooth metal-air interface, which in the following we refer to as the reference system,
ε₁(r, ω) = [ε(ω) − 1] δ(z) h(y)  (5)
is the perturbation due to small variations of h, δ(z) is the Dirac delta function, and ε(ω) = ε′(ω) + iε″(ω) is the metal dielectric function. We decompose the electric field as E = E₀ + E_s, where E₀ is the reference field and E_s is the scattered field due to surface roughness. These satisfy the equations (suppressing, for brevity, the ω dependence)
∇²E₀ − ∇(∇·E₀) + k₀² ε₀(z)E₀ = 0,  (6)
and
∇²E_s − ∇(∇·E_s) + k₀² ε(r)E_s = −k₀² ε₁(r)E₀.  (7)
The scattered fields can be obtained in a standard manner using the dyadic Green's function defined as [32,40]
[k₀² ε₀(z) − ∇∇ + ∇² + k₀² ε₁(r)] D(r, r′) = 4π δ(r − r′),  (8)
where a perturbation expansion over ε₁ is implied. The reference field E₀ can be chosen as the sum of incident and reflected plane waves in the air and a transmitted plane wave in the metal for a system with a smooth air-metal interface. However, this is not a good choice for the lowest order of the perturbation expansion because, due to the large value of the metal permittivity at optical and infrared frequencies (|ε′| ≫ 1), even a small change in the air-metal interface position leads to abrupt and significant field variations [38].
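The Gaussian random profile of Eq. (1) is straightforward to sample numerically, which is useful for checking the roughness averages ⟨h²⟩ = δ² and ⟨h′²⟩ = δ²/a² used below. A minimal sketch (parameter values are illustrative assumptions): filtering white noise with a Gaussian kernel of width a/√2 yields a Gaussian correlation of length a, and the amplitude is then rescaled to δ:

```python
import numpy as np

# Sketch: sampling a rough profile h(y) with the correlation function of
# Eq. (1), W(Δ) = δ² exp(−Δ²/2a²), and checking ⟨h²⟩ = δ², ⟨h'²⟩ = δ²/a².
rng = np.random.default_rng(0)
delta, a = 2.0, 20.0          # rms amplitude and correlation length (nm)
dy = a / 20.0                 # grid step, fine compared to a
n = 1 << 17
noise = rng.standard_normal(n)
y_k = (np.arange(n) - n // 2) * dy
kernel = np.exp(-y_k**2 / a**2)        # Gaussian kernel of width a/√2
h = np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))).real
h *= delta / h.std()                   # fix ⟨h²⟩ = δ² exactly

# numerical derivative -> ⟨h'²⟩ should be close to δ²/a² = 0.01 here
hp = np.gradient(h, dy)
slope_var = np.mean(hp**2)
```
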
To avoid this issue, in the case of weak roughness, we can modify the reference field by extending it up to the actual interface:
Ẽ₀y(r) = { e^{−ik₀z} − r e^{ik₀z} for z > h(y);  t e^{ik₋z} for z < h(y) },  (9)
where k₋ = −k₀√ε (the negative sign ensures that the transmitted wave decays into the metal). Note that Ẽ₀x = Ẽ₀z = 0 for our choice of polarization. Here, the incident wave amplitude E_inc is taken to be unity, while r and t are the standard Fresnel coefficients of reflection and transmission for normal incidence:
r = (√ε − 1)/(√ε + 1),  t = 2/(√ε + 1).  (10)
Accordingly, the field decomposition now has the form E = Ẽ₀ + Ẽ_s, where the modified scattered field is expressed through the dyadic Green's function D(r, r′) as
Ẽ_s(r) = −(k₀²/4π) ∫ dr′ D(r, r′) Ẽ₀(r′) ε̃₁(r′).  (11)
Here, ε̃₁(r) = (ε − 1) δ[z − h(y)] h(y) is the modified perturbation obtained from Eq. (5) by the replacement z → z − h(y) in the δ function, in order to make it consistent with the extended boundary conditions (9). Note that, while ε̃₁(r) and ε₁(r) coincide in the first order, the accurate choice of the reference field leads to a significant reduction of the excessively large first-order correction to the absorptance, as we show later in this paper. For a monochromatic wave with frequency ω, the power absorbed in the metal is given by [39]
Q = (ω/8π) ε″ ∫ dV |E|²,  (12)
where the integration is carried out over the metal volume. The absorptance A is obtained by normalizing Q by the incident energy flux Q_inc = c|E_inc|² S₀/8π, where S₀ = L_x L_y is the normalization area. Using the above field decomposition, the absorptance averaged over the roughness configurations takes the form
A = (ε″k₀/S₀) ∫ dV ⟨|Ẽ₀|² + 2Re(Ẽ₀*·Ẽ_s) + |Ẽ_s|²⟩.  (13)
Below we evaluate all contributions to Eq. (13) perturbatively, i.e., up to the order δ².
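The Fresnel coefficients of Eq. (10) can be checked against energy conservation: for a smooth absorbing surface at normal incidence, whatever is not reflected must be absorbed, so 1 − |r|² must equal the smooth-surface absorptance A₀ = ε″|t|²/2κ that appears below. A minimal sketch (the permittivity value is an illustrative assumption in the |ε′| ≫ 1 regime, roughly silver-like in the near infrared):

```python
import cmath

# Illustrative permittivity with |ε'| >> 1 (assumed value, not fitted)
eps = -50.0 + 3.0j
sqrt_eps = cmath.sqrt(eps)            # √ε = n + iκ, with κ > 0
n, kappa = sqrt_eps.real, sqrt_eps.imag

# Fresnel coefficients for normal incidence, Eq. (10)
r = (sqrt_eps - 1) / (sqrt_eps + 1)
t = 2 / (sqrt_eps + 1)

# Smooth-surface absorptance in two equivalent forms
A0_flux = 1 - abs(r)**2                      # energy not reflected
A0_bulk = eps.imag * abs(t)**2 / (2 * kappa) # A0 = ε''|t|²/2κ
# both reduce analytically to 4n/|√ε + 1|²
```
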
Specifically, we assume that the dimensionless parameters δ/λ, δ/d, and δ/a are small, but no restriction is imposed on the parameter a/d in the general expressions derived in the next section. In addition, we present below the analytical expressions in the asymptotic regime a/d ≪ 1, which are valid for metal films with small-scale roughness. CALCULATION OF THE ABSORPTANCE Reference field contribution We start with the first term in Eq. (13), describing the reference field contribution:
A_r = (ε″k₀/S₀) ∫ dV |Ẽ₀|².  (14)
The integration over the metal volume can be decomposed as ∫dV = ∫dz ∫dS, where dS = dx dy √(1 + h′²) is the differential area of the surface with profile z = h(y). Taking into account the extended boundary conditions (9) and using Eq. (10), we have
A_r = (ε″k₀|t|²/S₀) ∫ dx dy √(1 + h′²(y)) ∫_{−∞}^{h} dz e^{2κk₀z},  (15)
where we adopted the standard notation √ε = n + iκ for the complex refraction index. Integrating over z and expanding the integrand over h and h′ (the term linear in h averages to zero and is dropped), we obtain
A_r = (A₀/S₀) ∫ dx dy [1 + h′²(y)/2 + 2h²(y)/d²],  (16)
where A₀ = ε″|t|²/2κ is the absorptance for a smooth metal surface and d = (k₀κ)⁻¹ is the skin depth in the metal. Averaging over roughness configurations as ⟨h²(y)⟩ = δ² and ⟨h′²(y)⟩ = δ²/a², we finally obtain
⟨A_r⟩ = A₀ (1 + δ²/2a² + 2δ²/d²).  (17)
In the case of small-scale roughness, a ≪ d, the last term is small, and the absorptance variations are determined solely by the roughness parameters: ∆A_r/A₀ ≈ δ²/2a². This contribution originates from the roughness-induced increase in the effective volume of the region in which the incident light is predominantly absorbed. This region can be visualized as a surface layer of thickness d measured from the actual surface profile, as opposed to a layer of the same thickness but with smooth boundaries (see Fig. 1). Interference term contribution Consider now the second term in Eq.
(13), describing interference between the reference and scattered fields,
A_i = (ε″k₀/S₀) 2Re ∫ dV Ẽ₀*·Ẽ_s,  (18)
where Ẽ_s is given by Eq. (11). Up to the order h², we can present the scattered field as a sum of two terms, Ẽ_s = Ẽ_s^{(1)} + Ẽ_s^{(2)}, corresponding, respectively, to the lowest and first order of the perturbation expansion of the Green's function D in Eq. (11). Accordingly, this contribution to the absorptance can also be split as A_i = A_i1 + A_i2. We start with the first contribution, obtained by inserting into Eq. (11) the unperturbed Green's function D₀, obtained by setting ε₁ = 0 in Eq. (8). One might be tempted to think that, since Ẽ_s^{(1)} ∼ h, the corresponding absorptance A_i1 would vanish after averaging over the roughness configurations. However, as we show below, the extended boundary conditions (9) for the modified reference field Ẽ₀ lead to a negative contribution to the absorptance, which balances out the excessive absorption increase due to scattered-field penetration into the metal. To evaluate the average C₁ = ⟨∫dV Ẽ₀*·Ẽ_s^{(1)}⟩, we introduce the two-dimensional Fourier transform of the unperturbed Green's function,
D₀(ρ − ρ′, z, z′) = ∫ [dq/(2π)²] d(q, z, z′) e^{iq·(ρ−ρ′)},  (19)
where ρ is a two-dimensional position vector in the xy plane and d(q, z, z′) is a matrix function in coordinate space (see below). Employing this expression in Eq. (11), we present C₁ in the form
C₁ = −(k₀²|t|²/4π)(ε − 1) ⟨∫ dρ dρ′ ∫ [dq/(2π)²] e^{iq·(ρ−ρ′)} × ∫_{−∞}^{h} dz d_yy[q, z, h(y′)] e^{−ik₋*z} h(y′)⟩.  (20)
The matrix function d(q, z, z′) can be presented as [32] d(q, z, z′) = S⁻¹ g(q, z, z′) S, where the matrix function g(q, z, z′) is tabulated in [32,40], while the 3×3 matrix S has the elements S_xx = S_yy = q_x/q, S_zz = 1, S_xy = −S_yx = q_y/q, and S_xz = S_zx = S_yz = S_zy = 0. For normal incidence and a one-dimensional roughness profile function h(y), the ρ-integration in Eq.
(20) sets q_x = 0, and we obtain

    d_yy(q, z, z′) = − (2πik_1 / εk_0²) { [(k_1 + εk)/(k_1 − εk)] e^{ik_1(z+z′)} − e^{−ik_1|z−z′|} } ,    (21)

where the points z and z′ are assumed inside the metal, and

    k(q) = (k_0² − q²)^{1/2} for q < ω/c,    k(q) = i(q² − k_0²)^{1/2} for q > ω/c,
    k_1(q) = − (εk_0² − q²)^{1/2} .    (22)

Inserting Eq. (21) into Eq. (20) and integrating over z, we obtain

    C_1 = − (|t|²/2ε)(ε − 1) ∫ dρ ∫ (dq/2π) [k_1/(k_−* − k_1)] e^{i(k_1 − k_−*)h(y)}
          × ∫ dy′ { [(k_1 + εk)/(k_1 − εk)] e^{ik_1 h(y′)} − e^{−ik_1 h(y′)} } e^{iq(y−y′)} h(y′) ,    (23)

where q ≡ q_y. Expanding Eq. (23) over h, averaging over roughness configurations, and then calculating the integrals, the result can be presented in the form C_1 = C_1⁽¹⁾ + C_1⁽²⁾, where

    C_1⁽¹⁾ = − |t|² (ε − 1) S_0 ∫ (dq/2π) k(q) k_1(q) W(q) / { i[k_1(q) − εk(q)] }    (24)

and

    C_1⁽²⁾ = − (i|t|² S_0 / ε)(ε − 1) k_1³(q = 0) W(y = 0) / { [k_−* − k_1(0)][k_1(0) − εk(0)] } .    (25)

Here, W(q) = δ² a √(2π) e^{−q²a²/2} is the Fourier transform of the correlation function W(y), and W(y = 0) = δ². Note that C_1⁽¹⁾ originates from the averaging of two profile functions at different points, W(y) = ⟨h(y)h(y′)⟩, when the first exponent in Eq. (23) is expanded up to the first order in h(y), while C_1⁽²⁾ involves averaging of two profile functions at coinciding points, W(y = 0) = ⟨h(y)h(y)⟩, when the exponents in the square brackets are expanded.

Let us first estimate C_1⁽²⁾. Substituting k(0) and k_1(0) from Eq. (22) into Eq. (25), we find

    C_1⁽²⁾ = − (|t|² δ² k_0 / 2κ) S_0 (ε − 1)/(√ε + 1) .    (26)

It is easy to check that, in the optical and infrared spectral domain (i.e., |ε′| ≫ 1), the corresponding absorptance contribution is A_i1⁽²⁾ = (ε′′k_0/S_0) 2Re C_1⁽²⁾ ≈ 2A_0 δ² k_0², and, hence, it is suppressed by the small factor |ε′|⁻¹ as compared to the last term in Eq. (17). Therefore, this contribution can be neglected.
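The averages used above and in Eq. (17), ⟨h²⟩ = W(0) = δ² and ⟨h′²⟩ = −W′′(0) = δ²/a², follow directly from the Gaussian correlation function W(y) and its Fourier transform W(q) quoted above. A quick numerical sanity check of both relations, as a sketch with illustrative parameter values (δ and a are not values fixed by the text):

```python
import math

delta, a = 2.0, 6.0   # illustrative rms amplitude and correlation length (nm)

def W_q(q):
    # roughness power spectrum quoted in the text: W(q) = delta^2 a sqrt(2 pi) exp(-q^2 a^2 / 2)
    return delta ** 2 * a * math.sqrt(2 * math.pi) * math.exp(-(q * a) ** 2 / 2)

def W_y(y):
    # its inverse Fourier transform, the Gaussian correlation function W(y) = delta^2 exp(-y^2 / 2 a^2)
    return delta ** 2 * math.exp(-(y / a) ** 2 / 2)

# Check the Fourier pair numerically: W(y) = (1/2pi) \int dq W(q) exp(i q y), here at y = a.
N, Q = 4000, 8.0 / a
dq = 2 * Q / N
num = sum(W_q(-Q + (i + 0.5) * dq) * math.cos((-Q + (i + 0.5) * dq) * a) for i in range(N)) * dq / (2 * math.pi)
assert abs(num - W_y(a)) < 1e-6

# Mean-square slope <h'^2> = -W''(0) = delta^2 / a^2, via a central finite difference.
eps = 1e-4 * a
ms_slope = -(W_y(eps) - 2 * W_y(0.0) + W_y(-eps)) / eps ** 2
assert abs(ms_slope - delta ** 2 / a ** 2) < 1e-6
print(num, ms_slope)
```

The mean-square slope δ²/a² recovered here is exactly the quantity that drives the "geometric" surface-layer term δ²/2a² in Eq. (17).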
Turning to C_1⁽¹⁾, we introduce the dimensionless variable x = qa in the integral and write

    C_1⁽¹⁾ = − (2|t|² δ² / a) S_0 (ε − 1) I(β) ,    (27)

where

    I(β) = ∫_0^∞ dx (i/√(2π)) √(β² − x²) √(εβ² − x²) e^{−x²/2} / [ √(εβ² − x²) + ε√(β² − x²) ]    (28)

and β = k_0 a. In this way, we obtain the first interference contribution to the absorptance as

    A_i1 = (2ε′′k_0/S_0) Re C_1⁽¹⁾ = − A_0 (8δ²/da) Re[(ε − 1) I(β)] .    (29)

In the case of small-scale roughness a ≪ d, the integral can be evaluated at β = 0, yielding I(0) = 1/[√(2π)(ε + 1)]. Finally, in the frequency domain |ε′| ≫ 1, we obtain the asymptotic expression

    A_i1 ≈ − A_0 8δ²/[√(2π) d a] ,    (30)

which has a negative sign.

Turning now to the second interference contribution A_i2, we expand the Green's function D, defined by Eq. (8), to first order in h, and present the second-order scattered field in Eq. (11) as

    Ẽ_s⁽²⁾ = (k_0²/4π)² ∫ dr′ dr′′ D_0(r, r′) ε_1(r′) D_0(r′, r′′) ε_1(r′′) E_0(r′′) ,    (31)

where, in the order h², the modified reference field and perturbation can be replaced by the original ones.
Evaluating A_i2 in a similar manner, we obtain

    A_i2 = A_0 (4δ²/ad) Re[ (ε − 1)² I(β) / iκ(√ε + 1) ] .    (32)

For the small-scale roughness case a ≪ d, we can use I(0) = 1/[√(2π)(ε + 1)], and then, for |ε′| ≫ 1, we obtain the asymptotic expression for this contribution as

    A_i2 ≈ A_0 4δ²/[√(2π) d a] .    (33)

Finally, the full interference contribution A_i = A_i1 + A_i2 is still negative and, for small-scale roughness, has the asymptotic form

    A_i ≈ − A_0 4δ²/[√(2π) d a] .    (34)

Scattering term contribution

Consider now the scattered field contribution to the absorptance, which we present in the form

    A_s = (ε′′ k_0 / S_0) ∫ dV ( |Ẽ_sx|² + |Ẽ_sy|² + |Ẽ_sz|² ) .    (35)

After evaluating each term in the way outlined in the previous section, the result can be presented as

    A_s = A_0 (2δ²/ad) |ε − 1|² [ I_x(β) + I_y(β) + I_z(β) ] ,    (36)

where

    I_x(β) = ∫_0^∞ (dx/√(2π)) |β² − x²| / |Im √(εβ² − x²)| × x² e^{−x²/2} / | √(εβ² − x²) + ε√(β² − x²) |² ,    (37)

    I_y(β) = ∫_0^∞ (dx/√(2π)) |β² − x²| / |Im √(εβ² − x²)| × |εβ² − x²| e^{−x²/2} / | √(εβ² − x²) + ε√(β² − x²) |² ,    (38)

and I_z(β) = I_x(β). In the small-scale roughness case a ≪ d, using I_x(0) = I_y(0) = I_z(0) = 1/[√(2π)|ε + 1|²], we obtain the asymptotic (|ε′| ≫ 1) expression

    A_s ≈ A_0 6δ²/[√(2π) d a] .    (39)

Finally, the full absorptance is obtained by summing up all the contributions, A = A_r + A_i + A_s. The small-scale roughness asymptotic expression is obtained by collecting the corresponding terms from Eqs. (17), (34) and (39):

    A ≈ A_0 [ 1 + δ²/2a² + 2δ²/(√(2π) d a) + 2δ²/d² ] .    (40)

Concluding this section, we note that the accurate choice of the reference field (9) ensures the small magnitude of the first-order correction to the absorptance in the weak-roughness case. Specifically, had we chosen the standard, rather than extended, boundary conditions for the reference fields, the third term in Eq. (40) would have increased fivefold, signaling a poor choice of basis set for the perturbation expansion.
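As a quick numerical illustration of the small-scale-roughness asymptotics, Eq. (40), the following sketch (illustrative roughness values, not parameters from the paper, with d = 25 nm for silver as quoted below) evaluates the four separate contributions, checks that the interference term is negative while the total correction stays positive, and reproduces the reversal-of-order effect between samples with different δ/a:

```python
import math

SQ2PI = math.sqrt(2 * math.pi)

def terms(delta, a, d):
    # relative contributions to A/A0 in the small-scale-roughness limit a << d
    return {
        "slope, Eq. (17)": delta ** 2 / (2 * a ** 2),              # surface-layer volume increase
        "skin, Eq. (17)": 2 * delta ** 2 / d ** 2,
        "interference, Eq. (34)": -4 * delta ** 2 / (SQ2PI * d * a),
        "scattering, Eq. (39)": 6 * delta ** 2 / (SQ2PI * d * a),
    }

def A_rel(delta, a, d):
    # A / A0 from Eq. (40)
    return 1.0 + sum(terms(delta, a, d).values())

d = 25.0                      # skin depth of silver (nm), as quoted in the text
t = terms(2.0, 6.0, d)        # illustrative roughness: delta = 2 nm, a = 6 nm
assert t["interference, Eq. (34)"] < 0 < sum(t.values())

# Reversal of order (cf. Fig. 3): smaller amplitude but larger delta/a absorbs more.
assert A_rel(2.0, 4.0, d) > A_rel(5.0, 25.0, d)
print(A_rel(2.0, 4.0, d), A_rel(5.0, 25.0, d))
```

Note that the interference and scattering contributions combine into the single positive term 2δ²/(√(2π) d a) of Eq. (40), so for a ≪ d the slope term δ²/2a² dominates.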
Although the overall absorptance increases, relative to the smooth-surface absorptance, by an amount ∝ δ², its precise behavior is determined by the interplay between two scales, the skin depth d and the correlation length a, as we discuss in the next section. Finally, note that the general expressions Eqs. (17), (29), (32) and (36), used in the numerical calculations below, are accurate up to the order δ² with no other conditions, and therefore can be used for dielectric materials as well. However, the analytical expression Eq. (40), obtained in the limit a ≪ d, is only accurate for metals in the frequency region |ε′| ≫ 1.

NUMERICAL RESULTS AND DISCUSSION

Below we present the results of numerical calculations of the absorptance for weakly rough opaque silver films. The roughness parameters were chosen in the range δ ≪ d and a ≲ d, while the experimental dielectric function of silver was used in all calculations [24, 41]. The wavelength interval is chosen from λ = 600 to 1500 nm in order to avoid the influence of the surface plasmon (λ ≈ 350 nm in silver) and of interband transitions, both of which lead to enhanced absorption not directly related to roughness. In this interval, the skin depth in silver is d ≈ 25 nm and depends only weakly on the wavelength [15]. All numerical calculations were carried out using the full expression for the absorptance A, while the small-scale roughness asymptotic expression (40) is used to discuss qualitative features of the obtained results.

In Fig. 2, we show the calculated absorptance for several values of δ and a at a fixed ratio δ/a = 1/3. In this case, the second term in Eq. (40) is unchanged for all curves. The overall scale of the calculated absorptance is consistent with the results reported in the experiment [35], indicating that, in this frequency range, about 99% of the incident light is reflected back.
We note that for the lowest curve (a = 6 nm), the absorptance is nearly constant for λ > 1000 nm, consistent with the weak frequency dependence of the smooth-surface absorptance in the Drude regime: A_0 ∼ γ/ω_p, where γ and ω_p are the Drude scattering rate and plasma frequency, respectively (see below). At the same time, the curves calculated for larger values of the correlation length a exhibit enhanced absorptance at shorter wavelengths due to roughness-assisted excitation of surface plasmon polaritons [1].

In Fig. 3, we show the calculated absorptance and reflectance for several values of δ and different ratios δ/a. A striking feature in the long-wavelength spectral region, where the absorptance variations are primarily dominated by roughness effects, is a larger absorptance for roughness amplitude δ = 2 nm as compared to that for δ = 5 nm with a smaller ratio δ/a [see Fig. 3(a)]. This surprising behavior can be understood, using the small-scale roughness asymptotics (40), in terms of a competition between the second and third terms: the absorptance for parameter ratios δ/a = 0.5 and δ/d = 0.08 (solid curve) is larger than that for δ/a = 0.2 and δ/d = 0.2 (dashed curve), even though, in the latter case, the roughness amplitude δ is greater. At the same time, for shorter wavelengths, the absorptance curve for a = 25 nm increases faster, as discussed above, and overtakes the a = 4 nm absorptance curve at λ ≈ 870 nm. Note that even for the ratio δ/a = 0.5, all roughness-induced corrections in Eq. (40) are still small. A similar reversal of order takes place for the calculated reflectance R = 1 − A, as shown in Fig. 3(b). Such a reversal of order was reported in the absorption experiment [35], albeit for larger roughness.

The above results indicate that, from the absorption perspective, the relevant parameter characterizing small-scale roughness is the ratio δ/a rather than the magnitude of δ relative to the skin depth d or the wavelength λ. As discussed in Sec.
, this parameter characterizes the increase in the effective volume of the surface layer in which the incident light is predominantly absorbed, but it does not appear, to the best of our knowledge, in direct calculations of the reflection coefficient for rough metal surfaces [8]. Note that the reflectance evaluated as R = 1 − A, shown in Fig. 3, represents the total reflection, including the nonspecular one, which is difficult to evaluate accurately in the weak-roughness case. At the same time, in the absorptance calculation, the parameter δ/a appears very naturally and, in fact, provides the dominant contribution for small-scale roughness. We stress that this contribution describes a "geometric" effect, as illustrated in Fig. 1, and, therefore, it persists for any wavelength.

To illustrate this point, in Fig. 4 we plot the absorptance dependence on the correlation length a for a small roughness amplitude δ = 2 nm and two wavelength values, λ = 800 and 1200 nm. For comparison, we also plot, by dotted lines, the corresponding asymptotics (40), which show reasonably good agreement, especially for small a. While for a > 10 nm (δ/a < 0.2) the absorptance changes only weakly, for smaller a it rises sharply in both cases, increasing by about 50% for a ∼ δ.

The high sensitivity of the absorptance A to the roughness parameters implies that its relative variation ∆A/A_0 can fluctuate substantially between samples characterized by similar roughness amplitudes δ but larger variations of the correlation length a. In fact, such fluctuations can appear as variations of the Drude scattering rate γ, which were reported in high-precision ellipsometry measurements of the silver dielectric function [15]. Indeed, in the Drude regime, we have ε′′ ≈ ω_p²γ/ω³, κ ≈ ω_p/ω, and |t|² ≈ 4/κ², so that the smooth-surface absorptance can be estimated as A_0 = ε′′|t|²/2κ ≈ 2γ/ω_p.
In this regime, we can present the rough-surface absorptance as A = 2γ_eff/ω_p, where the effective scattering rate γ_eff has the form γ_eff = γ(1 + ∆A/A_0). We then find that the apparent fluctuations of the Drude scattering rate, ∆γ = γ_eff − γ, are simply given by those of the absorptance, i.e.,

    ∆γ/γ = ∆A/A_0 ≈ δ²/2a² + 2δ²/(√(2π) d a) + 2δ²/d² ,    (41)

where, in the Drude regime, we used the asymptotic expression (40). In Fig. 5, we plot the wavelength dependence of ∆A/A_0 calculated for the small roughness amplitude δ = 2 nm and several values of a. In the long-wavelength spectral region λ > 1000 nm, the absorptance variations are nearly constant and could indeed appear as fluctuations of γ, even though they are not caused by any electron scattering processes in the metal. For the parameters chosen, such fluctuations can reach up to 10% depending on the ratio δ/a, which is comparable to the reported experimental values [15]. Note finally that the increase of ∆A/A_0 for shorter wavelengths, which was discussed above, here appears as a "non-Drude" behavior of γ_eff that can be more or less pronounced for samples with different roughness parameters, also consistent with the reported behavior of ε′′ in this frequency domain [15].

CONCLUSIONS

In summary, we developed a perturbative approach for the absorption of light in weakly rough opaque metal films characterized by a Gaussian surface profile with rms amplitude δ and correlation length a that are smaller than the skin depth d in the metal. We have shown that, in such systems, an accurate choice of boundary conditions for the unperturbed fields allows one to obtain the first-order roughness corrections to the absorptance A, which otherwise would be excessively large. We demonstrated that the roughness-related absorptance variations ∆A are determined by the interplay between a and d which, in particular, can result in a larger absorption for smaller roughness amplitudes.
We found that, for small-scale roughness (a ≪ d), the dominant contribution to ∆A comes from the roughness-related increase in the effective volume of the surface layer in which the incident light is predominantly absorbed, rather than from light scattering from surface imperfections. Accordingly, ∆A/A_0 ∼ δ²/a² is nearly independent of the incident light wavelength and can fluctuate substantially between different samples characterized by small rms amplitudes δ but a larger spread in a. We argued that such fluctuations, while not related to any electron scattering processes, can appear as sample-to-sample variations of the Drude scattering rate, which could explain the uncertainties in the imaginary part of the metal dielectric function reported in recent high-precision ellipsometry measurements [15].

Although we considered here the simplest case of normal incidence and a one-dimensional roughness profile, it is straightforward to extend our approach to any incidence angle and polarization, or to a two-dimensional roughness profile. Different incidence angles would mainly affect the Fresnel coefficient t that defines the smooth-surface absorptance A_0, while for κ ≫ 1, the skin depth d in the metal changes only weakly. For two-dimensional roughness, characterized by a surface profile function h(x, y), the effective volume of the surface layer in which the incident light is predominantly absorbed increases relative to that in the one-dimensional case. In fact, the first term in Eq. (40) then doubles in magnitude, which implies more pronounced sample-to-sample apparent fluctuations of the Drude scattering rates [15]. To the best of our knowledge, such fluctuations have not been previously described in direct calculations of the reflectance coefficient from rough metal surfaces, but here they emerge naturally when evaluating the absorptance.
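As a closing numerical illustration, the apparent Drude-rate shift of Eq. (41) is easily evaluated; the sketch below (illustrative correlation lengths, with δ = 2 nm and the silver skin depth d ≈ 25 nm quoted in the text) shows fluctuations of order 10% between samples of equal amplitude but different a:

```python
import math

def drude_rate_shift(delta, a, d):
    # Delta gamma / gamma = Delta A / A0, Eq. (41), in the small-scale-roughness limit
    return (delta ** 2 / (2 * a ** 2)
            + 2 * delta ** 2 / (math.sqrt(2 * math.pi) * d * a)
            + 2 * delta ** 2 / d ** 2)

d = 25.0                                  # silver skin depth (nm) from the text
for a in (4.0, 6.0, 10.0):                # same amplitude delta = 2 nm, different correlation lengths
    print(f"a = {a:4.1f} nm  ->  Dgamma/gamma = {drude_rate_shift(2.0, a, d):.3f}")
assert drude_rate_shift(2.0, 6.0, d) < 0.10 < drude_rate_shift(2.0, 4.0, d)
```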
Note, finally, that the approach developed in this paper can also be used for other roughness-related problems [42, 43], as well as for describing absorption in lossy dielectric materials [44].

FIG. 1. Schematic view of the electromagnetic wave incident normally on a rough air-metal interface. Dotted lines indicate the lower boundary of the surface layer, separated by the skin depth d from the interface (rough or smooth), in which light is predominantly absorbed.

FIG. 3. Calculated absorptance (a) and reflectance (b) at several values of the roughness amplitude δ plotted against the incident light wavelength for different ratios δ/a. In the long-wavelength domain, the absorptance curves undergo a reversal of order, implying a larger absorption for smaller δ.

FIG. 4. Calculated absorptance at a small value of the roughness amplitude δ = 2 nm plotted against the correlation length a for incident light wavelengths λ = 800 and 1200 nm. Dotted lines represent the small-scale roughness asymptotics calculated using Eq. (40).

FIG. 5. Calculated absorptance fluctuations at a small value of the roughness amplitude δ = 2 nm shown for several values of the correlation length a.

[1] I. Simonsen, Eur. Phys. J. Special Topics 181, 1 (2010).
[2] G. Brown, V. Celli, M. Haller, A. A. Maradudin, and A. Marvin, Phys. Rev. B 31, 4993 (1985).
[3] A. R. McGurn and A. A. Maradudin, J. Opt. Soc. Am. B 4, 910 (1987).
[4] A. R. McGurn and A. A. Maradudin, Waves in Random Media 6, 251 (1996).
[5] J. T. Johnson, J. Opt. Soc. Am. 16, 2720 (1999).
[6] A. Soubret, G. Berginc, and C. Bourrely, Phys. Rev. B 63, 245411 (2001).
[7] M. A. Demir and J. T. Johnson, J. Opt. Soc. Am. A 20, 2330 (2003).
[8] A. G. Navarrete Alcala, E. I. Chaikina, E. R. Mendez, T. Leskova, and A. Maradudin, Waves in Random and Complex Media 19, 600 (2009), and references therein.
[9] A. A. Maradudin, T. Michel, A. R. McGurn, and E. R. Mendez, Ann. Phys. (N.Y.) 203, 155 (1990).
[10] R. Tran and A. A. Maradudin, Opt. Commun. 110, 269 (1994).
[11] K. Pak, L. Tsang, and J. T. Johnson, J. Opt. Soc. Am. A 14, 1515 (1997).
[12] E. Mendez and K. A. O'Donnel, Opt. Commun. 61, 91 (1987).
[13] A. McGurn, A. A. Maradudin, and V. Celli, Phys. Rev. B 31, 4866 (1985).
[14] D. Bergström, J. Powell, and A. F. H. Kaplan, J. Appl. Phys. 103, 103515 (2008).
[15] H. U. Yang, J. D'Archangel, M. L. Sundheimer, E. Tucker, G. D. Boreman, and M. B. Raschke, Phys. Rev. B 91, 235137 (2015).
[16] M. Dressel and G. Gruner, Electrodynamics of Solids (Cambridge University Press, Cambridge, 2002).
[17] H. Kuzmany, Solid-State Spectroscopy (Springer-Verlag, Berlin, Heidelberg, 2009).
[18] R. de L. Kronig, Proc. R. Soc. London A 133, 255 (1931).
[19] E. Taft and H. Philipp, Phys. Rev. 121, 1100 (1961).
[20] H. Ehrenreich and H. Philipp, Phys. Rev. 128, 1622 (1962).
[21] P. Lewis and P. Lee, Phys. Rev. 175, 795 (1968).
[22] M. I. Stockman, in Plasmonics: Theory and Applications, edited by T. V. Shahbazyan and M. I. Stockman (Springer, New York, 2013).
[23] R. L. Olmon, B. Slovick, T. W. Johnson, D. Shelton, S.-H. Oh, G. D. Boreman, and M. B. Raschke, Phys. Rev. B 86, 235147 (2012).
[24] D. Rioux, S. Vallires, S. Besner, P. Muñoz, E. Mazur, and M. Meunier, Adv. Opt. Mater. 2, 176 (2014).
[25] Y. Lebsir, S. Boroviks, M. Thomaschewski, S. I. Bozhevolnyi, and V. A. Zenin, Nano Lett. 22, 5759 (2022).
[26] P. A. Fedders, Phys. Rev. 165, 580 (1968).
[27] R. H. Ritchie and R. E. Wilems, Phys. Rev. 178, 372 (1968).
[28] J. Crowell and R. H. Ritchie, J. Opt. Soc. Am. 60, 795 (1970).
[29] J. M. Elson and R. H. Ritchie, Phys. Rev. B 4, 4129 (1971).
[30] E. Kretschmann, Z. Phys. 227, 412 (1969).
[31] H. J. Juranek, Z. Phys. 233, 324 (1970).
[32] A. A. Maradudin and D. L. Mills, Phys. Rev. B 11, 1392 (1975).
[33] H. Raether, Surface Plasmons on Smooth and Rough Surfaces and on Gratings (Springer, Berlin, 1988).
[34] A. V. Zayats, I. I. Smolyaninov, and A. A. Maradudin, Phys. Rep. 408, 131 (2005).
[35] J. Springer, A. Poruba, L. Müllerova, M. Vanecek, O. Kluth, and B. Rech, J. Appl. Phys. 95, 1427 (2004).
[36] M. Dehghani and C. David, Nanomaterials 11, 113 (2021).
[37] W. B. Jackson, N. M. Amer, A. C. Boccara, and D. Fournier, Appl. Opt. 20, 1333 (1981).
[38] J.-P. Banon, I. Simonsen, and R. Carminati, Phys. Rev. A 101, 053847 (2020).
[39] L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media (Elsevier, Amsterdam, 2004).
[40] A. A. Maradudin and W. Zierau, Phys. Rev. B 14, 484 (1976).
[41] P. B. Johnson and R. W. Christy, Phys. Rev. B 6, 4370 (1972).
[42] Zh. S. Gevorkian, Phys. Rev. ST Accel. Beams 13, 070705 (2010).
[43] Z. S. Gevorkian, EPL 96, 64004 (2011).
[44] M. Qin, L. Zhang, and H. Wu, Adv. Sci. 9, 2105553 (2022).
Laplacian integrality in P4-sparse and P4-extendible graphs

Renata R. Del-Vecchio ([email protected]) and Arueira Jones ([email protected])
Federal Fluminense University

Abstract. Let G be a simple graph and L = L(G) the Laplacian matrix of G. G is called L-integral if all its Laplacian eigenvalues are integer numbers. It is known that every cograph, i.e. every graph free of induced P4's, is L-integral. The class of P4-sparse graphs and the class of P4-extendible graphs both contain the cographs, so it seems natural to investigate whether the graphs in these classes are still L-integral. In this paper we characterize the L-integral graphs in both cases, P4-sparse graphs and P4-extendible graphs.

doi: 10.1016/j.amc.2018.02.046
arXiv: 1611.08271 (https://arxiv.org/pdf/1611.08271v1.pdf)
Keywords: spider graph, P4-sparse graph, P4-extendible graph, L-integral graph

Basic notions and Spider graphs

Laplacian spectrum

The Laplacian spectrum of a graph G consists of its s distinct Laplacian eigenvalues and their multiplicities. It will be denoted by

    ξ(G) = ( μ1, μ2, . . . , μs ; r1, r2, . . . , rs ),

where μi is a Laplacian eigenvalue of G with multiplicity ri, i ∈ {1, . . . , s}. We recall the following result, which will be used later:

Proposition 2.1. [16] If Ḡ denotes the complement of a graph G with n vertices, then μ_{i+1}(Ḡ) = n − μ_{n−i+1}(G) for i ∈ {1, . . . , n − 1}, and μ1(Ḡ) = 0, considering the μi displayed in non-increasing order.

As an immediate consequence, G is L-integral if and only if Ḡ is L-integral.

Remark 2.1. We also recall that the L-spectrum of the union of two graphs G and H is the union of their L-spectra, ξ(G ∪ H) = ξ(G) ∪ ξ(H). Therefore, G ∪ H is L-integral if and only if both G and H are L-integral. Consequently, a disconnected graph G is L-integral if and only if each of its connected components is L-integral.

We now present the definition of a spider graph:

Definition 2.1.
[9] G(V, E) is a spider if V can be partitioned into sets S, C and R such that:

• |S| = |C| ≥ 2;
• S = {s1, . . . , sk} is an independent set;
• C = {c1, . . . , ck} is a clique;
• there are all edges between vertices of R and C, and no edges between vertices of R and S.

The adjacency between the vertices of S and C is given in one of two ways: either si is adjacent to cj if and only if i = j, or si is adjacent to cj if and only if i ≠ j. In the first case the graph is called a thin spider; in the second case it is called a thick spider. The set C is the body of the spider, the set S corresponds to the spider's legs, and the set R is the spider's head. If R is an empty set, the graph is called a headless spider.

Notation: The thin spider will be denoted by St[H, k, j], where the legs and the body have k vertices each and H is the graph induced by the head, with j vertices. Similarly, the thick spider will be represented by ST[H, k, j]. If the thin (respectively thick) spider is headless, i.e. R is empty, we will denote it by St[k] (respectively ST[k]).

Example 2.1. Figure 1 shows a thin spider whose head is a graph H (with three vertices) and a thick spider with a head formed by the same graph H.

Remark 2.2. Every spider is a connected graph, even if the subgraph induced by its head is disconnected. Clearly, the complement of a spider is a spider. More specifically, given a thin spider with subgraph H induced by the head, its complement is a thick spider with the same number of vertices in the body and with the subgraph H̄ induced by the head; in short, the complement of St[H, k, j] is ST[H̄, k, j].

Remark 2.3. The path P4 is a headless spider whose body induces a subgraph isomorphic to K2, and the complement of P4 is isomorphic to P4.

Henceforth, 1j and 0j are the vectors of order j with all elements equal to 1 and 0, respectively, Θ_{j,k} denotes the j × k all-zeros matrix, and Ij denotes the identity matrix of order j.
Moreover, we denote the j × k all-ones matrix by J_{j,k} and, in the case k = j, simply by Jj.

Proposition 2.2. Let St[H, k, j] be a thin spider where H is an empty subgraph (a graph without edges). Then its Laplacian spectrum is

    ξ(St[H, k, j]) = ( (k+j+2 ± √((k+j)² + 4))/2 ,  k ,  (k+j+2 ± √((k+j)² + 4 − 4k))/2 ,  0 ;
                       k−1, k−1 ,  j−1 ,  1, 1 ,  1 ).

If it is a headless thin spider, St[k], its Laplacian spectrum is

    ξ(St[k]) = ( (k+2 ± √(k² + 4))/2 ,  0 ,  2 ;  k−1, k−1 ,  1 ,  1 ).

This notation means that (k+j+2+√((k+j)²+4))/2 is an eigenvalue with multiplicity k − 1 and (k+j+2−√((k+j)²+4))/2 is an eigenvalue with multiplicity k − 1; the same applies to the other cases where ± appears.

Proof: Let St[H, k, j] be a thin spider graph whose head is an independent set (H is a graph without edges, with at least one vertex). For simplicity we write St instead of St[H, k, j]. We label the vertices of the spider so that the matrix L(St) is written in blocks expressing the links between body, legs and head:

    ( body × body   body × leg   body × head )
    ( leg × body    leg × leg    leg × head  )
    ( head × body   head × leg   head × head )

The degrees of the vertices of the body, of the head and of the legs are k + j, k and 1, respectively. Then the matrix L(St) = L can be written as a block matrix:

    L(St) = ( (k+j+1)I_k − J_k    −I_k      −J_{k,j} )
            ( −I_k                I_k       Θ_{k,j}  )
            ( −J_{j,k}            Θ_{j,k}   k·I_j    )

Note that, for each block composing the matrix L(St), all row sums within the block have the same value, so the partition into S, C and R is equitable.
Therefore, we can consider the 3 × 3 quotient matrix L′ whose entries are these row sums:

    L′ = (  j+1   −1   −j )
         (  −1     1    0 )
         (  −k     0    k )

A known result about equitable partitions (see [2]) then ensures that the eigenvalues of L′, which can be easily obtained, are also eigenvalues of L(St). These eigenvalues are x = 0 and

    x = ( k + j + 2 ± √((k+j)² + 4 − 4k) ) / 2 ,

so they are also eigenvalues of L. On the other hand, for each u ∈ R^j orthogonal to 1j, if v = (0k, 0k, u) ∈ R^{2k+j}, then L·v = k·v. Hence, k is an eigenvalue of L corresponding to the eigenvector v ∈ R^{2k+j}. As there are j − 1 linearly independent vectors in R^j orthogonal to 1j, k is a Laplacian eigenvalue of St with multiplicity at least j − 1. (If the head of the spider has only one vertex, j = 1, no such vector u exists and k need not be an eigenvalue.)

We now prove that (k+j+2 ± √((k+j)²+4))/2 are also eigenvalues of this graph. By the definition of a spider, the body has at least two vertices (k ≥ 2). For simplicity, set p = k + j and q = (k + j)² + 4. For each vector u ∈ R^k orthogonal to 1k, consider the vector

    w = ( u , ((p + √q)/2) u , 0j ) ∈ R^{2k+j} .

Then

    L·w = ( ((p+2−√q)/2) u , ((p−2+√q)/2) u , 0j ) = ((p+2−√q)/2) ( u , ((p+√q)/2) u , 0j ) = ((p+2−√q)/2) w .

Therefore (p+2−√q)/2 is an L-eigenvalue of the graph with multiplicity at least k − 1. By a similar procedure, using the eigenvector w = ( u , ((p − √q)/2) u , 0j ) ∈ R^{2k+j}, we conclude that (p+2+√q)/2 is an L-eigenvalue with multiplicity at least k − 1. As we have obtained exactly j + 2k L-eigenvalues, which is the order of the graph St, the proof is complete for this case.
If the spider St is headless, its Laplacian matrix is given by

    L(St) = ( (k+1)I_k − J_k   −I_k )
            ( −I_k              I_k ) .

Proceeding analogously to the previous case, taking j = 0 where appropriate, we obtain the stated spectrum.

From the proof of the proposition above, we can state the following theorem:

Theorem 2.1. If G is a thin spider then G is not L-integral.

Proof: Let G = St[H, k, j] be a thin spider where H is the subgraph induced by the head, having at least one vertex (j > 0). By the definition of a spider we have k ≥ 2. Using the same labeling as in the statement above, the Laplacian matrix of St is obtained by simply replacing the block corresponding to the vertices in the head by L(H) + kI_j, where L(H) is the Laplacian matrix of the subgraph H. Then the matrix L(St) can be written as the block matrix

    L(St) = ( (k+j+1)I_k − J_k    −I_k      −J_{k,j}     )
            ( −I_k                I_k       Θ_{k,j}      )
            ( −J_{j,k}            Θ_{j,k}   L(H) + kI_j  ) .

As before, setting p = k + j and q = (k + j)² + 4, we have that (p + 2 − √q)/2 is an eigenvalue of the graph, independent of the spider's head, with multiplicity at least k − 1 ≥ 1. However, since k + j ≥ 2, q = (k + j)² + 4 lies strictly between (k + j)² and (k + j + 1)², so it is not a perfect square; hence (k + j + 2 − √((k+j)² + 4))/2 ∉ Z and St is not L-integral. If the spider is headless, we have already obtained that (k + 2 − √(k² + 4))/2 is an L-eigenvalue, and it is never an integer.

We can state the following corollary:

Corollary 2.1. If G is a thick spider then G is not L-integral.

Proof: Let G = ST[H, k, j] be a thick spider. Its complement is St[H̄, k, j], which is not L-integral by Theorem 2.1. Then, as a consequence of Proposition 2.1, we conclude that ST[H, k, j] is not L-integral.

Remark 2.4. In short, if G is a spider graph, thin or thick, with or without head, it is not L-integral.
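The irrational eigenvalue at the heart of Theorem 2.1 is easy to verify numerically. The sketch below (pure Python, assuming the body-then-legs vertex ordering used in the proof of Proposition 2.2) builds the Laplacian of a headless thin spider St[k], checks the eigenpair constructed in the proof, and confirms that k² + 4 is never a perfect square for k ≥ 2, so (k + 2 − √(k² + 4))/2 is irrational:

```python
import math

def headless_thin_spider_laplacian(k):
    # vertices 0..k-1: the body (a clique); vertex k+i: the leg attached to body vertex i only
    n = 2 * k
    A = [[0] * n for _ in range(n)]
    for i in range(k):
        for j in range(k):
            if i != j:
                A[i][j] = 1
        A[i][k + i] = A[k + i][i] = 1
    deg = [sum(row) for row in A]
    return [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

def is_eigenpair(L, w, mu, tol=1e-9):
    n = len(L)
    return all(abs(sum(L[i][j] * w[j] for j in range(n)) - mu * w[i]) < tol for i in range(n))

for k in range(2, 8):
    L = headless_thin_spider_laplacian(k)
    q = math.sqrt(k * k + 4)
    mu = (k + 2 - q) / 2                       # the eigenvalue from Proposition 2.2
    u = [1.0, -1.0] + [0.0] * (k - 2)          # any vector orthogonal to 1_k
    w = u + [((k + q) / 2) * x for x in u]     # u on the body, ((k + sqrt(k^2+4))/2) u on the legs
    assert is_eigenpair(L, w, mu)
    assert math.isqrt(k * k + 4) ** 2 != k * k + 4   # k^2+4 lies strictly between k^2 and (k+1)^2
print("eigenpair verified for k = 2..7")
```

For k = 2 the headless thin spider is P4 itself, and mu = 2 − √2, the familiar irrational Laplacian eigenvalue of the path on four vertices.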
In [9], Hoàng introduced the class of P4-sparse graphs, containing the class of cographs: a graph is P4-sparse if every set of five vertices induces at most one P4. Directly from the definition we note that, if a P4-sparse graph is disconnected, then all its connected components are P4-sparse graphs; likewise, the union of P4-sparse graphs is again P4-sparse.

Remark 3.1. The complement of a P4-sparse graph is also a P4-sparse graph. Indeed, suppose that G is not P4-sparse. Then there is a set {a, b, c, d, e} ⊆ V(G) such that two of its 4-subsets, say A = {a, b, c, d} and B = {a, b, c, e}, induce P4's in G. Since P̄4 ≈ P4, the sets A and B also induce P4's in Ḡ, i.e., Ḡ is not P4-sparse either.

In [10] it is shown that a spider is P4-sparse if and only if the subgraph induced by its head is P4-sparse. From this we can see that the graphs in Example 2.1 are P4-sparse. The following important result relating P4-sparse graphs and spider graphs is also proved there:

Theorem 3.1. [10] If G is a non-trivial P4-sparse graph, then either G or Ḡ is disconnected, or G is a spider whose head, if it exists, induces a P4-sparse graph.

Now we present the result concerning L-integrality:

Theorem 3.2. Let G be a P4-sparse graph. Then G is L-integral if and only if G is a cograph.

Proof: Let G be a P4-sparse graph that is not a cograph. By Theorem 3.1 we have three cases to consider:

1. G is a spider: by Theorem 2.1 and Corollary 2.1, G is not L-integral, and the desired conclusion follows.

2. G is disconnected: as G is not a cograph, it has a connected component H that induces a path P4, i.e., P4 ⊆ H ⊆ G. Note that H is a connected P4-sparse graph; then, again by Theorem 3.1, H̄ is disconnected or H is a spider. If the latter case occurs, then H is not L-integral and therefore, by Remark 2.1, G is not L-integral, which completes the proof. Otherwise H̄ is disconnected; but P̄4 ≈ P4 is an induced subgraph of H̄, so there is a connected component H1 ⊆ H̄ that induces a P4 and is P4-sparse, by Remark 3.1.
Again, by Theorem 3.1, we ensure that H1 is a spider or H̄1 is disconnected. If it is a spider, then H1 is not L-integral, hence neither is H̄; by Proposition 2.1, H is not L-integral, and therefore neither is G. On the other hand, if H̄1 is disconnected, then P4 ≅ P̄4 is an induced subgraph of some connected component H2 of H̄1, i.e., H2 is a connected P4-sparse graph. Then, by Theorem 3.1, H2 is a spider or H̄2 is disconnected, and we repeat the procedure. Repeating the above procedure, we eventually find a spider in a connected component, or a path P4 (which is also a spider). Note that this process cannot terminate in a stable set, since G is not a cograph. Hence, we have a connected component that is not L-integral, a property that is transmitted to the original graph G.

3. Ḡ is disconnected: by Remark 3.1, Ḡ is also P4-sparse. As G induces some P4, Ḡ induces P̄4 ≅ P4, and then Ḡ is not a cograph. Therefore Ḡ falls into the previous case, and we conclude that Ḡ is not L-integral; by Proposition 2.1, neither is G.

We have seen that although every cograph is L-integral, a P4-sparse graph that is not a cograph is never L-integral. What about graphs that have "many" P4's, that is, graphs in which some set of five vertices induces more than one P4? What can we say about the integrality of their L-spectrum? Observing some examples, we see that nothing can be concluded: this family contains graphs that are L-integral and others that are not. Namely:

Example 3.1. In both C6 and G (in Figure 2), there are five vertices inducing two P4's; however, the first is L-integral and the second is not (the L-spectrum of G contains irrational eigenvalues involving √2).

3.1. (q, q − 4)-graphs

There are other families of graphs characterized by their P4-structure. In the previous section, we presented the P4-sparse graphs. Babel and Olariu in [1] propose a new class generalizing the P4-sparse graphs: the class of (q, t)-graphs, those graphs in which every set of q vertices induces at most t P4's.
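The C6 half of Example 3.1 can be verified directly: the Laplacian of a cycle is circulant, so its eigenvalues are 2 − 2cos(2πk/n). A small check (plain Python, our own construction) confirms that for C6 they are all integers:

```python
import math

n = 6
# Laplacian of the cycle C6: degree 2 on the diagonal, -1 between neighbors
L = [[(2 if i == j else 0) - (1 if (i - j) % n in (1, n - 1) else 0)
      for j in range(n)] for i in range(n)]

eigs = []
for k in range(n):
    theta = 2 * math.pi * k / n
    lam = 2 - 2 * math.cos(theta)               # circulant eigenvalue formula
    v = [math.cos(theta * i) for i in range(n)]  # real part of the Fourier eigenvector
    Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Lv[i] - lam * v[i]) < 1e-9 for i in range(n))
    assert abs(lam - round(lam)) < 1e-9          # every eigenvalue is an integer
    eigs.append(round(lam, 9))

print(sorted(eigs))   # [0.0, 1.0, 1.0, 3.0, 3.0, 4.0]
```

So Spec_L(C6) = {0, 1, 1, 3, 3, 4}, and C6 is L-integral.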
By this definition we can see that P4-sparse graphs are (5, 1)-graphs and cographs are (4, 0)-graphs. Babel and Olariu prove a theorem characterizing this class, using the notion of p4-connectedness (see Definition 3.2 and Theorem 3.3 below).

P4-extendible graphs

The next class was introduced by Jamison and Olariu in [12]. In a P4, the vertices of degree 1 are called endpoints and the others are called midpoints. A vertex in G is called an endpoint if it is an endpoint of some induced P4 in the graph. A vertex in G is said to be a midpoint if it is a midpoint of some induced P4 in the graph. Let us now consider the graphs F0, . . . , F6 below, which form the family F. In [12], a characterization of P4-extendible graphs is given, which will be useful later.

Theorem 4.1. [12] If G is a P4-extendible graph with more than one vertex, then it must satisfy exactly one of the conditions below:

(i) G is disconnected;
(ii) Ḡ is disconnected;
(iii) G ∈ F ∪ {P4};
(iv) there is a subset D ⊂ V inducing a graph of the set {P4, F3, F4, F5, F6} such that every vertex in V(G) \ D is adjacent to the midpoints of D and to none of its endpoints.

From the above theorem we conclude that, if G is a connected P4-extendible graph whose complement is also connected, then G ∈ F ∪ {P4} (case (iii)) or G is as illustrated in Figure 5 (case (iv)).

We can easily verify that:

Lemma 4.1. The graphs in F ∪ {P4} are not L-integral.

Proof: P4 is not L-integral (it is the headless thin spider with k = 2). Note that F̄0 = F0, F̄1 = F2, F̄3 = F6, F̄5 = F4. It is therefore sufficient to check that F0, F1, F3 and F5 are not L-integral.

Moreover, we have the following lemma:

Lemma 4.2. If G is a graph satisfying assertion (iv) of Theorem 4.1, then G is not L-integral.

Proof: Let G(V, E) be a graph such that there is D ⊂ V inducing a graph of {P4, F3, F4, F5, F6}, with every vertex in V \ D adjacent to the midpoints of D and to none of its endpoints. Let H be the subgraph induced by V \ D in G, with |V(H)| = j ≥ 1. We have five cases to consider:

Case 1) G(D) ≅ P4: in this case G ≅ S_t[H, 4, j] and, by Theorem 2.1, G is not L-integral.

Case 2) G(D) ≅ F3: we want to determine x, y, z, w ∈ R such that the vector
w = (x, y, 1, 1, z, w, . . . , w)ᵗ ∈ R^{j+5} is an eigenvector of L(G). This is equivalent to determining λ satisfying the equality L(G)w = λw, where

$$L(G)=\begin{bmatrix}B & C\\ C^{T} & L(H)+2I_j\end{bmatrix},\qquad
B=\begin{bmatrix}3+j & -1 & -1 & -1 & 0\\ -1 & 2+j & 0 & 0 & -1\\ -1 & 0 & 1 & 0 & 0\\ -1 & 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0 & 1\end{bmatrix},$$

and C is the 5 × j matrix whose first two rows have all entries equal to −1 and whose remaining rows are zero. The equality L(G)w = λw yields the system S:

λx = (3 + j)x − y − 2 − jw
λy = −x + (2 + j)y − z − jw
λ = 1 − x
λz = z − y
λw = −x − y + 2w

Substituting λ = 1 − x from the third equation, the system becomes:

x − x² = (3 + j)x − y − 2 − jw   (1)
y − xy = −x + (2 + j)y − z − jw   (2)
λ = 1 − x   (3)
y = xz   (4)
w − xw = −x − y + 2w   (5)

We guarantee that x ≠ 0: otherwise y = x = 0 and λ = 1, which generates contradictory values for w. Replacing z = y/x from (4) in (2) − (1), we have

y(1 − 2x − jx − x²) = −3x² − jx² + 2x − x³.

If 1 − 2x − jx − x² = 0, then also −3x² − jx² + 2x − x³ = 0 and, as x ≠ 0, x² + (3 + j)x − 2 = 0. Subtracting from this the first condition, written as x² + (2 + j)x − 1 = 0, gives x = 1, which satisfies neither quadratic; this is absurd. Then

y = (−3x² − jx² + 2x − x³)/(1 − 2x − jx − x²).   (6)

On the other hand, if x = −1, we have y = 1, z = −1 and λ = 2, which generates contradictory values for w. Then x ≠ −1, and by (5) we have w = (x + y)/(x + 1). So, by (1) and (5), we ensure that y(x + 1 + j) = x³ + 3x² + jx² − 2.
If x + 1 + j = 0, we have x = −1 − j and x³ + 3x² + jx² − 2 = 0; hence 2j(j + 2) = 0, with roots 0 and −2, which is absurd, as j ≥ 1. Therefore we can write

y = (x³ + 3x² + jx² − 2)/(x + 1 + j).   (7)

From (6) and (7) we obtain the equation

x⁵ + (2j + 4)x⁴ + (j² + 3j + 1)x³ + (−j² − 5j − 6)x² − 2x + 2 = 0.

Note that x = 1 is a root; it gives y = z = w = 1 and λ = 0, so w is the all-ones vector 1 ∈ R^{j+5}, as expected for a Laplacian matrix. Then we can write (x − 1)q(x) = 0, where

q(x) = x⁴ + (2j + 5)x³ + (j² + 5j + 6)x² − 2.

As j ≥ 1, q(0) = −2 < 0 and q(1) = j² + 7j + 10 > 0. By the Intermediate Value Theorem, the polynomial q has a root in the interval (0, 1). Since q is monic with integer coefficients, any rational root of q would be an integer, and there is no integer in (0, 1); hence this root is irrational, say x = α. Then the system solved above has the solution

x = α, y = (α³ + 3α² + jα² − 2)/(α + 1 + j), z = (α³ + 3α² + jα² − 2)/(α(α + 1 + j)), and w = (α + y)/(α + 1).

Then λ = 1 − α is irrational and the graph G is not L-integral.

Case 3) G(D) ≅ F5: in this case, the Laplacian matrix of G is as the preceding one, only changing the first 5 × 5 block of the matrix to

$$\begin{bmatrix}3+j & -1 & -1 & -1 & 0\\ -1 & 2+j & 0 & 0 & -1\\ -1 & 0 & 2 & -1 & 0\\ -1 & 0 & -1 & 2 & 0\\ 0 & -1 & 0 & 0 & 1\end{bmatrix}$$

Similarly, we want to determine x, y, z, w, λ ∈ R such that the vector w, as taken in Case 2, satisfies L(G)w = λw. As this equality leads to the same system S, again G is not L-integral.

Case 4) G(D) ≅ F4: as F̄4 = F5, Ḡ is a graph of the previous case. Then, by Proposition 2.1, G is not L-integral.

Case 5) G(D) ≅ F6: again, we note that Ḡ is a graph of Case 2, as F̄6 = F3, and so G is not L-integral.

Remark 4.2. It is easy to check that if G is P4-extendible then Ḡ is also P4-extendible. We also have that G1 and G2 are P4-extendible if and only if G1 ∪ G2 is P4-extendible.

These two lemmas, along with the above remark, allow us to completely characterize the L-integral graphs in the class of P4-extendible graphs.
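The sign change q(0) < 0 < q(1) used above is easy to confirm numerically. The sketch below (plain Python; the range of j values is our own choice) locates the root of q in (0, 1) by bisection:

```python
def q(x, j):
    # q(x) = x^4 + (2j+5)x^3 + (j^2+5j+6)x^2 - 2, the quartic factor left
    # after dividing the degree-5 equation by its root x = 1
    return x**4 + (2*j + 5)*x**3 + (j*j + 5*j + 6)*x**2 - 2

for j in range(1, 6):
    lo, hi = 0.0, 1.0
    assert q(lo, j) < 0 < q(hi, j)   # sign change, so a root lies in (0, 1)
    for _ in range(60):              # bisection
        mid = (lo + hi) / 2
        if q(mid, j) < 0:
            lo = mid
        else:
            hi = mid
    root = (lo + hi) / 2
    assert 0 < root < 1 and abs(q(root, j)) < 1e-9
    assert 0 < 1 - root < 1          # the eigenvalue lambda = 1 - root lies in (0, 1)
```

Since Laplacian integrality would force this eigenvalue to be an integer, an eigenvalue strictly between 0 and 1 already rules it out.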
Again, these are exactly the cographs.

Proof: We apply Theorem 4.1 to G and, making use of the remark above, look for a connected component with connected complement that satisfies (iii) or (iv) of Theorem 4.1; such a component is not L-integral, by Lemma 4.1 and Lemma 4.2.

There are other classes of graphs, such as P4-reducible graphs [11] and P4-lite graphs [13], composed of graphs containing a restricted number of P4's. The class of P4-reducible graphs is the intersection of the P4-sparse and P4-extendible graphs and obviously contains the cographs. So, a P4-reducible graph is L-integral if and only if it is a cograph. It seems that L-integrality is related to the P4-structure of the graph. In this paper we have analyzed the behavior of the spectrum, with respect to L-integrality, for some classes. It remains to search for L-integral graphs in other classes, such as P4-lite graphs and (q, q − 4)-graphs.

Figure 1: S_t[H, 4, 3] and S_T[H, 4, 3]

Definition 3.1. G is a P4-sparse graph if every set of five vertices in G induces at most one P4.

Figure 2: C6 and G

Definition 3.2. G(V, E) is called p4-connected¹ if, for every partition of V into two sets A and B, there is an induced P4 with vertices in both A and B.

Theorem 3.3. [1] Let G(V, E) be a (7, 3) p4-connected graph. Then |V| < 7 or G is a headless spider.

By Theorem 3.3 and Theorem 2.1, we conclude the next corollary:

Corollary 3.1. If G is a (7, 3) p4-connected graph with at least 7 vertices, then G is not L-integral.

Figure 3: P4-extendible not P4-sparse, and P4-sparse not P4-extendible

Definition 4.1. A graph G(V, E) is called P4-extendible if, for any W ⊂ V inducing a P4, there is at most one vertex x ∉ W that induces a P4 with some vertices of W.

Remark 4.1. The class of P4-extendible graphs strictly contains the class of cographs and is distinct from the class of P4-sparse graphs.
Figure 4: the graphs F0, . . . , F6

Figure 5: case (iv): there is a subset D ⊂ V inducing a graph of the set {P4, F3, F4, F5, F6} and, moreover, every vertex in V(G) \ D is adjacent to the intermediate vertices and is not adjacent to the extreme vertices of D.

Theorem 4.2. Let G be a P4-extendible graph. Then, G is L-integral if and only if G is a cograph.

¹ In [1], p4-connected is denoted simply by p-connected, but we prefer to emphasize the dependence on the P4-structure.

Acknowledgements. The first author was partially supported by CNPq Grant 476363/2012-8 and CAPES Grant 99999.002658/2015-01. The second author was partially supported by CAPES.

References

[1] L. Babel, S. Olariu, On the structure of graphs with few P4's, Discrete Applied Mathematics 84 (1998) 1-13.
[2] A. E. Brouwer, W. H. Haemers, Spectra of Graphs, Springer, 2012.
[3] M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay, A. J. Landahl, Perfect transfer of arbitrary states in quantum spin networks, Physical Review Letters 92 (2004) 187902.
[4] D. G. Corneil, H. Lerchs, L. S. Burlingham, Complement reducible graphs, Discrete Applied Mathematics 3 (1981) 163-174.
[5] D. Cvetkovic, T. Davidovic, Multiprocessor interconnection networks with small tightness, International Journal of Foundations of Computer Science 20 (2009), No. 5, 941-963.
[6] D. Cvetkovic, I. Gutman, N. Trinajstic, Conjugated molecules having integral graph spectra, Chemical Physics Letters 29 (1974), No. 1.
[7] M. de Freitas, S. Kirkland, R. Del-Vecchio, N. Abreu, Split non-threshold Laplacian integral graphs, Linear and Multilinear Algebra 58 (2010) 221-233.
[8] R. Grone, R. Merris, V. S. Sunder, The Laplacian spectrum of a graph II, SIAM Journal on Discrete Mathematics 7 (1994) 221-229.
[9] C. T. Hong, Perfect graphs, Ph.D. Thesis, School of Computer Science, McGill University, 1985.
[10] B. Jamison, S. Olariu, A tree representation for P4-sparse graphs, Discrete Applied Mathematics 35 (1992) 115-129.
[11] B. Jamison, S. Olariu, P4-reducible graphs, a class of uniquely tree representable graphs, Studies in Applied Mathematics 81 (1989) 79-87.
[12] B. Jamison, S. Olariu, On a unique tree representation for P4-extendible graphs, Discrete Applied Mathematics 34 (1991) 151-164.
[13] B. Jamison, S. Olariu, A new class of brittle graphs, Studies in Applied Mathematics 81 (1989) 89-92.
[14] R. Merris, Laplacian graph eigenvectors, Linear Algebra and its Applications 278 (1998) 221-236.
[15] R. Merris, Degree maximal graphs are Laplacian integral, Linear Algebra and its Applications 199 (1994) 381-389.
[16] B. Mohar, in: Graph Theory, Combinatorics, and Applications, Vol. 2, Y. Alavi, G. Chartrand, O. R. Oellermann, A. J. Schwenk (eds.), Wiley, 1991, pp. 871-898.
[]
[ "OVERCOMING THE SPECTRAL BIAS OF NEURAL VALUE APPROXIMATION", "OVERCOMING THE SPECTRAL BIAS OF NEURAL VALUE APPROXIMATION" ]
[ "Ge Yang \nNSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n\n", "Anurag Ajay \nNSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n\n", "§ \nNSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n\n", "Pulkit Agrawal \nNSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n\n" ]
[ "NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n", "NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n", "NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n", "NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) † Computer Science and Artificial Intelligence Laboratory (CSAIL) ‡ Improbable AI Lab § Massachusetts Institute Technology\n" ]
[]
Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm. While multi-layer perceptron networks are universal function approximators, recent works in neural kernel regression suggest the presence of a spectral bias, where fitting high-frequency components of the value function requires exponentially more gradient update steps than the low-frequency ones. In this work, we re-examine off-policy reinforcement learning through the lens of kernel regression and propose to overcome such bias via a composite neural tangent kernel. With just a single line-change, our approach, the Fourier feature networks (FFN) produce state-of-the-art performance on challenging continuous control domains with only a fraction of the compute. Faster convergence and better off-policy stability also make it possible to remove the target network without suffering catastrophic divergences, which further reduces TD(0)'s estimation bias on a few tasks. Code and analysis available at https://geyang.github.io/ffn.
10.48550/arxiv.2206.04672
[ "https://arxiv.org/pdf/2206.04672v1.pdf" ]
249,538,529
2206.04672
f57ad283f8d55c3f49d36dcc4673243ec352f6c7
OVERCOMING THE SPECTRAL BIAS OF NEURAL VALUE APPROXIMATION

Ge Yang, Anurag Ajay, Pulkit Agrawal

NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI); Computer Science and Artificial Intelligence Laboratory (CSAIL); Improbable AI Lab; Massachusetts Institute of Technology

Published as a conference paper at ICLR 2022

Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm. While multi-layer perceptron networks are universal function approximators, recent works in neural kernel regression suggest the presence of a spectral bias, where fitting high-frequency components of the value function requires exponentially more gradient update steps than the low-frequency ones. In this work, we re-examine off-policy reinforcement learning through the lens of kernel regression and propose to overcome such bias via a composite neural tangent kernel. With just a single line-change, our approach, the Fourier feature networks (FFN), produce state-of-the-art performance on challenging continuous control domains with only a fraction of the compute.
Faster convergence and better off-policy stability also make it possible to remove the target network without suffering catastrophic divergences, which further reduces TD(0)'s estimation bias on a few tasks. Code and analysis available at https://geyang.github.io/ffn.

INTRODUCTION

At the heart of reinforcement learning is the question of how to attribute credit or blame to specific actions that the agent took in the past. This is referred to as the credit assignment problem (Minsky, 1961). Correctly assigning credit requires reasoning over temporally-extended sequences of observations and actions, so that trade-offs can be made, for example, to choose a less desirable action at the current step in exchange for higher reward in the future. Temporal difference methods such as TD(λ) and Watkins' Q-learning (Sutton, 1988; Watkins, 1989) stitch together the immediate rewards local to each state transition to estimate the discounted sum of rewards over longer horizons. This is an incremental method for dynamic programming (Watkins & Dayan, 1992) that successively improves the value estimate at each step, which reduces the computation that would otherwise be needed to plan ahead at decision time. A key tension in scaling TD learning to more challenging domains is that the state space (and action space in continuous control problems) can be very large. In Neurogammon (Tesauro, 1991), for example, the complexity of the state space is on the scale of O(10^20), which prevents one from storing all state-action pairs in a table. Such constraints and the need to generalize to unseen board arrangements prompted Tesauro (1991) to replace the look-up table with an artificial neural network, to great effect, achieving master-level play on backgammon.
In general, however, adopting neural networks as the value approximator introduces learning instability that can sometimes lead the iterative learning procedure into a divergent regime where errors are amplified at each step (Sutton & Barto, 2018; Bertsekas, 1995; Baird, 1995). A slew of algorithmic fixes have been developed to tackle this problem from different angles, including sampling from a replay buffer (Lin, 1992; Riedmiller, 2005a); using a delayed copy for the bootstrapping target (Mnih et al., 2013); and using ensembles (Hasselt, 2010; Lee et al., 2021). Despite the popularity of these techniques, the neural value approximator itself remains unchanged. Popular treatments view these networks as universal function approximators that would eventually converge to arbitrary target functions given a sufficient number of gradient updates and training data. In practice, however, state-of-the-art off-policy algorithms interleave optimization with sampling, which greatly limits the amount of optimization that is accessible. This puts deep reinforcement learning into the same regime as a neural network regularized through early-stopping, where the model bias, the amount of optimization available, and the sample size interact with each other in intricate ways (Belkin et al., 2018; Nakkiran et al., 2019; Canatar et al., 2021). Therefore, to understand off-policy learning with neural value approximators, we first need to understand how deep neural networks generalize and how they converge under the dynamic process of gradient descent. Then, to find a solution, we need to figure out a way to control the generalization and learning bias, so that deliberate bias-variance trade-offs can be made for better convergence. Recent efforts in deep learning theory and computer graphics offer new tools for both. Jacot et al.
(2018) uncovers a spectral bias of shallow, infinitely-wide neural networks to favor low-frequency functions during gradient descent (Kawaguchi & Huang, 2019; Bietti & Mairal, 2019; Ronen et al., 2019). This result has since been extended to deeper networks at finite width, of any "reasonable" architecture, via the tensor program (Yang, 2019; 2020; Yang & Littwin, 2021; Yang & Salman, 2019). Our analysis on the toy MDP domain (Section 3) shows that the value approximator indeed underfits the optimal value function, which tends to be complex due to the recursive nature of unrolling the dynamics. Learning from recent successes in the graphics community (Sitzmann et al., 2020; Tancik et al., 2020), we go on to show that we can overcome the spectral bias by constructing a composite kernel that first lifts the input into random Fourier features (Rahimi & Recht, 2008; Yang et al., 2015). The resulting neural tangent kernel is tunable, hence offering much-needed control over how the network interpolates data. Our main contributions are twofold. First, we show that the (re-)introduction of Fourier features (Konidaris et al., 2011) enables faster convergence on higher-frequency components during value approximation, thereby improving the sample efficiency of off-policy reinforcement learning by reducing the amount of computation needed to reach the same performance. Second, our improved neural tangent kernel produces localized generalization akin to a Gaussian kernel. This reduces the cross-talk during gradient updates and makes the learning procedure more stable. The combined effect of these two improvements further allows us to remove the target network on a few domains, which leads to additional reduction in the value estimation bias with TD(0).
BACKGROUND AND NOTATION

We consider an agent learning to act in a Markov Decision Process (MDP) ⟨S, A, R, P, µ, γ⟩, where S and A are the state and action spaces, P : S × A → S is the transition function, R : S × A → R is the reward, and µ(s) is the initial state distribution. We consider an infinite-horizon problem with discount factor γ. The goal of the agent is to maximize the expected future discounted return J = E[Σ_{t=0}^∞ γ^t R(s_t, a_t)] by learning a policy π(a|s) that maps a state s to a distribution over actions. The state-action value function (Q-function) is defined as Q^π(s, a) = E[Σ_{t=0}^∞ γ^t R(s_t, a_t) | (s_0, a_0) = (s, a)]. The optimal Q*(s, a) is the fixed point of the Bellman optimality operator B*:

B*Q(s, a) = R(s, a) + γ E_{s' ∼ P(s'|s,a)}[max_{a*} Q(s', a*)].   (1)

The fitted Q iteration family of algorithms (Ernst et al., 2003; Riedmiller, 2005a) iteratively finds the (sub)optimal Q_{θ*} by recursively applying the gradient update

θ' = θ − η ∇_θ E_{(s,a,s') ∼ D} ‖Q_θ(s, a) − B*Q_θ(s, a)‖²₂.   (2)

Let X = Q_θ(s, a) − B*Q_θ(s, a) and apply the chain rule. The update in Equation 2 becomes

θ' = θ − 2η E_{(s,a,s') ∼ D}[X ∇_θ Q_θ(s, a)].   (3)

We can approximate the updated Q in function form through its first-order Taylor expansion

Q_{θ'}(s, a) ≈ Q_θ(s, a) − 2η E_{(s',a') ∼ D}[K(s, a; s', a') X],   (4)

where the bilinear form K(s, a; s', a') = ∇_θ Q_θ(s, a)ᵀ ∇_θ Q_θ(s', a') is the neural tangent kernel (NTK, see Jacot et al. 2018 and Section 4). This procedure can be interpreted as producing a sequence {Q_0, Q_1, Q_2, . . .} using the iterative rule Q_{i+1} = ΠB*(Q_i), starting with Q_0(s, a) = R(s, a). Π represents a projection onto the functions expressible by the class of function approximators, which reduces to the identity in the tabular case. The ith item in the sequence, Q_i, is the projected optimal Q-function of a derived MDP with a shorter horizon H = i. We motivate through a toy MDP adapted from Dong et al.
(2020), and show that neural fitted Q iteration significantly underfits the optimal value function. Despite the simple-looking reward and forward dynamics, the value function is quite complex. Our spectral analysis in Section 3.1 indicates that such complexity arises from the recursive application of the Bellman operator.

A MOTIVATING EXAMPLE

Toy MDP Distribution. Consider a class of toy Markov decision processes 𝓜. The state space is defined on the real line as S = [0, 1). The reward is the identity function. The action space is a discrete set with two elements {0, 1}, each corresponding to a distinct forward dynamics that is randomly sampled from the space of piecewise-linear functions with k "kinks." For all of the examples below, we use a fixed number k = 10, and uniformly sample the value of each turning point in this dynamics function between 0 and 1. The result is a distribution 𝓜 that we can sample from: M ∼ 𝓜 = p(M).

A Curious Failure of Neural Fitted Q Iteration. When we apply standard neural fitted Q iteration (FQI, see Riedmiller 2005b) to this toy problem using a simple four-layer multi-layer perceptron (MLP) with 400 hidden units at each layer, we observe that the learned MLP value approximator, shown in Figure 2a, significantly underfits the optimal value function. One hypothesis is that Q-learning is to blame; in fact, prior works such as Kumar et al. 2021 have argued that underfitting comes from minimizing the TD (temporal-difference) error, because bootstrapping resembles self-distillation (Mobahi et al., 2020), which is known to lead to under-parameterization in the learned neural network features. To show that Q-learning is not to blame, we can use the same MLP architecture, but this time directly regress towards the (ground-truth) optimal Q function via supervised learning.
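The toy setup above can be sketched in a few lines. The following plain-Python example (the grid resolution, discount factor, and iteration count are our own choices; k = 10 kinks and the identity reward are from the text) samples one toy MDP and computes its optimal Q by exact tabular value iteration, i.e. fitted Q iteration with the identity projection:

```python
import random

random.seed(0)
N, GAMMA, K = 101, 0.9, 10                 # grid points, discount, number of "kinks"
grid = [i / N for i in range(N)]

def sample_dynamics():
    """A random piecewise-linear map [0,1) -> [0,1) with K segments."""
    xs = [i / K for i in range(K + 1)]
    ys = [random.random() for _ in range(K + 1)]   # turning points in [0, 1)
    def f(s):
        i = min(int(s * K), K - 1)
        t = (s - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return f

dyn = [sample_dynamics(), sample_dynamics()]        # one map per action
# precompute the successor grid index for each (action, state)
step = [[min(int(dyn[a](s) * N), N - 1) for s in grid] for a in range(2)]

# exact value iteration: Q_{i+1} = B* Q_i, with reward R(s, a) = s
Q = [[0.0] * N for _ in range(2)]
for _ in range(500):
    V = [max(Q[0][j], Q[1][j]) for j in range(N)]
    Q = [[grid[j] + GAMMA * V[step[a][j]] for j in range(N)] for a in range(2)]

V = [max(Q[0][j], Q[1][j]) for j in range(N)]
assert all(0.0 <= v <= 1.0 / (1.0 - GAMMA) for v in V)   # bounded by R_max/(1-gamma)
```

Plotting V over the grid for a few sampled MDPs reproduces the jagged, high-frequency value functions that the MLP struggles to fit.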
Figure 2b shows that the underfitting still persists under supervised learning, and increasing the depth to 12 layers (see Figure 2c) fails to fix the problem. This shows that an over-parameterized network alone is insufficient to reduce underfitting. Finally, if we increase the number of training iterations from 400 epochs to 2000, the original four-layer MLP attains a good fit. This shows that solely focusing on the expressivity of the neural network, and making it bigger (Sinha et al., 2020), can be misguided. The class of functions that can be approximated depends on how many gradient updates one is allowed to make, a number capped by the compute budget.

SPECTRAL SIGNATURE OF THE TOY MDP

The Bellman optimality operator imprints a non-trivial spectrum on the resulting Q_i as it is applied recursively during fitted Q iteration. We present the evolving spectra, marginalized over 𝓜, in Figure 3. As the effective horizon increases, the value function accrues more mass in the higher-frequency part of the spectrum, which corresponds to less correlation between the values at nearby states. In a second experiment, we fix the horizon to 200 while increasing the discount factor from 0.1 all the way up to 0.99. We observe a similar whitening of the spectrum at longer effective recursion depths. In other words, the complexity of the value function comes from the repeated application of the Bellman optimality operator, in a process that is not dissimilar to an "infinity mirror": the spectrum of the Bellman operator gets folded into the resulting Q function upon each iterative step. Although our analysis focuses on the state space, the same effect can be intuitively extrapolated to the joint state-action space.

KERNEL VIEW ON OFF-POLICY DIVERGENCE

Following the formulation of convergence in Dayan (1992); Tsitsiklis (1994); Jaakkola et al. (1994):
An automorphism f on S is a contraction if ∀a, b ∼ S, f (a) − f (b) ≤ γ a − b . Here γ ∈ [0, 1) is called the contraction modulus. When γ = 1, f is a nonexpansion. Banach fixed-point theorem. Let S be non-empty with a contraction mapping f . Then f admits a unique fixed-point x * ∈ S s.t. f (x * ) = x * . Furthermore, ∀x 0 ∈ S, x * is the limit of the sequence given by x i+1 = f (x i ). a.k.a x * = lim i→∞ x i . Without lost of generality, we can discretize the state and action space S and A. The NTK becomes the gram matrix K ∈ R |S×A|×|S×A| . Transition data are sampled from a distribution ρ(s, a). Theorem. (Achiam et al., 2019) Let indices i, j refer to state-action pairs. Suppose that K, η and ρ satisfy the conditions: ∀i, 2ηK ii ρ i < 1,(6)∀i, (1 + γ) j =i |K ij |ρ j ≤ (1 − γ)K ii ρ i ,(7) Then, Equation 4 induces a contraction on Q in the sup norm, with fixed-point Q * and the TD loss optimization converges with enough optimization steps. For relatively large γ (for instance, γ ∈ (0.99, 0.999)), the above theorem implies that small offdiagonal terms in the NTK matrix are sufficient conditions for convergence. SPECTRAL-BIAS AND NEURAL KERNEL REGRESSION Consider a simple regression problem where we want to learn a function f (ξ; θ) ∈ R. ξ is a sample from the dataset. To understand how the output of the network changes w.r.t small perturbations to the parameters, we can Taylor expand around θ f (ξ; θ + δθ) − f (ξ; θ) ≈ ∇θf (ξ; θ), δθ .(8) During stochastic gradient descent using a training sampleξ with a loss function L, the parameter update is given by the product between the loss derivative L • f (ξ) and the neural tangent kernel K (Jacot et al., 2018) δθ = −ηL (f (ξ))K(ξ,ξ) where K(ξ,ξ) = ∇ θ f (ξ; θ), ∇ θ f (ξ; θ) .(9) In the infinite-limit with over-parameterized networks, the function remains close to initialization during training (Chizat et al., 2019). 
The learning dynamics in this "lazy" regime under an ℓ2 regression loss behave as a minimum-norm least-squares solution

f_t − f* = e^{−ηK t(ξ, ξ̄)}(f_0 − f*), (10)

where f_t is the function under training at time t and f_0 is the neural network at initialization. Replacing K with the Gram matrix K between all pairs of training data, we can write K in its spectral form K = OΛOᵀ, where each entry of the diagonal matrix Λ is the eigenvalue λ_i > 0 for the basis function O_i in the orthogonal matrix O. Using the identity e^A = O e^Λ Oᵀ, we can decompose the learning dynamics in Equation 10 into

Oᵀ(f_t − f*) = e^{−ηΛt} Oᵀ(f_0 − f*). (11)

The key observation is that the convergence rate ηλ_i of the component O_i depends exponentially on its eigenvalue λ_i, i.e. the absolute error

|O_iᵀ(f_t − f*)| = e^{−ηλ_i t} |O_iᵀ(f_0 − f*)|. (12)

The Gram-matrix form of the NTK also offers an intuitive way to inspect state-aliasing during gradient descent, because the off-diagonal entries correspond to the similarity between gradient vectors for different state-action pairs. The kernel of a multi-layer perceptron is not stationary when the input is not restricted to a hypersphere as in Lee et al. (2017). We can, however, compute K for popular network architectures over R_{[0,1)}, as shown in Figure 4. The NTKs of both ReLU networks and hyperbolic-tangent networks contain large off-diagonal elements, increasing the chance of divergence.

OVERCOMING THE SPECTRAL BIAS OF NEURAL VALUE APPROXIMATION

To correct the spectral bias of a feed-forward neural network, we construct a composite kernel where a random map z : R^d → R^D first "lifts" the input into a randomized harmonic basis. This explicit kernel-lifting trick was introduced by Rahimi & Recht (2007), and it allowed the authors to fit complex datasets using a linear machine.
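The lifting trick can be sketched with plain ridge regression on random sinusoidal features; the bandwidth, feature count, and ridge strength below are placeholder choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Kernel lifting": map 1-D inputs through random sinusoids, then fit a
# wiggly target with ordinary ridge regression -- a linear machine.
x = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * 8 * x[:, 0]) + 0.5 * np.sin(2 * np.pi * 3 * x[:, 0])

D = 256
W = rng.normal(0.0, 2 * np.pi * 8, size=(1, D))   # band wide enough for the target
c = rng.uniform(-np.pi, np.pi, size=D)            # random phases
Z = np.sin(x @ W + c)                             # lifted features

# Ridge regression on the lifted features
w = np.linalg.solve(Z.T @ Z + 1e-6 * np.eye(D), Z.T @ y)
pred = Z @ w
print(np.mean((pred - y) ** 2) < 1e-2)            # linear model fits the wiggles
```

A plain linear fit on raw x could never track this target; the random harmonic basis does the heavy lifting.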
The mixing brings high-frequency input signals down to a lower band that is more acceptable to the ReLU network. Data also appear more sparse in the higher-dimensional spectral basis, which further simplifies learning. To emulate an arbitrary shift-invariant kernel K*, Rahimi & Recht (2007) offered a procedure that samples directly from the distribution given by K*'s spectrum F(K*):

k(x, y) = ⟨φ(x), φ(y)⟩ ≈ z(x)ᵀ z(y), where z(x)_i ∝ e^{2πi w_iᵀx} and w_i ∼ F(K*). (13)

We choose to initialize FFN by sampling the weights from an isotropic multivariate Gaussian with a tunable cutoff frequency b. We modify the normalization scheme from Rahimi & Recht (2007) and divide b by the input dimension d, so that similar bandwidth values can be applied across a wide variety of reinforcement learning problems with drastically different state and action space dimensions. We slightly abuse notation by using b_j for the bias parameters, and b (without subscript) for the bandwidth:

RFF(x)_j = sin(Σ_{i=1}^{d} w_{i,j} x_i + b_j), where w_{i,j} ∼ N(0, πb/d) and b_j ∼ U(−π, π). (14)

Algorithm: Learned Fourier Features (LFF)

    class LFF(nn.Linear):
        def __init__(self, in_features, out_features, b_scale):
            super().__init__(in_features, out_features)
            nn.init.normal_(self.weight, std=b_scale / in_features)
            nn.init.uniform_(self.bias, -1.0, 1.0)

        def forward(self, x):
            x = np.pi * super().forward(x)
            return torch.sin(x)

To attain better performance, Rahimi & Recht (2008) learn a weighted sum of these random kitchen-sink features, whereas Yang et al. (2015) adapt the sampled spectral mixture itself through gradient descent. Using the modern deep learning toolchain, we can view the entire network, including the random Fourier features, as a learnable kernel. For small neural networks with limited expressivity, we found that enabling gradient updates on the RFF parameters is important for performance.
We refer to this adaptive variant as learned Fourier features (LFF), and to the shallow MLP with an adaptive Fourier feature expansion as the Fourier feature network (FFN).

Kernel Spectrum, Convergence, and Interpolation. The composite kernel of the FFN has larger eigenvalues on the high-frequency harmonics (see Figure 5), which improves the speed at which the network converges. We can tune this kernel spectrum by changing the band-limit parameter b; with a larger bandwidth, the interpolation becomes more local. On the toy domain used to motivate this work, FFN achieves a perfect fit in just 400 optimization epochs, whereas the MLP baseline requires at least two thousand gradient steps (see Figure 2e). The gain is even more prominent on the more challenging environments in the results section (Quadruped-run, see Figure 9). Optimization overhead is a key bottleneck in off-policy learning, so faster convergence also translates into better sample efficiency and faster wall-clock time.

Improving Off-policy Stability. "Cross-talk" between the gradients of nearby states is a key factor in off-policy divergence. Such cross-talk manifests as similarity between gradient vectors, which is captured by the Gram matrix of the NTK. The Fourier feature network kernel is both localized (Figure 4c) and tunable (Figure 5), offering direct control over the bias-variance trade-off. The improved learning stability allows us to remove the target network on a number of domains while retaining substantial performance. A curious finding is that when the target network is used, the learned value approximation still contains a large constant offset from the optimal value function (yellow curve in Figure 6); this offset disappears once we remove the target value network (blue curve).
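The Gram-matrix view of gradient cross-talk can be made concrete with a tiny network; the one-hidden-layer ReLU architecture and all sizes below are illustrative stand-ins, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical NTK Gram matrix for f(x) = a . relu(W x + b) on scalar inputs.
# Entry K_ij = <grad_theta f(x_i), grad_theta f(x_j)>: large off-diagonal
# entries mean an update at one state drags the values at other states along.
h = 64
W = rng.normal(size=h)
b = rng.normal(size=h)
a = rng.normal(size=h) / np.sqrt(h)

def grads(x):
    pre = W * x + b
    gate = (pre > 0).astype(float)
    dW = a * gate * x             # df/dW
    db = a * gate                 # df/db
    da = np.maximum(pre, 0.0)     # df/da
    return np.concatenate([dW, db, da])

xs = np.linspace(0.0, 1.0, 16)
G = np.stack([grads(x) for x in xs])
K = G @ G.T                       # 16 x 16 NTK Gram matrix

off = np.abs(K - np.diag(np.diag(K)))
print(K.shape, off.max() > 0.0)   # non-zero off-diagonal cross-talk
```

Plotting K for this ReLU network versus one built on sinusoidal features reproduces, in miniature, the contrast shown in Figure 4.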
Visualizing Mountain Car. The Mountain Car environment (Moore, 1990; Sutton, 2000) has a simple, two-dimensional state space that can be directly visualized to show the learned value function. Figure 7(e) shows the ground-truth optimal value function produced by running tabular value iteration; the two axes correspond to the velocity (horizontal) and position (vertical) of the car. We compare the value estimates from three baselines in Figure 7: the ground-truth value estimate acquired using tabular value iteration; one obtained from a four-layer ReLU network using fitted Q iteration; and one from the same network, but with 16-dimensional Fourier features on the input. All networks use 400 latent neurons and are optimized for 2000 epochs.

SCALING UP TO COMPLEX CONTROL TASKS

We scale the use of FFN to high-dimensional continuous control tasks from the DeepMind control suite (Tassa et al., 2018), using soft actor-critic (SAC, Haarnoja et al. 2018) as the base algorithm. Our implementation is built off the pytorch codebase from Yarats & Kostrikov (2020), which is the current state of the art on state-space DMC. Both the actor and the critic have three latent layers in this base implementation. We use FFN for both the actor and the critic by replacing the first layer of the MLP with learned Fourier features (LFF), without increasing the total number of parameters. We provide extensive empirical analysis on eight common DMC domains and additional results with DDPG in Appendix A.9. Due to space constraints we present the results on four representative domains in Figure 8. FFN consistently improves over the MLP baseline in complex domains while matching its performance in simpler domains. We include additional ablations and hyperparameter sensitivity analyses in Appendix A.7.

Figure 11: Weight and bias changes in FFN and MLP during training, using SAC.
While FFN's bias parameters undergo less change than MLP's bias parameters, the results are mixed when it comes to the weight parameters on the Quadruped environment.

Faster Convergence via Fourier Feature Networks. The key benefit of the Fourier feature network is that it reduces the computation needed for a good approximation. Off-policy reinforcement learning algorithms tend to be bottlenecked by optimization, whereas on-policy algorithms tend to be bottlenecked by simulation time. This means that with a reduced replay ratio, FFN can achieve faster wall-clock time. Figure 9 shows that we can indeed reduce the update frequency to 1/4 in Walker-walk and 1/6 in Quadruped-run while still matching the performance of the MLP baseline. In a control experiment (see Figure 10), we find that increasing the replay ratio causes the MLP to crash on Quadruped-run, illustrating the intricate balance one otherwise needs to maintain. FFN brings additional improvements in learning stability besides faster convergence. We additionally visualize the value approximation error in Figure 25. FFN consistently produces smaller value approximation errors compared to the MLP.

Figure 12: Value approximation error with FFN vs MLP using SAC. The divergence is especially prominent at the beginning of learning when data is sparse. FFN reduces the variance in these regimes by making a more appropriate bias-variance trade-off than the MLP baseline.

Improved Feature Representation Makes Learning Easier. The random Fourier features sample from a Gaussian distribution in the frequency domain. This is a more uniform functional distribution compared to the MLP, which exponentially favors low-frequency functions (Bietti & Mairal, 2019). Let W_t and B_t refer to the concatenated weight and bias vectors. In this experiment we inspect the ℓ2-norm of the changes of the weights and biases w.r.t. their initial values, ∥W_t − W_0∥_2 and ∥B_t − B_0∥_2.
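The drift metric just defined can be sketched numerically; the random "updates" below are a stand-in for SAC gradient steps, and all shapes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter-drift metric: l2-norm of the change of the concatenated weight
# vector relative to initialization, ||W_t - W_0||_2, tracked over training.
layers0 = [rng.normal(size=(64, 17)), rng.normal(size=(64, 64))]
layers = [w.copy() for w in layers0]
flat0 = np.concatenate([w.ravel() for w in layers0])

drift = []
for step in range(100):
    for w in layers:
        w += 0.01 * rng.normal(size=w.shape)   # stand-in for a gradient step
    flat = np.concatenate([w.ravel() for w in layers])
    drift.append(np.linalg.norm(flat - flat0))

print(drift[-1] > drift[0])   # drift accumulates as training proceeds
```

In the paper's experiments the same quantity is computed per layer, which is what separates the first (Fourier) layer from the rest of the network in Figure 11.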
We include all eight DMC domains in Appendix A.11 and present the most representative domains in Figure 11. In seven out of eight domains, the first layer in both the MLP and the FFN experiences roughly the same amount of change in the weights, but the FFN requires less weight change in the later layers. This indicates that LFF makes learning easier for the rest of the network. Quadruped-run is the only outlier, where the LFF experiences more weight change than the first layer of the MLP. Our hypothesis is that this is a harder task, and we are either under-parameterized in terms of the Fourier feature dimension, or there is a distribution misalignment between the task and the model. So far we have focused on the weights; the biases change consistently less in the FFN than in the MLP.

Fourier Features Improve Off-Policy Stability. Without a target network, gradient updates through the TD objective cause the regression target itself to change, making the learning procedure less stable. Mnih et al. (2013) replace the value network on the r.h.s. of the Bellman equation with a copy that is updated at a slower rate, so that the TD objective is more stationary. In Figure 13 we remove the target network. While the MLP baseline completely fails in all environments, the improved neural tangent kernel of the FFN stabilizes Q learning sufficiently that only minor performance losses occur in seven out of eight domains. We offer the complete results in Appendix A.12.

Improving the Convolution Kernel. The filters in a standard convolutional neural network suffer the same spectral bias in the RGB and feature dimensions as in the state and action spaces above. We can in principle extend our spectral-bias fix to convolutional networks by replacing the first layer with a 1 × 1 convolution followed by a sine non-linearity.
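This first-layer replacement can be sketched as follows; the N(0, πb/c_in) initialization and all shapes are our assumptions carried over from Eq. (14), not the exact DrQ-v2 configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourier first layer for a CNN: a 1x1 convolution followed by a sine
# non-linearity, i.e. Eq. (14) applied per pixel across channels.
def fourier_1x1(img, c_out=32, b=1.0, rng=rng):
    c_in, H, W = img.shape
    Wk = rng.normal(0.0, np.pi * b / c_in, size=(c_out, c_in))
    phase = rng.uniform(-np.pi, np.pi, size=c_out)
    # a 1x1 conv is a channel-wise linear map at every spatial location
    pre = np.einsum('oc,chw->ohw', Wk, img) + phase[:, None, None]
    return np.sin(pre)

img = rng.uniform(0, 1, size=(3, 8, 8))   # RGB input
out = fourier_1x1(img)
print(out.shape)  # (32, 8, 8)
```

The spatial structure is untouched; only the channel mixing at each pixel goes through the randomized sinusoidal basis.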
In this experiment we build upon the DrQ-v2 implementation (Yarats et al., 2021) and replace the CNN in both the actor and the critic with this Fourier-CNN (F-CNN) architecture, following Equation 14. The results indicate that while F-CNN's performance is within the variance of CNN's performance, its variance is much lower (see Figure 14). A similar technique has also been used by Kingma et al. (2021) to improve image generation with diffusion models.

DISCUSSION

The inspiration for this work comes from our realization that the techniques used by the graphics community (Tancik et al., 2020; Sitzmann et al., 2020; Mildenhall et al., 2020) to learn high-fidelity continuous neural representations offer a new way to explicitly control generalization in neural networks. We refer to Achiam et al. (2019)'s pioneering work for the theoretical setup on off-policy divergence, which predates much of the recent analysis on spectral bias in neural networks and the (re-)introduction of random Fourier features as a solution to correct such learning biases. Functional regularization through gradient conditioning is an alternative to re-parameterizing the network; a recent effort can be found in Piché et al. (2021). A key benefit of re-parameterizing the network is speed: we managed to reduce the wall-time by 32% on the challenging Quadruped environment by reducing the replay ratio to 1/6 of the baseline. Fourier features for reinforcement learning date back at least to Konidaris et al. (2011). While this manuscript was under review, a few similar publications and preprints came out, all developed independently. Li & Pathak (2021) focus on the smoothing effect that Fourier feature networks have in rejecting noise. Our finding is that vanilla feed-forward neural networks are on the biased end of the bias-variance trade-off; in other words, we believe the main issue with neural value approximation is that the network under-fits the real signal, as opposed to over-fitting to random noise.
The rank of the neural representation describes the portion of the linear space occupied by the largest eigenvalues of the kernel, regardless of the spatial frequency of the corresponding eigenfunctions; lower rank therefore does not correspond to smoother functions. We additionally find that maintaining a Fourier-feature-to-input ratio of D/d > 40 is critical to the expressiveness of the network, which allowed us to scale up to Quadruped without needing to concatenate the raw input as a crutch. Brellmann et al. (2022) is a concurrent submission to ours that delightfully includes the random tile-encoding scheme from Rahimi & Recht (2007) and the on-policy algorithm PPO. Our intuition is that policy gradient needs to be considered a life-long learning problem, and locality in the policy network can speed up learning by eliminating the need to repeatedly sample areas the policy has already experienced. We are excited about these efforts from the community, and urge the reader to visit them for diverse treatments and broader perspectives.

A APPENDIX

A.1 EXPERIMENTAL DETAILS ON THE TOY MDP

Offline Data Generation. We divided the 1-dimensional state space into 1000 discrete bins and used the midpoints of the bins as initial states. We then took both actions from each initial state to get the corresponding next states, and used the dataset of these transitions (2000 in total) as our offline dataset.

Optimization details. We use a 4-layer MLP with ReLU activation and 400 latent neurons. We use the Adam optimizer with a learning rate of 1e-4 and optimize for 400 epochs, using gradient descent with a batch size of 200. For the deeper network, we use a 12-layer MLP with 400 latent neurons and keep the other optimization hyperparameters fixed. To show that a longer training period helps MLPs evade the spectral bias, we train a 4-layer MLP with 400 latent neurons for 2000 epochs; all other optimization hyperparameters are kept the same.
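A minimal sketch of the tabular backup underlying this setup, assuming the sin(2πs) reward from the footnote and simple wrap-around left/right shift dynamics (the exact toy dynamics are our assumption, not the paper's):

```python
import numpy as np

# Tabular value iteration on a 1-D state space discretized into 1000 bins,
# with two actions (shift one bin left or right, wrapping around).
n, gamma = 1000, 0.9
s = (np.arange(n) + 0.5) / n                              # bin midpoints
step = 1.0 / n
next_s = np.stack([(s - step) % 1.0, (s + step) % 1.0])   # two actions
r = np.sin(2 * np.pi * s)                                 # reward per footnote
idx = np.rint(next_s * n - 0.5).astype(int) % n           # next-state bin index

Q = np.zeros((2, n))
for it in range(500):
    V = Q.max(axis=0)                 # greedy value
    Q = r[None, :] + gamma * V[idx]   # Bellman optimality backup

V = Q.max(axis=0)
print(V.shape)  # (1000,)
```

Replacing the tabular table Q with a neural network fitted to these backups turns the loop into fitted Q iteration, which is where the spectral bias of the function approximator enters.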
A.2 EXPERIMENT DETAILS ON MOUNTAIN CAR

Environment. We use the implementation from OpenAI gym (Brockman et al., 2016) and discretize the state space into 150 bins. This is a relatively small number because otherwise the space complexity becomes infeasibly large. We use a four-layer MLP as the Q function and a bandwidth parameter of 10.

A.3 EXPERIMENTAL SETUP ON DEEPMIND CONTROL SUITE

DMC domains. We list the 8 DMC domains along with their observation space dimension and action space dimension in Table 1. We also list the optimal bandwidth b used by FFN for each of these environments.

• Multi-layer perceptron (MLP) with ReLU activation:
MLP(x) = f_t ∘ ReLU ∘ · · · ∘ f_2 ∘ ReLU ∘ f_1(x)
• Fourier feature network (FFN) (ours) uses a sine activation with random phase shift to replace the first layer of the network with learned Fourier features:
LFF(x) = sin(Wx + c), W_{i,j} ∼ N(0, πb/d), c_i ∼ U(−π, π)
so that the Fourier feature network is FFN(x) = f_t ∘ ReLU ∘ · · · ∘ f_2 ∘ LFF(x)
• RFF (Tancik et al., 2020), which uses sine and cosine pairs concatenated together:
RFF_Tancik(x) = [sin(2πWx), cos(2πWx)], W_{i,j} ∼ N(0, σ²)
• SIREN network (Sitzmann et al., 2020), which stacks learned Fourier layers throughout the entire network, using the Sitzmann initialization:
lff(x) = sin(Wx + c), W_{i,j} ∼ U(−√6/√n, √6/√n), c_i ∼ U(−π, π)
where the t-layer network is SIREN(x) = lff_t ∘ · · · ∘ lff_2 ∘ lff_1(x). It is critical to note that each layer has a distinct bandwidth scaling parameter; hence, instead of a single scaling hyperparameter b for the random matrices, SIREN has a set {b_i}, each of which needs to be tuned.

We found that it is important to use isotropic distributions for the weights; this means using Gaussian distributions or orthogonal matrices for the weight matrix, whereas a uniform distribution is anisotropic. To stack LFF (which yields SIREN), the network requires a "fan-out" in the latent dimensions to accommodate the successive increase in the number of features.
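The first two feature maps above can be sketched side by side; the bandwidth b and scale σ below are placeholders, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)

d, D = 4, 16
x = rng.uniform(0, 1, size=d)

# LFF: sin(Wx + c) with random phases, W_ij ~ N(0, pi*b/d)
b = 1.0
W = rng.normal(0.0, np.pi * b / d, size=(D, d))
c = rng.uniform(-np.pi, np.pi, size=D)
lff = np.sin(W @ x + c)                        # D features

# RFF_Tancik: concatenated sine/cosine pairs, no random phase
sigma = 1.0
W2 = rng.normal(0.0, sigma, size=(D, d))
rff = np.concatenate([np.sin(2 * np.pi * W2 @ x),
                      np.cos(2 * np.pi * W2 @ x)])   # 2D features

print(lff.shape, rff.shape)  # (16,) (32,)
```

Note the dimensional asymmetry: for the same number of frequencies, the sine/cosine formulation doubles the output width, whereas the random-phase formulation keeps it at D.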
SIREN is difficult to tune because its initialization scheme depends on the inverse square root of the input dimension; correcting the initialization to depend inversely on the input dimension fixes this issue.

A.6 EFFECT OF BANDWIDTH PARAMETER b ON TOY MDP

The weight distribution in the random Fourier features offers a way to control the generalization (interpolation) of the neural value approximator. In Figure 15, we compare the Q functions learned by Fourier feature networks initialized with different band-limit parameters b. With larger b, the network recovers sharper details of the optimal Q function. We also provide the result for an FFN with the target network turned off. We use SAC as the RL algorithm. We observe that different values of b lead to different performance; hence, the value of b must be chosen carefully for each environment.

MLP Actor + FFN Critic. The experiments we have presented so far use FFN for both the actor and the critic. Our intuition is that the main benefit comes from using FFN in the critic. The results in Figure 17 show that this is indeed the case: using an MLP for the actor does not affect the learning performance, whereas using an MLP for the critic causes the performance to match that of the MLP baseline.

Sensitivity to (Fourier) Feature-Input Ratio. In addition to the bandwidth b, we need to choose the Fourier feature dimension D for FFN. We maintained the feature-input ratio D/d at 40. But can we get away with a lower feature-input ratio? Figure 18 shows that it is important to maintain a feature-input ratio of at least 40; any reduction in the ratio hurts the performance of FFN.

We can visualize the amount of change the FFN and the MLP experience in weight space. Let W_t^k and B_t^k refer to the flattened weight and bias vectors from layer k of the critic after training over t environment frames, which corresponds to the number of optimization steps times a fixed multiple.
Figure 22 shows the evolution of ∥W_t − W_0∥ for all eight domains, and Figure 23 shows the evolution of ∥B_t − B_0∥; both are collected with soft actor-critic.

A.12 REMOVING THE TARGET NETWORK

We present the full results without the target network in Figure 24. While the MLP completely flat-lines in all environments, FFN matches its original performance with target networks on Finger-turn and Quadruped-walk. It suffers a performance loss but still manages to learn on Cheetah-run, Walker-run, and Quadruped-run. Hopper-hop, Humanoid-run, and Acrobot-swingup are difficult for model-free algorithms, and FFN fails to learn on them without a target network. We can also inspect the value approximation error. Without a target network, both MLP and FFN do poorly on Hopper-hop, so the result on this domain is uninformative. On Walker-run, removing the target network reduces the value approximation error even further. On Quadruped-walk, FFN without a target network performs worse than FFN with one, and has slightly higher approximation error; both FFN baselines, however, have lower approximation errors than the MLP. On Quadruped-run without the target network, the approximation error diverges to a large value, then converges back; the source of this divergence remains unclear. The divergence is especially prominent at the beginning of learning when the data is sparse. FFN reduces the variance in these regimes by making a more appropriate bias-variance trade-off than the MLP baseline.

Figure 1: A Toy MDP with simple forward dynamics and a complex value function, adapted from Dong et al. (2020).

Figure 3: The spectrum of the optimal value function noticeably whitens over (a) longer horizons and (b) larger γ.

Figure 4: Multiple works (Rahaman et al., 2019; Shah et al., 2020; Yang & Salman, 2019; Huh et al., 2021) have shown that the NTK spectrum (λ_i) of a regular ReLU network decays rapidly at higher frequency.
In particular, Bietti & Mairal (2019) provided a bound of Ω(k^{−d−1}) for the k-th spherical harmonics². Such results have been extended to finite-width networks of arbitrary architecture and depth via tensor programs (Yang, 2019; 2020). NTK comparison between (a) an MLP with ReLU activation, (b) an MLP with tanh activation, and (c) Fourier feature networks (ours): the MLP NTKs in (a, b) both contain large off-diagonal elements, i.e., the addressing by the gradient vectors is not specific to each datapoint.

Figure 5: FFN provides direct control over generalization. (a) The kernel spectra; the peak in the center is due to a windowing effect, and a higher band-limit leads to flatter spectra. (b) The cross-section of the kernel at different band-limits.

Figure 6: (a) Comparison of the learned Q function (for action II). (b) Approximation error averaged over 10 random seeds. FFN with a target network still contains an offset, whereas removing the target network eliminates this bias.

Figure 7: Visualizing the learned value function for action 0 in Mountain Car using (a) a four-layer ReLU network, (b) an eight-layer network, (c) a four-layer network with hyperbolic-tangent activation, which is commonly used in deep reinforcement learning, (d) a four-layer FFN, and (e) the ground-truth optimal value function produced by tabular value iteration; the two axes correspond to velocity (horizontal) and position (vertical).

Figure 8: Learning curves with FFN applied to SAC on the DeepMind control suite. Domains are ordered by input dimension, showing an overall trend where domains with higher state dimension benefit more from Fourier features. We use a (Fourier) feature-input ratio of 40:1.

Figure 9: FFN needs only a fraction of the compute to match the performance of the MLP baseline on both Walker-run and Quadruped-run.

Figure 10: Increasing the gradient-updates-to-sampling ratio causes Quadruped-run to crash; on Walker-run it improves the performance.

Figure 13: Learning curves showing that FFN improves learning stability to the extent that learning can happen on some domains even without the target network. On the contrary, MLP consistently fails without a target value network.
Figure 14: FFN has consistently less variance than the MLP. Control-from-pixels learning curves on the DeepMind control suite using Fourier-CNN (F-CNN) and DrQ-v2, with a feature-input ratio of 40:1. Performance with the F-CNN has much lower variance and is consistently at the top of the confidence range of the vanilla CNN.

For a more detailed theoretical analysis of the CNN NTK, one can refer to Arora et al. (2019) and Li et al. (2019).

Figure 15: Q-value approximation on the toy MDP with different band-limit parameters b. (a-c) show b = {1, 3, 5}; (d) shows an FFN without a target network.

A.7 ABLATIONS AND SENSITIVITY ANALYSES

Sensitivity to bandwidth b. Since FFN introduces a bandwidth hyperparameter b, it is natural to ask how the choice of b affects FFN's performance. Figure 16 shows that FFN's performance does vary with the choice of b; furthermore, the best-performing value of b differs with the environment. This difference arises from the fact that the optimal value functions of different environments have different spectral biases and hence require different bandwidths b for FFN.

Figure 16: Learning curves with FFN for different values of b on Walker-run and Quadruped-run.

Figure 17: Using FFN for the critic is as good as using FFN for both actor and critic. However, using a learned FFN only for the actor is similar to the MLP baseline. This indicates that the gain mainly comes from better value approximation.

Figure 18: Learning curves with different Fourier dimension ratios on Quadruped-run. Lowering the ratio decreases the performance.

A.8 FULL RESULTS WITH SAC

We include the full set of results on eight DMC domains in Figure 19, using soft actor-critic (Haarnoja et al., 2018).

A.9 FULL RESULTS WITH DDPG

The benefits FFN brings generalize beyond soft actor-critic to other RL algorithms. In this section we present results based on deep deterministic policy gradient (DDPG, see Lillicrap et al. 2015).
Figure 20 shows the learning curves of FFN vs the MLP on eight DMC domains.

A.10 REDUCING VALUE APPROXIMATION ERROR

We include the full results on value approximation error on eight DMC domains in Figure 21. FFN consistently reduces the value approximation error w.r.t. the MLP baseline, especially at earlier stages of training when an insufficient amount of optimization has occurred.

Figure 19: Learning curves with FFN applied to SAC on the DeepMind control suite. Domains are ordered by input dimension, showing an overall trend where domains with higher state dimension benefit more from Fourier features. We use a (Fourier) feature-input ratio of 40:1.

Figure 20: Learning curves with FFN applied to DDPG on the DeepMind control suite. Domains are ordered by input dimension. The overall trend agrees with the results on soft actor-critic: domains with higher state dimension benefit more from random Fourier features. The same feature-input ratio of 40:1 is applied.

A.11 MEASURING PARAMETER CHANGES IN FFN AND MLP

Figure 21: Value approximation error with SAC on the DeepMind control suite. FFN reduces the error w.r.t. the MLP, especially during earlier stages of learning when the number of gradient updates is low.

Figure 23: Bias change in FFN and MLP during training of RL agents with SAC on the DeepMind control suite. Given that FFN's bias parameters have a better initialization, they undergo less change than MLP's bias parameters.

Figure 24: Since FFN requires fewer gradient updates for value-function estimation, its performance does not degrade as much when the target value network is removed. In contrast, MLP completely fails when the target value network is removed.

Figure 25: Value approximation error with FFN vs MLP using SAC.

REFERENCES

Kefan Dong, Yuping Luo, Tianhe Yu, Chelsea Finn, and Tengyu Ma. On the expressivity of neural networks for deep reinforcement learning. In International Conference on Machine Learning, pp. 2627-2637. PMLR, 2020.

Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models.
July 2021.

George Konidaris, Sarah Osentoski, and Philip Thomas. Value function approximation in reinforcement learning using the Fourier basis. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.

Kimin Lee, Michael Laskin, A. Srinivas, and P. Abbeel. SUNRISE: A simple unified framework for ensemble learning in deep reinforcement learning. In ICML, 2021.

Damien Ernst, Pierre Geurts, and Louis Wehenkel. Iteratively extending time horizon reinforcement learning. In Machine Learning: ECML 2003, pp. 96-107. Springer Berlin Heidelberg, 2003. doi: 10.1007/978-3-540-39857-8_11.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.

Hado Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems, volume 23, pp. 2613-2621, 2010.

Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, and Phillip Isola. The low-rank simplicity bias in deep networks. arXiv preprint arXiv:2103.10427, 2021.

Tommi Jaakkola, Michael I Jordan, and Satinder P Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6(6):1185-1201, 1994.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. arXiv preprint arXiv:1806.07572, 2018.

Kenji Kawaguchi and Jiaoyang Huang. Gradient descent finds global minima for generalizable deep neural networks of practical sizes, 2019.

Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. Implicit under-parameterization inhibits data-efficient deep reinforcement learning. International Conference on Learning Representations, 2021.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. arXiv preprint arXiv:1711.00165, 2017.

Alexander Li and Deepak Pathak.
Functional regularization for reinforcement learning via learned Fourier features. Adv. Neural Inf. Process. Syst., 34, 2021. ISSN 1049-5258.

Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora. Enhanced convolutional neural tangent kernels. November 2019.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Mach. Learn., 8(3):293-321, May 1992. ISSN 0885-6125, 1573-0565. doi: 10.1007/BF00992699.

Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pp. 405-421. Springer, 2020.

Marvin Minsky. Steps toward artificial intelligence. Proc. IRE, 49(1):8-30, January 1961. ISSN 0096-8390. doi: 10.1109/JRPROC.1961.287775.

Table 1: DMC domains in increasing order of state-space and action-space dimensionality. Columns: Name, Observation space, Action space, Bandwidth b.

Figure 22: Weight change in FFN and MLP during training of RL agents with SAC on the DeepMind control suite.
The results are mixed, and FFN's weight parameters undergo less change than MLP's weight parameters only in some environments.
Layer[2:n] 0.0 0.2 0.4 0.6 0.8 1.0 Frames 1e6 0 5 10 15 20 25 Bias change Quadruped-walk (Bias) FFN Layer[1] FFN Layer[2:n] MLP Layer[1] MLP Layer[2:n] 0.0 0.2 0.4 0.6 0.8 1.0 Frames 1e6 0 5 10 15 Bias change Quadruped-run (Bias) FFN Layer[1] FFN Layer[2:n] MLP Layer[1] MLP Layer[2:n] In this distribution, we modify the reward function to be sin(2πs) to simplify the reward spectrum. the input is restricted to a sphere. This bound is loose, and for a tighter bound refer toCao et al. (2019). ACKNOWLEDGMENTSThe authors would like to thank Leslie Kaelbling for her feedback on the manuscript; Jim Halverson and Dan Roberts at IAIFI for fruitful discussions over neural tangent kernels, and its connection to off-policy divergence; Aviral Kumar at UC Berkeley for bringing into our attention the expressivity issues with off-policy value approximation; and the reviewer and area chair for their kind feedback.This work is supported in part by the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions (IAIFI, https://iaifi.org/) under the Cooperative Agreement PHY-2019786; the Army Research Office under Grant W911NF-21-1-0328; and the MIT-IBM Watson AI Lab and Research Collaboration Agreement No.W1771646. The authors also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing high performance computing resources. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.REPRODUCIBILITY STATEMENTWe include detailed implementation details in Appendix A.4. Towards characterizing divergence in deep qlearning. 
Joshua Achiam, Ethan Knight, Pieter Abbeel, arXiv:1903.08894arXiv preprintJoshua Achiam, Ethan Knight, and Pieter Abbeel. Towards characterizing divergence in deep q- learning. arXiv preprint arXiv:1903.08894, 2019. S Arora, W Du, Hu, Li, Salakhutdinov, On exact computation with an infinitely wide neural net. arXiv preprint arXiv. S Arora, S S Du, W Hu, Z Li, R Salakhutdinov, and others. On exact computation with an infinitely wide neural net. arXiv preprint arXiv, 2019. Residual algorithms: Reinforcement learning with function approximation. Leemon Baird, Machine Learning Proceedings. ElsevierLeemon Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning Proceedings 1995, pp. 30-37. Elsevier, 1995. Reconciling modern machine learning practice and the bias-variance trade-off. undefined. Mikhail Belkin, J Daniel, Siyuan Hsu, Soumik Ma, Mandal, Mikhail Belkin, Daniel J Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine learning practice and the bias-variance trade-off. undefined, 2018. A counterexample to temporal differences learning. P Dimitri, Bertsekas, 10.1162/neco.1995.7.2.270Neural Comput. 72Dimitri P Bertsekas. A counterexample to temporal differences learning. Neural Comput., 7(2): 270-279, March 1995. ISSN 0899-7667, 1530-888X. doi: 10.1162/neco.1995.7.2.270. On the inductive bias of neural tangent kernels. Alberto Bietti, Julien Mairal, 1049-5258Adv. Neural Inf. Process. Syst. 32Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. Adv. Neural Inf. Process. Syst., 32, 2019. ISSN 1049-5258. Fourier features in reinforcement learning with neural networks. David Brellmann, Goran Frehse, David Filliat, David Brellmann, Goran Frehse, and David Filliat. Fourier features in reinforcement learning with neural networks, 2022. URL https://openreview.net/forum?id=VO7bAwdWRjg. Openai gym. 
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba, Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan, 10.1038/s41467-021-23103-1Nat. Commun. 1212914Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nat. Commun., 12(1):2914, May 2021. ISSN 2041-1723. doi: 10.1038/s41467-021-23103-1. Towards understanding the spectral bias of deep learning. Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu, Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. December 2019. On lazy training in differentiable programming. Lénaïc Chizat, Edouard Oyallon, Francis Bach, 1049-5258Adv. Neural Inf. Process. Syst. 32Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Adv. Neural Inf. Process. Syst., 32, 2019. ISSN 1049-5258. The convergence of td(λ) for general λ. Peter Dayan, 10.1007/bf00992701Mach. Learn. 83-4Peter Dayan. The convergence of td(λ) for general λ. Mach. Learn., 8(3-4):341-362, May 1992. ISSN 0885-6125, 1573-0565. doi: 10.1007/bf00992701. Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, arXiv:1312.5602arXiv preprintIoannis AntonoglouVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Self-distillation amplifies regularization in hilbert space. 
Hossein Mobahi, Mehrdad Farajtabar, Peter L Bartlett, Advances in Neural Information Processing Systems. 33Hossein Mobahi, Mehrdad Farajtabar, and Peter L. Bartlett. Self-distillation amplifies regularization in hilbert space. Advances in Neural Information Processing Systems, 33, 2020. Efficient memory-based learning for robot control. Andrew W Moore, Andrew W. Moore. Efficient memory-based learning for robot control. 1990. Deep double descent: Where bigger models and more data hurt. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever, Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. December 2019. URL http:// arxiv.org/abs/1912.02292. Beyond target networks: Improving deep q-learning with functional regularization. Alexandre Piché, Joseph Marino, Gian Maria Marconi, Christopher Pal, Mohammad Emtiyaz Khan, Alexandre Piché, Joseph Marino, Gian Maria Marconi, Christopher Pal, and Mohammad Emtiyaz Khan. Beyond target networks: Improving deep q-learning with functional regularization. June 2021. On the spectral bias of neural networks. Aristide Nasim Rahaman, Devansh Baratin, Felix Arpit, Min Draxler, Fred Lin, Yoshua Hamprecht, Aaron Bengio, Courville, Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. pp. 5301-5310, 2019. Random features for large-scale kernel machines. Ali Rahimi, Benjamin Recht, 1049-5258Adv. Neural Inf. Process. Syst. 20Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Adv. Neural Inf. Process. Syst., 20, 2007. ISSN 1049-5258. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. Ali Rahimi, Benjamin Recht, 1049-5258Adv. Neural Inf. Process. Syst. 21Ali Rahimi and Benjamin Recht. 
Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. Adv. Neural Inf. Process. Syst., 21, 2008. ISSN 1049-5258. Neural fitted q iteration-first experiences with a data efficient neural reinforcement learning method. Martin Riedmiller, European conference on machine learning. SpringerMartin Riedmiller. Neural fitted q iteration-first experiences with a data efficient neural reinforce- ment learning method. In European conference on machine learning, pp. 317-328. Springer, 2005a. Neural fitted q iteration -first experiences with a data efficient neural reinforcement learning method. Martin Riedmiller, 978-3-540-31692-3Machine Learning: ECML 2005. João Gama, Rui Camacho, Pavel B. Brazdil, Alípio Mário Jorge, and Luís TorgoBerlin, Heidelberg; Berlin HeidelbergSpringerMartin Riedmiller. Neural fitted q iteration -first experiences with a data efficient neural rein- forcement learning method. In João Gama, Rui Camacho, Pavel B. Brazdil, Alípio Mário Jorge, and Luís Torgo (eds.), Machine Learning: ECML 2005, pp. 317-328, Berlin, Heidelberg, 2005b. Springer Berlin Heidelberg. ISBN 978-3-540-31692-3. The convergence rate of neural networks for learned functions of different frequencies. David Basri Ronen, Yoni Jacobs, Shira Kasten, Kritchman, Advances in Neural Information Processing Systems. Curran Associates, Inc32Basri Ronen, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Pro- cessing Systems, volume 32. Curran Associates, Inc., 2019. Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli, arXiv:2006.07710The pitfalls of simplicity bias in neural networks. arXiv preprintHarshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. arXiv preprint arXiv:2006.07710, 2020. 
Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, Animesh Garg, arXiv:2010.09163Deep dense architectures in reinforcement learning. 2arXiv preprintSamarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, and Animesh Garg. D2rl: Deep dense architectures in reinforcement learning. arXiv preprint arXiv:2010.09163, 2020. Implicit neural representations with periodic activation functions. Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, Gordon Wetzstein, Advances in Neural Information Processing Systems. 33Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Im- plicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33, 2020. Mountain car software. R Sutton, R Sutton. Mountain car software, December 2000. URL http://www.cs.ualberta.ca/ sutton/MountainCar/MountainCar.html. Learning to predict by the methods of temporal differences. S Richard, Sutton, 10.1007/BF00115009Mach. Learn. 31Richard S Sutton. Learning to predict by the methods of temporal differences. Mach. Learn., 3(1): 9-44, August 1988. ISSN 0885-6125, 1573-0565. doi: 10.1007/BF00115009. Reinforcement Learning: An Introduction. S Richard, Andrew G Sutton, Barto, MIT PressISBN 9780262352703Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, October 2018. ISBN 9780262352703. Fourier features let networks learn high frequency functions in low dimensional domains. Matthew Tancik, P Pratul, Ben Srinivasan, Sara Mildenhall, Nithin Fridovich-Keil, Utkarsh Raghavan, Ravi Singhal, Jonathan T Ramamoorthi, Ren Barron, Ng, NeurIPSMatthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let net- works learn high frequency functions in low dimensional domains. NeurIPS, 2020. . 
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin RiedmillerDeepmind control suiteYuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Bud- den, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin Ried- miller. Deepmind control suite. January 2018. Practical issues in temporal difference learning. Gerald Tesauro, 1049-5258Adv. Neural Inf. Process. Syst. 4Gerald Tesauro. Practical issues in temporal difference learning. Adv. Neural Inf. Process. Syst., 4, 1991. ISSN 1049-5258. Asynchronous stochastic approximation and q-learning. N John, Tsitsiklis, 10.1007/BF00993306Mach. Learn. 163John N Tsitsiklis. Asynchronous stochastic approximation and q-learning. Mach. Learn., 16(3): 185-202, September 1994. ISSN 0885-6125, 1573-0565. doi: 10.1007/BF00993306. Q-learning. Jch Christopher, Peter Watkins, Dayan, Machine learning. 83-4Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992. Learning from delayed rewards. Christopher John Cornish Hellaby Watkins, Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. 1989. Tensor programs i: Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Greg Yang, Greg Yang. Tensor programs i: Wide feedforward or recurrent neural networks of any architecture are gaussian processes. October 2019. Tensor programs ii: Neural tangent kernel for any architecture. Greg Yang, Greg Yang. Tensor programs ii: Neural tangent kernel for any architecture. June 2020. Tensor programs iib: Architectural universality of neural tangent kernel training dynamics. Greg Yang, Etai Littwin, Proceedings of the 38th International Conference on Machine Learning. Marina Meila and Tong Zhangthe 38th International Conference on Machine LearningPMLR139Greg Yang and Etai Littwin. 
Tensor programs iib: Architectural universality of neural tangent kernel training dynamics. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11762-11772. PMLR, 2021. A fine-grained spectral perspective on neural networks. Greg Yang, Hadi Salman, arXiv:1907.10599arXiv preprintGreg Yang and Hadi Salman. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019. A la carte-learning fast kernels. Zichao Yang, Andrew Wilson, Alex Smola, Le Song, Artificial Intelligence and Statistics. Zichao Yang, Andrew Wilson, Alex Smola, and Le Song. A la carte-learning fast kernels. In Artificial Intelligence and Statistics, pp. 1098-1106, 2015. Soft actor-critic (sac) implementation in pytorch. Denis Yarats, Ilya Kostrikov, 2020Denis Yarats and Ilya Kostrikov. Soft actor-critic (sac) implementation in pytorch. https:// github.com/denisyarats/pytorch_sac, 2020. Mastering visual continuous control: Improved data-augmented reinforcement learning. Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto, arXiv:2107.09645Acrobot-swingup BoxarXiv preprintDenis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous con- trol: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. Acrobot-swingup Box(6,) . Box, 0.003Box(1,) 0.003 . Finger-Turn-Hard, Box, 0.00112Finger-turn-hard Box(12,) Box(2,) 0.001 . Hopper-Hop, Box, 0.00315Hopper-hop Box(15,) Box(4,) 0.003 . Cheetah-Run Box, 0.00117Cheetah-run Box(17,) Box(6,) 0.001 . Walker-Run, Box, 0.00124Walker-run Box(24,) Box(6,) 0.001 . Humanoid-Run Box, 0.00167Humanoid-run Box(67,) Box(21,) 0.001 . Quadruped-Run, Box, 0.000178Quadruped-run Box(78,) Box(12,) 0.0001 ARCHITECTURAL DETAILS For the state space problems, we parameterize the actor and the critic in the MLP baseline with three latent layers. 
For FFN (ours), we replace the first layer in the MLP a learned Fourier features layer (LFF). If d is the input dimension, both MLP and FFN have. 40 × d, 1024, 1024] as hidden dimension for each of their layers. Note that in the actor, d = observation dimension whereas in the critic, d = observation dimension + actor dimensionA.4 ARCHITECTURAL DETAILS For the state space problems, we parameterize the actor and the critic in the MLP baseline with three latent layers. For FFN (ours), we replace the first layer in the MLP a learned Fourier features layer (LFF). If d is the input dimension, both MLP and FFN have [40 × d, 1024, 1024] as hidden dimension for each of their layers. Note that in the actor, d = observation dimension whereas in the critic, d = observation dimension + actor dimension. 2021), and derive conv FFN by replacing the first convolutional layer with a 1x1 convolution using the sine function as non-linearity. We use the initialization scheme described in Equation 14 for the weights and biases. Yarats, We build upon DrQv2. We build upon DrQv2 (Yarats et al., 2021), and derive conv FFN by replacing the first convolutional layer with a 1x1 convolution using the sine function as non-linearity. We use the initialization scheme described in Equation 14 for the weights and biases. A.5 SUMMARY OF VARIOUS FOURIER FEATURES. A.5 SUMMARY OF VARIOUS FOURIER FEATURES Using Fourier features to correct the spectral-bias is a general technique that goes beyond a particular parameterization. Hence we present comparison betweenUsing Fourier features to correct the spectral-bias is a general technique that goes beyond a particular parameterization. Hence we present comparison between
[]
[ "Critical Higgs Mass and Temperature Dependence of Gauge Boson Masses in the SU(2) Gauge-Higgs Model", "Critical Higgs Mass and Temperature Dependence of Gauge Boson Masses in the SU(2) Gauge-Higgs Model" ]
[ "F Karsch ", "T Neuhaus \nFachbereich Physik\nBUGH Universität Wuppertal\nD-42097WuppertalGermany\n", "A Patkós \nInstitute of Physics\nEötvös University\nBudapestHungary\n", "J Rank \nSCRI\nFlorida State University\n32306-4052TallahasseeFLUSA\n", "\nFakultät für Physik\nUniversität Bielefeld\nP.O. Box 100131D-33501Bielefeld\n" ]
[ "Fachbereich Physik\nBUGH Universität Wuppertal\nD-42097WuppertalGermany", "Institute of Physics\nEötvös University\nBudapestHungary", "SCRI\nFlorida State University\n32306-4052TallahasseeFLUSA", "Fakultät für Physik\nUniversität Bielefeld\nP.O. Box 100131D-33501Bielefeld" ]
[]
We study the effective 3-D SU(2) Gauge-Higgs model at finite temperature for Higgs-masses in the range from 60 GeV up to 100 GeV. The first order electroweak phase transition weakens with increasing Higgs-mass and terminates at a critical end-point. For Higgs-mass values larger than about mH,c = 75.4(6) GeV the thermodynamic signature of the transition is described by a crossover. Close to this Higgs-mass value we investigate the vector boson propagator in Landau gauge. The calculated W-boson screening masses are compared with predictions based on gap equations.
10.1016/s0920-5632(96)00736-0
[ "https://arxiv.org/pdf/hep-lat/9608087v1.pdf" ]
17,097,160
hep-lat/9608087
21dda45b92f087ee1e8ab029112cd043b6f9d825
Critical Higgs Mass and Temperature Dependence of Gauge Boson Masses in the SU(2) Gauge-Higgs Model

August 1996, arXiv:hep-lat/9608087v1

F. Karsch, T. Neuhaus, A. Patkós, J. Rank
Fachbereich Physik, BUGH Universität Wuppertal, D-42097 Wuppertal, Germany
Institute of Physics, Eötvös University, Budapest, Hungary
SCRI, Florida State University, Tallahassee, FL 32306-4052, USA
Fakultät für Physik, Universität Bielefeld, P.O. Box 100131, D-33501 Bielefeld

We study the effective 3-D SU(2) Gauge-Higgs model at finite temperature for Higgs-masses in the range from 60 GeV up to 100 GeV. The first order electroweak phase transition weakens with increasing Higgs-mass and terminates at a critical end-point. For Higgs-mass values larger than about m_{H,c} = 75.4(6) GeV the thermodynamic signature of the transition is described by a crossover. Close to this Higgs-mass value we investigate the vector boson propagator in Landau gauge. The calculated W-boson screening masses are compared with predictions based on gap equations.

Introduction

The standard model of electroweak interactions predicts the existence of a phase transition between a low-temperature symmetry-broken and a high-temperature symmetric phase [1]. Its thermodynamic properties lead to cosmological consequences. One might hope that the baryon asymmetry can be generated at the electroweak phase transition, if the transition is strongly first order. For values of the zero-temperature Higgs-mass m_H ≤ 70 GeV the phase transition is of first order [2,3]. As the Higgs-mass is increased further, the thermodynamic singularity at the electroweak phase transition weakens. It is even possible that for large values of the Higgs-mass the phase transition loses its nonanalytical structure and is described by a crossover [4]. In the strict sense the electroweak phase transition would then cease to exist.
The exact determination of the critical Higgs-mass value m_{H,c}, at which the electroweak phase transition changes from first order to a crossover, is important in view of its implications for the standard model. Another aspect of the electroweak theory is the non-vanishing magnetic screening mass. It controls the infrared behavior of the theory at high temperatures and influences the nature of the phase transition itself.

* Talk presented at the Lattice 96 conference by J.R.

The 3-D SU(2) Gauge-Higgs model on the lattice

This work is a continuation of earlier studies and for details on the formulation of the theory we refer to [5]. Here we recall the simulated 3-D action functional:

S^{3D}_{lat} = \frac{\beta}{2} \sum_P \mathrm{Tr}\, U_P(x) + \frac{1}{2} \sum_{x,i} \mathrm{Tr}\, \Phi^\dagger_x U_{x,i} \Phi_{x+\hat{i}} - \frac{1}{2\kappa} \sum_x \tfrac{1}{2} \mathrm{Tr}\, \Phi^\dagger_x \Phi_x - \frac{\lambda_3}{24} \sum_x \left( \tfrac{1}{2} \mathrm{Tr}\, \Phi^\dagger_x \Phi_x \right)^2 .   (1)

The relationship of the dimensionless lattice couplings β, λ_3, κ to the couplings of the T = 0 SU(2) Gauge-Higgs system can be found in [5]. Our simulation is performed at β = 9.0 and at 5 values of λ_3.

Determination of the critical Higgs-mass

To illustrate the critical behavior as a function of λ_3, or the Higgs-mass, we display in figure 1 the maxima of the Φ²-susceptibility, which is given by

\chi_{\Phi^2} = V \left( \langle (\Phi^\dagger\Phi)^2 \rangle - \langle \Phi^\dagger\Phi \rangle^2 \right) .   (2)

The determination of the critical Higgs-mass value relies on the analysis of Fisher or Lee-Yang zeroes [6] in the crossover region of the theory. The partition function Z is analytically continued into the complex plane as a function of the complex hopping parameter κ. Denoting with z_0 the lowest zero of Z, i.e. the zero in κ with smallest length, we expect in the vicinity of the critical end-point the scaling law

\mathrm{Im}(z_0) = C L^{-1/\nu} + R(\lambda_3) .   (3)

Such a scaling behavior can also be observed for Lee-Yang zeroes in the high temperature phase of the Ising model. Our strategy to localize the end-point then is to determine the value of λ_3 at which the regular contribution R to the scaling law vanishes: R(λ_{3,c}) = 0.
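At fixed exponent ν, the scaling law Im(z_0) = C L^{-1/ν} + R(λ_3) is linear in the two unknowns (C, R), so each fit reduces to a 2x2 normal-equation solve. The sketch below illustrates this with synthetic data; the exponent value and the generated numbers are illustrative assumptions, not results from the paper.

```python
def fit_scaling_law(Ls, ys, nu):
    """Least-squares fit of Im(z0) = C * L**(-1/nu) + R at fixed nu.

    Linear in (C, R): solve the 2x2 normal equations directly.
    """
    fs = [L ** (-1.0 / nu) for L in Ls]
    n = len(Ls)
    Sff = sum(f * f for f in fs)
    Sf = sum(fs)
    Sfy = sum(f * y for f, y in zip(fs, ys))
    Sy = sum(ys)
    det = Sff * n - Sf * Sf
    C = (Sfy * n - Sf * Sy) / det
    R = (Sff * Sy - Sf * Sfy) / det
    return C, R

# Synthetic check: noiseless data with C = 0.8, R = 0.01 is recovered exactly.
nu = 0.63                      # illustrative exponent, not the paper's fitted value
Ls = [8, 12, 16, 24, 32, 48]   # lattice sizes as studied
ys = [0.8 * L ** (-1.0 / nu) + 0.01 for L in Ls]
C, R = fit_scaling_law(Ls, ys, nu)
```

Repeating such fits at each simulated λ_3 yields R(λ_3), whose zero crossing locates the end-point λ_{3,c}.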
In figure 2 we display ln Im(z_0) vs. ln L. The solid curves in figure 2 correspond to fits with the scaling law.

Gauge boson masses in Landau gauge

The W-boson mass is determined from the W-boson propagator in Landau gauge, |∂_μ A_μ(x)|² = 0. For a detailed discussion of the implementation of the Landau gauge on a lattice see [5] and references therein. We investigate the W-boson propagator at λ_3 = 0.291275, which is very close to the critical Higgs-mass. Simulations have been performed on a 16² × 32 lattice. The propagators have been analyzed in the same way as discussed in [5]. Our results are shown in figure 4. In the high temperature phase the W-boson propagator stays constant. A fit to the data for κ ≤ κ_c, the full triangles in the figure, yields m_W(κ ≤ κ_c) = 0.161(3). It is worthwhile to note that in pure SU(2) gauge theory the magnetic screening mass has a value 0.165(12), shown as the horizontal full and dotted lines in figure 4. In the symmetry-broken phase the mass increases rapidly. In this region the data are very well described by the ansatz

m_W = 0.161 + a (\kappa - \kappa_c)^{\beta} , \quad \kappa \ge \kappa_c .   (4)

The fit to the full circles in the figure results in a value for the exponent β of β ≈ 0.4. We are now able to compare our results with predictions based on gap equations [7]. Using similar parameters in the gap equations as in our study we observe a qualitative agreement, shown as the dashed curve in the figure.

Summary

We have exploited the scaling behavior of partition function zeroes in the vicinity of the critical end-point in order to determine the critical Higgs-mass m_{H,c}. The electroweak phase transition loses its first order character at a Higgs-mass value of about m_{H,c} = 75.4(6) GeV. Close to the critical Higgs-mass we have measured the W-boson propagator in Landau gauge. W-boson screening masses remain constant in the high temperature symmetric phase and increase in the low temperature Higgs phase.
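The fitted behaviour of the screening mass, Eq. (4), can be written as a small helper: constant in the symmetric phase, power-law growth in the broken phase. The exponent β ≈ 0.4 is the fitted value quoted above; the amplitude a and the value of κ_c used in the example are not quoted in the text and are purely illustrative.

```python
def w_boson_mass(kappa, kappa_c, a, beta, m_sym=0.161):
    """Screening-mass ansatz of Eq. (4), in lattice units.

    Below kappa_c the mass stays at its symmetric-phase value m_sym;
    above kappa_c it grows as a power law in (kappa - kappa_c).
    """
    if kappa <= kappa_c:
        return m_sym
    return m_sym + a * (kappa - kappa_c) ** beta

# Illustrative evaluation: kappa_c and amplitude a are hypothetical numbers.
kappa_c = 0.17
m_at_crossing = w_boson_mass(kappa_c, kappa_c, a=1.0, beta=0.4)
m_in_higgs = w_boson_mass(kappa_c + 0.01, kappa_c, a=1.0, beta=0.4)
```

Note that with β < 1 the ansatz is continuous at κ_c but rises steeply just above it, matching the rapid increase seen in the broken phase.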
The agreement of these data with predictions based on gap equations is of qualitative nature.

The five λ_3-values of the simulation are λ_3 = 0.170190, 0.291275, 0.313860, 0.401087 and 0.498579. The hopping parameter is varied across the phase transition. These λ_3-values correspond to Higgs-mass values of m_H = 59.2, 76.1, 78.9, 88.7 and 98.5 GeV, if the 1-loop parameter mapping of [5] is used. For possible correction terms inducing small corrections to the here cited Higgs-mass values we refer to a forthcoming publication.

Figure 1. Maxima of the Φ²-susceptibility.

We investigate lattice sizes ranging from L = 8 up to L = 48. The labeling of the data in figure 1 corresponds to the one used in figure 2. The data at λ_3 = 0.170190 are consistent with a first order transition, the dotted line with slope 3 in the figure. At λ_3 = 0.313860, 0.401087 and 0.498579 a crossover is observed, while at λ_3 = 0.291275 the behavior is almost critical, the solid scaling curve in figure 2.

Figure 2. Imaginary parts of the lowest zeroes of the partition function.

In figure 3 the constant R(λ_3) is displayed as a function of λ_3. The data are consistent with a linear dependence on λ_3 and the fit results in λ_{3,c} = 0.2853(48), corresponding to a critical Higgs-mass value of approximately m_{H,c} = 75.4(6) GeV.

Figure 3. The determination of λ_{3,c}.

Figure 4. W-boson screening masses, calculated on a 16² × 32 lattice at λ_3 = 0.291275. The full curve describes the fit to the data. The dashed curve represents the result obtained from gap equations.

References
[1] A.D. Linde and D.A. Kirzhnits, Phys. Lett. B72 (1972) 471.
[2] Z. Fodor et al., Phys. Lett. B380 (1996) 113.
[3] M. Gürtler et al., Detailed Phase Transition Study at M_H ≤ 70 GeV in a 3-dimensional SU(2)-Higgs Model, these proceedings.
[4] K. Kajantie et al., Is there a Hot Electroweak Phase Transition at M_H ≈ M_W?, hep-lat/9605288.
[5] F. Karsch et al., Gauge Boson Masses in the 3-d, SU(2) Gauge-Higgs Model, hep-lat/9603004 (to appear in Nucl. Phys. B).
[6] C.N. Yang, T.D. Lee, Phys. Rev. 87 (1952) 404.
[7] W. Buchmüller and O. Philipsen, Nucl. Phys. B443 (1995) 47.
[]
[ "A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation", "A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation" ]
[ "Jinxia Zhang \nMinistry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina\n\nSoutheast University Shenzhen Research Institute\n518057ShenzhenGuangdongChina\n", "Xinyi Chen \nMinistry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina\n", "Haikun Wei \nMinistry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina\n", "Kanjian Zhang \nMinistry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina\n" ]
[ "Ministry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina", "Southeast University Shenzhen Research Institute\n518057ShenzhenGuangdongChina", "Ministry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina", "Ministry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina", "Ministry of Education\nSchool of Automation\nKey Laboratory of Measurement and Control of CSE\nSoutheast University\n210096NanjingJiangsuChina" ]
[]
Nowadays, the rapid development of photovoltaic (PV) power stations requires increasingly reliable maintenance and fault diagnosis of PV modules in the field. Due to their effectiveness, convolutional neural networks (CNNs) have been widely used for automatic defect detection of PV cells. However, the parameters of these CNN-based models are very large, requiring stringent hardware resources, which makes them difficult to apply in actual industrial projects. To solve these problems, we propose a novel lightweight high-performance model for automatic defect detection of PV cells in electroluminescence (EL) images based on neural architecture search and knowledge distillation. To auto-design an effective lightweight model, we introduce neural architecture search to the field of PV cell defect classification for the first time. Since a defect can be of any size, we design a suitable network search structure to better exploit the multi-scale characteristic. To improve the overall performance of the searched lightweight model, we further transfer the knowledge learned by an existing pre-trained large-scale model based on knowledge distillation. Different kinds of knowledge are exploited and transferred, including attention information, feature information, logit information and task-oriented information. Experiments have demonstrated that the proposed model achieves state-of-the-art performance on the public PV cell dataset of EL images under online data augmentation, with an accuracy of 91.74% and 1.85M parameters. The proposed lightweight high-performance model can be easily deployed to the end devices of actual industrial projects while retaining its accuracy.
10.48550/arxiv.2302.07455
[ "https://export.arxiv.org/pdf/2302.07455v1.pdf" ]
256,868,776
2302.07455
e826647b6791766706072ccf8f2cb699c8f67154
A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation

Jinxia Zhang (Key Laboratory of Measurement and Control of CSE, Ministry of Education, School of Automation, Southeast University, Nanjing 210096, Jiangsu, China; Southeast University Shenzhen Research Institute, Shenzhen 518057, Guangdong, China), Xinyi Chen, Haikun Wei, Kanjian Zhang (Key Laboratory of Measurement and Control of CSE, Ministry of Education, School of Automation, Southeast University, Nanjing 210096, Jiangsu, China)

Keywords: Defect detection; Photovoltaic cells; Electroluminescence; Deep learning; Neural architecture search; Knowledge distillation

Nowadays, the rapid development of photovoltaic (PV) power stations requires increasingly reliable maintenance and fault diagnosis of PV modules in the field. Due to their effectiveness, convolutional neural networks (CNNs) have been widely used for automatic defect detection of PV cells. However, the parameters of these CNN-based models are very large, requiring stringent hardware resources, which makes them difficult to apply in actual industrial projects. To solve these problems, we propose a novel lightweight high-performance model for automatic defect detection of PV cells in electroluminescence (EL) images based on neural architecture search and knowledge distillation. To auto-design an effective lightweight model, we introduce neural architecture search to the field of PV cell defect classification for the first time.
Since defects can be of any size, we design a proper search structure for the network to better exploit their multi-scale characteristic. To improve the overall performance of the searched lightweight model, we further transfer the knowledge learned by an existing pre-trained large-scale model based on knowledge distillation. Different kinds of knowledge are exploited and transferred, including attention information, feature information, logit information and task-oriented information. Experiments have demonstrated that the proposed model achieves the state-of-the-art performance on the public PV cell dataset of EL images under online data augmentation, with an accuracy of 91.74% and 1.85M parameters. The proposed lightweight high-performance model can be easily deployed to the end devices of actual industrial projects while retaining its accuracy.

Introduction

The lifetime of photovoltaic (PV) modules is essential for power supply and the sustainable development of solar technology. However, PV cells are easily affected by various external factors. During the manufacturing process, minor operational errors may result in module damage. In addition, vibration and shock during transportation and installation may also cause module breakage. Defects such as cracks, solder corrosion and cell interconnect breakage can make PV modules unusable, and microcracks that are hard to observe will potentially affect future output power and lifetime [1,2]. These defects in PV cells may cause module failure during operation, which can lead to power reduction and even safety problems for the whole system [3]. The current-voltage (I-V) curve is used for the detection of defective PV modules. Changes in I-V characteristics can reflect heavily degraded modules. However, tiny cracks hardly affect I-V characteristics and are thus difficult to identify. These microcracks carry the potential for separation and degradation, which can seriously affect future use [4].
As described in some research, microcracks can cause power attenuation, with losses varying from 0.9% to 42.8%, and may cause the hot spot effect [5,6]. Besides the I-V curve, infrared thermal (IRT) imaging [7] is another technology which can be used to detect defects. The temperature of PV cells with defects is significantly higher than that of the surrounding cells. However, the hot spots of PV modules are not necessarily caused by defects. Other factors like object occlusion can also lead to abnormal detection results. Also, microcracks which do not yet affect power efficiency cannot be recognized in IRT images with a relatively low resolution. Due to its high imaging resolution, electroluminescence (EL) imaging [8] has become one of the most commonly used methods for defect detection of PV modules. EL imaging is a non-destructive technology with high imaging resolution which can be used to detect microcracks [9]. In EL images, cracks and other defects in defective PV cells appear as dark gray lines and areas. In the early stage, traditional methods based on manual features were proposed to detect defects in EL images. These methods depend on large amounts of manually designed experiments and their performance is limited. Because of the strong feature-capturing ability of the convolutional neural network (CNN), methods using deep learning have gradually become the mainstream for detecting defects in EL images. However, while CNNs have greatly improved detection accuracy, they also require more time and hardware resources, making them hard to deploy on the end devices of practical applications. In order to meet the requirements of both accuracy and speed of defect detection in the industrial field, a lightweight and efficient detection network is required. A few works [10,11,12] have proposed lightweight CNN-based methods to detect defective PV cells in EL images.
These lightweight CNN-based methods are all based on manual design, which requires a lot of experiments to find a suitable network structure. To obtain a lightweight structure for practical application with much less manual work, we introduce neural architecture search (NAS) into the defective PV cell classification task; ours is the first method using NAS to automatically design networks in the field of PV defect detection. Aiming at the automatic design of network architectures, NAS can reduce manual intervention and make better use of computing resources in an automated manner. Since defects can be of any size, we propose a search space which can enhance features at different scales, obtaining a lightweight network architecture that can better extract multi-scale features. To make better use of prior knowledge, knowledge distillation is introduced to learn the priors obtained by an existing pre-trained large-scale model and improve the performance of the searched lightweight network. Different kinds of knowledge are transferred, including attention information, feature information, logit information and task-oriented information. The obtained lightweight network has a high performance, even outperforming the existing large-scale teacher model. The contributions of the proposed method can be summarized as follows:

1) We propose a lightweight network structure for the detection of defective PV cells with a high accuracy of 91.74% and a size of 1.85M parameters, achieving the state-of-the-art performance on the public PV cell dataset [13] of EL images under online data augmentation. The proposed model also has a high accuracy on defective PV cells of up to 94.26% on our private dataset.

2) We introduce NAS to the field of PV cell defect detection for automatic lightweight network design, which reduces the workload of manual design. To detect defects of any size, the search space is designed by building the multi-scale characteristic into the network architecture.
3) To make full use of the priors already learned by an existing large-scale network, we utilize knowledge distillation to transfer various kinds of prior knowledge into our model. We incorporate attention information, feature information, logit information and task-oriented information into the knowledge transfer process, and the experiments prove the effectiveness of knowledge distillation in enhancing the ability to recognize defective PV cells.

Related work

Traditional methods

Some research uses traditional image processing methods to detect defects in EL images. These methods usually rely on manually selected features. Dhimish et al. [14,15] used the bit-by-bit OR gate method to process EL images and enhance crack images, but the detection accuracy and other results were not given. Tsai et al. [16] presented an independent component analysis technique, but finger cracks that have little effect on crack detection were identified as other cracks. Anwar et al. [17] proposed an improved image segmentation method based on an anisotropic diffusion filter, and a support vector machine (SVM) was employed to detect microcrack defects on medium-sized datasets, but this method requires a higher level of pre-processing. Su et al. [18] proposed a new feature descriptor, which combines central pixel gradient information with a central-symmetric local binary pattern to obtain more recognizable defect features under uneven background interference. In traditional image processing methods, edge gradient information is often used to describe the features. Due to the similarity between the edge gradient changes of defects and those of the grain under complex backgrounds, these methods are easily disturbed when distinguishing defects from background grains. At the same time, they tend to be applied only on small datasets, and their generalization ability is not strong.
Deep learning based methods

Due to the popularity of deep learning, surface defect detection of PV cells based on deep learning has become a research hotspot in this field. The CNN is becoming a widely used detection method because of its strong feature extraction ability. Sun et al. [19] proposed a crack classification network based on LeNet5 [20], which can classify four kinds of crack defects. Bartler et al. [21] designed an improved classification network based on the VGG16 [22] structure and explored the effects of several oversampling and data expansion methods on performance improvement. Deitsch et al. [23] conducted two defect classification methods based on VGG19 and SVM, and contributed a PV cell dataset of EL images. Shou et al. [24] presented an unsupervised defect detection method based on generative adversarial networks (GAN), but the stability of the network needs further discussion. The studies by Liu et al. [25] and Su et al. [26] both improved the region proposal network on the basis of Faster-RCNN and realized the detection of small cracks in PV cells. These studies are based on existing networks, using transfer learning or improvements on some layers and parameters. Compared with traditional methods using EL images, deep learning methods have better generalization ability and higher accuracy.

Lightweight methods

Most of the existing deep learning models are large, requiring extensive hardware for field deployment in PV cell defect detection. To solve this problem, a few researchers have proposed manually designed lightweight networks. Karimi et al. [10] designed a 4-layer CNN structure for the classification of 3 kinds of defects. Tang et al. [11] designed a 9-layer CNN structure and improved its performance with a mixture of GAN-based generation and traditional data augmentation. Inspired by VGG11, a 9-layer CNN structure was designed by Akram et al. [12] and validated on the public PV cell dataset [13]. Wang et al.
[27] utilized octave convolution to build a lightweight network with high inference speed. All these studies are based on manual network structure design, which is difficult and requires a large number of experiments. Besides, manual structure design depends heavily on the existing data and is less universal. To reduce the manual workload in model design, we introduce neural architecture search (NAS) into the task of PV cell classification for effective automatic architecture design. For better model training, we transfer different kinds of prior knowledge already learned by a large-scale model based on knowledge distillation. In this process, attention information, feature information, logit information and task-oriented information are exploited and transferred to enhance the performance of the searched lightweight model.

Methodology

An effective lightweight network for the detection of defective PV cells is proposed in this section, built by NAS and knowledge transfer. To automatically design the lightweight network, NAS is introduced to the field of PV cell defect detection for the first time. To detect defects of any size, the network architecture search space is designed with a multi-scale characteristic. Then a variety of prior knowledge is transferred by knowledge distillation to make full use of the priors already learned by the large-scale network. An illustration of our method is depicted in Figure 1.

Automatic lightweight network design

We employ a continuous gradient-based NAS framework, i.e. DARTS [28], to design the lightweight network automatically for PV cell defect detection, since DARTS has a fast search speed. We further design a suitable search space by considering the visual multi-scale characteristic of PV cell defects. The defects of PV cells can be of any size, and tiny microcrack detection in particular is a difficult issue. Considering the multi-scale characteristic of the defects, the search space is designed to enhance features of different sizes.
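The stacking of normal and reduction cells can be sketched as a simple resolution plan. The alternating normal/reduction order and the 150×150 input in the sketch below are our assumptions for illustration only; the actual arrangement is fixed by Figure 2 of the paper.

```python
# Sketch of the cell stacking described above. We ASSUME a simple alternating
# normal/reduction order ending with a normal cell; the real layout is given
# by Figure 2. Reduction cells halve the spatial resolution.

def stack_plan(n_normal=5, n_reduction=4, in_size=150):
    """Return a list of (cell_type, spatial_size) for the stacked cells."""
    types = ['normal' if i % 2 == 0 else 'reduction'
             for i in range(n_normal + n_reduction)]
    plan, size = [], in_size
    for t in types:
        if t == 'reduction':
            size //= 2          # down-sampling in reduction cells
        plan.append((t, size))
    return plan
```

Under this assumed layout, a 150×150 input ends at a 9×9 feature map after the four halvings.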
The employed search space for the lightweight network is mainly stacked from two kinds of cell structures, called the normal cell and the reduction cell, based on the idea of reusing blocks as in ResNet [29]. The possible connection types between nodes in the cell structure are chosen from candidate operations. A cell structure stacked in the network architecture can be considered a kind of convolutional operation. The normal cell is set to maintain the size of the input, while the reduction cell performs down-sampling. To obtain multi-scale information, the search space for the lightweight network architecture is designed by stacking five normal cells and four reduction cells. The designed search space for the PV cell defect recognition task is shown in Figure 2. Each cell fuses two features with different scales from the previous two cells, and the first normal cell takes the same feature twice as its two inputs. The details of the lightweight network are presented in Table 1. Here the first three reduction cells perform down-sampling and channel expansion, while the channel number of the last reduction cell remains the same. The proposed network finally classifies the input PV cell as functional or defective.

[Figure 3: (a) Cell search space. (b) Final cell structure.]

The internal structure of the cell is a directed acyclic graph containing N nodes, where each node represents a computed temporary feature map as in Eq. (1). Let x^{(j)} denote the computed temporary feature at the j-th node. Each node is computed from the input feature maps of the previous nodes:

x^{(j)} = \sum_{i<j} o^{(i,j)}(x^{(i)})    (1)

Figure 3 shows an example of the k-th cell with 4 internal nodes. The different operations o^{(i,j)}, denoted as colored lines in Figure 3.(a), represent the candidate operations between the i-th node and the j-th node. O is the set of candidate operations, which are listed in Table 2. The candidate operations can mainly be divided into convolutional and other kinds. The convolutional candidate operations consist of depthwise separable convolution (SepConv) and dilated depthwise separable convolution (DilConv), each with two optional values of the stride. In normal cells, the stride of convolutional layers is set to 1 to retain the size of the input, but in reduction cells the stride is set to 2 for down-sampling. This difference in stride between the two kinds of cells results in different output feature sizes. The feature map computed by all operations from the i-th node to the j-th node is calculated as in Eq. (2). The search space becomes continuous through a softmax transformation over the candidate operations between each pair of nodes (i, j), where the weight of candidate operation o is denoted as \alpha_o^{(i,j)}:

\bar{o}^{(i,j)}(x) = \sum_{o \in O} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in O} \exp(\alpha_{o'}^{(i,j)})} \, o(x)    (2)

The search framework of DARTS focuses on learning a set of structure variables \alpha = \{\alpha_{normal}, \alpha_{reduction}\} which denote the weight of each candidate operation. The final structure is chosen as in Eq. (3):

o^{(i,j)} = \arg\max_{o \in O} \alpha_o^{(i,j)}    (3)

(Notes to Table 2: SepConv denotes depthwise separable convolution; DilConv denotes dilated depthwise separable convolution; the stride is set to 1 in normal cells and to 2 in reduction cells.)

The operations with the highest structure weights are the final choices in the network. As shown in Figure 3.(b), the two operations of each node with the highest structure weights are selected to form the final cell structure. The learning process of \alpha is treated as a bilevel optimization problem, with the structure \alpha as the upper-level variable and the weights \omega (the weights of the convolution filters) as the lower-level variable.
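Equations (2) and (3) can be illustrated with a minimal DARTS-style mixed edge. The three candidate operations below are toy stand-ins for the SepConv/DilConv/pooling candidates of Table 2, not the paper's exact operation set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of a DARTS mixed edge: the edge output is a softmax-weighted
# sum of candidate operations (Eq. (2)); the derived structure keeps only the
# operation with the largest architecture weight (Eq. (3)).

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip-connect
            nn.Conv2d(channels, channels, 3, padding=1),  # conv stand-in
            nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
        ])
        # one architecture weight alpha_o per candidate operation
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)                  # Eq. (2) weights
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def derive(self):
        return int(self.alpha.argmax())                   # Eq. (3)
```

Because the softmax mixture is differentiable in `alpha`, the architecture weights can be trained by plain gradient descent alongside the convolution filters.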
Because of the continuity of the search space, the gradient-based strategy optimizes the final structure \alpha^* by minimizing the validation loss, and simultaneously optimizes the weights \omega by minimizing the training loss [28]:

\min_\alpha L_{val}(\omega^*(\alpha), \alpha) \quad s.t. \quad \omega^*(\alpha) = \arg\min_\omega L_{train}(\omega, \alpha)    (4)

Knowledge transfer

To make full use of the priors already learned by an existing large-scale network and to further improve the performance of the lightweight architecture obtained by the search process, several kinds of knowledge priors are exploited and transferred from the large network to the lightweight network. Knowledge distillation is one of the most effective methods for model compression. It enables the transfer of knowledge from a teacher model to a student model. Networks that cannot directly use the prior knowledge in a pre-trained model can improve their performance by learning the knowledge of the teacher network. Since the lightweight network can only be trained from scratch, knowledge distillation allows these priors to be utilized for better training. Inspired by different knowledge distillation works [30,31,32,33,34], four different knowledge priors are transferred: attention information, feature information, logit information and task-oriented information, in order to enhance the distillation effect for the PV cell defect detection task. The details of the knowledge distillation process are shown in Figure 4. Transformation functions are used to extract useful information. The shapes of features in the teacher model and the student model are usually different, so the transformations also match the shapes of feature pairs in the computation. The attention map of each feature is introduced into the transfer process. By comparing the distance between transformed features, the feature information is also utilized. The output logit provides information about the logit prediction, which is also added into the knowledge transfer.
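In practice the bilevel problem in Eq. (4) is approximated by alternating gradient steps. The sketch below shows one first-order alternation (a simplification of the DARTS update; the model, batches and learning rates are placeholders, not the paper's settings).

```python
import torch

# One first-order alternation for Eq. (4): a step on the architecture
# variables alpha against the validation loss, then a step on the weights
# omega against the training loss. All arguments are placeholders.

def search_step(model, alpha_params, weight_params, train_batch, val_batch,
                loss_fn, lr_alpha=3e-4, lr_w=2.5e-3):
    opt_alpha = torch.optim.Adam(alpha_params, lr=lr_alpha)
    opt_w = torch.optim.SGD(weight_params, lr=lr_w)

    # upper level: update alpha to reduce the validation loss
    xv, yv = val_batch
    opt_alpha.zero_grad()
    loss_fn(model(xv), yv).backward()
    opt_alpha.step()

    # lower level: update omega to reduce the training loss
    xt, yt = train_batch
    opt_w.zero_grad()
    loss_fn(model(xt), yt).backward()
    opt_w.step()
```

The second-order variant of DARTS instead differentiates through a virtual weight step; the first-order form above is the cheaper approximation.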
Convolutional layers and pooling layers are chosen as transformation functions to capture the task-oriented information from the original feature maps. The knowledge transfer paths are established before the feature map channels expand. Knowledge transfer is carried out through auxiliary classifiers at feature maps of different resolutions, ensuring that the learned knowledge contains both low-level and high-level information. Note that the auxiliary classifiers are used only in the distillation process and do not affect the inference stage. Let x_i represent the i-th input of m images in total. In total, N (in our method N = 5) feature maps are selected for the transfer process. The j-th feature map, at its own resolution, is denoted F_j(·). Attention information is provided by the spatial attention map. To get a spatial attention map from the corresponding feature, the mapping function A(·) over the channel dimension is applied, as illustrated in Eq. (5):

A(x) = \frac{1}{K} \sum_{k=1}^{K} |x_k|^p    (5)

In this way, the attention map reflects the neuron activation spatially. When the feature x has K channels, the attention map A(x) is the average of the absolute values of the feature map across the channel dimension, with each value raised to the power of p; the operations are elementwise. The averaging and power operations make the attention map focus more on spatial locations with high activations, i.e., the more discriminative parts. The attention loss L_{attention} is defined in Eq. (6):

L_{attention} = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{N} L_{MSE}\big( A(T_j(F_j^t(x_i))), A(T_j(F_j^s(x_i))) \big)    (6)

The mean square error (MSE) is used to compute the distance between the attention maps of the teacher model and the student model, as shown in Eq. (6). Here the transformation function used to extract features is written as T_j(·). The superscripts t and s distinguish the teacher model and the student model.
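Equations (5) and (6) translate directly into a few lines of tensor code. A minimal sketch follows, assuming the transformation functions T_j have already been applied so that the teacher and student feature shapes match.

```python
import torch
import torch.nn.functional as F

# Spatial attention map of Eq. (5): average |activation|^p over the K
# channels, giving one map per image; Eq. (6) then compares teacher and
# student maps with a mean-square-error loss.

def attention_map(feat, p=2):
    # feat: (batch, K, H, W) -> (batch, H, W)
    return feat.abs().pow(p).mean(dim=1)

def attention_loss(feat_teacher, feat_student, p=2):
    return F.mse_loss(attention_map(feat_teacher, p),
                      attention_map(feat_student, p))
```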
Besides attention, feature maps themselves also contain important information for knowledge distillation to improve performance:

L_{feature} = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{N} L_{2\text{-}norm}\big( T_j(F_j^t(x_i)), T_j(F_j^s(x_i)) \big)    (7)

The feature distillation loss uses the L2-norm loss (L_{2-norm}) to compute the distance between each pair of features from teacher and student, as shown in Eq. (7). Inspired by task-oriented feature distillation [33], we extract information for the classification task by building auxiliary classifiers on features at different depths. The task loss function is presented in Eq. (8):

L_{task} = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{N} L_{CE}\big( c_j(T_j(F_j^s(x_i))), y_i \big)    (8)

In Eq. (8), the j-th auxiliary classifier is written as c_j(·). It returns the classification result as a vector. The cross entropy loss (L_{CE}) is used here to compare the prediction logits obtained by the auxiliary classifiers with the corresponding true label y. The logit distillation loss [30] is also added so that the student model learns the output label distribution from the teacher, as formulated in Eq. (9). By learning the logit information, the student model can make use of it for prediction:

L_{logit} = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{N} L_{KL}\big( c_j(T_j(F_j^s(x_i))), c_j(T_j(F_j^t(x_i))) \big)    (9)

In Eq. (9), L_{KL} refers to the KL (Kullback-Leibler) divergence, which measures the difference between two probability distributions. It brings the output of the student model close to that of the teacher model. The final loss function for the whole distillation process can be summarized as in Eq. (10), with hyper-parameters \alpha, \beta and \gamma balancing the proportion of each part:

L_{total} = \alpha \cdot L_{attention} + \beta \cdot L_{feature} + \gamma \cdot L_{logit} + L_{task}    (10)

Experimental results

The details of the internal structure and the performance of the lightweight network obtained by the designed search algorithm are presented in this section. Comparison results between the proposed lightweight network and different methods are listed.
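The four terms of Eq. (10) can be combined as in the sketch below, assuming the per-branch transformed features and auxiliary-classifier logits are already available for teacher (t) and student (s). The loss forms follow Eqs. (5)-(9), but details such as the reductions are our own choices.

```python
import torch
import torch.nn.functional as F

# Sketch of the combined distillation objective in Eq. (10). feats_* are
# lists of per-branch transformed feature maps, logits_* the corresponding
# auxiliary-classifier outputs; a, b, g play the roles of alpha, beta, gamma.

def total_loss(feats_t, feats_s, logits_t, logits_s, labels,
               a=1000.0, b=0.05, g=1.0, p=2):
    l_att = l_feat = l_logit = l_task = 0.0
    for ft, fs, zt, zs in zip(feats_t, feats_s, logits_t, logits_s):
        att_t = ft.abs().pow(p).mean(dim=1)            # Eq. (5)
        att_s = fs.abs().pow(p).mean(dim=1)
        l_att += F.mse_loss(att_t, att_s)              # Eq. (6)
        l_feat += (ft - fs).pow(2).sum().sqrt()        # Eq. (7), L2 norm
        l_task += F.cross_entropy(zs, labels)          # Eq. (8)
        l_logit += F.kl_div(F.log_softmax(zs, dim=-1), # Eq. (9)
                            F.softmax(zt, dim=-1), reduction='batchmean')
    return a * l_att + b * l_feat + g * l_logit + l_task
```

Only the student terms carry gradients in practice; the teacher features and logits are produced by the frozen pre-trained model.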
Furthermore, ablation experiments are conducted to prove the effectiveness of our proposed method.

Dataset and data augmentation

The dataset used is the public PV cell dataset contributed by the study [13]. There are 2624 EL images of PV cells with a resolution of 300×300 pixels, including both monocrystalline and polycrystalline types. The images in this public dataset are labeled as defective with a probability. We divide the samples into functional and defective ones with 0.5 as the threshold. 75% of the images, i.e. 1970 images, are randomly chosen as the train set, and the remaining 654 images form the test set. All the images are resized to 150×150 pixels. The specific division is shown in Table 3. The proportion of positive and negative samples is the same for the train set and the test set. The ratio of the monocrystalline and polycrystalline types is also fixed across the train set and the test set. For the lightweight network search process, the whole train set is further divided into a searching train set and a searching test set for NAS, with 50% each, as explained in Figure 5. Data augmentation obtains more representations from the original data without substantially adding data, improving the quality of the original data. It helps the model reduce overfitting and enhances robustness. The data augmentation operations include random horizontal flip, random vertical flip, random rotation within (−2°, 2°), random rotation within {0°, 90°, 180°, 270°} and random affine transformation.

Final searched model structure

The proposed lightweight network is stacked from normal cells and reduction cells. The final internal architectures of the two kinds of cells found by the search algorithm are shown in Figure 6.

Experimental parameters

In the process of architecture search, all the convolutional operations follow the order Rectified linear unit - Convolutional layer - Batch normalization (ReLU-Conv-BN).
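The ReLU-Conv-BN ordering just mentioned can be sketched as a small PyTorch module (a generic sketch of this common pattern, not the authors' code; kernel size and channel counts are illustrative).

```python
import torch
import torch.nn as nn

# Sketch of the ReLU-Conv-BN ordering used by the convolutional operations in
# the search process: activation first, then convolution, then batch norm.

class ReLUConvBN(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=3, stride=1):
        super().__init__()
        self.op = nn.Sequential(
            nn.ReLU(inplace=False),
            nn.Conv2d(c_in, c_out, kernel_size, stride=stride,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        return self.op(x)
```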
The cell structure consists of 4 nodes, with two inputs from the previous two cells and one output. The initial number of channels is set to 16. The network obtained by the search algorithm tends to choose skip-connects when searching for too long, a phenomenon called 'collapse' that results in poor performance [35]. This is due to the structure variables \alpha and the convolution filter parameters \omega in Eq. (4) competing with each other in the later optimization process. Early stopping is an effective way to suppress this phenomenon [35,36]. Hence the maximum search epoch is set to 50 and the number of skip-connects in each cell is limited to fewer than 2 to avoid deteriorating results. In knowledge transfer, VGG16 (the teacher model) and its auxiliary classifiers are trained in advance and frozen during distillation. The transformation functions and auxiliary classifiers chosen for both models are shown in Table 4. The shape of each pair of features is unified through the transformation function. The parameter p in Eq. (5) is set to 2. The hyper-parameters \alpha, \beta and \gamma in Eq. (10) are set to 1000, 0.05 and 1 respectively. Our lightweight model is trained for 200 epochs with a batch size of 32. The initial learning rate is set to 0.0025 with a weight decay of 7 × 10^{-3}, using the stochastic gradient descent (SGD) optimizer.

Model performance

In this subsection, we show the performance of the proposed network by quantitative evaluation and comparison with the teacher network and other studies. We also evaluate our network by assessing its performance on each category of PV cells. Implementation information is provided for application reference. To verify model generalization, we test our model on a private dataset, further demonstrating the effectiveness of the proposed model.
Performance comparison

Our proposed method is compared with 6 manually designed neural networks [13,19,21,10,11,12] and the teacher model on the public dataset [13] under the same augmentation over 200 epochs. ShuffleNet [37] and MobileNet [38] are also included in the experiments as general efficient light models. Several traditional feature extraction techniques are also evaluated. Open-source algorithms including HOG (histograms of oriented gradients), SIFT (scale-invariant feature transform) and SURF (speeded-up robust features) are fed to an RBF-kernel SVM classifier for comparison. These algorithms compare the information of the centre pixel and neighbourhood pixels to compute local key points. The parameters of the SVM are selected by grid search experiments. These models are modified to classify PV cells into two classes: functional or defective. Quantitative comparisons of test-set performance and model size are shown in Table 5. In Table 5.(a), the accuracy of the proposed model reaches 91.74%, exceeding the other methods. As can be seen, the proposed model reaches and even outperforms the level of the teacher model, by 1.22%. It also has far fewer parameters, so it can be deployed on practical end devices with fewer resources than classic large models. Balanced accuracy (B Acc) here means the average of the two recall values on defective PV cells and functional PV cells, for a fairer evaluation. The deep learning based methods tend to perform better than traditional feature extraction on this dataset. Table 5.(b) reports the accuracy on defective PV cells (Acc defective) and the accuracy on functional PV cells (Acc functional), reflecting the performance of recognizing each kind of PV cell. The ability to correctly recognize defective PV cells is the core function; it reaches 86.28% in our network and far exceeds the other methods. Compared with other manually designed models, our proposed network is automatically searched via the NAS algorithm with less manual labor.
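The balanced accuracy used above — the mean of the per-class recalls on defective and functional cells — can be computed as follows (a generic sketch of the metric, not the authors' evaluation code):

```python
# Balanced accuracy: average the recall of each class, so that the majority
# (functional) class cannot dominate the score on an imbalanced test set.

def balanced_accuracy(y_true, y_pred):
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

For example, with true labels [1, 1, 0, 0] and predictions [1, 0, 0, 0] the recalls are 0.5 and 1.0, giving a balanced accuracy of 0.75.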
Furthermore, the obtained network achieves the best comprehensive results with a relatively light architecture, which proves the effectiveness of the proposed method.

Performance on specific categories

The performance of the proposed model on monocrystalline and polycrystalline PV cells separately is provided to further evaluate the model, as shown in Table 6 and Table 7. On both categories of PV cells, our proposed model reaches the best comprehensive level. On monocrystalline PV cells, every metric of ours ranks first, as described in Table 6. With regard to the polycrystalline type, which is more difficult to deal with, our model also exceeds the others by far in Table 7, demonstrating outstanding detection performance on challenging images. Traditional extraction methods tend to perform worse on polycrystalline PV cells, because they focus on the high-frequency components of images, and it is hard for them to distinguish between cracks and background noise.

Efficiency comparison

For end device deployment, both the model size and the computation need to be considered comprehensively. To test efficiency, the proposed model is evaluated on a CPU platform (Intel i9-10980XE, 24.75M cache, 3 GHz). An efficiency comparison of different models, including parameters, FLOPs (floating point operations), inference latency and memory cost, is displayed in Table 8. For better comparison, the balanced accuracy and recall of each model are also included in this table. Compared with the second-best network, VGG16 (the teacher network), our model saves nearly 180MB of memory and 5.6G FLOPs. Traditional methods run fast but their performance is poor. Compared with Adapted VGG19 and VGG16 (the teacher network), our proposed model requires far fewer parameters and FLOPs and reaches a comparable speed. At this speed, the proposed model can diagnose 29 samples per second.
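Inference latency of the kind reported in Table 8 can be measured with a simple timing loop. The sketch below is a generic measurement harness; the warm-up and iteration counts are our arbitrary choices, not the paper's protocol.

```python
import time

# Average wall-clock latency of a callable over repeated runs, in
# milliseconds. A few warm-up calls are made first so that one-off costs
# (allocation, caching) do not distort the measurement.

def mean_latency_ms(fn, x, warmup=3, iters=20):
    for _ in range(warmup):
        fn(x)
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - t0) / iters * 1e3
```

Calling this on a model's forward pass with a single 150×150 input tensor gives the per-image CPU latency; its reciprocal gives the samples-per-second throughput quoted above.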
Even though it costs more than other, lighter models, our model is far more accurate than the light models [13,19,21,10,11,12,37,38], by a gap of 3.8% ∼ 11.1%. Considering both the resource and performance information in Table 8, our proposed model is far superior to other small models and even to the large classic networks. The proposed lightweight model can meet the deployment requirements of common embedded devices with low power consumption, such as the Raspberry Pi 4B (4GB, 15W, 9∼10 GFLOPS) and the NVIDIA Jetson Nano (4GB, 10W, 7.368 GFLOPS FP64).

Model generalization ability

To verify the generalization performance of the models on different data sources, we trained our model on a private PV dataset. The 8580 images with 256×256 pixel resolution are extracted from different PV panels of 6×10, 6×12 or 6×24 specifications, containing 482 defective samples and 8098 functional samples. These solar cell samples contain different bus specifications, cell edges and types. Different defects such as material defects, cracks, deep cracks, disconnected cell interconnects and microcracks are also included in these images. 25% of the images of each class (defective or functional) are randomly selected as the test set and the rest as the train set. Given the extremely imbalanced class distribution, offline data augmentation is utilized to strengthen the learning of defect features. Several operations are applied to the defective samples in the train set to avoid overfitting, including horizontal flip, vertical flip, rotation within (−2°, 2°), rotation within {90°, 180°, 270°}, contrast enhancement, Gaussian blur, affine transformation, center cropping, Gaussian noise and black border addition. These operations simulate the actual conditions of PV images. Different methods are trained and tested on this private dataset under the same data augmentation. The results of each model are shown in Table 9.
Given the imbalanced distribution of the dataset, the balanced accuracy (B Acc) and the recall on defective PV cells (Acc defective) are employed to reflect model performance precisely. On these extremely imbalanced images from actual PV plants of various sources, our model outperforms the teacher model by approximately 2.3% and 5.7% in terms of balanced accuracy and accuracy on defective samples, and outperforms the other methods by a big gap. The accuracy on defective samples of our model is up to 94.26%, showing a particularly strong ability to recognize cell defects in real-world scenes.

Table 9 (fragment; columns: method, B Acc (%), Acc defective (%), parameters):
CNN [21]: 89.01, 79.51, 0.37M
CNN [10]: 76.14, 58.20, 0.14M
CNN [11]: 77.06, 57.38, 12.32M
CNN [12]: 87.87, 76.23, 4.73M
ShuffleNetV2 [37]: 89.77, 80.33, 1.24M
MobileNetV3 [38]: 88…

Ablation experiments

In this subsection, we discuss three ablation experiments to demonstrate the effectiveness of our method. First, experiments show that training with knowledge transfer achieves better results than training from scratch, demonstrating the importance of prior knowledge in training. Second, the roles of the different transfer branches are examined, verifying the function of both shallow and deep features in the transfer process. Finally, the roles of the different knowledge priors in defect detection are illustrated.

The role of prior knowledge

In our proposed method, knowledge transfer supplies the prior knowledge that is usually provided by a large network pre-trained on large-scale datasets. In Table 10, the results of models trained from scratch are included and compared with our proposed model trained with knowledge transfer. Under the same conditions, there is a big gap between the cases with and without the prior knowledge. The F1-score improves by about 13%, as shown in Table 10.(a). The prior knowledge also helps to recognize defective cells, where the accuracy improves by about 22%, as shown in Table 10.(b).
Figure 7 shows typical types of samples in the test set of the public dataset [13]: columns (a)-(c) show functional PV cells, including a finger failure that does not necessarily affect the power efficiency, while columns (d)-(h) show the typical defects. The original PV cell images are listed in the first row. The second row shows the detection results of the network trained from scratch, while the third row displays the results of our proposed method using knowledge transfer. (Figure 7: the performance on the test set of the public PV cell dataset overlaid by Grad-CAM [39]; typical functional and defective PV cells are presented, and the area that the model focuses on is highlighted by a heat map.) The Grad-CAM [39] visualisations show the places the model pays attention to: the areas attended to by the model using knowledge transfer focus more on the defects, which shows that the prior knowledge plays a significant role in the performance improvement. The lack of any transfer branch leads to degradation of the model, which proves the function of features at different depths in knowledge transfer.

The role of different knowledge in knowledge transfer

We exploit different kinds of prior knowledge to improve the performance of the lightweight model on the task of defective PV cell detection, including attention information, feature information, logit information and task-oriented information. The roles of the different kinds of knowledge are evaluated in Table 12. The task loss raises the recall at the expense of some other metrics, effectively increasing the ability to recognize defects on PV cells, which is significant for practical applications. Adding the attention knowledge also increases performance. The model combining all of the losses performs best on most metrics, especially recall.
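The combined transfer objective described above — attention, feature, logit and task-oriented terms — can be sketched as a weighted sum of losses. The weights, helper names and exact term forms below are illustrative assumptions, not the paper's formulation: logit knowledge is modelled here as a temperature-softened KL divergence (as in standard knowledge distillation), feature/attention knowledge as an L2 term on flattened maps, and the task term as ordinary cross-entropy.

```python
import math

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(z)
    e = [math.exp((v - m) / T) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def logit_loss(t_logits, s_logits, T=4.0):
    """Distillation term: KL between softened teacher/student outputs."""
    return (T * T) * kl(softmax(t_logits, T), softmax(s_logits, T))

def feature_loss(t_map, s_map):
    """L2 match on flattened feature/attention maps."""
    return sum((a - b) ** 2 for a, b in zip(t_map, s_map)) / len(t_map)

def task_loss(s_logits, label):
    """Ordinary cross-entropy of the student on the true label."""
    return -math.log(softmax(s_logits)[label])

def total_loss(t_logits, s_logits, t_map, s_map, label, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the transfer terms (weights are assumptions)."""
    return (w[0] * logit_loss(t_logits, s_logits)
            + w[1] * feature_loss(t_map, s_map)
            + w[2] * task_loss(s_logits, label))
```

Raising the task-term weight pushes the student toward the hard labels (higher recall on defects), while the distillation terms pull it toward the teacher's softer decision boundaries — matching the trade-off reported in Table 12.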
Conclusion

We have proposed a novel approach for obtaining a lightweight network for the detection of defective PV cells in EL images. The proposed network achieves state-of-the-art performance on the public PV cell dataset of EL images under online data augmentation, with a high accuracy of 91.74% and only 1.85M parameters. It requires less computation and fewer hardware resources while retaining superior performance; in particular, its ability to recognize defective PV cells is much higher. To obtain the lightweight network automatically, we exploit a NAS algorithm to search for the network with little manual workload. Based on the multi-scale characteristics of PV cells, the search space is designed to exploit useful scale information. Furthermore, since training the proposed lightweight model from scratch cannot make good use of prior knowledge, knowledge distillation is employed to transfer the priors already learned by the large-scale network, and various kinds of knowledge are incorporated into the transfer process. Ablation experiments are conducted to prove the effectiveness of our method. Experiments and evaluation on both the public dataset and a private dataset demonstrate that our model exceeds other methods with better performance and a relatively lighter size, showing that it can be an effective tool for practical applications and terminal deployment in the field and in industry. The proposed method provides a new approach to designing models for specific application scenarios. In the future, different types of defects can be considered in further studies. Ways of obtaining lightweight models for segmentation tasks are also worth investigating, because the specific location or segmentation of defects can provide more benefit for practical deployment in industry, while existing segmentation models are hard to apply to the PV cell defect detection task because of their large network volume.
Computing acceleration on terminals will also be investigated for further network compression and speed-up.

Figure 1: The architecture of the proposed lightweight network design. First, the lightweight network is automatically obtained by the NAS algorithm in a designed search space. Then, the priors learned by the large network are transferred to the lightweight network through knowledge distillation.
Figure 2: Designed search space for the lightweight defect classification network.
Figure 3: Internal structure of cells, shown for the k-th cell with 4 internal nodes. Colored lines between two nodes represent the different candidate operations. (a) shows the search space in the cell structure: each node receives a mixture of features from the different candidate operations. (b) is the final structure determined by the structure weights computed in the search algorithm.
Figure 4: Overview of the knowledge transfer. The teacher model and our student model are attached with auxiliary classifiers at each feature of different size, and the feature transformations are shown. Features at different depths are selected for transfer using the auxiliary classifiers. The colored arrows point out the different loss components in the whole transfer process, including attention information, feature information, logit information and task-oriented information. The operations in the transformation functions and auxiliary classifiers are only activated during knowledge transfer to capture the target information.
Figure 5: Details of the dataset division. The train set is split into two sets for the NAS process with a ratio of 50%.
Figure 6: Cell internal structures obtained by searching on the public PV cell dataset. The structures are consistent with the schematic diagram in
Table 1: The detailed structure of the proposed lightweight network.
Table 2: Candidate operations for the structure search space.
Table 3: Dataset division.
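The mixture of candidate operations described for Figure 3 is the standard continuous relaxation used by differentiable NAS: each edge outputs a softmax-weighted sum of its candidate operations, and the final structure keeps the highest-weighted one. The sketch below is a toy illustration under assumed 1-D "operations" standing in for the real convolutions and poolings; it is not the paper's search code.

```python
import math

# Toy candidate operations on a 1-D feature vector (assumed stand-ins).
CANDIDATES = {
    "identity": lambda x: list(x),
    "double":   lambda x: [2 * v for v in x],
    "zero":     lambda x: [0.0 for _ in x],
}

def mixed_op(x, alphas):
    """Continuous relaxation: sum_i softmax(alpha)_i * op_i(x)."""
    names = list(CANDIDATES)
    m = max(alphas)
    e = [math.exp(a - m) for a in alphas]
    w = [v / sum(e) for v in e]           # softmax over structure weights
    out = [0.0] * len(x)
    for wi, name in zip(w, names):
        y = CANDIDATES[name](x)
        out = [o + wi * v for o, v in zip(out, y)]
    return out, w

def derive_op(alphas):
    """Discretisation step: keep the highest-weighted candidate."""
    names = list(CANDIDATES)
    return names[max(range(len(alphas)), key=lambda i: alphas[i])]
```

During search, the structure weights `alphas` are optimised jointly with the network weights; afterwards, `derive_op` fixes each edge to a single operation, yielding the final cell structure.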
The PV cell images are split with a ratio of 75% into a train set and a test set. The distributions of positive and negative samples and of each category of PV cell are consistent between the train set and the test set. Note that the data used for the NAS process are split from the train set into a searching train set and a searching test set (a 50/50 split), as shown in Figure 5.

Table 4: Transformation functions and auxiliary classifiers designed for the teacher model and the student model. The serial numbers of the branches are sorted from the shallowest to the deepest. Here 'SepConv' in the transformation functions means depthwise separable convolution.
Table 5: Comparison with other methods on the ELPV public dataset under the same data augmentation.

Performance on monocrystalline PV cells:

Model                    Acc    B Acc  Prec   Rec    F1    (%)
SVM+HOG                  89.18  86.70  81.82  80.77  81.29
SVM+SIFT                 73.13  76.52  33.67  84.62  48.21
SVM+SURT                 82.40  81.45  27.73  79.22  41.08
Adapted VGG19 [13]       86.19  83.78  82.35  76.09  79.10
Adapted LeNet5 [19]      82.09  78.32  78.21  66.30  71.76
Adapted VGG16 [21]       85.45  82.44  82.72  72.83  77.46
CNN [10]                 83.58  78.68  85.29  63.04  72.50
CNN [11]                 81.72  74.41  92.16  51.09  65.73
CNN [12]                 83.96  80.00  82.67  67.39  74.25
ShuffleNetV2 [37]        90.30  88.46  88.37  82.61  85.39
MobileNetV3 [38]         83.96  79.48  84.51  65.22  73.62
VGG16 (teacher model)    91.42  89.58  90.59  83.70  87.01
Ours                     93.28  92.30  91.11  89.13  90.11

(a) Accuracy, balanced accuracy, precision, recall and F1-score of ours and other methods.
(b) Accuracy on defective PV cells and functional PV cells respectively of ours and other methods.
Performance on monocrystalline PV cells:

Model                    Acc defective  Acc functional  (%)
SVM+HOG                  80.77          92.63
SVM+SIFT                 84.62          68.42
SVM+SURT                 79.22          83.69
Adapted VGG19 [13]       76.09          91.48
Adapted LeNet5 [19]      66.30          90.34
Adapted VGG16 [21]       72.83          92.05
CNN [10]                 63.04          94.32
CNN [11]                 51.09          97.73
CNN [12]                 67.39          92.61
ShuffleNetV2 [37]        82.61          94.32
MobileNetV3 [38]         65.22          93.75
VGG16 (teacher model)    83.70          95.46
Ours                     89.24          94.22
(b)

Table 6: Comparison with other methods on only the monocrystalline PV cells of the ELPV public dataset under the same data augmentation.
Table 7: Comparison with other methods on only the polycrystalline PV cells of the ELPV public dataset under the same data augmentation.
Table 8: Efficiency comparison on a CPU platform (Intel i9-10980XE, 24.85M cache, 3 GHz). FLOPs denotes floating point operations.
Table 9: Comparison with other methods on the private dataset under the same data augmentation.

Figure 7 panel labels: (a) monocrystalline, (b) polycrystalline, (c) finger failure, (d) material defect, (e) crack, (f) deep crack, (g) micro crack, (h) disconnected cell interconnect; rows show the true label and the predictions (functional/defective, misclassifications marked) of the model trained from scratch versus the model trained by knowledge transfer.

(a) Accuracy, balanced accuracy, precision, recall and F1-score of the proposed model trained in different ways:

With prior knowledge   Acc    B Acc  Prec   Rec    F1    (%)
×                      85.63  79.77  86.18  64.22  73.60
✓ (ours)               91.74  90.25  87.13  86.28  86.70

(b) Accuracy on defective and functional PV cells respectively of the proposed model trained in different ways:

With prior knowledge   Acc defective  Acc functional  (%)
×                      64.22          95.33
✓ (ours)               86.28          94.22

Table 10: Ablation study of using prior knowledge in model training. Training from scratch and training using knowledge transfer are denoted by '×' and '✓' respectively.

4.5.2.
The role of knowledge at different depths in knowledge transfer

Feature maps with different resolutions represent various kinds of information, including the deep semantics and the shallow details of the objects. To capture abundant features of the PV cells, the auxiliary classifiers are attached at different depths. Table 11 shows the role of each auxiliary classifier, where the serial numbers of the branches are sorted from the shallowest to the deepest. As presented, the deeper branches play a more important role in defect detection.

(a) Accuracy, balanced accuracy, precision, recall and F1-score of the proposed model using features at different depths:

Removed branch        Acc    B Acc  Prec   Rec    F1    (%)
1 (shallowest)        89.91  87.58  85.57  81.37  83.42
2                     89.60  87.49  84.34  81.86  83.08
3                     88.99  86.91  83.00  81.37  82.18
4 (deepest)           88.84  86.53  83.25  80.39  81.80
none (ours)           91.74  90.25  87.13  86.28  86.70

(b) Accuracy on defective and functional PV cells respectively of the proposed model using features at different depths:

Removed branch        Acc defective  Acc functional  (%)
1 (shallowest)        81.37          93.78
2                     81.86          93.11
3                     81.37          92.44
4 (deepest)           80.39          92.67
none (ours)           86.28          94.22

Table 11: Ablation study of branches at different depths in knowledge transfer. Transfer branches used are denoted by '✓' and removed ones are marked by '×'.

(a) Accuracy, balanced accuracy, precision, recall and F1-score of the proposed model using different knowledge:

Removed loss term     Acc    B Acc  Prec   Rec    F1    (%)
L_attention           89.91  88.51  83.17  84.80  83.98
L_feature             90.37  87.64  87.70  80.39  83.89
L_logit               89.30  87.67  82.52  83.33  82.93
L_task                92.20  89.24  92.74  81.37  86.68
none (ours)           91.74  90.25  87.13  86.28  86.70

(b) Accuracy on defective and functional PV cells respectively of the proposed model using different knowledge:

Removed loss term     Acc defective  Acc functional  (%)
L_attention           84.80          92.22
L_feature             80.39          94.89
L_logit               83.33          92.00
L_task                81.37          97.11
none (ours)           86.28          94.22

Table 12: Ablation study of different kinds of knowledge in knowledge transfer. Knowledge used in training is denoted by '✓'; terms marked by '×' are not added to the loss function.

References

[1] A. Ndiaye, A. Charki, A. Kobi, C. M. Kébé, P. A. Ndiaye, V. Sambou, Degradations of silicon photovoltaic modules: A literature review, Solar Energy 96 (2013) 140-151. doi:10.1016/j.solener.2013.07.005.
[2] G. Li, M. Akram, Y. Jin, X. Chen, C. Zhu, A. Ahmad, R. Arshad, X. Zhao, Thermo-mechanical behavior assessment of smart wire connected and busbar PV modules during production, transportation, and subsequent field loading stages, Energy 168 (2019) 931-945. doi:10.1016/j.energy.2018.12.002.
[3] M. Köntges, S. Kurtz, C. Packard, U. Jahn, K. Berger, K. Kato, T. Friesen, H. Liu, M. Van Iseghem, J. Wohlgemuth, et al., Review of Failures of Photovoltaic Modules, IEA Photovoltaic Power Systems Programme Technical Report 13-01 (2014) 1-140.
[4] S. Kajari-Schröder, I. Kunze, M. Köntges, Criticality of cracks in PV modules, Energy Procedia 27 (2012) 658-663. doi:10.1016/j.egypro.2012.07.125.
[5] M. Dhimish, Micro cracks distribution and power degradation of polycrystalline solar cells wafer: Observations constructed from the analysis of 4000 samples, Renewable Energy 145 (2020) 466-477. doi:10.1016/j.renene.2019.06.057.
[6] M. Abdelhamid, R. Singh, M. Omar, Review of microcrack detection techniques for silicon solar cells, IEEE Journal of Photovoltaics 4 (1) (2013) 514-524. doi:10.1109/JPHOTOV.2013.2285622.
[7] J. A. Tsanakas, L. Ha, C. Buerhop, Faults and infrared thermographic diagnosis in operating c-Si photovoltaic modules: A review of research and future challenges, Renewable and Sustainable Energy Reviews 62 (2016) 695-709. doi:10.1016/j.rser.2016.04.079.
[8] T. Fuyuki, A. Kitiyanan, Photographic diagnosis of crystalline silicon solar cells utilizing electroluminescence, Applied Physics A 96 (1) (2009) 189-196. doi:10.1007/s00339-008-4986-0.
[9] O. Breitenstein, J. Bauer, K. Bothe, D. Hinken, J. Müller, W. Kwapil, M. C. Schubert, W. Warta, Can luminescence imaging replace lock-in thermography on solar cells, IEEE Journal of Photovoltaics 1 (2) (2011) 159-167. doi:10.1109/JPHOTOV.2011.2169394.
[10] A. M. Karimi, J. S. Fada, M. A. Hossain, S. Yang, T. J. Peshek, J. L. Braid, R. H. French, Automated pipeline for photovoltaic module electroluminescence image processing and degradation feature classification, IEEE Journal of Photovoltaics 9 (5) (2019) 1324-1335. doi:10.1109/JPHOTOV.2019.2920732.
[11] W. Tang, Q. Yang, K. Xiong, W. Yan, Deep learning based automatic defect identification of photovoltaic module using electroluminescence images, Solar Energy 201 (2020) 453-460. doi:10.1016/j.solener.2020.03.049.
[12] M. W. Akram, G. Li, Y. Jin, X. Chen, C. Zhu, X. Zhao, A. Khaliq, M. Faheem, A. Ahmad, CNN based automatic detection of photovoltaic cell defects in electroluminescence images, Energy 189 (2019) 116319. doi:10.1016/j.energy.2019.116319.
[13] S. Deitsch, V. Christlein, S. Berger, C. Buerhop-Lutz, A. Maier, F. Gallwitz, C. Riess, Automatic classification of defective photovoltaic module cells in electroluminescence images, Solar Energy 185 (2019) 455-468. doi:10.1016/j.solener.2019.02.067.
[14] M. Dhimish, V. Holmes, P. Mather, Novel photovoltaic micro crack detection technique, IEEE Transactions on Device and Materials Reliability 19 (2) (2019) 304-312. doi:10.1109/TDMR.2019.2907019.
[15] M. Dhimish, V. Holmes, Solar cells micro crack detection technique using state-of-the-art electroluminescence imaging, Journal of Science: Advanced Materials and Devices 4 (4) (2019) 499-508. doi:10.1016/j.jsamd.2019.10.004.
[16] D.-M. Tsai, S.-C. Wu, W.-Y. Chiu, Defect detection in solar modules using ICA basis images, IEEE Transactions on Industrial Informatics 9 (1) (2012) 122-131. doi:10.1109/TII.2012.2209663.
[17] S. A. Anwar, M. Z. Abdullah, Micro-crack detection of multicrystalline solar cells featuring an improved anisotropic diffusion filter and image segmentation technique, EURASIP Journal on Image and Video Processing 2014 (1) (2014) 1-17. doi:10.1186/1687-5281-2014-15.
[18] B. Su, H. Chen, Y. Zhu, W. Liu, K. Liu, Classification of manufacturing defects in multicrystalline solar cells with novel feature descriptor, IEEE Transactions on Instrumentation and Measurement 68 (12) (2019) 4675-4688. doi:10.1109/TIM.2019.2900961.
[19] M. Sun, S. Lv, X. Zhao, R. Li, W. Zhang, X. Zhang, Defect detection of photovoltaic modules based on convolutional neural network, in: International Conference on Machine Learning and Intelligent Communications, Springer, 2017, pp. 122-132. doi:10.1007/978-3-319-73564-1_13.
[20] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (11) (1998) 2278-2324. doi:10.1109/5.726791.
[21] A. Bartler, L. Mauch, B. Yang, M. Reuter, L. Stoicescu, Automated detection of solar cell defects with deep learning, in: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE, 2018, pp. 2035-2039. doi:10.23919/EUSIPCO.2018.8553025.
[22] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014).
[23] S. Deitsch, V. Christlein, S. Berger, C. Buerhop-Lutz, A. Maier, F. Gallwitz, C. Riess, Automatic classification of defective photovoltaic module cells in electroluminescence images, Solar Energy 185 (2019) 455-468. doi:10.1016/j.solener.2019.02.067.
[24] C. Shou, L. Hong, W. Ding, Q. Shen, W. Zhou, Y. Jiang, C. Zhao, Defect detection with generative adversarial networks for electroluminescence images of solar cells, in: 2020 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC), IEEE, 2020, pp. 312-317. doi:10.1109/YAC51587.2020.9337676.
[25] L. Liu, Y. Zhu, M. R. U. Rahman, P. Zhao, H. Chen, Surface defect detection of solar cells based on feature pyramid network and GA-Faster-RCNN, in: 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI), IEEE, 2019, pp. 292-297. doi:10.1109/CCHI.2019.8901952.
[26] B. Su, H. Chen, P. Chen, G. Bian, K. Liu, W. Liu, Deep learning-based solar-cell manufacturing defect detection with complementary attention network, IEEE Transactions on Industrial Informatics 17 (6) (2020) 4084-4095. doi:10.1109/TII.2020.3008021.
[27] H. Wang, H. Chen, B. Wang, Y. Jin, G. Li, Y. Kan, High-efficiency low-power microdefect detection in photovoltaic cells via a field programmable gate array-accelerated dual-flow network, Applied Energy 318 (2022) 119203. doi:10.1016/j.apenergy.2022.119203.
[28] H. Liu, K. Simonyan, Y. Yang, DARTS: Differentiable architecture search, arXiv preprint arXiv:1806.09055 (2018).
[29] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778. doi:10.1109/CVPR.2016.90.
[30] G. Hinton, O. Vinyals, J. Dean, et al., Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531 (2015).
[31] A. Gotmare, N. S. Keskar, C. Xiong, R. Socher, A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation, arXiv preprint arXiv:1810.13243 (2018).
[32] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, Y. Bengio, FitNets: Hints for thin deep nets, arXiv preprint arXiv:1412.6550 (2014).
[33] L. Zhang, Y. Shi, Z. Shi, K. Ma, C. Bao, Task-oriented feature distillation, Advances in Neural Information Processing Systems 33 (2020) 14759-14771.
[34] S. Zagoruyko, N. Komodakis, Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer, arXiv preprint arXiv:1612.03928 (2016).
[35] H. Liang, S. Zhang, J. Sun, X. He, W. Huang, K. Zhuang, Z. Li, DARTS+: Improved differentiable architecture search with early stopping, arXiv preprint arXiv:1909.06035 (2019).
[36] X. Chen, L. Xie, J. Wu, Q. Tian, Progressive DARTS: Bridging the optimization gap for NAS in the wild, International Journal of Computer Vision 129 (3) (2021) 638-655. doi:10.1007/s11263-020-01396-x.
[37] N. Ma, X. Zhang, H.-T. Zheng, J. Sun, ShuffleNet V2: Practical guidelines for efficient CNN architecture design, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 116-131.
[38] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, et al., Searching for MobileNetV3, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1314-1324.
[39] R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, D. Batra, Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization, arXiv preprint arXiv:1610.02391 (2016).
Unconditionally secure ciphers with a short key for a source with unknown statistics

Boris Ryabko
Federal Research Center for Information and Computational Technologies; Novosibirsk State University

Abstract. We consider the problem of constructing an unconditionally secure cipher with a short key for the case where the probability distribution of encrypted messages is unknown. Note that unconditional security means that an adversary with no computational constraints can obtain only a negligible amount of information ("leakage") about an encrypted message (without knowing the key). Here we consider the case of a priori (partially) unknown message source statistics. More specifically, the message source probability distribution belongs to a given family of distributions. We propose an unconditionally secure cipher for this case. As an example, one can consider constructing a single cipher for texts written in any of the languages of the European Union; that is, the message to be encrypted could be written in any of these languages.

doi:10.48550/arXiv.2303.15258 · arXiv:2303.15258
27 Mar 2023

Keywords: cryptography, unconditionally secure cipher, entropically secure symmetric encryption scheme, indistinguishability, data compression, universal code

Introduction

The concept of unconditional security is very attractive in cryptography and has found many applications since C. Shannon described it in his famous article [1]. The concept refers to secret-key cryptography involving three participants, Alice, Bob and Eve, where Alice wants to send a message to Bob in secret from Eve, who has the ability to read all correspondence between Alice and Bob. To do this, Alice and Bob use a cipher with a secret key k (i.e. a word over some alphabet), which is known to them in advance (but not to Eve). When Alice wants to send some message m, she first encrypts m using the key k and sends the result to Bob, who in turn decrypts the received encrypted message using the key k.
Eve also receives the encrypted message and tries to decrypt it without knowing the key. The system is called unconditionally secure, or perfect, if Eve, with computers and other equipment of unlimited power and unlimited time, cannot obtain any information about the encrypted message. Not only did C. Shannon provide a formal definition of perfect (or unconditional) secrecy, he also showed that the so-called one-time pad (or Vernam cipher) is such a system. One specific property of this system is that the length of the secret key equals the length of the message (or its entropy); moreover, C. Shannon proved that this property must hold for any perfect system. Quite often this property limits practical application, as many modern telecommunication systems forward and store megabytes of information, and the requirement to have secret keys of the same length is quite stringent. There are, therefore, many different approaches to overcoming this obstacle. These include the ideal systems proposed by C. Shannon [1], honey encryption proposed by Juels and Ristenpart [2], the so-called entropic security proposed by Russell and Wang [3], and some others developed in recent decades [4-11]. The present work is concerned with entropically secure ciphers. It is important to note that an entropically secure cipher is not perfect, and Eve may obtain some information about the message — the property referred to as "leakage" (see the definition below) — but this leakage can be made negligible. On the other hand, an entropically secure cipher makes it possible to significantly reduce the key length compared to a perfect cipher. Recently, entropically secure ciphers have been proposed for the case where the encrypted messages have a known distribution and for the case where the messages are generated by a Markov chain [11].
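The one-time pad mentioned above fits in a few lines: encryption and decryption are the same XOR operation, and the key must be as long as the message — the very property the paper seeks to relax. A minimal sketch:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with an equally long secret key."""
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

if __name__ == "__main__":
    msg = b"attack at dawn"
    key = secrets.token_bytes(len(msg))   # fresh uniform key, used once
    ct = otp_encrypt(msg, key)
    assert otp_decrypt(ct, key) == msg
```

Perfect secrecy holds only if the key is uniformly random, as long as the message, and never reused; reuse of a pad leaks the XOR of the two plaintexts.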
In the case of a known distribution, the length of the secret key is independent of the message length, while in the case of a Markov chain, the length of the key grows logarithmically with the message length; in both cases the length of the key depends on the amount of leakage. In this paper we consider the situation where encrypted messages obey an unknown (or partially unknown) probability distribution. We propose an entropically secure cipher for which the key length depends on the universal code (or data compressor) used for encoding the source and on the admissible leakage of the cipher. In a sense, the problem under consideration includes as special cases the previously solved problems with a known probability distribution and the case where messages are generated by a Markov chain. The construction of the cipher is based on entropically secure ciphers [3, 5, 10, 11] and universal coding [12]. It is worth noting that the proposed cipher uses data compression and randomisation, both of which are quite popular in unconditional security, cf. [13-15] and [15, 16], respectively.

Definitions and preliminaries

Basic concepts

We consider the problem of symmetric encryption, where Alice wants to securely transmit a message to Bob. The messages are n-letter binary words that obey a certain probability distribution p defined on the set {0,1}^n, n >= 1. This distribution is only partially known, i.e. it is known that p belongs to some given set P of distributions. Alice and Bob have a shared secret key K = K_1 ... K_k, and Alice encrypts the message M in {0,1}^n using K and possibly some random bits. Then she sends the word cipher(M, K) to Bob, who decrypts the received cipher(M, K) and obtains M. The third participant is a computationally unconstrained adversary Eve, who knows cipher(M, K) and the distribution p, and wants to find some information about M without knowing K.
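As a baseline for the definitions that follow, Shannon's perfect cipher from the introduction, the one-time pad, can be sketched in a few lines. This is an illustrative Python sketch on byte strings (not part of the paper's construction); note that the key must be exactly as long as the message, which is the cost the ciphers below aim to reduce.

```python
import secrets

def vernam_encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad (Vernam cipher): XOR each message byte with a key byte.
    assert len(key) == len(message), "perfect secrecy needs |key| = |message|"
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR operation applied again.
vernam_decrypt = vernam_encrypt

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # uniformly random, never reused
ciphertext = vernam_encrypt(message, key)
assert vernam_decrypt(ciphertext, key) == message
```

Because the key is uniformly random and used once, the ciphertext is itself uniform and reveals nothing about the message, which is exactly the perfect secrecy property the entropically secure ciphers below relax.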
Russell and Wang [3] suggested a definition of entropic security which was generalised by Dodis and Smith [5] as follows: a probabilistic map Y is said to hide all functions on {0,1}^n with leakage ǫ if, for every adversary A, there exists some adversary Â (who does not know Y(M)) such that for all functions f,

| Pr{A(Y(M)) = f(M)} - Pr{Â() = f(M)} | <= ǫ.   (1)

(Note that Â does not know Y(M) and, in fact, guesses the value of the function f(M).) In what follows, the probabilistic map Y will be cipher(M, K) and f is a map f : {0,1}^n -> {0,1}^*.

Definition 1. The map Y() is called ǫ-entropically secure for a family of probability distributions P if Y() hides all functions on {0,1}^n with leakage ǫ, whenever p belongs to P.

Note that, in a sense, Definition 1 is a generalisation of Shannon's notion of perfect security. Namely, if we take ǫ = 0, Y = cipher(M, K) and f(x) = x, we obtain that for any M

| Pr{A(cipher(M, K)) = M} - Pr{Â() = M} | = 0.

So A and Â obtain the same result, but A estimates the probability based on cipher(M, K), whereas Â does it without knowledge of cipher(M, K). Thus, entropic security (1) can be considered a generalisation of Shannon's perfect secrecy.

We will use another important concept, the notion of indistinguishability.

Definition 2. A randomised map Y : {0,1}^n -> {0,1}^n, n >= 1, is ǫ-indistinguishable for some family of distributions P and ǫ > 0 if there is a probability distribution G on {0,1}^n such that for every probability distribution p in P we have SD(Y(M), G) <= ǫ, where for two distributions A, B

SD(A, B) = (1/2) Σ_{U in {0,1}^n} | Pr{A = U} - Pr{B = U} |.

Importantly, G is independent of Y(M). Dodis and Smith [5] showed that the concepts of ǫ-entropic security and ǫ-indistinguishability are equivalent up to small parameter changes.

ǫ-entropically secure ciphers for distributions with bounded min-entropy

In 2006 [3], the first entropically secure cipher was developed for probability distributions with a bounded value of the so-called min-entropy, which is defined as follows:

h_min(p) = - log max_{a in A} p(a),   (2)

where p is a probability distribution and log = log_2. The Russell and Wang [3] cipher was generalized and developed by Dodis and Smith [5], and their result can be formulated as follows.

Theorem 1 [5].
Let p be a probability distribution on {0,1}^n, n > 0, whose min-entropy is not less than h, h in [0, n]. Then there exists an ǫ-entropically secure cipher with a k-bit key, where

k = n - h + 2 log(1/ǫ) + 2.   (3)

Let us denote this cipher by cipher_rw-ds. In a sense, this cipher generalizes the perfect Shannon cipher as follows: in a perfect cipher the key is a word from {0,1}^n, while in an entropically secure cipher the key belongs to a 2^k-element subset K of {0,1}^n, which is a so-called small-biased set. Informally, this means that for any m <= n and a uniformly chosen binary word u in {0,1}^m, for any m positions i_1, i_2, ..., i_m, the probability that K_{i_1} K_{i_2} ... K_{i_m} = u is close to 2^{-m}. (This construction is based on some deep results in combinatorics [5, 17, 18].) Thus, the key length decreases from n to k. Note that the leakage ǫ, and hence the summand 2 log(1/ǫ) + 2, depends on the size 2^k of the small-biased set (in general, larger k implies smaller ǫ).

ǫ-entropically secure ciphers with reduced secret key

In equality (3), the linearly increasing summand n - h depends on the min-entropy h. So it seems natural to transform the set {0,1}^n so as to reduce the min-entropy of the original distribution p and hence the summand n - h. In [11] this approach was realised as follows: let there be a set of probability distributions P defined on {0,1}^n, n >= 1. The key part of the cipher is a randomised map φ : {0,1}^n -> {0,1}^{n*}, n* >= n, such that there exists a map φ^{-1} (i.e. for all u, φ^{-1}(φ(u)) = u) and the min-entropy of the transformed probability distribution π_p is close to n* (here the distribution π_p is such that p(u) = Σ_{v : φ^{-1}(v) = u} π_p(v)). Then cipher_rw-ds can be applied to φ(m) with a shorter key, because the difference n* - h_min(π_p) will be less than n - h_min(p); see (3). Thus, the smaller sup_{p in P}(n* - h_min(π_p)), the shorter the secret key.
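To make the quantities involved concrete, the following Python sketch (our own toy illustration of the compress-and-randomise idea, not the actual cipher) computes the min-entropy of a small distribution on {0,1}^2, "compresses" messages with a prefix code of lengths -log2 p(m), pads the codewords with uniform bits up to a common length n*, and checks that the induced distribution π_p is uniform:

```python
import math
from fractions import Fraction

def min_entropy(p):
    # h_min(p) = -log2 max_a p(a), for p given as {outcome: probability}
    return -math.log2(float(max(p.values())))

# Toy distribution on {0,1}^2: p(00)=1/2, p(01)=1/4, p(10)=p(11)=1/8
p = {'00': Fraction(1, 2), '01': Fraction(1, 4),
     '10': Fraction(1, 8), '11': Fraction(1, 8)}

# "Compress": a prefix code with codeword lengths -log2 p(m)
code = {'00': '0', '01': '10', '10': '110', '11': '111'}
n_star = max(len(c) for c in code.values())  # n* = 3

# "Randomise": pad each codeword with uniform bits up to length n*.
# The induced distribution assigns p(m) * 2^-(n* - |code(m)|) to each extension.
pi = {}
for m, prob in p.items():
    pad = n_star - len(code[m])
    for i in range(2 ** pad):
        suffix = format(i, f'0{pad}b') if pad else ''
        pi[code[m] + suffix] = prob / 2 ** pad

print(min_entropy(p))   # 1.0, so n - h_min = 2 - 1 = 1
print(min_entropy(pi))  # 3.0, so n* - h_min = 3 - 3 = 0
```

With h_min rising from 1 to 3 while the word length grows from 2 to 3, the summand n - h in (3) drops from 1 to 0: the secret key gets one bit shorter at the cost of one extra transmitted bit.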
The described cipher is based on data compression and randomisation and is denoted in [11] by cipher_c&r. The following theorem describes its properties.

Theorem 2 [11]. Suppose there is a family P of probability distributions defined on {0,1}^n and a randomised mapping φ : {0,1}^n -> {0,1}^{n*}, n* >= n, for which there exists a mapping φ^{-1}, and let

sup_{p in P}(n* - h_min(π_p)) <= ∆   (4)

for some ∆. Then i) cipher_c&r is ǫ-entropically secure with secret key length ∆ + 2 log(1/ǫ) + 2, and ii) cipher_c&r is ǫ-indistinguishable with secret key length ∆ + 2 log(1/ǫ) + 6.

Now we consider a simple example to illustrate the basic idea. Let n = 2, p(00) = 1/2, p(01) = 1/4, p(10) = p(11) = 1/8. Obviously, h_min(p) = 1 and ∆ = 2 - 1 = 1. The map φ is constructed in two steps: first, "compress" the letters to length -log p(a), that is, in our example, 00 -> 0, 01 -> 10, 10 -> 110, 11 -> 111. Second, randomise as follows: the codeword 0 is extended uniformly to {000, 001, 010, 011}, the codeword 10 to {100, 101}, and the last two codewords become {110} and {111}, respectively. As a result, we obtain the set {0,1}^3 subject to a uniform distribution whose min-entropy equals three, and hence ∆ = 3 - 3 = 0. Thus, the key length becomes 1 bit shorter, but the message length is longer. It is proved that such a "bloated" cipher is ǫ-entropically secure [11]. Obviously, the key length depends on the efficiency of the compression method, or code. Thus, in the case of known statistics (i.e., known p), the key length is ∆ + 2 log(1/ǫ) + 2, where ∆ is 1 or 2 and depends on the compression code chosen. If p is unknown, but the messages are known to be generated by a Markov chain with known memory, then ∆ = O(log n) (and the key length is O(log n) + 2 log(1/ǫ) [11]).

Universal coding

The problem of constructing a single code for multiple probability distributions (information sources) is well known in information theory, and there are currently dozens of effective universal codes based on different ideas and approaches.
At present there are dozens of universal codes, which are the basis for so-called archivers (e.g., ZIP). The first universal code for Bernoulli and Markov processes was proposed by Fitingof [19], and then Krichevsky found an asymptotically optimal code for these processes [12, 20]. Other universal codes include the PPM universal code [21], which is used together with the arithmetic code [22], the Lempel-Ziv (LZ) codes [23], the Burrows-Wheeler transformation [24], which is used together with the book-stack code (or MTF) [25] (see also [26, 27]), grammar codes [28, 29] and some others [30-33]. A universal code c has to "compress" sequences x = x_1 ... x_n that obey a distribution p in P down to the Shannon entropy of p, that is, h_Sh(p); the difference E_p(|c(x)|) - h_Sh(p) is called the redundancy r(p) [12] (here E_p is the expectation and |u| is the length of u). In [34], an algorithm was proposed to construct a code c_opt whose redundancy is minimal on P, that is, r_opt = inf_{p in P} r(p). In [34] it was shown that r_opt is equal to the capacity of a channel whose input alphabet is P, whose output alphabet is the alphabet on which the distributions from P are defined (in our case it is the alphabet {0,1}^n), and the rows of the channel matrix are the probability distributions from P (see also [35] for the history of this discovery). This fact is important, because it allows us to use known methods of computing the channel capacity to find the optimal code. In this paper, we will use the so-called Shtarkov maximum-likelihood code c_Sht [36], whose construction is much simpler and whose redundancy is often close to that of the optimal code. This code is described as follows: first define

p_max(u) = sup_{p in P} p(u), u in {0,1}^n;  S_P = Σ_{u in {0,1}^n} p_max(u);  q(u) = p_max(u)/S_P.   (5)

Clearly,

for all u : p(u)/q(u) <= S_P.   (6)

Shtarkov proposed to build a code c_Sht for which |c_Sht(u)| = ⌈- log q(u)⌉. (Such a code exists, see [37].
) Note that for a finite set P, S_P <= |P| (in particular, this is true when P contains probability distributions corresponding to several languages).

The cipher

Now we are going to construct an ǫ-entropically secure cipher cipher_c&r for the case of unknown statistics, i.e., there is some set of probability distributions P generating words from {0,1}^n, n >= 1, and the constructed cipher should be applicable to messages obeying any p in P with leakage no larger than ǫ. In short, we apply the general method from [11] to the probability distribution q of (5). In detail, Alice wants to send messages m in {0,1}^n to Bob, and they both know in advance that m can obey any probability distribution p from the set of distributions P. The cipher algorithm is as follows.

Constructing the cipher. We describe all calculations in the following steps:

i) Compute the distribution q according to (5) and order the set q(u), u in {0,1}^n. (Denote the ordered probabilities by q_1, q_2, ..., q_N, N = 2^n, and let ν(u) = i for which q(u) = q_i.)

ii) Encode the "letters" 1, 2, ..., N with the distribution q by the trimmed Shannon code from [11]. Denote this code λ and note that

for all i : |λ(i)| < - log q_i + 2   (7)

and λ is prefix-free, that is, for any i and j, i ≠ j, neither λ(i) is a prefix of λ(j), nor λ(j) a prefix of λ(i) [11].

iii) Build the following randomised map φ: first find n* = max_i |λ(i)| and then define, for u in {0,1}^n,

φ(u) = λ(ν(u)) r_{|λ(ν(u))|+1} ... r_{n*},   (8)

where the r_j are equiprobable independent binary digits.

iv) For the desired leakage ǫ, build cipher_rw-ds with secret key length

⌈log S_P⌉ + 2 log(1/ǫ) + δ,   (9)

where δ = 2 for the ǫ-entropically secure cipher and δ = 6 for the ǫ-indistinguishable one.

It is worth noting that Alice and Bob (and Eve) can do all the calculations described independently of each other.

Use of the cipher. Suppose Alice and Bob have a randomly chosen secret key K, |K| = k, and Alice wants to send Bob a message m.
To do this, she computes cipher_c&r(m, K), as described above, and sends it to Bob. Bob receives the word cipher_c&r(m, K) and decrypts it with the key K. As a result he gets the word φ(m) = λ(ν(m)) r_{|λ(ν(m))|+1} ... r_{n*}, whose prefix λ(ν(m)) determines the message m (this is possible because λ is prefix-free). The properties of this cipher are described in the following theorem.

Theorem 3. Suppose there is a family P of probability distributions defined on {0,1}^n and some ǫ > 0. If the described cipher_c&r is applied, then i) cipher_c&r is ǫ-entropically secure with secret key length ⌈log S_P⌉ + 2 log(1/ǫ) + 2, and ii) cipher_c&r is ǫ-indistinguishable with secret key length ⌈log S_P⌉ + 2 log(1/ǫ) + 6.

Proof. For any p in P the random map φ defines a probability distribution π_p(v), v in {0,1}^{n*}, as follows: for any u in {0,1}^n and v in φ(u),

π_p(v) = p(u) 2^{-(n* - |λ(ν(u))|)},

see (8). From the definition of φ and (8), (7) we obtain

π_p(v) = p(m) 2^{-(n* - |λ(ν(m))|)} <= p(m) 2^{-(n* - (-log q_{ν(m)} + 2))}

for any m in {0,1}^n and v in φ(m), φ(m) being a subset of {0,1}^{n*}. Then, using (6),

- log π_p(v) >= - log p(m) + n* + log q_{ν(m)} - 2 >= (- log S_P - log q_{ν(m)}) + n* + log q_{ν(m)} - 2 = n* - (log S_P + 2)

for any m and v in φ(m). So,

h_min(π_p) = min_{v in {0,1}^{n*}} (- log π_p(v)) >= n* - (log S_P + 2)

and, hence,

sup_{p in P}(n* - h_min(π_p)) <= log S_P + 2.

From (4) (Theorem 2) and the description of the cipher (9) we can see that cipher_c&r is i) ǫ-entropically secure with a secret key of length ⌈log S_P⌉ + 2 log(1/ǫ) + 4, and ii) ǫ-indistinguishable with a secret key of length ⌈log S_P⌉ + 2 log(1/ǫ) + 8.

Conclusion

We described a cipher for a family of probability distributions P defined on the set {0,1}^n, n >= 1, for which the length of the secret key does not depend directly on n, but depends on P. For example, if P is finite, the key length is less than log |P| + 2 log(1/ǫ) + O(1) and hence independent of n.
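To make the finite-family case concrete, the sketch below (our own illustration with made-up numbers, not taken from the paper) computes Shtarkov's distribution q(u) = p_max(u)/S_P from (5) for a two-element family P on {0,1}^2 and checks the bounds S_P <= |P| and (6):

```python
import math
from fractions import Fraction

# A finite family P of two distributions on {0,1}^2 (illustrative numbers)
P = [
    {'00': Fraction(1, 2), '01': Fraction(1, 4), '10': Fraction(1, 8), '11': Fraction(1, 8)},
    {'00': Fraction(1, 8), '01': Fraction(1, 8), '10': Fraction(1, 4), '11': Fraction(1, 2)},
]
words = list(P[0])

p_max = {u: max(p[u] for p in P) for u in words}  # p_max(u) = sup over p in P of p(u)
S_P = sum(p_max.values())                         # here 1/2 + 1/4 + 1/4 + 1/2 = 3/2
q = {u: p_max[u] / S_P for u in words}            # Shtarkov's distribution, eq. (5)

assert S_P <= len(P)                                      # S_P <= |P| for a finite family
assert all(p[u] / q[u] <= S_P for p in P for u in words)  # bound (6)
print(math.ceil(math.log2(float(S_P))))  # ceil(log S_P) = 1, independent of n
```

The printed value is the summand ⌈log S_P⌉ of the key length in (9): for this family of two sources, roughly one extra key bit suffices on top of the 2 log(1/ǫ) + O(1) term, regardless of the message length n.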
This example includes the case where one needs to have the same cipher for texts written in different languages. Here, the size of the set P is equal to the number of languages. Thus, in some practically interesting cases, the extra length of the secret key is quite small.

References

[1] C. E. Shannon. Communication theory of secrecy systems. The Bell System Technical Journal, 28(4):656-715, 1949.
[2] A. Juels, T. Ristenpart. Honey encryption: Security beyond the brute-force bound. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 293-310. Springer, Berlin, Heidelberg, 2014.
[3] A. Russell, H. Wang. How to fool an unbounded adversary with a short key. IEEE Transactions on Information Theory, 52(3):1130-1140, 2006.
[4] J. Jaeger, T. Ristenpart, Q. Tang. Honey encryption beyond message recovery security. IACR Cryptology ePrint Archive, 2016.
[5] Y. Dodis, A. Smith. Entropic security and the encryption of high entropy messages. In: Theory of Cryptography Conference, pp. 556-577. Springer, Berlin, Heidelberg, 2005.
[6] F. du Pin Calmon, M. Medard, L. M. Zeger, J. Barros, M. M. Christiansen, K. R. Duffy. Lists that are smaller than their parts: A coding approach to tunable secrecy. In: 50th Annual Allerton Conference on Communication, Control, and Computing, pp. 1387-1394. IEEE, 2012.
[7] F. D. Calmon. Information-theoretic metrics for security and privacy. Doctoral dissertation, Massachusetts Institute of Technology, 2015.
[8] B. Ryabko. A simply realizable ideal cryptographic system. Problems of Information Transmission, 36(1):84-89, 2000. (See also IACR Cryptology ePrint Archive, report 2001/046.)
[9] B. Ryabko. The Vernam cipher is robust to small deviations from randomness. Problems of Information Transmission, 51(1):82-86, 2015.
[10] X. Li, Q. Tang, Z. Zhang. Fooling an unbounded adversary with a short key, repeatedly: The honey encryption perspective. In: 2nd Conference on Information-Theoretic Cryptography (ITC 2021). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2021.
[11] B. Ryabko. Unconditionally secure short key ciphers based on data compression and randomization. Designs, Codes and Cryptography, pp. 1-12, 2023.
[12] R. Krichevsky. Universal Compression and Retrieval. Kluwer Academic Publishers, 1993.
[13] Y. Y. Shkel, H. V. Poor. A compression perspective on secrecy measures. IEEE Journal on Selected Areas in Information Theory, 2(1):163-176, 2021.
[14] M. Bloch, O. Günlü, A. Yener, F. Oggier, H. V. Poor, L. Sankar, R. F. Schaefer. An overview of information-theoretic security and privacy: Metrics, limits and applications. IEEE Journal on Selected Areas in Information Theory, 2(1):5-22, 2021.
[15] B. Ryabko, A. Fionov. Cryptography in the Information Society. World Scientific Publishing, 2020. 280 p.
[16] C. G. Gunther. A universal algorithm for homophonic coding. In: Workshop on the Theory and Application of Cryptographic Techniques, pp. 405-414. Springer, Berlin, Heidelberg, 1988.
[17] J. Naor, M. Naor. Small-bias probability spaces: Efficient constructions and applications. In: Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, pp. 213-223, 1990.
[18] N. Alon, O. Goldreich, J. Håstad, R. Peralta. Simple constructions of almost k-wise independent random variables. Random Structures & Algorithms, 3(3):289-304, 1992.
[19] B. M. Fitingof. Optimal coding in the case of unknown and changing message statistics. Problemy Peredachi Informatsii, 2(2):3-11, 1966.
[20] R. Krichevsky. A relation between the plausibility of information about a source and encoding redundancy. Problems of Information Transmission, 4(3):48-57, 1968.
[21] J. Cleary, I. Witten. Data compression using adaptive coding and partial string matching. IEEE Transactions on Communications, 32(4):396-402, 1984.
[22] J. Rissanen, G. G. Langdon. Arithmetic coding. IBM Journal of Research and Development, 23(2):149-162, 1979.
[23] J. Ziv, A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23(3):337-343, 1977.
[24] M. Burrows, D. J. Wheeler. A block-sorting lossless data compression algorithm. 1994.
[25] B. Y. Ryabko. Data compression by means of a "book stack". Problemy Peredachi Informatsii, 16(4):16-21, 1980.
[26] J. Bentley, D. Sleator, R. Tarjan, V. Wei. A locally adaptive data compression scheme. Communications of the ACM, 29(4):320-330, 1986.
[27] B. Ryabko, N. R. Horspool, G. V. Cormack, S. Sekar, S. B. Ahuja. Technical correspondence. Communications of the ACM, 30(9):792-797, 1987.
[28] J. C. Kieffer, E.-H. Yang. Grammar-based codes: a new class of universal lossless source codes. IEEE Transactions on Information Theory, 46(3):737-754, 2000.
[29] E.-H. Yang, J. C. Kieffer. Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. I. Without context models. IEEE Transactions on Information Theory, 46(3):755-777, 2000.
[30] M. Drmota, Yu. Reznik, W. Szpankowski. Tunstall code, Khodak variations, and random walks. IEEE Transactions on Information Theory, 56(6):2928-2937, 2010.
[31] G. Louchard, W. Szpankowski. Average profile and limiting distribution for a phrase size in the Lempel-Ziv parsing algorithm. IEEE Transactions on Information Theory, 41(2):478-488, 1995.
[32] B. Ryabko. Twice-universal coding. Problems of Information Transmission, 3:173-177, 1984.
[33] Y. A. Reznik. Coding of sets of words. In: 2011 Data Compression Conference. IEEE, 2011.
[34] B. Ryabko. Coding of a source with unknown but ordered probabilities. Problems of Information Transmission, 15(2):134-138, 1979.
[35] B. Ryabko. Comments on "A source matching approach to finding minimax codes". IEEE Transactions on Information Theory, 27(6):780-781, 1981.
[36] Y. M. Shtar'kov. Universal sequential coding of single messages. Problemy Peredachi Informatsii, 23(3):3-17, 1987.
[37] T. M. Cover, J. A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, NY, USA, 2006.
[]
[ "LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments", "LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments" ]
[ "Tae Soo Kim [email protected] ", "Arghya Sarkar [email protected] ", "Yoonjoo Lee [email protected] ", "Minsuk Chang [email protected] ", "Juho Kim [email protected] ", "Tae Soo Kim ", "Arghya Sarkar ", "Yoonjoo Lee ", "Minsuk Chang ", "Juho Kim ", "\nSchool of Computing\nSchool of Computing\nKAIST Daejeon\nRepublic of Korea\n", "\nSchool of Computing\nNaver AI Lab Seongnam\nKAIST\nDaejeonRepublic of Korea, Republic of Korea\n", "\nKAIST\nDaejeonRepublic of Korea\n" ]
[ "School of Computing\nSchool of Computing\nKAIST Daejeon\nRepublic of Korea", "School of Computing\nNaver AI Lab Seongnam\nKAIST\nDaejeonRepublic of Korea, Republic of Korea", "KAIST\nDaejeonRepublic of Korea" ]
[ "CHI 2023 Workshop on Generative AI and HCI)" ]
Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs-requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with "blocks" in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.
10.48550/arxiv.2303.15125
[ "https://export.arxiv.org/pdf/2303.15125v1.pdf" ]
257,767,286
2303.15125
2cdff023cd4b185bb452f3c7399580db2d0fdfcd
LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments

Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim
School of Computing, KAIST, Daejeon, Republic of Korea; New York University, New York, USA; Naver AI Lab, Seongnam, Republic of Korea

CHI 2023 Workshop on Generative AI and HCI, Apr 28, 2023. © 2023 Copyright held by the owner/author(s). ACM, New York, NY, USA. https://doi.org/10.1145/XXXXXXX.XXXXXXX

Keywords: Generative AI; Large Language Models; Writing Support Tools; Object-Oriented Interaction

Abstract. Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs, requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with "blocks" in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.
* Minsuk is now at Google.

CHI '23 Workshop on Generative AI and HCI, Apr 28, 2023, Virtual.

INTRODUCTION

The advent of large language models (LLMs), e.g., GPT-3 [4], GPT-NeoX [3], Jurassic-1 [15], LaMDA [20], has transformed the writing process. Instead of manually drafting passages of text, writers can now hand over this effort to these models and almost instantly generate passages from an initial sentence or phrase. Beyond their generative capabilities, LLMs demonstrate significant few-shot and zero-shot performance [4], meaning that they are able to perform previously unseen tasks with only an instruction and/or a couple of examples, i.e., a prompt. By leveraging this ability of LLMs, writers can potentially automate or augment specific tasks in their workflows by using adequate prompts and, thus, further facilitate the writing process. For instance, based only on prompt examples provided for GPT-3 [17], writers can use LLMs to correct grammar, create an outline, produce analogies, or even change the point-of-view of a scene. To seize the opportunity presented by LLMs, an assortment of products and interfaces have been created that leverage these models to provide writers with specific tools that automate steps in their writing workflows. For example, tools such as WordTune [11] and NotionAI [12] provide editing buttons that the user can click after selecting text to automatically rewrite it, change its tone, summarize it, elaborate on it, etc.
Additionally, a variety of LLM-powered copywriting tools [1, 7] have also been created that provide writers with a variety of template forms that they can fill to generate specific types of writing (e.g., video description or script, blog introduction, article headline). Similarly in academia, various interfaces have been designed to leverage LLMs to support specific tasks: generate various forms of figurative language [5], summarize a writer's writing [8], brainstorm and combine ideas [9], or propagate writing edits across a story [14]. While the proliferation of these LLM-driven tools means that various writing tasks can now be supported, the individual needs and challenges of writers might not be fulfilled by these tools. Due to their type of writing, their fluency with a language, or other factors such as their style and workflow, a writer may have specific needs and challenges during their writing process. However, while existing interfaces provide a general set of tools, they provide limited or no support for the writer to create their own tools to support their unique tasks. Further, an interface may not provide a comprehensive set of tools that supports all of the writer's tasks and, thus, the writer may need to constantly switch between multiple interfaces to support their workflows. As a result, the writer needs to scatter and adapt their writing workflow across a variety of interfaces. In this work, we envision a canvas-based interface that enables writers to create their own personalized LLM-driven tools and configure them into one cohesive writing environment. Inspired by object-oriented interaction [2, 6, 10, 22-24] and block-based programming [18], we present the design for LMCanvas, an interface that enables users to interact with text and model blocks to flexibly create and arrange LLM-powered tools.
Through the interface, users can create text blocks to encapsulate both their writing and LLM prompts, keep drafts as separate blocks, and organize them in the canvas. By connecting text blocks to model blocks (i.e., blocks that represent a set of model configurations), users can create LM pipelines, i.e., tools, that generate outputs as text blocks based on the input text and model block. After creating a set of tools, the user can flexibly arrange them in the canvas to create a writing environment customized to their needs and preferences. In this workshop paper, we discuss our design of the envisioned LMCanvas and future work to develop this concept.

LMCANVAS: DESIGN CONCEPT

In our envisioned interface, LMCanvas, writers can create three types of objects in an infinite canvas: text blocks, model blocks, and pipeline blocks. All of these blocks can be flexibly moved, copy-pasted, deleted, and connected to each other. Below, we detail the specific interactions that we aim to support for each type of block.

Text Blocks. In the canvas, the writer can create text blocks, which are objects that the writer can type text into and edit. When writing with LLMs, text can represent different types of content: actual writing, prompts, examples for prompts, generated outputs, etc. By compartmentalizing text into modular blocks, our interface allows the writer to flexibly organize and structure these different forms of text in their writing environment. For example, the writer can use a text block as their main text editor, maintain text blocks on the side containing alternative versions of certain paragraphs, and keep a text block with a prompt template to reuse when creating LLM-powered tools. To support flexible use of text blocks, we design the following interactions specific to these blocks.
Resizing allows the writer to change the format of text blocks for different types of usage (e.g., a larger block for a text editor) or to shrink them to reduce clutter on the screen. When the writer decides that they no longer need to keep two text blocks separate (e.g., they have settled on the final versions of the first two verses of their poem), they can concatenate the blocks by drag-and-dropping one text block into the other. Alternatively, if the writer needs to modularize or separate certain parts of a text (e.g., to draft only one part of a paragraph), they can split off text by selecting it and dragging it outwards, creating a new text block. To allow writers to create reusable LLM-powered tools, the interface allows the user to create text blocks to which they can input other text blocks. Specifically, the user types the "[[input]]" command in a text block to create an "input prong". They can then attach other blocks to this prong to replace the "[[input]]" command with the content of the attached text block. Finally, as writers may want writing support tools to act on selected text (e.g., generate a metaphor for a selected phrase or edit selected text to be shorter), the interface also allows users to create select blocks by typing the "[[select]]" command in a text block. The content of these blocks is replaced by any text that the user selects in the canvas.

Model Blocks. LLMs possess various parameters that control the generation process. For example, the temperature parameter determines the probability of the model generating more out-of-distribution or improbable text.

Figure 2: Model blocks represent a set of parameter configurations that the writer can configure, copy, and reuse. By connecting model blocks to text blocks, writers can create pipeline blocks that allow them to generate outputs based on the nested text and parameters.

Prior work has demonstrated that,
when writing with LLMs, different configurations of these parameters can satisfy different user needs [13]. Thus, to support writers in setting, testing, and reusing parameter configurations, LMCanvas allows users to create multiple model blocks with different combinations of parameters (Figure 2). These blocks represent instances of parameter configurations that the writer can reconfigure by clicking on a parameter and using the displayed widgets to change its value.

Pipeline Blocks. To generate text with the LLM, the user can connect a text block to a model block to create a pipeline block (Figure 2). When a writer clicks on "generate" in a pipeline block, the interface uses the nested text block as input and the model block as the parameter configuration to generate an output, which is presented as a text block. To test multiple inputs and parameter configurations, the writer can also expand a pipeline block by adding additional text and model blocks. In this case, when the pipeline block generates, it produces a generation for each pairing of text and model blocks inside the pipeline block. By default, each time a pipeline block generates, it adds the generation as a text block in the output container, the box containing "1" that prongs out of the pipeline block in Figure 2. However, through drag-and-drop, the writer can connect this output container to (1) a text block to add the generations from the pipeline as continuations to that text block (left in Figure 3), or (2) an input prong to chain multiple pipeline blocks together and create more complex tools [21] (right in Figure 3). Additionally, if the writer connects the output of a pipeline to a select block, the interface replaces any text selected across all text blocks in the canvas with the generation produced by the pipeline. These various forms of connecting pipeline blocks can enable the writer to create a variety of tools from the same basic blocks.
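To make the block semantics concrete, the following is a minimal sketch (not the authors' implementation) of how text, model, and pipeline blocks could compose. The class names, the stubbed generation function standing in for a real LLM call, and the "[[input]]" substitution helper are all illustrative assumptions; only the behaviors (input prongs, one generation per text-model pairing) come from the description above.

```python
from dataclasses import dataclass, field
from itertools import product
from typing import Callable, List

@dataclass
class TextBlock:
    content: str
    def resolve(self, attached: "TextBlock | None" = None) -> str:
        # An "input prong" is created by the [[input]] command; attaching a
        # block substitutes its content for the command.
        if attached is not None:
            return self.content.replace("[[input]]", attached.content)
        return self.content

@dataclass
class ModelBlock:
    temperature: float = 0.7
    max_tokens: int = 128

@dataclass
class PipelineBlock:
    texts: List[TextBlock]
    models: List[ModelBlock]
    generate_fn: Callable[[str, ModelBlock], str]  # stand-in for an LLM call
    outputs: List[TextBlock] = field(default_factory=list)

    def generate(self, attached: "TextBlock | None" = None) -> List[TextBlock]:
        # One generation per pairing of text and model blocks in the pipeline.
        for text, model in product(self.texts, self.models):
            out = self.generate_fn(text.resolve(attached), model)
            self.outputs.append(TextBlock(out))
        return self.outputs

# Usage with a deterministic stub in place of a real LLM:
stub = lambda prompt, model: f"[T={model.temperature}] {prompt.upper()}"
template = TextBlock("Rewrite formally: [[input]]")
draft = TextBlock("hello there")
pipe = PipelineBlock([template], [ModelBlock(0.2), ModelBlock(0.9)], stub)
results = pipe.generate(attached=draft)
print(len(results))  # 2 generations: one per (text, model) pairing
```

Chaining pipelines, as in Figure 3, would then amount to using one pipeline's output text blocks as the attached input of another.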
FUTURE WORK

In this workshop paper, we outlined the foundational objects and interactions that we aim to support in our envisioned LMCanvas. Our goal with this interface is to enable writers to more effectively leverage LLMs by personalizing their use to fit their unique workflows, needs, and challenges. At the current stage of the project, we have developed an initial prototype that supports the three types of blocks presented in this paper and their basic interactions. With this prototype, we are planning to conduct formative studies to understand the various tools that writers can create with LMCanvas, the benefits and drawbacks of the interface, and the additional blocks and interactions that writers might need. Based on the findings, we plan to improve and expand on the concept. Beyond the directions for improvement to be distilled from the formative study, there are additional future directions that we plan to pursue with LMCanvas. First, an additional benefit of representing writing as text blocks is that the interface can maintain a separate history for each text block. With this modularized history, writers can check and revert changes for only specific parts of their writing, and they can also reflect back on their generation attempts by seeing what inputs and parameter configurations were previously used. We are planning to implement this modularized history for text blocks and to enable users to interact with it, e.g., by dragging the text input that generated a text block out from the history and into the canvas. Second, in future versions of LMCanvas, we aim to support various types of output containers for pipeline blocks. Currently, the prototype only supports containers that keep generated text blocks in a list. However, when dealing with a large quantity of generated outputs, writers may need alternative methods to look at and explore them.
For example, generations could be encoded in a scatterplot [16] to enable the writer to visualize the output space. Finally, identifying effective prompts (i.e., prompt engineering) is a major hurdle in leveraging LLMs. While tools have been designed to facilitate this in well-defined tasks where there is a "ground truth" [19], there is limited work investigating how to support prompt engineering in open-ended and more creative tasks. Through our initial versions of LMCanvas, we aim to investigate mechanisms to facilitate prompt engineering in open-ended writing and to incorporate these into the interface. For example, the interface could allow writers to drag-and-drop text blocks into pipeline blocks as positive or negative examples, and leverage these in the back-end to produce outputs more aligned with the writers' preferences.

ACKNOWLEDGMENTS

This work was supported by the KAIST-NAVER Hypercreative AI Center.

Figure 1: Illustrations of the concatenate, split, and input interactions supported in text blocks.

Figure 3: The output container of pipeline blocks can be connected to text blocks to add generations as continuations, or to the input prongs of text blocks to chain pipelines.

REFERENCES

[1] Jasper AI. 2022. Jasper - AI Copywriter | AI Content Generator for Teams. Retrieved Feb 23, 2023 from https://www.jasper.ai/
[2] Michel Beaudouin-Lafon. 2000. Instrumental Interaction: An Interaction Model for Designing Post-WIMP User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '00). Association for Computing Machinery, New York, NY, USA, 446-453. https://doi.org/10.1145/332040.332473
[3] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An Open-Source Autoregressive Language Model.
[4] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, et al. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs.CL]
[5] Tuhin Chakrabarty, Vishakh Padmakumar, and He He. 2022. Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing. arXiv preprint arXiv:2210.13669.
[6] Marianela Ciolfi Felice, Nolwenn Maudet, Wendy E. Mackay, and Michel Beaudouin-Lafon. 2016. Beyond Snapping: Persistent, Tweakable Alignment and Distribution with StickyLines. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). Association for Computing Machinery, New York, NY, USA, 133-144. https://doi.org/10.1145/2984511.2984577
[7] CopyAI. 2022. Copy.ai: Write better marketing copy and content with AI. Retrieved Feb 23, 2023 from https://www.copy.ai/
[8] Hai Dang, Karim Benharrak, Florian Lehmann, and Daniel Buschek. 2022. Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1-13.
[9] Giulia Di Fede, Davide Rocchesso, Steven P. Dow, and Salvatore Andolina. 2022. The Idea Machine: LLM-based Expansion, Rewriting, Combination, and Suggestion of Ideas. In Creativity and Cognition. 623-627.
[10] Han L. Han, Junhang Yu, Raphael Bournet, Alexandre Ciorascu, Wendy E. Mackay, and Michel Beaudouin-Lafon. 2022. Passages: Interacting with Text Across Documents. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 338, 17 pages. https://doi.org/10.1145/3491102.3502052
[11] AI21 Labs. 2022. WordTune | Your personal writing assistant and editor. Retrieved Feb 23, 2023 from https://www.wordtune.com/
[12] Notion Labs. 2022. Notion AI. Retrieved Feb 23, 2023 from https://www.notion.so/product/ai
[13] Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. CoRR abs/2201.06796. arXiv:2201.06796 https://arxiv.org/abs/2201.06796
[14] Yoonjoo Lee, Tae Soo Kim, Minsuk Chang, and Juho Kim. 2022. Interactive Children's Story Rewriting Through Parent-Children Interaction. In Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022). 62-71.
[15] Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. 2021. Jurassic-1: Technical Details And Evaluation. Technical Report. AI21 Labs.
[16] Justin Matejka, Michael Glueck, Erin Bradner, Ali Hashemi, Tovi Grossman, and George Fitzmaurice. 2018. Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets. Association for Computing Machinery, New York, NY, USA, 1-12. https://doi.org/10.1145/3173574.3173943
[17] OpenAI. 2022. Examples - OpenAI API. Retrieved Feb 23, 2023 from https://platform.openai.com/examples/
[18] Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman, et al. 2009. Scratch: programming for all. Commun. ACM 52, 11 (2009), 60-67.
[19] Hendrik Strobelt, Albert Webson, Victor Sanh, Benjamin Hoover, Johanna Beyer, Hanspeter Pfister, and Alexander M. Rush. 2022. Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models. https://doi.org/10.48550/ARXIV.2208.07852
[20] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, et al. 2022. LaMDA: Language Models for Dialog Applications. CoRR abs/2201.08239. arXiv:2201.08239 https://arxiv.org/abs/2201.08239
[21] Tongshuang Wu, Michael Terry, and Carrie J. Cai. 2021. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. CoRR abs/2110.01691. arXiv:2110.01691 https://arxiv.org/abs/2110.01691
[22] Haijun Xia, Bruno Araujo, Tovi Grossman, and Daniel Wigdor. 2016. Object-Oriented Drawing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association for Computing Machinery, New York, NY, USA, 4610-4621. https://doi.org/10.1145/2858036.2858075
[23] Haijun Xia, Bruno Araujo, and Daniel Wigdor. 2017. Collection Objects: Enabling Fluid Formation and Manipulation of Aggregate Selections. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). Association for Computing Machinery, New York, NY, USA, 5592-5604. https://doi.org/10.1145/3025453.3025554
[24] Haijun Xia, Nathalie Henry Riche, Fanny Chevalier, Bruno De Araujo, and Daniel Wigdor. 2018. DataInk: Direct and Creative Data-Oriented Drawing. Association for Computing Machinery, New York, NY, USA, 1-13. https://doi.org/10.1145/3173574.3173797
Unpredictable solutions of Duffing type equations with Markov coefficients

Marat Akhmet (Department of Mathematics, Middle East Technical University, Ankara, Turkey), Madina Tleubergenova (Department of Mathematics, Aktobe Regional University, Aktobe, Kazakhstan; Institute of Information and Computational Technologies, Almaty, Kazakhstan), and Akylbek Zhamanshin (Department of Mathematics, Middle East Technical University, Ankara, Turkey; Department of Mathematics, Aktobe Regional University, Aktobe, Kazakhstan)

arXiv:2303.17336, 29 Mar 2023

Abstract. The paper considers a stochastic differential equation of Duffing type with Markov coefficients. The existence of unpredictable solutions is considered. The unpredictability is a property of bounded functions characterized by unbounded sequences of moments of divergence and convergence in Bebutov dynamics. Markov components of the equation coefficients admit the unpredictability property. The components of the equation coefficients are derived from a Markov chain. The existence, uniqueness and exponential stability of an unpredictable solution are proved. The sequences of divergence and convergence of the coefficients and the solution are synchronized. A numerical example that supports the theoretical results is provided.
Correspondence: Marat Akhmet. Keywords: stochastic differential equation, Duffing type, Markovian coefficients, unpredictable solution.

INTRODUCTION

It is of great importance to study a Duffing equation with variable coefficients. It was emphasized in the book by Moon 1 that the case when the coefficients are irregular is of strong interest. The question is: what if the perturbations are random, of noise type, and the noise is articulated with asymptotic properties of divergence and convergence? Obviously, two problems appear.
The first is how to insert the deviations into the coefficients while remaining able to make proper evaluations. The second is how the stochastic processes relate to what we understand as deterministic chaos. That is, not to say that a random process is a deterministic phenomenon, but to demonstrate that some significant features recognized for chaos can be seen in dynamics originating, for example, from Markov events, which happen with probabilities. These two questions are addressed in the present article. The most related results, which are utilized to answer the questions, have been obtained in our previous research. In papers 2,3, we introduced a new type of recurrence, the unpredictable point, which is an unpredictable function in Bebutov dynamics. In article 4, it was proved that any infinite time realization of a Markov process with finite state space and without memory is an unpredictable sequence. The most common form of stochastic differential equation (SDE) is a differential equation with one or more stochastic processes as terms. The solutions of SDEs are also stochastic processes. Typically, an SDE is an ordinary differential equation perturbed by a term which depends on a white noise variable calculated as the derivative of Brownian motion or the Wiener process. SDEs are often understood as the continuous-time limit of stochastic difference equations. In our research, we follow the suggestion to consider realizations of random dynamics as functions, in general, and sequences, in particular. As it is said in the book 5: "We have described stochastic dynamics in terms of probability distributions and their various moments. A complimentary, and for many purposes especially illuminating approach, is the study of individual outcomes of the stochastic process of interest." We agree with the authors, and think that the outcomes have to be considered not only for applications, but also as perturbations for various theoretical models.
In the present paper, both the inputs, which are the coefficients of the equation of Duffing type, and, correspondingly, the outputs, that is, the solutions of the equation, are individualized. In the present study, stochastic processes appear in various roles. The first is the discrete Markov chain with finite state space and without memory (processes with memory could be used in a future extension of the method). A realization of the chain is applied as an input for dissipative stochastic inhomogeneous equations, and the random solutions of these equations are in their own turn used as coefficients and inputs for the stochastic Duffing equation. Finally, it is proved that the solution of the equation of Duffing type is a continuous unpredictable function. It is clear that the scheme of the present study can be extended to many other theoretical tasks as well as applications. The concept of unpredictability was introduced in papers 2,3 and has been applied to various problems of differential equations, neural networks, and gas discharge-semiconductor systems 6,7,8,9. It is a powerful instrument for chaos indication 10,11,12,13,14,15. Markov's research 16 showed that random processes of dependent events can also behave as independent events. Thus, simple dynamics were invented which have proved most effective for many applications. It is impossible to overestimate the role of Markov processes in the development of random dynamics theory and its applications. For example, the ergodic theorem was first rigorously proved for these dynamics. There are several observations that the chains are strongly connected to symbolic dynamics and to the Bernoulli scheme.
The final step towards this comprehension was made by Donald Ornstein, who verified that B-automorphisms such as subshifts of finite type and Markov shifts, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform are, in fact, isomorphic 17. Considering these results, it is of great necessity to show that various random processes can be described in terms of chaos, and that they relate equally in this sense. Investigators have worked in both directions: for chaos in random dynamics, as well as for stochastic features in deterministic motions 18. Thus, the problem of chaos in Markov chains, which is in the focus of our interest, is a part of a more general and significant project. The unpredictable orbit 19, as a single isolated motion presenting the Poincaré chaos 2, was identified as a certain event in Markov chains 4, and our present results are not surprising in this sense, if one issues from the research in 17 and 4. The Duffing equation has the form 20

x''(t) + a x'(t) + b x(t) + c x^3(t) = h0 cos(ωt),   (1)

where a is the damping coefficient, b and c are the stiffness (restoring) coefficients, h0 is the coefficient of excitation, ω is the frequency of excitation, and t is the time. The major part of papers on the equation assume that the coefficients a, b, c and h0 are constant 21,22. Considering the original model, one can assume mechanical reasons for variable coefficients, for example, non-constant damping and driving force 23. The main subject of this article is the following stochastic differential equation (SDE)

x''(t) + (a0(t) + a1(t)) x'(t) + (b0(t) + b1(t)) x(t) + (c0(t) + c1(t)) x^3(t) = (h0(t) + h1(t)) cos(ωt),   (2)

where x, t ∈ ℝ; ω is a real constant; a0(t), b0(t), c0(t) and h0(t) are continuous periodic functions; the coefficient components a1(t), b1(t), c1(t) and h1(t) are derived from realizations of Markov processes. This is why we say that the coefficients are Markovian.
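As a deterministic point of reference, the classical forced Duffing oscillator can be integrated numerically. The sketch below is illustrative and not taken from the paper: the parameter values and the fixed-step fourth-order Runge-Kutta scheme are assumptions chosen only to show a bounded response of the damped, forced oscillator.

```python
import math

def duffing_rhs(t, x, v, a=0.2, b=1.0, c=1.0, h0=0.3, w=1.2):
    # Classical forced Duffing oscillator:
    #   x'' + a*x' + b*x + c*x**3 = h0*cos(w*t)
    # (parameter values here are illustrative, not from the paper)
    return v, h0 * math.cos(w * t) - a * v - b * x - c * x**3

def rk4_trajectory(x0=0.1, v0=0.0, dt=0.01, steps=20000):
    # Fixed-step 4th-order Runge-Kutta integration of the second-order
    # equation rewritten as the first-order system (x, v).
    t, x, v = 0.0, x0, v0
    traj = []
    for _ in range(steps):
        k1x, k1v = duffing_rhs(t, x, v)
        k2x, k2v = duffing_rhs(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = duffing_rhs(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = duffing_rhs(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        traj.append((t, x, v))
    return traj

traj = rk4_trajectory()
print(max(abs(x) for _, x, _ in traj))  # amplitude of the bounded response
```

In the stochastic setting of equation (2), the constant coefficients above would be replaced by time-dependent, Markov-perturbed functions.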
Let us remind that the right-hand side of the equation is also assumed as a coefficient. The periodic components of the coefficients are inserted for the stability of the solution of equation (2), whereas the Markov components cause irregularity of solutions. It is important to emphasize that the main goal of the research is not to prove chaos for the output, but to show the existence of a stochastic output for the equation which admits the unpredictability property, and, moreover, that this property of the output is synchronized with the asymptotic characteristics of the stochastic perturbations in the model.

PRELIMINARIES

In this section the definitions of unpredictable and Poisson stable functions, as well as the definition of unpredictable sequences, are given. Moreover, the algorithm of construction for the Markovian coefficients of SDE (2) is provided.

Unpredictable functions

The following definitions are basic in the theory of unpredictable points, orbits and functions introduced in 2,3 and developed further in papers 6,7,8,9,10,11,12,13,14,15.

Definition 1. 2,3 A bounded and uniformly continuous function f(t): ℝ → ℝ is said to be unpredictable if there exist positive numbers ε0, σ and sequences tn, un, both of which diverge to infinity, such that |f(t + tn) − f(t)| → 0 as n → ∞ uniformly on compact subsets of ℝ, and |f(t + tn) − f(t)| ≥ ε0 for each t ∈ [un − σ, un + σ] and n ∈ ℕ.

In what follows, we shall call tn and un the convergence and divergence sequences, respectively. The presence of the convergence sequence is the argument that any unpredictable function is Poisson stable 4,19,24, but not vice versa.

Definition 2. 24 A function f(t): ℝ → ℝ, bounded and continuous, is said to be Poisson stable if there is a sequence of moments tn, tn → ∞ as n → ∞, such that the sequence f(t + tn) uniformly converges to f(t) on each bounded interval of the real axis.

The discrete version of Definition 1 is as follows.

Definition 3. 3 A bounded sequence {κk}, k ∈ ℤ, is called unpredictable if there exist a positive number ε0 and sequences {ζn}, {ηn}, n ∈ ℕ, of positive integers, both of which diverge to infinity, such that |κ_{k+ζn} − κ_k| → 0 as n → ∞ for each k in bounded intervals of integers, and |κ_{ζn+ηn} − κ_{ηn}| ≥ ε0 for each n ∈ ℕ.

In this paper, we shall consider unpredictable sequences with non-negative arguments and also call them unpredictable sequences 10.
Let us give examples of unpredictable functions. Using an unpredictable sequence, κᵢ, one can construct a piecewise constant function π(t) such that π(t) = κᵢ on the intervals t ∈ [ih, (i + 1)h), where h is a positive real number. In the papers 6,7 , the function π(t) is determined through the solution of the logistic map, and the Bernoulli process is used. Another unpredictable function, Θ(t), is a continuous solution of the differential equation Θ′(t) = αΘ(t) + π(t), where α is a negative number. In Figure 1 (a) the graph of the function π(t) = κᵢ for t ∈ [i, i + 1), i = 0, 1, 2, ..., is shown, where κᵢ is the unpredictable solution 2 of the logistic map κ_{i+1} = μκᵢ(1 − κᵢ), i ∈ ℤ, with κ₀ = 0.4 and μ = 3.9. Figure 1 (b) depicts the graph of the solution φ(t) of the equation with φ(0) = 0.6 and α = −2, which exponentially approaches the unique unpredictable solution Θ(t) of the non-homogeneous equation. This is why the red line can be considered, for t > 40, as the graph of an unpredictable function 8 . In the present paper, the coefficients of the SDE (2) are determined by applying the algorithm for π(t), but randomly, such that a Markov chain is used instead of the logistic equation.

Markovian coefficients

In this part of the paper, we demonstrate algorithms for constructing the Markovian coefficients of the Duffing type equation (2). A Markov chain is a stochastic model which describes a sequence of possible events such that the probability of each event depends only on the state attained in the previous one 25,26,27 . Since we expect the realizations of the chaotic dynamics to be bounded, a special Markov chain with boundaries is constructed below. Let the real valued scalar dynamics

zₙ₊₁ = zₙ + ξₙ, n ≥ 0, (3)

be given, such that ξₙ is a random variable with values in {−2, 2} and probability distribution P(ξₙ = 2) = P(ξₙ = −2) = 1/2 if zₙ ≠ −4, 6, and with the certain events ξₙ = 2 if zₙ = −4 and ξₙ = −2 if zₙ = 6. To satisfy the construction of the present research, we make the following agreements. First of all, denote w₀ = −4, w₁ = −2, w₂ = 0, w₃ = 2, w₄ = 4, w₅ = 6.
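The bounded chain (3) on the states w₀, ..., w₅ is simple to realize on a computer. The following Python sketch is illustrative only; all identifiers (`step`, `realization`, `pi_type`) and the step length `h` are assumptions for demonstration, not taken from the paper.

```python
import random

# States w0..w5 = -4, -2, 0, 2, 4, 6 of the bounded chain in equation (3).
STATES = [-4, -2, 0, 2, 4, 6]

def step(state, rng):
    """One transition z_{n+1} = z_n + xi_n: xi_n = +/-2 with probability 1/2,
    except the certain events xi_n = 2 at z_n = -4 and xi_n = -2 at z_n = 6."""
    if state == -4:
        return state + 2
    if state == 6:
        return state - 2
    return state + rng.choice((-2, 2))

def realization(n, z0=0, seed=0):
    """A length-n realization of the chain started at z0."""
    rng, out = random.Random(seed), [z0]
    for _ in range(n - 1):
        out.append(step(out[-1], rng))
    return out

def pi_type(ts, values, h=0.5):
    """pi-type function: pi(t) = z_n for t in [h*n, h*(n+1))."""
    return [values[int(t // h)] for t in ts]
```

Every realization stays inside the finite state space, which is what keeps the coefficient functions constructed later bounded.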
Consider the state space of the process S = {w₀, w₁, w₂, w₃, w₄, w₅}, where the value zₙ ∈ S is the state of the process at time n. The Markov chain is a random process which satisfies the property P{zₙ₊₁ = wⱼ | z₀, ..., zₙ} = P{zₙ₊₁ = wⱼ | zₙ} for all wⱼ ∈ S and n ≥ 0, and, moreover, P{zₙ₊₁ = wⱼ | zₙ = wᵢ} = pᵢⱼ, where pᵢⱼ is the transition probability that the chain jumps from state wᵢ to state wⱼ. It is clear that Σⱼ₌₀⁵ pᵢⱼ = 1 for all i = 0, ..., 5. The unpredictability of infinite realizations of the dynamics is proved by Theorem 2.2 of 4 .

Next, we shall need the π-type piecewise constant unpredictable functions, which are defined through the Markov chain such that π(t) = zₙ if t ∈ [hn, h(n + 1)). To visualize the π-type functions, in Figure 2 (a) the graph of the function π(t) = zₙ, t ∈ [hn, h(n + 1)), where h = 0.5 and 0 ≤ n ≤ 100, is drawn. On the basis of the π-type functions we introduce the σ-type piecewise constant unpredictable functions such that

σ(t) = Σ(π(t)), (4)

where Σ(s) is a continuous function which satisfies the inverse Lipschitz condition. It can be shown that σ-type functions are discontinuous unpredictable functions 10 . Figure 2 (b) depicts the graph of the piecewise constant unpredictable function σ(t) = π²(t) + π(t). Now, let us define one more type of functions to finalize the construction of continuous unpredictable functions through the Markov process. Consider the ordinary differential equation

Θ′(t) = αΘ(t) + σ(t), (5)

where α is a negative number. Equation (5) admits a unique exponentially stable unpredictable solution 3 . We say that the solution of equation (5) is a Θ-type unpredictable function. It is impossible to specify the initial value of this solution, but, applying the property of exponential stability, one can consider any solution as arbitrarily close to it. In Figure 3 the graph of the solution φ(t), φ(0) = 0.6, of equation (5) is shown, where the parameter α is equal to −3 and σ(t) = π²(t) + π(t). The solution exponentially approaches an unpredictable function Θ(t).
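Numerically, a Θ-type function can be produced by integrating equation (5) exactly on each interval where the piecewise constant input σ(t) is fixed. In the sketch below the inverse-Lipschitz map Σ(s) = s + s³ and all identifiers are illustrative assumptions, not the choices behind the paper's figures.

```python
import math

def theta_type(values, h=0.5, a=-3.0, theta0=0.6, sigma=lambda s: s + s**3):
    """Solve Theta'(t) = a*Theta(t) + sigma(pi(t)), a < 0 (equation (5)),
    exactly on each step of length h where pi(t) == s is constant."""
    theta, out = theta0, [theta0]
    for s in values:
        c = sigma(s)
        # On one constant step: Theta -> -c/a + (Theta + c/a) * exp(a*h).
        theta = -c / a + (theta + c / a) * math.exp(a * h)
        out.append(theta)
    return out
```

Because a < 0, two solutions started from different initial values contract toward each other at the rate e^{ah} per step; this is the exponential stability that allows any computed trajectory to stand in for the Θ-type function.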
Thus, the algorithm for the three types of unpredictable functions, which will be applied to build the Markovian coefficients, has been finalized. Next, we shall apply it to each of the coefficients in the SDE (2). Let us consider the following dissipative equations, whose solutions will serve as the Markovian components of the coefficients in (2):

a₁′(t) = λ₁a₁(t) + σ_a(t), (6)
b₁′(t) = λ₂b₁(t) + σ_b(t), (7)
c₁′(t) = λ₃c₁(t) + σ_c(t), (8)
d₁′(t) = λ₄d₁(t) + σ_d(t), (9)

where λ₁, λ₂, λ₃ and λ₄ are negative real numbers, and σ_a(t), σ_b(t), σ_c(t) and σ_d(t) are unpredictable functions of σ-type. In this paper, we utilize Markov chains without memory for the coefficients, but it is clear that one can consider chains with memories of arbitrary finite length in future studies.

MAIN RESULTS

In the present section, under certain conditions, it is rigorously proved that an exponentially stable unpredictable solution takes place in the dynamics of the SDE with Markovian coefficients. We will make use of the norm ‖v‖ = max(|v₁|, |v₂|) for a two-dimensional vector v = (v₁, v₂); the corresponding norm for square matrices will be utilized as well. For SDE (2) it is assumed that the solution x(t) and its derivative x′(t) are bounded, such that sup_{t∈ℝ} |x(t)| < H and sup_{t∈ℝ} |x′(t)| < H, where H is a fixed positive number. Assume that SDE (2) satisfies the following conditions:

(C1) the functions a₀(t), b₀(t), c₀(t) and d₀(t) are continuous periodic functions with common positive period ω̄ = 2π/ω;

(C2) the Markovian components a₁(t), b₁(t), c₁(t), d₁(t) are of Θ-type with common sequences of convergence tₙ and divergence sₙ, such that there exist positive numbers δ, ε₀ which satisfy |a₁(t + tₙ) − a₁(t)| ≥ ε₀, |b₁(t + tₙ) − b₁(t)| ≥ ε₀, |c₁(t + tₙ) − c₁(t)| ≥ ε₀, |d₁(t + tₙ) − d₁(t)| ≥ ε₀ for all t ∈ [sₙ − δ, sₙ + δ];

(C3) tₙ → 0 (mod ω̄) as n → ∞;

(C4) sₙ → 0 (mod ω̄/2) as n → ∞.

By means of the variables x₁ = x, x₂ = x′, the equation (2) can be written as the system

x₁′ = x₂,
x₂′ = −(b₀(t) + b₁(t))x₁ − (c₀(t) + c₁(t))x₁³ − (a₀(t) + a₁(t))x₂ + (d₀(t) + d₁(t)) cos(ωt), (10)

whose linear part with periodic coefficients is the system

x₁′ = x₂,
x₂′ = −b₀(t)x₁ − a₀(t)x₂. (11)

Let X(t), t ∈ ℝ, be the fundamental matrix of system (11) such that X(0) = I, where I is the 2 × 2 identity matrix. Moreover, U(t, s) = X(t)X⁻¹(s) is the transition matrix of system (11), such that U(t + ω̄, s + ω̄) = U(t, s) for all t, s ∈ ℝ. The following assumption is needed:

(C5) the multipliers of system (11) are less than one in modulus.
The last condition implies that there exist positive numbers K ≥ 1 and γ which satisfy

‖U(t, s)‖ ≤ K e^{−γ(t−s)}, (12)

for t ≥ s 28 . For convenience, let us introduce the notations m_a0 = sup_{t∈ℝ} |a₀(t)|, m_b0 = sup_{t∈ℝ} |b₀(t)|, m_a1 = sup_{t∈ℝ} |a₁(t)|, m_b1 = sup_{t∈ℝ} |b₁(t)|, m_c0 = sup_{t∈ℝ} |c₀(t)|, m_c1 = sup_{t∈ℝ} |c₁(t)|, m_g = sup_{t∈ℝ} |d₀(t) + d₁(t)|. Throughout the paper, the following additional conditions are required:

(C6) (K/γ)((m_a1 + m_b1)H + (m_c0 + m_c1)H³ + m_g) < H;

(C7) (K/γ)(m_a1 + m_b1 + 3(m_c0 + m_c1)H²) < 1.

We will consider the system (10) in the matrix form

v′ = A(t)v + B(t)v + F(t, v) + g(t), (13)

where v(t) = (x₁(t), x₂(t)) and

A(t) = [0 1; −b₀(t) −a₀(t)], B(t) = [0 0; −b₁(t) −a₁(t)], F(t, v) = (0, −(c₀(t) + c₁(t))x₁³), g(t) = (0, (d₀(t) + d₁(t)) cos(ωt)).

Let us show the unpredictability of the function g(t) and, moreover, that the convergence and divergence sequences of this function are common with those of the Markovian components. For sufficiently large n one has |(d₁(t) + d₀(t))(cos(ω(t + tₙ)) − cos(ωt))| < ε₀/8 and |d₀(t + tₙ) − d₀(t)| < ε₀/8, t ∈ ℝ. Applying conditions (C3) and (C4) together with the uniform continuity of the cosine function, one can find a positive number δ₁ < δ such that min |cos(ω(t + tₙ))| > 1/2 for t ∈ [sₙ − δ₁, sₙ + δ₁] and sufficiently large n. This is why we get that

‖g(t + tₙ) − g(t)‖ = |(d₀(t + tₙ) + d₁(t + tₙ)) cos(ω(t + tₙ)) − (d₀(t) + d₁(t)) cos(ωt)| = |(d₁(t + tₙ) − d₁(t)) cos(ω(t + tₙ)) + (d₁(t) + d₀(t))(cos(ω(t + tₙ)) − cos(ωt)) + (d₀(t + tₙ) − d₀(t)) cos(ω(t + tₙ))| ≥ |(d₁(t + tₙ) − d₁(t)) cos(ω(t + tₙ))| − |(d₁(t) + d₀(t))(cos(ω(t + tₙ)) − cos(ωt))| − |(d₀(t + tₙ) − d₀(t)) cos(ω(t + tₙ))| > ε₀/2 − ε₀/8 − ε₀/8 = ε₀/4

for t ∈ [sₙ − δ₁, sₙ + δ₁].

Proof. Fix a function φ(t) that belongs to the set 𝒮. We have that

‖Πφ(t)‖ ≤ ∫_{−∞}^{t} ‖U(t, s)‖(‖B(s)‖‖φ(s)‖ + ‖F(s, φ(s))‖ + ‖g(s)‖) ds ≤ (K/γ)((m_a1 + m_b1)H + (m_c0 + m_c1)H³ + m_g)

for all t ∈ ℝ. Therefore, by condition (C6) it is true that ‖Πφ‖₀ < H. Next, the method of included intervals 6,7 will be utilized to prove the invariance of Poisson stability in 𝒮. Let us show that ‖Πφ(t + tₙ) − Πφ(t)‖ → 0 on each bounded interval of ℝ. Fix an arbitrary positive number ε and a closed interval [a, b], −∞ < a < b < ∞, of the real axis.
Let us choose two numbers c < a and a positive number ξ satisfying

(2K/γ)((m_a1 + m_b1)H + (m_c0 + m_c1)H³ + m_g) e^{−γ(a−c)} < ε/4, (16)

ξ(K/γ)(m_a1 + m_b1 + H + 3(m_c0 + m_c1)H² + H³ + 1)[1 − e^{−γ(b−c)}] < ε/2. (17)

From inequalities (16) and (17) it follows that ‖Πφ(t + tₙ) − Πφ(t)‖ < ε for t ∈ [a, b]. Therefore, the sequence Πφ(t + tₙ) uniformly converges to Πφ(t) on each bounded interval of ℝ. The function Πφ(t) is uniformly continuous, since its derivative is uniformly bounded on the real axis. Thus, the set 𝒮 is invariant for the operator Π.

Theorem 1. The SDE (2) with Markovian coefficients admits a unique exponentially stable unpredictable solution, provided that the conditions (C1)-(C7) are valid. Moreover, the divergence and convergence sequences of the output stochastic dynamics are common with the sequences tₙ and sₙ of the stochastic components of the coefficients.

Proof. Let us prove the completeness of the set 𝒮. Consider a Cauchy sequence φ_k(t) in 𝒮 which converges to a limit function φ(t) on ℝ. Fix a closed and bounded interval I ⊂ ℝ. We get that

‖φ(t + tₙ) − φ(t)‖ ≤ ‖φ(t + tₙ) − φ_k(t + tₙ)‖ + ‖φ_k(t + tₙ) − φ_k(t)‖ + ‖φ_k(t) − φ(t)‖. (18)

One can choose sufficiently large k and n such that each term on the right side of (18) is smaller than ε/3 for an arbitrary ε > 0 and t ∈ I. Thus, we conclude that the sequence φ(t + tₙ) uniformly converges to φ(t) on I. That is, the set 𝒮 is complete.

Next, we shall show that the operator Π : 𝒮 → 𝒮 is a contraction. For any φ(t), ψ(t) ∈ 𝒮 one can attain that

‖Πφ(t) − Πψ(t)‖ ≤ ∫_{−∞}^{t} ‖U(t, s)‖(‖B(s)‖‖φ(s) − ψ(s)‖ + ‖F(s, φ(s)) − F(s, ψ(s))‖) ds ≤ (K/γ)(m_a1 + m_b1)‖φ − ψ‖₀ + (K/γ)(m_c0 + m_c1)(|φ₁²(s)| + |φ₁(s)ψ₁(s)| + |ψ₁²(s)|)‖φ − ψ‖₀ < (K/γ)(m_a1 + m_b1 + 3(m_c0 + m_c1)H²)‖φ − ψ‖₀.

Therefore, the inequality ‖Πφ − Πψ‖₀ < (K/γ)(m_a1 + m_b1 + 3(m_c0 + m_c1)H²)‖φ − ψ‖₀ holds, and according to condition (C7) the operator Π : 𝒮 → 𝒮 is a contraction. By the contraction mapping theorem there exists a unique fixed point x(t) ∈ 𝒮 of the operator Π, which is the unique bounded solution of SDE (2). In what follows, we will show that the solution x(t) is unpredictable.
Applying the integral equations of x(t + tₙ) and x(t), and using conditions (C3), (C4) and the uniform continuity of the entries of the matrix A(t), of the periodic function d₀(t) and of the solution x(t), one can find a positive number δ₂, an integer l and a natural number n₀ such that the following inequalities are satisfied:

δ₂ < δ₁; (19)

‖A(t + tₙ) − A(t)‖ < 3ε₀/(2l), t ∈ ℝ, n > n₀; (20)

|d₀(t + tₙ) − d₀(t)| < 3ε₀/(2l), t ∈ ℝ, n > n₀; (21)

‖B(t + tₙ) − B(t)‖ < 3ε₀/(2l), t ∈ ℝ, n > n₀; (22)

‖x(t + s) − x(t)‖ < ε₀ min(1/l, 1/(4l)), t ∈ ℝ, |s| < δ₂. (23)

Let the numbers δ₂ and l, as well as a number n ∈ ℕ with n > n₀, be fixed. Consider the following two alternatives: (i) ‖x(sₙ + tₙ) − x(sₙ)‖ < ε₀/l; (ii) ‖x(sₙ + tₙ) − x(sₙ)‖ ≥ ε₀/l.

(i) Using (23), one can show that

‖x(t + tₙ) − x(t)‖ ≤ ‖x(t + tₙ) − x(sₙ + tₙ)‖ + ‖x(sₙ + tₙ) − x(sₙ)‖ + ‖x(sₙ) − x(t)‖ < ε₀/(4l) + ε₀/l + ε₀/(4l) = 3ε₀/(2l), (24)

if t ∈ [sₙ, sₙ + δ₂]. Therefore, the inequalities (19)-(24) imply that

‖x(t + tₙ) − x(t)‖ ≥ ∫_{sₙ}^{t} ‖g(u + tₙ) − g(u)‖ du − ‖x(sₙ + tₙ) − x(sₙ)‖ − ∫_{sₙ}^{t} ‖A(u + tₙ) − A(u)‖‖x(u + tₙ)‖ du − ∫_{sₙ}^{t} ‖A(u)‖‖x(u + tₙ) − x(u)‖ du − ∫_{sₙ}^{t} ‖B(u + tₙ) − B(u)‖‖x(u + tₙ)‖ du − ∫_{sₙ}^{t} ‖B(u)‖‖x(u + tₙ) − x(u)‖ du − ∫_{sₙ}^{t} ‖F(u + tₙ, x(u + tₙ)) − F(u, x(u))‖ du > ε₀/(2l)

for t ∈ [sₙ, sₙ + δ₂], since the first term is not smaller than δ₂ε₀/4 by the unpredictability of g(t), while each of the subtracted terms is made small by (20)-(24) and the boundedness of the solution, provided that l is chosen sufficiently large.

(ii) If ‖x(sₙ + tₙ) − x(sₙ)‖ ≥ ε₀/l, it is not difficult to find that (23) implies

‖x(t + tₙ) − x(t)‖ ≥ ‖x(sₙ + tₙ) − x(sₙ)‖ − ‖x(sₙ) − x(t)‖ − ‖x(t + tₙ) − x(sₙ + tₙ)‖ ≥ ε₀/l − ε₀/(4l) − ε₀/(4l) = ε₀/(2l), (25)

for t ∈ [sₙ − δ₂, sₙ + δ₂] and n ∈ ℕ. Thus, it can be concluded that x(t) is an unpredictable solution with the sequences tₙ, sₙ and the positive numbers δ₂/2, ε₀/(2l).

Finally, let us discuss the exponential stability of the solution x(t). It is true that, for t ∈ ℝ,

x(t) = U(t, t₀)x(t₀) + ∫_{t₀}^{t} U(t, s)(B(s)x(s) + F(s, x(s)) + g(s)) ds. (26)
With the aid of the Gronwall-Bellman Lemma, one can verify that

‖x̄(t) − x(t)‖ ≤ K e^{(K(m_a1 + m_b1 + 3(m_c0 + m_c1)H²) − γ)(t − t₀)} ‖x̄(t₀) − x(t₀)‖, (27)

for all t ≥ t₀, and condition (C7) implies that the unpredictable solution x(t) is an exponentially stable solution of SDE (2). The theorem is proved.

The following section provides an example to confirm the theoretical results by numerical simulations; it illustrates various unpredictable dynamics of the stochastic equation of Duffing type (2) for different contributions of the periodic and non-periodic components of the coefficients.

A NUMERICAL EXAMPLE AND DISCUSSIONS

Consider the following stochastic Duffing equation

x′′(t) + (a₀(t) + a₁(t))x′(t) + (b₀(t) + b₁(t))x(t) + (c₀(t) + c₁(t))x³(t) = (d₀(t) + d₁(t)) cos(ωt). (28)

Theorem 1 can be interpreted as a result on response-driver synchronization of the unpredictability in the stochastic system (6)-(9) and the stochastic Duffing equation (2). That is, the theorem claims, in particular, that the unpredictable solutions (a₁(t), b₁(t), c₁(t), d₁(t)) of the system and the unpredictable solution x(t) admit common sequences of convergence and divergence. Delta synchronization of the unpredictability for gas discharge-semiconductor systems is considered in 9 .

Figure 4. The time series of the coordinates and the trajectory of the solution x(t) of (28), with Markovian components obtained for h = 0.1π. The stochastic influence is strong, since the time step of the Markov chain is smaller than the period.

Figure 5. The x₁, x₂-coordinates and the trajectory of the solution of (28), with Markovian components obtained for h = 2π. That is, the value of the time step is equal to the period 2π, and our simulations show that the periodicity still cannot be seen clearly in this case.

Figure 6. The graphs of the coordinates and the trajectory of the solution x(t) of equation (28), where the Markovian components are obtained for h = 8π. One can see that several intervals of periodicity are placed within one step of constancy.
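A rough simulation of an equation of the form (28) can be run by rewriting it as a first-order system and integrating with a classical Runge-Kutta scheme while the Markovian components are held constant on steps of length h. In the sketch below the periodic parts and the 0.05 scaling of the chain are illustrative assumptions, not the coefficients used for the paper's figures.

```python
import math
import random

def markov_chain(n, seed=0):
    """The bounded chain of Section 2 on the states -4, -2, 0, 2, 4, 6."""
    rng, z, out = random.Random(seed), 0, []
    for _ in range(n):
        out.append(z)
        z = z + 2 if z == -4 else z - 2 if z == 6 else z + rng.choice((-2, 2))
    return out

def simulate(t_end=100.0, dt=0.01, h=2 * math.pi, seed=1):
    """RK4 integration of x1' = x2,
    x2' = -(a0+a1)x2 - (b0+b1)x1 - (c0+c1)x1**3 + (d0+d1)cos(t)."""
    chain = markov_chain(int(t_end / h) + 2, seed)
    def f(t, x1, x2):
        m = 0.05 * chain[int(t // h)]                     # Markovian component, held per step
        a, b, c, d = 0.5 + m, 1.0 + m, 1.0 + m, 1.0 + m   # illustrative coefficient values
        return x2, -a * x2 - b * x1 - c * x1**3 + d * math.cos(t)
    t, x1, x2, traj = 0.0, 0.0, 0.0, []
    while t < t_end:
        k1 = f(t, x1, x2)
        k2 = f(t + dt / 2, x1 + dt / 2 * k1[0], x2 + dt / 2 * k1[1])
        k3 = f(t + dt / 2, x1 + dt / 2 * k2[0], x2 + dt / 2 * k2[1])
        k4 = f(t + dt, x1 + dt * k3[0], x2 + dt * k3[1])
        x1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        traj.append((t, x1, x2))
    return traj
```

Varying h relative to the period 2π reproduces the qualitative regimes discussed in the text: strong irregularity for small steps and locally visible periodicity for large ones.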
We consider various simulations for the model with different steps h of the Markov function π(t). The choice makes a qualitative difference in the stochastic dynamics. If the step is in the range h ≤ 2π, the behavior is strongly irregular and there is no indication of periodicity. For h > 2π one can observe that periodicity is seen locally in time, and the phenomenon of intermittency 29 appears. Thus, our results demonstrate not only quantitative asymptotic characteristics, but also a possibility to study the reasons for different phenomena of chaos. Possibly, the simulations may shed light on the origins of intermittency. Additionally, for h > 2π the effect of periodicity is seen in the "symmetry" of the phase portraits, which is also explained by the finite set of values of the state space. For values of h less than 2π, such symmetry cannot be seen, since the stochastic dynamics dominates significantly.

ACKNOWLEDGMENTS

Author contributions

Marat Akhmet: Conceptualization, formal analysis, investigation, methodology. Madina Tleubergenova: Formal analysis, investigation, supervision, validation. Akylbek Zhamanshin: Investigation, methodology, software.

Financial disclosure

None reported.

Definition 1. 2 A continuous and bounded function f : ℝ → ℝ is unpredictable if there exist positive numbers ε₀, δ and sequences tₙ, sₙ, both of which diverge to infinity, such that |f(t + tₙ) − f(t)| → 0 as n → ∞ uniformly on compact subsets of ℝ and |f(t + tₙ) − f(t)| ≥ ε₀ for each t ∈ [sₙ − δ, sₙ + δ] and n ∈ ℕ.

Figure 1. The graphs of the discontinuous and continuous functions π(t) and φ(t).

Figure 2. The piecewise constant functions π(t) and σ(t). The vertical lines are drawn for better visibility.

Figure 3. The solution φ(t) of equation (5) with initial value φ(0) = 0.6 exponentially approaches the unpredictable Markovian function.

In equations (6)-(9), the constant coefficients are negative real numbers, and the forcing terms are unpredictable functions of σ-type.
That is, σ_a(t) = Σ_a(π(t)), σ_b(t) = Σ_b(π(t)), σ_c(t) = Σ_c(π(t)) and σ_d(t) = Σ_d(π(t)), where Σ_a, Σ_b, Σ_c and Σ_d are continuous functions with the inverse Lipschitz property, and the function π(t) is determined above. The exponentially stable and bounded solutions a₁(t), b₁(t), c₁(t) and d₁(t) of equations (6)-(9) are Θ-type functions. These functions are considered as the Markovian components of the coefficients in the Duffing type equation (2), whose linear part is x₁′(t) = x₂(t), x₂′(t) = −b₀(t)x₁(t) − a₀(t)x₂(t).

Fix a positive number ε and a bounded interval I ⊂ ℝ. Due to conditions (C2), (C3), there exists a natural number n₁ such that |cos(ω(t + tₙ)) − cos(ωt)| < ε/(2m_g), where m_g = sup_{t∈ℝ} |d₀(t) + d₁(t)|, for all t ∈ ℝ and n > n₁. Besides, there exists a natural number n₂ such that |d₀(t + tₙ) + d₁(t + tₙ) − d₀(t) − d₁(t)| < ε/2 for all t ∈ I and n > n₂. Therefore, it is true that

‖g(t + tₙ) − g(t)‖ = |(d₀(t + tₙ) + d₁(t + tₙ)) cos(ω(t + tₙ)) − (d₀(t) + d₁(t)) cos(ωt)| ≤ |d₀(t + tₙ) + d₁(t + tₙ)||cos(ω(t + tₙ)) − cos(ωt)| + |cos(ωt)||d₀(t + tₙ) + d₁(t + tₙ) − d₀(t) − d₁(t)| < ε/2 + ε/2 = ε,

for all t ∈ I and n > max(n₁, n₂). On the other hand, there exist positive numbers ε₀, δ and a sequence sₙ such that |d₁(t + tₙ) − d₁(t)| ≥ ε₀ for each t ∈ [sₙ − δ, sₙ + δ], n ∈ ℕ. Moreover, for a sufficiently large number n one can attain that |d₀(t + tₙ) − d₀(t)| < ε₀/8 for t ∈ [sₙ − δ₁, sₙ + δ₁]. Thus, the function g(t) is unpredictable with the sequences tₙ, sₙ and the positive numbers ε₀/4, δ₁.

Condition (C5) implies that a function v(t), bounded on the real axis, is a solution of system (13) if and only if it satisfies the integral equation

v(t) = ∫_{−∞}^{t} U(t, s)[B(s)v(s) + F(s, v(s)) + g(s)] ds, t ∈ ℝ.

Accordingly, we define the operator Π by Πv(t) = ∫_{−∞}^{t} U(t, s)[B(s)v(s) + F(s, v(s)) + g(s)] ds on the set 𝒮 of functions under consideration.

Lemma 1. The set 𝒮 is invariant with respect to the operator Π.

Conditions (C3), (C4) imply that for sufficiently large n the following inequalities are valid: ‖φ(t + tₙ) − φ(t)‖ < ξ, ‖B(t + tₙ) − B(t)‖ < ξ, |d₀(t + tₙ) + d₁(t + tₙ) − d₀(t) − d₁(t)| < ξ and ‖g(t + tₙ) − g(t)‖ < ξ for t ∈ [c, b].
We obtain that the estimate

‖Πφ(t + tₙ) − Πφ(t)‖ ≤ ∫_{−∞}^{c} ‖U(t, s)‖(‖B(s + tₙ)φ(s + tₙ) − B(s)φ(s)‖ + ‖F(s + tₙ, φ(s + tₙ)) − F(s, φ(s))‖ + ‖g(s + tₙ) − g(s)‖) ds + ∫_{c}^{t} ‖U(t, s)‖(‖B(s + tₙ)(φ(s + tₙ) − φ(s))‖ + ‖(B(s + tₙ) − B(s))φ(s)‖) ds + ∫_{c}^{t} ‖U(t, s)‖(‖F(s + tₙ, φ(s + tₙ)) − F(s + tₙ, φ(s))‖ + ‖F(s + tₙ, φ(s)) − F(s, φ(s))‖) ds + ∫_{c}^{t} ‖U(t, s)‖‖g(s + tₙ) − g(s)‖ ds ≤ (2K/γ)((m_a1 + m_b1)H + (m_c0 + m_c1)H³ + m_g) e^{−γ(t−c)} + ξ(K/γ)(m_a1 + m_b1 + H)[1 − e^{−γ(t−c)}] + ξ(K/γ)(3(m_c0 + m_c1)H² + H³)[1 − e^{−γ(t−c)}] + ξ(K/γ)[1 − e^{−γ(t−c)}]

is correct for all t ∈ [a, b]. Denote by x̄(t) another solution of SDE (2), such that

x̄(t) = U(t, t₀)x̄(t₀) + ∫_{t₀}^{t} U(t, s)(B(s)x̄(s) + F(s, x̄(s)) + g(s)) ds.

Then

‖x̄(t) − x(t)‖ ≤ K e^{−γ(t−t₀)}‖x̄(t₀) − x(t₀)‖ + ∫_{t₀}^{t} ‖U(t, s)‖(‖B(s)‖‖x̄(s) − x(s)‖ + ‖F(s, x̄(s)) − F(s, x(s))‖) ds.

Below, to visualize the exponentially stable unpredictable solution of Θ-type and to determine the dynamics of the Markov coefficients, we shall apply the solutions of (6)-(9) with σ_a(t) = σ_b(t) = σ_c(t) = σ_d(t) = σ(t). The piecewise constant function π(t) is constructed by the Markov chain with values over the intervals [hn, h(n + 1)), n ∈ ℕ, described in Section 2. According to Theorem 1, the equation (28) admits a unique exponentially stable unpredictable solution. In Figures 4-6, the graphs of the coordinates and trajectories of the solutions x(t) of SDE (28) with h = 0.2π, 2π, 8π and initial values x₁(0) = x₂(0) = 0 are shown. The solutions x(t) exponentially approach the unpredictable solutions as time increases.

M. Akhmet and A. Zhamanshin have been supported by the 2247-A National Leading Researchers Program of TUBITAK, Turkey, N 120C138. M. Tleubergenova has been supported by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (grant No. AP14870835).

Conflict of interest

The authors declare no potential conflict of interests.
REFERENCES

1. Moon F. Chaotic vibrations: an introduction for applied scientists and engineers. New Jersey, USA: John Wiley and Sons; 2004. ISBN 9783527602841.
2. Akhmet M, Fen M. Poincaré chaos and unpredictable functions. Commun. Nonlinear Sci. Numer. Simulat. 2016; 48: 85-94.
3. Akhmet M, Fen M. Non-autonomous equations with unpredictable solutions. Commun. Nonlinear Sci. Numer. Simulat. 2018; 59: 657-670.
4. Akhmet M. Unpredictability in Markov chains. Carpathian Journal of Mathematics 2022; 38(1): 13-19.
5. Nicolis G, Prigogine I. Exploring Complexity. New York, USA: W.H. Freeman and Company; 1989. ISBN 0716718596.
6. Akhmet M, Tleubergenova M, Fen M, Nugayeva Z. Unpredictable solutions of linear impulsive systems. Mathematics 2020; 8(10): 1-16.
7. Akhmet M, Tleubergenova M, Zhamanshin A. Quasilinear differential equations with strongly unpredictable solutions. Carpathian Journal of Mathematics 2020; 36(3): 341-349.
8. Akhmet M, Tleubergenova M, Zhamanshin A. Shunting inhibitory cellular neural networks with strongly unpredictable oscillations. Commun. Nonlinear Sci. Numer. Simulat. 2020; 89: 105287.
9. Akhmet M, Başkan K, Yeşil C. Delta synchronization of Poincaré chaos in gas discharge-semiconductor systems. Chaos 2022; 32: 083137.
10. Akhmet M. Domain Structured Dynamics: unpredictability, chaos, randomness, fractals, differential equations and neural networks. Bristol, UK: IOP Publishing; 2021. ISBN 978-0-7503-3507-2.
11. Akhmet M, Alejaily E. Abstract Similarity, Fractals and Chaos. Discrete and Continuous Dynamical Systems 2021; 26: 2479-2497.
12. Akhmet M, Alejaily E. Domain-Structured Chaos in a Hopfield neural network. Int. J. Bifurc. Chaos 2019; 29(14): 1950205.
13. Akhmet M. Abstract Hyperbolic Chaos. Discontinuity, Nonlinearity and Complexity 2022; 11(1): 133-138.
14. Miller A. Unpredictable points and stronger versions of Ruelle-Takens and Auslander-Yorke chaos. Topol. Appl. 2019; 253: 7-16.
15. Thakur R, Das R. Strongly Ruelle-Takens, strongly Auslander-Yorke and Poincaré chaos on semiflows. Commun. Nonlinear Sci. Numer. Simulat. 2020; 81: 105018.
16. Markov A. Extension of the limit theorems of probability theory to a sum of variables connected in a chain. Reprinted as Appendix B in: Howard R, ed. Dynamic Probabilistic Systems. Series in Decision and Control. Hoboken, New Jersey, USA: John Wiley and Sons; 1971. ISBN 9780471416654.
17. Ornstein D. Bernoulli shifts with the same entropy are isomorphic. Advances in Math. 1970; 4: 337-352.
18. Bowen R. Markov partitions for Axiom A diffeomorphisms. Am. J. Math. 1970; 92: 725-747.
19. Akhmet M, Fen M. Unpredictable points and chaos. Commun. Nonlinear Sci. Numer. Simulat. 2016; 40: 1-5.
20. Duffing G. Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische Bedeutung. Braunschweig, Germany: F. Vieweg und Sohn; 1918.
21. Liu B, Tunc C. Pseudo almost periodic solutions for a class of nonlinear Duffing system with a deviating argument. J. Appl. Math. Comput. 2015; 49: 233-242.
22. Zeng W. Almost periodic solutions for nonlinear Duffing equations. Acta Math. Sin. 1997; 13: 373-380.
23. Estevez P, Kuru S, Negro J, Nieto L. Solutions of a Class of Duffing Oscillators with Variable Coefficients. International Journal of Theoretical Physics 2011; 50: 2046-2056.
24. Sell G. Topological dynamics and ordinary differential equations. London, UK: Van Nostrand Reinhold; 1971. ISBN 978-0442075026.
25. Hajek B. Random Processes for Engineers. Cambridge, England: Cambridge University Press; 2015. ISBN 9781316164600.
26. Karlin S, Taylor H. A First Course in Stochastic Processes. Cambridge, Massachusetts: Academic Press; 2012. ISBN 1483254240.
27. Meyn S, Tweedie R. Markov Chains and Stochastic Stability. Cambridge, England: Cambridge University Press; 2009. ISBN 9780511626630.
28. Hartman P. Ordinary Differential Equations. Boston, MA, USA: Birkhäuser; 2002. ISBN 9783764330682.
29. Pomeau Y, Manneville P. Intermittent transition to turbulence in dissipative dynamical systems. Commun. Math. Phys. 1980; 74: 189-197.
30. Walters P. An Introduction to Ergodic Theory. New York: Springer-Verlag; 1982. ISBN 978-0-387-95152-2.